\section{Introduction}
Many galactic disks in the nearby Universe appear to be non-axisymmetric \citep{Jog2009}. The
feature was first systematically studied by \citet{Baldwin1980}, who considered lopsided galaxies as a
subclass of spirals. They examined about 20 of the best-known galaxies with asymmetries and proposed that the asymmetry is
caused by a lopsided pattern of elliptical orbits. \citet{Rix1995} quantified the asymmetry by odd Fourier modes and
found that out of 18 face-on spirals considered, about 1/3 were substantially lopsided. These findings were confirmed
and extended to 60 field spiral galaxies by \citet{Zaritsky1997}, 54 early-type disk galaxies by \citet{Rudnick1998},
147 galaxies of the OSUBGS sample by \citet{Bournaud2005} and 167 galaxies of different luminosities and morphologies
by \citet{Zaritsky2013}. Galaxies with lopsided morphology are expected to also show large-scale asymmetries in their
kinematics; for example, in the form of different shapes of the rotation curve on both sides of the galaxy
\citep{Swaters1999, Noordermeer2001, Jog2002, Eymeren2011a, Ghosh2021}.
A few observational studies, including \citet{Zaritsky1997}, \citet{Conselice2000}, and \citet{Rudnick2000}, have
suggested a link between the presence of the lopsided disk and recent star formation events and an excess of blue color
in the galaxy. This connection was later established in \citet{Reichard2009}, which studied lopsidedness in a
sample of $\sim$ 25,000 nearby galaxies from the Sloan Digital Sky Survey (SDSS). They found a strong correlation
between lopsidedness of the galactic disk and the youth of the galaxy stellar population. The lopsided galaxies from
SDSS turned out to be more star-forming, more metal-poor, and younger than the symmetric objects. These correlations
were later confirmed for other galaxy samples by \citet{Wang2011} and \citet{Yesuf2021} and are consistent with
scenarios that deliver lower metallicity gas into the galaxy central region.
The possible origin of lopsidedness in galactic disks has been addressed in studies using simulations of
galaxy evolution \citep{Zaritsky1997, Bournaud2005, Mapelli2008, Ghosh2021}. Most often, interactions between
galaxies, such as mergers and flybys, were considered as plausible mechanisms for the generation of such distortions.
It soon became clear, however, that they cannot explain all occurrences of this phenomenon given its presence in
isolated galaxies as well. Another possibility is that a lopsided disk forms as a result of a perturbation from a
lopsided halo \citep{Jog1997, Jog1999, Levine1998}. It has also been shown that long-lived lopsided global modes in the
stellar component can exist in a galaxy evolving in isolation \citep{Saha2007, Dury2008}.
\begin{figure*}
\centering
\includegraphics[width=17cm]{selectionplot.eps}
\caption{Selection of disk galaxy sample. The six panels show the properties of the selected
galaxies in different parameter combinations: the axis ratios $b/a$ and $c/a$, the triaxiality parameter $T,$ and
the rotation parameter $f$. Black points correspond to 1912 selected disk galaxies, and the green points show the
remaining 4595 galaxies of the total of 6507 well-resolved objects.}
\label{selection}
\end{figure*}
In light of the observationally detected correlations, a particularly promising scenario for the formation of the
asymmetric shape in a spiral galaxy seems to be the one proposed by \citet{Bournaud2005}. Their simulations
demonstrated that galaxy interactions and mergers can trigger strong lopsidedness, but to explain all the observational
results, in many cases lopsidedness must result from the cosmological, asymmetric accretion of gas onto
galactic disks. This picture is confirmed by studies of individual isolated lopsided spiral galaxies such as P11695
\citep{Vulcani2018}.
Recently, it has become possible to investigate the origin of asymmetry in galactic disks not only by means of
controlled simulations, as was done before, but also in the cosmological context. New sets of cosmological simulations
now available are able to produce large samples of galaxies with sufficient resolution to study their morphology. In
this work and for this purpose, we used the simulations of galaxy formation from the IllustrisTNG project
\citep{Springel2018, Marinacci2018, Naiman2018, Nelson2018, Pillepich2018}. The simulations follow the
evolution of galaxies from the early Universe to the present by solving gravity and hydrodynamics, and
applying additional prescriptions for star formation, galactic winds, magnetic fields, and the feedback from black holes.
Various studies performed thus far have demonstrated that these simulations are able to reproduce many of the observed
properties of galaxies, including their morphologies \citep{Nelson2018, Genel2018, Rodriguez2019}. The set of
simulations comprises the results obtained with different resolution in boxes of 300, 100, and 50 Mpc on one side
(referred to as TNG300, TNG100, and TNG50, respectively).
Asymmetries in galaxies have been addressed using IllustrisTNG in \citet{Watts2020}, which focused on the
distortions in the gas using HI spectral lines in TNG100 galaxies, and \citet{Whitney2021}, which considered
asymmetry in TNG300 and TNG50 galaxies in the context of mergers. Here, we studied the asymmetry of the stellar
component of late-type galaxies in the TNG100 simulation, which provides a sufficient sample of galaxies with
good resolution. In Section~2, we describe our selected sample of galaxies and the identified subsample of
lopsided disks. In Section~3, we discuss the basic properties of these asymmetric objects using three representative
examples. Section~4 is devoted to the origin of the lopsided shape in simulated galaxies and the discussion follows in
Section~5.
\section{Sample selection}
For this study, we made use of the publicly available simulation data from the IllustrisTNG project as described by
\citet{Nelson2019}. We chose the TNG100 run, that is, the simulation performed in the 100 Mpc box, which contains a
sufficient number of galaxies with different morphologies. In order to have enough resolution
in each object and thus obtain a sample eligible for morphological analysis, we selected the galaxies at $z=0$ by
restricting the sample of subhalos to those with total stellar mass greater than $10^{10}$ M$_\odot$, which
corresponds to about $10^4$ stellar particles per object. This criterion is fulfilled by 6507 objects in the final
snapshot of the Illustris TNG100 simulation. In order to select disk galaxies among them, we imposed two additional
conditions: we required the galaxies to be rotationally supported and rather thin.
Following \citet{Joshi2020}, we assume that disk galaxies have the rotation parameter $f > 0.4$. The rotation
parameter is defined as the fractional mass of all stars with circularity parameter $\epsilon > 0.7$, where
$\epsilon=J_z/J(E)$, $J_z$ is the specific angular momentum of the star along the angular momentum of the galaxy,
and $J(E)$ is the maximum specific angular momentum among the stellar particles located between 50 before and 50 after the
particle in question in a list where the stellar particles are sorted by their binding energy \citep{Genel2015}.
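For concreteness, this procedure can be sketched as follows (a minimal Python sketch of the definition above; the array names are hypothetical, with one entry per stellar particle):
\begin{verbatim}
import numpy as np

def rotation_parameter(jz, j_mag, e_bind, mass, window=50, eps_min=0.7):
    # Sort stellar particles by binding energy
    order = np.argsort(e_bind)
    jz, j_mag, mass = jz[order], j_mag[order], mass[order]
    n = len(jz)
    eps = np.empty(n)
    for i in range(n):
        # J(E): maximum |J| among the 50 particles before and after
        lo, hi = max(0, i - window), min(n, i + window + 1)
        eps[i] = jz[i] / np.max(j_mag[lo:hi])
    # f: mass fraction of stars with circularity eps > 0.7
    return mass[eps > eps_min].sum() / mass.sum()
\end{verbatim}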
The disk galaxies were deemed sufficiently thin if the shortest-to-longest axis ratio $c/a$ of their
stellar component was lower than 0.5. For these values, we used (and reproduced) the mass-tensor
measurements of $c/a$ within two stellar half-mass radii, $2 r_{1/2}$, provided by the Illustris team in the Supplementary
Data Catalogs of stellar circularities, angular momenta, and axis ratios, calculated as described in
\citet{Genel2015}. The axis ratios were estimated from the eigenvalues of the mass tensor of the stellar mass obtained
by aligning each galaxy with its principal axes and calculating three components ($i =$ 1, 2, 3): $M_i = (\Sigma_j m_j
r^2_{j,i}/\Sigma_j m_j)^{1/2}$, where $j$ enumerates over stellar particles, $r_{j,i}$ is the distance of stellar
particle $j$ in the $i$-axis from the center of the galaxy, and $m_j$ is its mass. The eigenvalues were sorted so that
$M_1 < M_2 < M_3,$ which means that the shortest-to-longest axis ratio is $c/a = M_1/M_3$, while the
intermediate-to-longest axis ratio is $b/a = M_2/M_3$.
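The axis ratios and the triaxiality parameter used below can be obtained along these lines (a Python sketch of the above definitions; \texttt{pos} is assumed to hold particle positions relative to the galaxy center, restricted to $r < 2 r_{1/2}$):
\begin{verbatim}
import numpy as np

def shape_parameters(pos, mass):
    # Mass tensor: sum_j m_j x_{j,i} x_{j,k} / sum_j m_j
    tensor = (pos.T * mass) @ pos / mass.sum()
    # Square roots of the eigenvalues give M_1 < M_2 < M_3
    m1, m2, m3 = np.sqrt(np.sort(np.linalg.eigvalsh(tensor)))
    ba, ca = m2 / m3, m1 / m3
    t = (1 - ba**2) / (1 - ca**2)   # triaxiality parameter T
    return ba, ca, t
\end{verbatim}
Diagonalizing the full second-moment tensor is equivalent to aligning the galaxy with its principal axes and taking the diagonal components, as described above.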
We note that the values of $c/a$ estimated from the mass tensor within $2 r_{1/2}$ are not directly comparable to axis
ratios estimated from the whole optical images of galaxies, which are usually much lower. For example, a realistic
$N$-body realization of a Milky Way-like galaxy has the $c/a$ ratio within two disk scale lengths on the order of 0.2
in spite of quite a flat appearance \citep{Lokas2019}. However, even the flattest galaxies formed in
IllustrisTNG have $c/a > 0.2$, which means that they are generally thicker than the observed population of disks,
probably as a result of limited resolution \citep{Haslbauer2022}. It may therefore seem more proper to call them disky
galaxies, but in the following we refer to them as disks for simplicity.
The sample of disk galaxies with the rotation parameter $f > 0.4$ and the shortest to longest axis ratio $c/a < 0.5$
contains 1912 objects. The properties of the selected galaxies in comparison to the whole sample are shown in
Fig.~\ref{selection}. In the six panels of the figure, we plot the positions of the galaxies in different planes of
parameters: the axis ratios $b/a$, $c/a$, the triaxiality parameter $T = [1-(b/a)^2]/[1-(c/a)^2]$, and the rotation
parameter $f$. In the three upper panels of Fig.~\ref{selection}, the rotation parameter $f$ is plotted on the vertical axis.
These panels illustrate the selection with the simple cutoff at $f > 0.4$. In the lower three panels, the positions of
the selected galaxies are less obvious, and in particular we find that they all lie along the border of the whole
distribution in the $T - b/a$ plane in the lower right panel of the figure. We also note that many of the disks are
quite triaxial with $T$ reaching the values of 0.7, but none is decidedly prolate ($T > 0.7$).
\begin{figure}
\centering
\includegraphics[width=7.5cm]{histogramsa1.eps}
\caption{Distributions of $m = 1$ Fourier mode values $A_1$ for lopsided disks with $A_1 > 0.1$ (upper panel) and
remaining disks with $A_1 < 0.1$ (lower panel). Measurements of $A_1$ were done within $(1-2) r_{1/2}$. The bin
size is 0.01 in the upper panel and 0.005 in the lower one.
}
\label{histogramsa1}
\end{figure}
In order to quantify the degree of lopsidedness in all 1912 disks at the present time, we calculated different modes of
the Fourier decomposition of the surface density distribution of stellar particles projected along the short axis, $A_m
(R) = | \Sigma_j m_j \exp(i m \theta_j) |/\Sigma_j m_j$, where $\theta_j$ is the azimuthal angle of the $j$th star,
$m_j$ is its mass, and the sums run over all particles in a given bin of cylindrical radius
$R$. The galaxies were centered on their dynamical center, which for IllustrisTNG subhalos is defined as the position
of the particle with the minimum gravitational potential energy.
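In practice, the amplitude and phase of a given mode follow directly from the complex Fourier coefficient (a Python sketch; \texttt{x}, \texttt{y} are face-on coordinates relative to the dynamical center):
\begin{verbatim}
import numpy as np

def fourier_mode(x, y, mass, m=1):
    # A_m = |sum_j m_j exp(i m theta_j)| / sum_j m_j, plus its phase
    theta = np.arctan2(y, x)
    coeff = np.sum(mass * np.exp(1j * m * theta)) / np.sum(mass)
    return np.abs(coeff), np.angle(coeff)

# Example: A_1 in a chosen annulus of cylindrical radius
# R = np.hypot(x, y); sel = (R > r_in) & (R < r_out)
# A1, phi1 = fourier_mode(x[sel], y[sel], mass[sel], m=1)
\end{verbatim}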
In observational studies of lopsidedness, the measurements of the Fourier modes are usually performed in the radial
range of $(1.5 - 2.5) R_{\rm e}$, where $R_{\rm e}$ is the galaxy exponential radius \citep{Rix1995, Bournaud2005}. In
order to make the comparison with observations meaningful we performed the measurements in a similar radial bin. Since
the disks of the simulated galaxies are only approximately exponential, the stellar half-mass radius, $r_{1/2}$, is a
better, more robust measure of the galaxy size, used for estimating different properties of IllustrisTNG galaxies. In
addition, for an exponential disk $1.5 R_{\rm e}$ contains almost half the light (to be exact, $r_{1/2} = 1.68
R_{\rm e}$). Therefore we chose to estimate the Fourier modes from stars with cylindrical radii $R$ in the
$(1-2) r_{1/2}$ range.
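The relation $r_{1/2} = 1.68 R_{\rm e}$ quoted above follows from the enclosed mass of an exponential disk with $\Sigma(R) \propto e^{-R/R_{\rm e}}$,
\begin{equation}
M(<R) = M_{\rm tot} \left[ 1 - \left( 1 + \frac{R}{R_{\rm e}} \right) e^{-R/R_{\rm e}} \right],
\end{equation}
which equals $M_{\rm tot}/2$ for $R/R_{\rm e} \simeq 1.68$.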
The values of the $m = 1$ mode, $A_1$, provide a measure of the
asymmetry of the stellar distribution. Among the sample of 1912 disks, we identified 161 galaxies with significant
asymmetry $(A_1 > 0.1$) in this outer range, and in the following we refer to these galaxies as lopsided
disks. The distribution of the $A_1$ values for the 161 lopsided disks and the remaining galaxies with $A_1 < 0.1$ are
shown in Fig.~\ref{histogramsa1}. We see that most of the disks have very low values of the asymmetry, while in the
lopsided sample the values of $A_1$ reach 0.29, although only seven galaxies have $A_1 > 0.2$. The mean $A_1$
for the whole sample of 1912 galaxies is 0.051 and the median 0.044.
\section{Properties of lopsided disks}
\begin{table*}
\caption{Properties of three selected lopsided disks from IllustrisTNG at $z=0$.
The masses are the total masses of different components
and the $g-r$ color was estimated from all stars. The values of $A_1$ were measured in the range $(1-2)
r_{1/2}$ and the rest of the parameters within $2 r_{1/2}$.
}
\label{properties}
\centering
\begin{tabular}{c c r r c c c c c c c c c}
\hline\hline
ID \ & $M_{\rm stars}$ & $M_{\rm gas}$ \ \ \ \ & $M_{\rm dm}$ \ \ \ \ &$r_{1/2}$& $b/a$ & $c/a$ & $T$ & $A_1$ & $f_{\rm gas}$ & SFR & $g-r$ & $Z$ \\
& [$10^{10}$ M$_\odot$] & [$10^{10}$ M$_\odot$] & [$10^{11}$ M$_\odot$]& [kpc] & & & & & & [M$_\odot$ yr$^{-1}$] & [mag] & $[Z_\odot]$ \\ \hline
222275 & 1.52 & 1.61 \ \ \ \ & 2.06 \ \ \ \ & 3.17 & 0.92 & 0.42 & 0.18 & 0.165 & 0.27 & 2.67 & 0.25 & 1.56 \\
436552 & 1.39 & 3.15 \ \ \ \ & 2.35 \ \ \ \ & 3.82 & 0.87 & 0.43 & 0.30 & 0.289 & 0.22 & 2.63 & 0.26 & 1.65 \\
568873 & 1.27 & 5.21 \ \ \ \ & 3.50 \ \ \ \ & 4.86 & 0.90 & 0.45 & 0.24 & 0.260 & 0.33 & 1.44 & 0.33 & 1.42 \\
\hline
\end{tabular}
\end{table*}
In this section, we describe the properties of the sample of 161 lopsided disks in more detail, focusing on three
representative examples with high values of $A_1$ measured within $(1-2)
r_{1/2}$. The surface density distributions of these three galaxies in the face-on view at the present time
corresponding to the last simulation snapshot ($z=0$) are plotted in Fig.~\ref{surden}. The left column panels show the
stellar component and the right column shows the gas. The asymmetry of the stellar disks is clearly visible in the images,
and so is the asymmetry in the gas, although the latter has a different form. The gaseous disks are usually more
extended and less uniform, taking the form of rings and spirals. The basic properties of the three galaxies at the
present time are given in Table~\ref{properties}. The first column of the table gives the identification number of the
subhalo in the IllustrisTNG catalog and the next three list the total masses of the stars, gas, and dark matter. The
fifth column gives the stellar half-mass radii, $r_{1/2}$, and the next three columns list the shape parameters: $b/a$,
$c/a,$ and $T$. The values of $A_1$ are provided in the ninth column. The remaining columns list the values of gas
fraction, star formation rate, color, and metallicity.
\begin{figure}
\centering
\includegraphics[width=4.4cm]{surdenstars_222275.eps}
\includegraphics[width=4.4cm]{surdengas_222275.eps} \\
\vspace{0.3cm}
\includegraphics[width=4.4cm]{surdenstars_436552.eps}
\includegraphics[width=4.4cm]{surdengas_436552.eps} \\
\vspace{0.3cm}
\includegraphics[width=4.4cm]{surdenstars_568873.eps}
\includegraphics[width=4.4cm]{surdengas_568873.eps} \\
\caption{Surface density distributions of stars (left column) and gas (right column) in the face-on view at the present
time for three selected lopsided disks from IllustrisTNG (from top to bottom). The surface densities are
normalized to the central value in each case and the contours are equally spaced in $\log \Sigma$.}
\label{surden}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{a1phaseprofiles.eps}
\caption{Profiles of $m = 1$ Fourier mode $A_1 (R)$ (upper panel) and its phase angle $\phi_1 (R)$ (lower panel)
at the present time for three selected lopsided disks from IllustrisTNG. Measurements were carried out in bins of
$\Delta R = 0.5$ kpc of cylindrical radius $R$ in the face-on projection.}
\label{a1phaseprofiles}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.5cm]{shape_all.eps}
\caption{Evolution of shape parameters for three selected lopsided disks from IllustrisTNG. The blue, red and green
lines show, respectively, the axis ratios $b/a$, $c/a$ and the triaxiality parameter $T$. Measurements were done within
$2 r_{1/2}$.}
\label{shape_all}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.6cm]{oddmodestime_all.eps}
\caption{Evolution of odd Fourier modes $A_1$, $A_3,$ and $A_5$ for three selected lopsided disks from IllustrisTNG.
Measurements were done in the range of $(1-2) r_{1/2}$.}
\label{oddmodestime_all}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.5cm]{mass_all.eps}
\caption{Evolution of total mass in different components for the three selected lopsided disks from
IllustrisTNG. The red, green, and blue lines show, respectively, the stellar, gas, and dark matter masses.}
\label{mass_all}
\end{figure}
In order to quantify the asymmetry in more detail for each of the 161 lopsided galaxies, we calculated the profiles of
the $A_1$ mode and their phase angles as a function of the cylindrical radius $R$ in the face-on projection. Three
examples of such profiles for the selected lopsided galaxies are shown in Fig.~\ref{a1phaseprofiles} as a
function of $R/r_{1/2}$. Interestingly, the shapes of the $A_1$ profiles (the upper panel of
Fig.~\ref{a1phaseprofiles}) are similar in the three cases: they have low values of $A_1$, on the order of 0.1,
near the center, which increase with radius, reaching a maximum of about 0.2-0.35, and finally decrease again
to about 0.2 at the outer radii. The peak of the $A_1$ profile is reflected in the behavior of the phase (the
lower panel of Fig.~\ref{a1phaseprofiles}) in the sense that the phase remains constant in the range of radii where the
peak occurs. This means that $A_1$ is a global mode, extending over a substantial fraction of the disk: $(0.5 - 2)
R/r_{1/2}$. We find that most of the 161 lopsided galaxies have similar shapes of the $A_1 (R)$ and $\phi_1 (R)$
profiles, which means that they seem to be general properties of lopsided disks.
\begin{figure}
\centering
\includegraphics[width=7.5cm]{sfr.eps}
\caption{Evolution of star formation rate within $2 r_{1/2}$ for the three selected lopsided disks from
IllustrisTNG.}
\label{sfr}
\end{figure}
The first measurements of the $A_1$ profiles in real galaxies were performed by \citet{Rix1995}, who
discovered that they grow in the outer parts of galaxies. \citet{Rudnick1998} averaged the profiles of $A_1$ modes for
lopsided galaxies among their early-type disk sample and found them to be increasing with radius in a way similar to
the ones shown in the upper panel of Fig.~\ref{a1phaseprofiles}. \citet{Angiras2006} and \citet{Angiras2007} measured
the profiles of $A_1$ in HI surface density maps of galaxies in the Eridanus and Ursa Major groups, respectively, while
\citet{Eymeren2011b} performed such measurements for 70 galaxies from the Westerbork HI Survey of Spiral and Irregular
Galaxies. It turns out that in many cases their $A_1$ profiles are growing with radius and similar to those presented
here, although the variety of profile shapes among the observed galaxies is much larger.
In addition to the properties in the final simulation output, which we used to define our sample, we also looked at
the evolution of the lopsided galaxies in time. Measurements of the axis ratios $b/a$, $c/a,$ and the triaxiality
parameter $T$ (within $2 r_{1/2}$) reveal that, except for their present lopsidedness, all these galaxies are bona fide
disks that preserved their disky morphology for a long time. Three examples of the evolution of these shape parameters
are shown in Fig.~\ref{shape_all} for our selected lopsided disks. Their history seems quite uneventful, with high $b/a$
and low $T$ values characteristic of oblate systems, preserved for a long time until the present. Significant
departures from this quiet evolution can only be seen for ID222275 around $t = 6$ Gyr when this galaxy experienced two
mergers and was temporarily distorted. The evolution of these properties is similarly simple for the rest of
our lopsided disks, except for those with bars which have lower $b/a$ and higher $T$ resulting from the prolate
component.
In spite of their long and quiet existence as disks, the lopsided galaxies still must have acquired their asymmetry at
some point in their history. It is thus interesting to check the evolution of the odd Fourier modes in time.
Examples of these are plotted in Fig.~\ref{oddmodestime_all}, where in addition to $A_1$ we also include the higher
modes $A_3$ and $A_5$. Interestingly, the present high values of $A_1$ are often a very recent occurrence,
appearing only in the last one or two simulation outputs, although they are sometimes (as in the case of ID436552)
preceded by a longer period of enhanced or oscillating $A_1$, but at a lower level. The strong peaks of $A_1$ were more
frequent in the past, when the forming galaxies more often underwent mergers and were distorted as a result.
Notable examples of such events are the peaks of $A_1$ at $t = (7-8)$ Gyr for ID222275 (the upper panel of
Fig.~\ref{oddmodestime_all}) corresponding to the strong variation of the shape visible in Fig.~\ref{shape_all},
resulting from mergers, as mentioned above. We note that the $A_3$ and $A_5$ modes are usually subdominant with
respect to $A_1$, so the latter is the most useful measure and is sufficient to describe the departures from symmetry.
\section{Origin of the lopsided shape}
A few possible scenarios have been proposed in the literature regarding the origin of lopsided galactic disks
\citep{Bournaud2005, Mapelli2008, Ghosh2021}. The mechanisms include galactic interactions in the form of mergers and
flybys, ram pressure stripping of the gas, and asymmetric star formation in isolated disks. In this section, we try to
discriminate between them and determine whether there is a prevalent way to form lopsided disks.
It is relatively easy to check if the galaxies belonging to our lopsided sample are affected by tidal interactions with
more massive objects. Such events manifest themselves very clearly in the evolution of the total mass in dark matter
and gas in the IllustrisTNG data. When the galaxy in question passes near a more massive object, its mass is stripped
and assigned to the more massive companion. Such effects can be considered significant if a galaxy loses a
substantial fraction of its maximum mass. We find that for our lopsided galaxy sample, 38 out of 161 objects (24\%)
lost more than 10\% of their dark masses and only ten (6\%) lost more than 50\%. The most affected galaxy lost 80\%
of its dark mass, so the interactions were never strong enough to strip almost all of the galaxy's dark matter as is
the case of objects on tight orbits around a massive galaxy cluster \citep{Lokas2020}.
A good example of this subsample of galaxies is ID222275, which about 1 Gyr ago interacted with a group of
galaxies including two objects of mass of the order of $10^{12}$ M$_\odot$. The mass loss in dark matter and gas in
this galaxy is clearly visible in the upper panel of Fig.~\ref{mass_all}, which shows the evolution of the total mass in time.
As a result of this interaction, the gas was ram-pressure stripped and its distribution is similar to that of jellyfish
galaxies \citep{Yun2019}. The changed distribution of the gas could have affected the stellar disk and caused its
departure from symmetry. A similar process could be at work in other galaxies of this subsample.
\begin{figure*}
\centering
\includegraphics[width=7.5cm]{histogramsgf1.eps}
\hspace{0.5cm}
\includegraphics[width=7.5cm]{histogramssfr1.eps}\\
\vspace{0.3cm}
\includegraphics[width=7.5cm]{histogramscol1.eps}
\hspace{0.5cm}
\includegraphics[width=7.5cm]{histogramsmet1.eps}
\caption{Distributions of gas fractions (upper left), star formation rates (upper right), color (lower left),
and metallicity (lower right) for lopsided disks with
$A_1 > 0.1$ (blue) and remaining disks with $A_1 < 0.1$ (red). Measurements of the properties were
done within $2 r_{1/2}$, except for the color, which is estimated from all stars in the galaxy. All histograms were
normalized to unity.}
\label{histogramsgfsfr}
\end{figure*}
By inspection of the merger trees of the lopsided disks available in the simulation data, we found that only 15
out of 161 (9\%) galaxies experienced a significant gas-rich merger in their recent history ($z < 0.1$), although
almost all continue to accrete small dark subhalos with mass of the order of $10^{8}$ M$_\odot$ until the
present epoch. It is extremely difficult to ascertain to what extent these more significant mergers could be the cause of
the present lopsidedness of the disks.
An example of galaxies belonging to this subsample is ID436552. It has experienced a relatively recent ($z \sim 0.1$)
merger with a satellite that had orbited it for a long time with five pericenter passages. The disturbance caused by
the merger may have increased the inner gas content and could have caused the asymmetric star formation resulting in the
lopsided stellar disk. However, this galaxy is also approaching a bigger neighbor at present, losing
dark matter and gas in the process; this is shown in the middle panel of Fig.~\ref{mass_all}. Therefore, ram pressure
stripping of the gas in the outer parts could also affect its dynamics and shape.
Another mechanism for generating disk lopsidedness, also related to interactions, relies on a long-lived halo
distortion caused by a tidal encounter \citep{Weinberg1995} and the disk's response to it, which results in disk
lopsidedness \citep{Jog1997}. However, the resolution of the dark matter halo is not sufficient in the IllustrisTNG
simulations to study such subtle effects; that is, there are too few dark particles in the radial range occupied by the
stars and their softening length is quite large.
For the remaining galaxies, we were unable to identify any interactions that could have caused the distortion of the
shape. These are isolated objects still growing in mass and forming stars. Their common feature is that even if their
star formation rates (SFR) are not very high, they all contain a significant amount of gas. Still, about 40\% of
lopsided galaxies show a significant increase in SFR in their recent history. This trend is also visible in the
evolution of SFR for our three selected lopsided disks shown in Fig.~\ref{sfr}. It seems, therefore, that the most
frequent mechanism for the formation of lopsided disks is asymmetric star formation.
The galaxy ID568873 is a good example of this category. As can be seen from the lower panel of Fig.~\ref{mass_all},
its stellar and gas contents do not show any abrupt changes in recent history. Although its SFR was not very high
recently, its stellar disk became decidedly lopsided in the final simulation output.
In order to verify that indeed asymmetric star formation is the main cause of lopsidedness, we compared the gas
fractions $f_g = M_{\rm gas}/(M_{\rm gas} + M_{\rm stars})$ and SFRs at the present time for our sample of lopsided
disks and the remaining disks in the simulation. The histograms showing the distributions of these quantities within $2
r_{1/2}$ are plotted in the two upper panels of Fig.~\ref{histogramsgfsfr}. We can see that the distributions for these
two samples of galaxies are very different. There are no gas-free galaxies among the lopsided disks, and their gas
fractions can be as high as 0.6, with the most typical values being 0.2-0.25 (with the median of the distribution equal to
0.23). On the other hand, a significant number of objects are completely devoid of gas among the remaining disks,
and their gas fractions are typically lower (with the median of 0.18). A similar difference is seen in the distribution
of the SFRs. All lopsided disks maintain a non-zero level of star formation (with the median of 1.6 M$_\odot$
yr$^{-1}$), while the remaining disks include many quiescent objects and, on average, have a lower SFR (with the median
of 1.0 M$_\odot$ yr$^{-1}$). These two measures of activity are obviously not independent, since in the IllustrisTNG
simulations the star formation rate is determined by the gas density, and for all galaxies the gas mass and the SFR within $2
r_{1/2}$ follow each other closely.
The two lower panels of Fig.~\ref{histogramsgfsfr} compare the distributions of the $g-r$ color for the whole galaxy
and the metallicity of stars within $2 r_{1/2}$ for the lopsided and the remaining disks. We can see that the asymmetric
galaxies with the median $g-r$ color of 0.36 are decidedly bluer than the rest of the disk population, which have the
median of 0.43. We note that there is only one lopsided disk with $g-r > 0.6$, which can be considered as a
threshold separating the red from the blue population \citep{Nelson2018}, while among the rest of the disks there are
many red galaxies. The metallicity distributions of the lopsided and symmetric disks are also slightly
different: the asymmetric galaxies typically have lower metallicity with the median of 1.62 $Z_\odot$, while the
remaining galaxies are on average more metal-rich with the median of 1.66 $Z_\odot$. These distributions are
consistent with the picture involving the accretion of low-metallicity gas onto the galaxy as one of the scenarios
leading to lopsidedness.
\begin{figure}
\centering
\includegraphics[width=7cm]{a1starsgas.eps}
\caption{Values of $m = 1$ Fourier mode $A_1$ for the stars versus those of the gas within $(1-2) r_{1/2}$ for 161
lopsided galaxies at the present time. The points were color-coded by the gas fraction measured within $2 r_{1/2}$. The
diagonal line indicates the equality of the two quantities.
For clarity, two data points with $A_1 > 0.4$ for the gas were not included in
the plot. }
\label{a1starsgas}
\end{figure}
\section{Discussion}
Using the sample of disk galaxies from the IllustrisTNG project, we calculated the measures of asymmetry in the form of
the $A_1$ Fourier mode of the stellar distribution in the face-on view and identified 161 objects with $A_1 > 0.1$
in the radial range between one and two stellar half-mass radii. These lopsided galaxies are quite similar to
each other in many aspects. They all evolved rather quietly, forming disks early and preserving their oblate shapes for
a long time. In most of them, the value of $A_1$ varies strongly throughout the galaxy's history, especially early
on, reflecting a higher incidence of mergers at that time. In at most 1/3 of the lopsided disks, the asymmetry could
be induced by interactions, either by significant wet mergers or tidal effects and ram pressure stripping caused by more
massive neighbors. The lopsided disks exhibit higher gas fractions and SFRs, bluer colors, and slightly lower
metallicities than the remaining disks, which suggests that asymmetric star formation following the accretion of
low-metallicity gas from the galaxy neighborhood is the dominant mechanism leading to their formation.
We note, however, that this scenario probably cannot explain the asymmetry detected in the old stellar component
\citep{Rix1995, Zaritsky2013}, since the asymmetry in stars is likely to be smeared
out by differential rotation in a few Gyr.
Unfortunately, it is not possible to confirm this scenario more convincingly since the gas cannot be directly traced in
the IllustrisTNG simulations. Still, we can verify whether the disks are also lopsided in the gas component, as this asymmetric
gas could be forming stars and causing the formation of asymmetric stellar disks. For this purpose, we measured
the Fourier mode $A_1$ for the gas in the same region, namely within $(1-2) r_{1/2}$. The values of $A_1$ for the stars
are shown as a function of $A_1$ for the gas in Fig.~\ref{a1starsgas} with the points color-coded by the gas fraction
in the galaxy within $2 r_{1/2}$. The diagonal line indicates the equality between the values for the two components.
We can see that there is little correlation between the $m = 1$ mode for the stars and for the gas, although the points
with a higher gas fraction (green) cluster more around the line. We note that $A_1$ can be much higher for the gas than
for the stars, reaching values as high as 0.66. However, this is the case mostly for galaxies with a low gas fraction
within $2 r_{1/2}$, which means that the number of gas particles included in the calculation is low and the results may
be quite noisy. In fact, the majority of galaxies (88 out of 161, or 55\%) have their $A_1$ for the gas lower than for
the stars. We conclude that there is no direct relation between the present global asymmetry of the gas and the stars.
In general, the gas distribution is more extended and takes the form of rings and spirals, which may contribute randomly
to the measurement of $A_1$. In addition, according to the IllustrisTNG model the stars
are formed from gas with density above some threshold, so the asymmetry in the stars may be caused by stars
forming in a particular overdense region of the gas that has little relation to its global distribution.
Moreover, the present asymmetry of the gas may differ from the one at the time the stars were formed. Measuring
such subtle effects on small subpopulations of stars of different age would, however, require much larger resolution
than is presently available for IllustrisTNG galaxies.
\begin{figure}
\centering
\includegraphics[width=7.5cm]{histogramsa21.eps}
\caption{Distributions of $m = 2$ Fourier mode values $A_2$ for lopsided disks with $A_1 > 0.1$ (blue) and
remaining disks with $A_1 < 0.1$ (red). Measurements of $A_2$ were done within $2 r_{1/2}$, and the histograms were
normalized to unity. }
\label{histogramsa2}
\end{figure}
It is also interesting to look at the correlations between the lopsidedness of the stellar disks and the presence of
other morphological features such as bars or spiral arms. Although spiral arms are not well resolved in the Illustris
TNG100 simulation used in this study, bars can be reliably detected using the $m = 2$ Fourier mode and have been
studied in the past \citep{Peschken2019, Rosas2020, Zhou2020, Zhao2020, Lokas2021a, Lokas2021b}. In order to address
this issue, we calculated the bar mode $A_2$ within $2 r_{1/2}$ for the lopsided disks and the remaining ones in our
sample. The distributions of this quantity for the two samples are shown in Fig.~\ref{histogramsa2}, and we can see that
they are quite similar with the median $A_2$ values of 0.059 for the lopsided sample and 0.062 for the remaining disks.
In addition, adopting the threshold of $A_2 > 0.2$ as the value indicating a strong bar, we note that only seven
out of 161 lopsided disks (4\%) are strongly barred, while for the remaining disks this fraction rises to 11\%. This
result disagrees with that of \citet{Bournaud2005}, whose observational sample of galaxies showed that
the presence of lopsidedness is correlated with the presence of bars or spiral arms.
Lopsidedness may occur not only in disks but also in bars, and it may be interesting to consider a possible relation
between the two phenomena. Recently, we identified a few lopsided bars in the IllustrisTNG simulation
\citep{Lokas2021b} that bear some resemblance to the bar in the Large Magellanic Cloud \citep{Marel2001, Jacyszyn2016}.
These objects were found among the bar-like galaxies studied earlier \citep{Lokas2021a}, which have almost the whole
stellar component in the form of a prolate spheroid with a negligible disk. The lopsided bars are
characterized by significant values of odd Fourier modes $A_3$ and $A_5$ with $A_1$ subdominant with respect to them.
This is in contrast with the lopsided disks studied here in which $A_1$ is always the strongest odd mode.
We checked our seven lopsided disks with the strongest bars for asymmetry in the bar and found that none of the
bars is strongly lopsided; thus, there are no objects that possess both a lopsided disk and a
lopsided bar. The galaxy with the strongest bar asymmetry among those (ID523489) has the $A_3$ value within $2
r_{1/2}$ (typically the strongest odd mode in lopsided bars) on the level of 0.06. The only connection between the two
phenomena thus seems to be the possibility that the formation of lopsided bars is preceded by a temporary occurrence of
a lopsided disk, but quite often the time difference between the two is too large to warrant a causal relation.
In comparison with observations, our analysis seems to yield a much smaller fraction of lopsided disks in the whole
population. We identified 161 galaxies out of 1912 disks as lopsided, which only accounts for about 8\%, while in
observations this percentage is estimated to be at the level of 30\% \citep{Jog2009}. We note that these
numbers should be comparable since we made the measurements in a similar radial range as in observations and
applied the threshold of $A_1 > 0.1$ recommended in observational studies \citep{Bournaud2005} as the one to be used to
distinguish lopsided galaxies from the symmetric ones. The estimated mean value of $A_1$ for the whole sample of 1912
disk galaxies from IllustrisTNG is 0.051, while in observational studies this value is closer to 0.1
\citep{Zaritsky1997, Bournaud2005}. The $A_1$ profiles of the simulated objects show similar radial variation to
nearby galaxies, although for most cases they tend to decrease rather than saturate in the outer disk.
In spite of this, we found that the simulations
reproduce the trends found in observations \citep{Reichard2009}, namely that lopsided disks contain more gas, have
higher SFRs, lower metallicity, and bluer colors than the rest of the late-type galaxies.
If lopsidedness is related to the presence of star-forming, young populations, it is possible that the
low fraction of asymmetric galactic disks in IllustrisTNG is caused by the overquenching effect known to exist in
these simulations \citep{Angthopo2021}. If too many galaxies prematurely stop forming stars in IllustrisTNG,
they are also less likely to generate lopsided stellar distributions.
However, we cannot discard the possibility that the correlation between the star formation and lopsidedness can
be explained by the reversed causal relation; namely that the lopsidedness, originating from any mechanism,
affects the disk dynamics and leads to an increased, asymmetric star formation \citep{Jog1997}.
It is also possible that the limited resolution of the simulations does not allow us to reproduce the subtler
effects in the dynamics of lopsided disks adequately. The simulated disks are, for example, significantly thicker than
observed \citep{Haslbauer2022}, which may affect their dynamics, and in particular the disk response to halo
distortion proposed as one of the scenarios for generating lopsidedness. This could also explain the lack of
correlation between the presence of asymmetry and the presence of a bar.
\begin{acknowledgements}
I am grateful to the referee, Chanda Jog, for very useful comments and to the IllustrisTNG team for making
their simulations publicly available.
\end{acknowledgements}
\section{Introduction}
\label{sec:introduction}
The announcement of the five-sigma discovery of the Higgs boson at the
LHC on July 4th 2012 officially launched a new program of precision
tests of the standard model (SM) in Higgs physics. By precisely
measuring the production and decay rates of the Higgs boson we aim to
test if the Higgs' couplings agree with SM predictions, and -- if they
don't agree -- we hope to obtain hints about physics beyond the SM.
So far, the focus has been on the largest couplings of the Higgs such
as the couplings to $W$ and $Z$ gauge bosons as well as the top,
bottom and $\tau$ Yukawa couplings~\cite{Khachatryan:2014jba,
atlas:higgs}\footnote{Recently, methods for measuring first and
second generation quark Yukawa couplings were proposed as
well~\cite{Bodwin:2013gca, Delaunay:2013pja, Kagan:2014ila}.}. In
this article, we concentrate instead on the coupling of the Higgs to
electrons which is of course predicted to be one of the smallest
couplings of the Higgs in the SM. We ask what we know about the
electron Yukawa coupling from a purely experimental point of view. It
might be reasonable to expect that new physics in the Higgs sector
couples more strongly to top quarks than to electrons; however,
measuring the Higgs coupling to electrons is interesting precisely
because the SM prediction for the Yukawa coupling is so small. A
higher-dimensional operator from new physics can easily compete with
the SM Yukawa coupling or can even dominate.
In Section~\ref{sec:coupling} we briefly discuss how
higher-dimensional operators can modify the coupling of the Higgs to
electrons. In Section~\ref{sec:direct} we analyze the sensitivity to a
modified Higgs-electron coupling coming from searches for Higgs decays
into $e^+e^-$ at the LHC and at future hadron colliders, as well as
from Higgs production at electron-positron colliders. Indirect
constraints on a modified Higgs-electron coupling from the electric
dipole moment (EDM) and the anomalous magnetic dipole moment (MDM) of
the electron, as well as from rare $B$-meson decays into $e^+e^-$
final states are discussed in Section~\ref{sec:indirect}. We conclude
in Section~\ref{sec:conclusions} with a summary of current and future
constraints. In Appendix~\ref{sec:twoloopEDM} we provide analytic
expressions for the complete set of relevant two-loop contributions to
the electron EDM and MDM that are induced by a modified Higgs-electron
coupling. In Appendix~\ref{sec:loophole} we show that in the Standard
Model the Higgs electron coupling is necessarily suppressed by the
electron mass to all loop orders.
\section{The Higgs-electron coupling beyond the SM}
\label{sec:coupling}
Within the SM, both the electron mass $m_e$ and the Higgs-electron
coupling $g_{eeh}$ are completely determined by the Yukawa coupling
$y_e$ of the first generation leptons to the Higgs doublet $\varphi$,
\begin{equation}
\mathcal{L}_\text{SM} \supset y_e^\text{SM} \bar{\ell}_L \varphi e_R
\, + h.c. \,.
\end{equation}
After electroweak symmetry breaking, we can parametrize $\varphi =
(G^+, (v + h + iG^0)/\sqrt{2})^T$ and thus obtain the electron mass
term and the coupling of the physical Higgs boson $h$ to left and
right handed electrons:
\begin{equation}
\mathcal{L} \supset m_e \bar{e}_L e_R + \frac{g_{eeh}}{\sqrt{2}} \bar{e}_L e_R h \,+ h.c. \,.
\end{equation}
Given the known electron mass $m_e \simeq 0.511$ MeV and the Higgs
vacuum expectation value (vev)
$v=(\sqrt{2} G_F)^{-1/2}\simeq 246$ GeV,
one can predict the Higgs-electron coupling in the SM
\begin{eqnarray}
\label{eq:Eyukawa}
g_{eeh}^\text{SM} = y_e^\text{SM} = \sqrt{2} m_e / v \simeq 2.9 \times 10^{-6} ~.
\end{eqnarray}
We see that the Higgs-electron coupling is of the order of $10^{-6}$
and real. At the quantum level, the magnitude of the coupling receives
well-known perturbative corrections starting at order $y_e \alpha$,
where $\alpha$ is the fine-structure constant, and requires a proper
definition of the quantities involved. We will ignore all such
complications because the experimental uncertainties will turn out to
be much larger than the change in the coupling due to running. In
Appendix~\ref{sec:loophole} we give a general proof based on chiral symmetry
showing that all quantum corrections to the Higgs-electron coupling in the SM
are proportional to the electron mass and are therefore small.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.28\textwidth]{figs/vectorleptons.pdf}~~~~~
\includegraphics[width=0.3\textwidth]{figs/scalar.pdf}~~~~~
\includegraphics[width=0.3\textwidth]{figs/heavyvector.pdf}
\end{center}
\caption{Possible origins of the dimension-six operator in
Eq.~\eqref{dim6}. Left: mixing of the electrons with heavy
vector-like leptons. Middle: mixing of the SM Higgs doublet with a
heavy scalar doublet that couples to electrons. Right: exchange of a
heavy vector.
\label{fig:dim6}}
\end{figure}
How can the coupling of the Higgs to electrons differ from the value
predicted in \refeq{Eyukawa}? Assuming that the field content of the
SM provides an adequate description of physics at the weak scale, any
new physics contributions can be parametrized by higher-dimensional
operators respecting the SM gauge symmetries. Thus, to modify the
Higgs-electron coupling, we must introduce a higher-dimensional
operator coupling the Higgs to electrons that changes the relationship
between the electron mass and the Yukawa coupling. The
lowest-dimension operators which do this are of dimension six and have
zero, one, or two derivatives
\begin{equation} \label{dim6}
\begin{split}
\mathcal{L}_\text{dim6} &\supset \frac{c_0}{M^2} \varphi^\dagger \varphi \bar{\ell}_L \varphi e_R +h.c. \\
&+\frac{c_{1L}}{M^2} \bar{\ell}_L \gamma^\mu \ell_L \partial_\mu (\varphi^\dagger \varphi)
+\frac{c'_{1L}}{M^2} \bar{\ell}_L \gamma^\mu \ell_L\, ({\varphi^\dagger}\!\!
\stackrel{\leftrightarrow}{D}_\mu\! \varphi )
+ (\ell_L \leftrightarrow e_R) \\
&+\frac{c_2}{M^2} \bar{\ell}_L e_R D^2 \varphi + \ldots
\end{split}
\end{equation}
where $M$ is a new-physics scale, $c_0$ and $c_2$ are complex
couplings, and the $c_{1}$ couplings are real. Such operators could
arise from mixing of the leptons with heavy vector-like
fermions, from mixing of the Higgs with a heavy scalar doublet, or
from the exchange of new vector bosons (see Fig.~\ref{fig:dim6}).
Generically, we would expect that the couplings $c_i$ are $3 \times 3$
matrices in lepton flavor space such that the operators in
Eq.~\eqref{dim6} not only modify the Higgs-electron coupling but also
alter the other Higgs-lepton couplings, thereby also inducing
lepton-flavor violating Higgs couplings. Possible relations among the
new physics effects in these couplings are, however, model dependent
and their discussion is beyond the scope of this work.
We now argue that the only dimension-six operator which can
significantly modify the coupling of on-shell electrons to the Higgs
is the one proportional to $c_0$. To start, note that the operators
in the second line preserve chiral symmetry and might therefore be
expected to have a larger coefficient than those in the first and
third line which do break chiral symmetry. Yet, the operators
proportional to $c'_{1L}$ and $c'_{1R}$ do not contribute to the
Higgs-electron coupling. The operators proportional to $c_{1L}$ and
$c_{1R}$ do modify the real part of the Higgs-electron coupling;
however, after integrating by parts and applying the equations of
motion one sees that these contributions are suppressed by a factor
$m_e/M$ in addition to $v/M$ and are therefore too small to be
interesting. In addition, there are several potentially interesting
two-derivative operators, but only the one shown in the third line of
Eq.~\eqref{dim6} gives contributions to the Higgs-electron coupling
which are not suppressed by powers of the electron mass. In Higgs
production or decay, the Higgs boson is on shell and the derivatives
can simply be replaced by $M_h^2$, thus allowing the effects of this
operator to be absorbed by a shift of $c_0$. For low-energy
experiments the derivatives get replaced by small momenta and the
effects of the $c_2$ operator are negligible. We will therefore
concentrate on the $c_0$ operator as a plausible source of observable
deviations in the Higgs-electron coupling from now on.
Expanding the Higgs doublet about its vev, the $c_0$ operator in
Eq.~\eqref{dim6} leads to corrections to the electron mass and the
Higgs-electron coupling of order $v^2/M^2$:
\begin{equation}\label{mod}
\begin{split}
m_e &= \frac{v}{\sqrt{2}}\left( y_e + \frac{c_0}{2} \frac{v^2}{M^2} \right) \,, \\
g_{eeh} &= y_e + \frac{3c_0}{2} \frac{v^2}{M^2} = \frac{\sqrt{2} m_e}{v} + c_0 \frac{v^2}{M^2} \,.
\end{split}
\end{equation}
Note the factor of 3 in the ratio of the new physics contributions to
$g_{eeh}$ and $m_e$ relative to the SM contributions. In the presence
of both the SM Yukawa coupling and the dimension-six operator, the
Higgs-electron coupling and the electron mass become independent
parameters\footnote{On the other hand, if one can neglect
contributions of operators with mass dimension higher than six, the
effective couplings of electrons to more than one Higgs boson are
fixed in terms of $g_{eeh}$ and $m_e$.}. As $m_e \ll v$, the new
physics correction to the Higgs-electron coupling can be sizable, even
for very large new physics scales $M \gg v$. However, one should keep
in mind that $g_{eeh} \gg g_{eeh}^\text{SM}$ is only possible if there
is a significant cancellation between the contributions to the
electron mass coming from the Yukawa coupling and the
higher-dimensional operator, cf. Eq.~\eqref{mod}.
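To see explicitly where the factor of 3 comes from, one can expand the $c_0$ operator about the vev using the neutral component of the doublet:
\begin{equation}
\frac{c_0}{M^2}\, \varphi^\dagger \varphi\, \bar{\ell}_L \varphi e_R + h.c. \;\to\; \frac{c_0}{2\sqrt{2}\, M^2}\, (v+h)^3\, \bar{e}_L e_R + h.c.
= \frac{c_0 v^2}{2 M^2} \left( \frac{v}{\sqrt{2}} + \frac{3}{\sqrt{2}}\, h + \ldots \right) \bar{e}_L e_R + h.c. \,,
\end{equation}
so the term linear in $h$ picks up the combinatorial factor of 3 from $(v+h)^3$, while the constant term corrects the electron mass.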
Note that given the smallness of the electron mass, operators of
dimension greater than 6 may also play a role in determining the
Higgs-electron coupling and the electron mass. For instance, in the
models by Giudice and Lebedev~\cite{Giudice:2008uua} $g_{eeh}$ is
dominated by contributions from dimension-ten operators, and could be
(naturally) a factor of ${\mathcal O}(10)$ larger than the SM
prediction.
Finally we point out that $g_{eeh}$ can in general be complex; a
non-vanishing imaginary part of $g_{eeh}$ would be a clear sign of new
physics. For a sizable phase to arise there have to be at least two
different operators contributing to $g_{eeh}$, with coefficients of
similar magnitude and different phases (for instance, the
dimension-four and -six contributions in Eq.~\eqref{mod} with $y_e
\sim c_0 v^2/ M^2$). The electron mass term can always be made real by
an appropriate choice of the phase of the electron fields. The
Higgs-electron interaction then has, in general, a complex phase
relative to the mass term.
In order not to commit ourselves to a specific scenario, we find it
convenient to parametrize the modified Higgs-electron coupling more
generally as
\begin{equation} \label{eq:LagYebroken}
g_{eeh} = \kappa_e \frac{\sqrt{2} m_e}{v} \,,
\end{equation}
where $ \kappa_e $ is a complex parameter describing the relative
deviation from the SM prediction $\kappa_e^\text{SM}=1$. In the case
that only dimension-six operators are relevant, we can use the
relation $\kappa_e = 1 + c_0 v^3/(\sqrt{2} m_e M^2)$ together with the
assumption that $c_0$ is a coefficient of order unity to translate a bound on $\kappa_e$
into a lower bound on the NP scale $M$.
Throughout this article we set all couplings of the Higgs boson to
particles other than the electron to their SM values.
\section{Constraints from direct searches} \label{sec:direct}
The coupling of the Higgs to electrons leads to the decay of the Higgs
into electrons. Moreover, it allows resonant production of Higgs
bosons in electron-positron collisions in the $s$-channel. In this
section we will discuss the sensitivity to a modified Higgs-electron
coupling of searches for $h \to e^+e^-$ decays at hadron colliders and
of $s$-channel Higgs production at $e^+e^-$ colliders.
\subsection{Higgs decays at the LHC and beyond}
The recent search for SM Higgs decays in the $\mu^+ \mu^-$ and $e^+
e^-$ channels by CMS~\cite{Khachatryan:2014aep} makes it possible to set a bound
on the Higgs-electron coupling. Modifying the Higgs-electron coupling
will change both the $h \to e^+ e^-$ partial width and the total Higgs
decay width. Accordingly, we find for the modified branching ratio
\begin{equation}
\text{Br}(h \to e^+ e^-) = \frac{|\kappa_e|^2 \, \text{Br}(h \to e^+
e^-)_\text{SM}}{1+(|\kappa_e|^2 - 1) \, \text{Br}(h \to e^+ e^-)_\text{SM}} \,,
\end{equation}
where we neglected terms that are further suppressed by $m_e^2 /
M_h^2$. For a Higgs mass of $M_h = 125.7$~GeV~\cite{Agashe:2014kda},
the SM prediction for the branching ratio
reads~\cite{Heinemeyer:2013tqa}
\begin{equation}
\text{Br}(h \to e^+ e^-)_\text{SM} \simeq 5.1 \times 10^{-9}\,.
\end{equation}
Assuming the SM Higgs production cross section, CMS finds an upper
bound on the branching ratio of~\cite{Khachatryan:2014aep}
\begin{equation}
\text{Br}(h \to e^+ e^-) < 0.0019 ~~@~95\%~\text{C.L.}\,.
\end{equation}
This results in the constraint
\begin{equation} \label{constraint}
|\kappa_e| < 611\,.
\end{equation}
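The numerical value follows from inverting the branching ratio formula above (a quick Python cross-check, which also reproduces the new-physics scale quoted in the next paragraph):
\begin{verbatim}
import numpy as np

br_sm, br_max = 5.1e-9, 0.0019     # SM prediction, 95% C.L. bound
# invert Br = k2*br_sm / (1 + (k2 - 1)*br_sm), with k2 = |kappa_e|^2
k2 = br_max * (1 - br_sm) / (br_sm * (1 - br_max))
kappa = np.sqrt(k2)                 # ~ 611
v, m_e = 246.0, 0.511e-3            # GeV
# kappa_e = 1 + c_0 v^3 / (sqrt(2) m_e M^2), setting c_0 = 1
M = np.sqrt(v**3 / (np.sqrt(2) * m_e * (kappa - 1)))   # ~ 5.8e3 GeV
print(kappa, M)
\end{verbatim}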
Setting the new physics coupling of the dimension-six operator
in~\eqref{dim6} to $c_0 = 1$ we can translate the constraint on
$\kappa_e$ into a constraint on the new physics scale $M$. We find $M
> 5.8$~TeV. We expect that the bound on $\kappa_e$ from $h\to e^+e^-$
can be improved in the future at the LHC. The gluon fusion Higgs
production cross section increases by approximately by a factor 2.5
going from 8~TeV to 14~TeV~\cite{Heinemeyer:2013tqa,
Dittmaier:2011ti}. Assuming that the sensitivity to the $h \to
e^+e^-$ decay scales with the square root of the number of Higgs
events, we expect sensitivities to $|\kappa_e| \sim 260 $ with 300/fb
and $|\kappa_e| \sim 150 $ with 3/ab. At a 100 TeV proton-proton
collider the Higgs production cross section increases by another
factor of $\sim 15$~\cite{ggF}. An integrated luminosity of 3/ab might
allow the sensitivity to be improved down to $|\kappa_e|\sim 75$.
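These projections follow from a simple scaling argument: the bound on the branching ratio scales as $1/\sqrt{N}$ with the number $N$ of Higgs events, and $|\kappa_e| \propto \sqrt{\text{Br}}$, so the sensitivity improves as $N^{-1/4}$. A Python sketch, under the assumption that the 8~TeV CMS analysis corresponds to roughly 20/fb:
\begin{verbatim}
scenarios = {                      # N relative to 8 TeV, ~20/fb
    "LHC 14 TeV, 300/fb": 2.5 * 300 / 20,
    "LHC 14 TeV, 3/ab":   2.5 * 3000 / 20,
    "100 TeV pp, 3/ab":   2.5 * 15 * 3000 / 20,
}
for name, n_rel in scenarios.items():
    # sensitivity scales as N^(-1/4) from the current bound of 611
    print(name, round(611 / n_rel**0.25))   # ~250, ~140, ~70
\end{verbatim}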
We close this subsection by considering the Higgs two-body decay to
positronium and a photon. This decay is the electron analogue of the
Higgs decays to vector meson and photon which were recently
studied~\cite{Bodwin:2013gca, Delaunay:2013pja, Kagan:2014ila} to
measure the Higgs couplings to light quarks. The idea behind this
method is that the vector meson plus gamma final state can result from
two different amplitudes which interfere. One of the amplitudes
involves the Higgs coupling to the light quarks in the vector meson,
the other amplitude generates the vector meson via mixing with a
virtual photon. Naively, these two amplitudes have different chiral
symmetry properties and cannot interfere. However, chiral symmetry is
broken dynamically by the QCD condensate, allowing the interference
term to be proportional to only one power of the small quark Yukawa
coupling. The case of Higgs decay to positronium is quite analogous
except that here the only source of chiral symmetry breaking is the
electron mass so that the interference term is necessarily
proportional to the electron mass (times powers of alpha) in addition
to the Higgs-electron coupling. Thus this final state is less
sensitive to the Higgs-electron coupling than the $h\rightarrow e^+
e^-$ decay considered above.
\subsection{Higgs production at \texorpdfstring{$e^+e^-$}{e+e-} colliders}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.28\textwidth]{figs/eehbb.pdf}~~~~~~~~~~~~~~~~~~~~
\includegraphics[width=0.28\textwidth]{figs/eeazbb.pdf}
\end{center}
\caption{ Resonant Higgs boson production at LEP~II via radiative
return to the Higgs pole (left diagram). The Higgs is assumed to
decay into a $b \bar b$ final state. The main background is given by
off-shell photons or $Z$ bosons decaying into a $b \bar b$ pair
(right diagram). \label{fig:rr}}
\end{figure}
The electron Yukawa coupling allows for resonant production of Higgs
bosons in $e^+e^-$ collisions in the $s$-channel. While the cross
section for this process is obviously maximized when the center of
mass energy is tuned to the Higgs mass, one can also obtain
sensitivity to $\kappa_e$ from virtual Higgs exchange or through
``radiative return''. Radiative return occurs when the center of mass
energy of the collider exceeds the Higgs mass; in this case
bremsstrahlung off an initial electron can reduce the effective
center-of-mass (CM) energy to the Higgs resonance.
For instance, LEP~II accumulated an integrated luminosity of the order
of 500~pb$^{-1}$ per experiment at a few different CM energies above
the Higgs pole~\cite{Alcaraz:2006mx} so that the radiative return
process was possible. To obtain a rough estimate on the reach of the
LEP~II experiments, we approximate the radiative return cross section
simply as a $t$-channel process (thereby ignoring some logarithmic
enhancement of initial-state photon radiation) with the Higgs
decaying into a $b \bar b$ pair (see Fig.~\ref{fig:rr}). We use
\texttt{madgraph}~\cite{Alwall:2014hca} to calculate the corresponding
cross section $\sigma_\text{r.r.}$, restricting the invariant mass of
the $b \bar b$ pair to the Higgs mass within the LEP~II jet energy
resolution $\sigma_{E,\text{jet}} = 10$\,GeV~\cite{Ward:1999xu}. We
further assume that the main background is provided by virtual photons
and $Z$ bosons decaying into a $b \bar b$ pair in the same
invariant-mass bin, with a cross section $\sigma_\text{bkg}$.
We collect the cross sections for various CM energies, as well as the
corresponding integrated luminosities per experiment, in
Tab.~\ref{tab:lep2}. Adding all data sets, we find $N_\text{r.r.} =
3\cdot 10^{-6} \times |\kappa_e|^2$ and $N_\text{bkg} = 121$ for the
total number of signal and background events, respectively. Setting
$N_\text{r.r.}/\sqrt{N_\text{bkg}}=1$ we see that LEP~II was, in
principle, sensitive to $|\kappa_e| \sim 2000$. We find that a similar
sensitivity could be obtained with the 20/pb that have been collected
much closer to the Higgs resonance at a center of mass energy of
130~GeV. Our rough sensitivity estimates are weaker than the LHC bound
derived in the previous section and, for example, do not take into
account signal efficiencies or backgrounds from fakes. The LHC bound is
expected to improve significantly after run~II. Thus, a more
sophisticated analysis of the LEP~II data does not seem worthwhile.
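For transparency, we note that this reach follows from straightforward
counting statistics applied to the event numbers above: imposing
$N_\text{r.r.}/\sqrt{N_\text{bkg}}=1$ gives
\begin{equation*}
|\kappa_e|^2 = \frac{\sqrt{N_\text{bkg}}}{3\cdot 10^{-6}}
= \frac{11}{3\cdot 10^{-6}} \simeq 3.7 \times 10^{6}
\qquad\Longrightarrow\qquad |\kappa_e| \simeq 1.9 \times 10^{3} \,,
\end{equation*}
consistent with the quoted sensitivity of $|\kappa_e| \sim 2000$.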
\renewcommand{\arraystretch}{1.2}
\begin{table}[t]
\begin{center}
\begin{tabular}{cccc}
\hline\hline
$E$ [GeV] & ${\cal L}$ [1/pb] & $10^{6}\times\sigma_\text{r.r.}/|\kappa_e|^2$ [fb] &
$\sigma_\text{bkg}$ [fb] \\
\hline\hline
189 & 170 & 1.40 & 56.9 \\
192 & 30 & 1.33 & 54.2 \\
196 & 80 & 1.25 & 50.8 \\
200 & 80 & 1.18 & 47.4 \\
202 & 40 & 1.14 & 45.8 \\
205 & 80 & 1.08 & 43.4 \\
207 & 140 & 1.04 & 41.9 \\
\hline\hline
\end{tabular}
\end{center}
\caption{\small The integrated luminosity ${\cal L}$ collected by each
experiment at LEP~II at various CM energies $E$, and the
corresponding cross sections for producing a photon plus a $b \bar
b$ pair with an invariant mass between 115 GeV and 135 GeV, via a
virtual Higgs ($\sigma_\text{r.r.}$) or an off-shell photon or $Z$
boson ($\sigma_\text{bkg}$). }
\label{tab:lep2}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\textwidth]{figs/FCCee.pdf}
\end{center}
\caption{Dependence of the $e^+ e^- \to h \to b \bar b$ cross section
on the CM energy of the initial electron-positron
pair. Depending on the beam energy spread $R$, the Higgs mass has to
be known within a few to tens of MeV to fully exploit resonant
production. \label{fig:FCCee}}
\end{figure}
On the other hand, resonant Higgs production would be possible at a
potential future $e^+ e^-$ collider running at a CM energy tuned to
the Higgs mass. The cross section for the production of a massless
fermion--antifermion pair via an $s$-channel Higgs is given by
\begin{equation}
\begin{split}
\sigma_{e^+ e^- \to h \to f \bar f}(s) =
\frac{1}{32\pi} \frac{\big(y_e^\text{SM}\big)^2 y_f^2}{4} N_c^f |\kappa_e|^2
\frac{s}{\big(s-M_h^2\big)^2 + \Gamma_h^2 M_h^2}\,,
\end{split}
\end{equation}
where $N_c^f$ is the color factor for the final state fermions
($N_c^f=3$ for quarks and $N_c^f=1$ for leptons). In the SM, the
width of a $125.7$~GeV Higgs is $\Gamma_h^\text{SM} =
4.17$~MeV~\cite{Heinemeyer:2013tqa}. Due to the tiny SM $h \to e^+
e^-$ branching fraction the change in the total width of the Higgs for
$\kappa_e \neq 1$ is completely negligible, given currently allowed
values of $\kappa_e$. Indeed, from the constraint in
Eq.~\eqref{constraint} we find
\begin{equation}
\Delta \Gamma_h = \Gamma_h^\text{SM} \times (|\kappa_e|^2 - 1) \,
\text{Br}(h \to e^+ e^-)_\text{SM} < 7.9 ~ \text{keV}\,.
\end{equation}
In order to calculate the resonant cross section we need to convolve
the parton-level cross section $\sigma(e^+ e^- \to h \to f \bar f)$
with the beam energy resolution. We take it as a Gaussian with
standard deviation $\Delta \equiv R \sqrt{s} / \sqrt{2}$, where $R$ is the
percentage beam energy resolution~\cite{Han:2012rb}. Using
$R=0.05\%$~\cite{Gomez-Ceballos:2013zzn} and assuming an average
center of mass energy exactly at the Higgs mass we find the following
signal cross section for bottom quarks in the final state
\begin{equation}
\sigma_\text{sig}(e^+e^- \to h \to b\bar b) \simeq
|\kappa_e|^2 \times 0.05/\text{fb} \,.
\end{equation}
For 100/fb of data at the Higgs resonance, this corresponds to
approximately $N_\text{sig} \simeq 5 \times |\kappa_e|^2$ signal
events.
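The size of this cross section can be cross-checked analytically. Since
the beam spread is much larger than the Higgs width, $\Delta \gg
\Gamma_h$, convolving the narrow Breit-Wigner resonance with the
Gaussian spread and evaluating at $\sqrt{s}=M_h$ gives, to a good
approximation,
\begin{equation*}
\sigma_\text{sig} \simeq \frac{4\pi}{M_h^2}\, \text{Br}(h \to e^+e^-)\,
\text{Br}(h \to b \bar b) \times \sqrt{\frac{\pi}{8}}\,
\frac{\Gamma_h}{\Delta} \,.
\end{equation*}
Inserting the SM reference values $\text{Br}(h \to e^+e^-) \simeq 5.2
\times 10^{-9} \times |\kappa_e|^2$ and $\text{Br}(h \to b \bar b)
\simeq 0.58$ (which we take as assumptions for this rough estimate),
together with $\Gamma_h = 4.17$~MeV and $\Delta \simeq 44$~MeV for
$R=0.05\%$, indeed reproduces $\sigma_\text{sig} \simeq |\kappa_e|^2
\times 0.05$/fb.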
The main background will be $f \bar f$ production via an intermediate
photon or $Z$ boson. The corresponding total cross section
is~\cite{Consoli:1989pc}
\begin{equation}
\begin{split}
\sigma_{e^+ e^- \to \gamma, Z \to f \bar f}(s) =
\frac{4\pi\alpha^2}{3s} N_c^f \bigg[ Q_f^2 +
\frac{(v_e^2+a_e^2)(v_f^2+a_f^2)s^2 - 2v_e v_f Q_f s (s-M_Z^2)}{
\big(s-M_Z^2\big)^2 + \Gamma_Z^2 M_Z^2}\bigg] \,.
\end{split}
\end{equation}
The parameters $v_f$ and $a_f$ are the vector and axial-vector
couplings of the $Z$ boson to a fermion $f$. They are given by
\begin{equation}
v_f = \frac{I_3^f - 2Q_f \sin^2\theta_w}{2\sin\theta_w\cos\theta_w}
\,, \quad a_f = \frac{I_3^f}{2\sin\theta_w\cos\theta_w} \,,
\end{equation}
where $I_3^f$ and $Q_f$ denote the third isospin component and the
electric charge of the fermion~$f$, respectively. Assuming again
100/fb of data and $\sqrt{s} = M_h$ we expect roughly $N_\text{bkg} =
10^6$ $b \bar b$ background events. Requiring
$N_\text{sig}/\sqrt{N_\text{bkg}} = 1$ we estimate that one can reach
sensitivity to $|\kappa_e| \lesssim 15$ for 100/fb and to $|\kappa_e|
\lesssim 50$ for 1/fb. Slightly better sensitivities could be achieved
with a smaller beam energy spread.
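These numbers again follow from simple counting: for 100/fb, imposing
$N_\text{sig}/\sqrt{N_\text{bkg}}=1$ yields
\begin{equation*}
|\kappa_e|^2 = \frac{\sqrt{10^6}}{5} = 200
\qquad\Longrightarrow\qquad |\kappa_e| \simeq 14 \,,
\end{equation*}
while for 1/fb both event yields are rescaled by $10^{-2}$, giving
$|\kappa_e|^2 = 10^2/0.05 = 2000$, i.e., $|\kappa_e| \simeq 45$.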
Note that, in order to exploit the full benefit of resonant Higgs
production, the Higgs mass has to be known with high
precision. Fig.~\ref{fig:FCCee} shows the $e^+ e^- \to h \to b \bar b$
cross section as a function of the center of mass energy of the
initial state electrons for three choices of the beam energy
resolution $R=0.05\%$, $R=0.025\%$, and $R=0.01\%$. The cross section
drops quickly if the center of mass energy differs from the Higgs mass
by more than a few to tens of MeV, depending on the beam energy
spread.
\section{Precision constraints} \label{sec:indirect}
We have seen that the LHC sensitivity to the Higgs-electron coupling
is unlikely to reach values better than $|\kappa_e| \simeq 100$
whereas a future $e^+ e^-$ collider running on the Higgs resonance
could be sensitive to $|\kappa_e|$ of order 10. In addition to these
direct searches, low energy precision observables can be used to
indirectly probe modified Higgs couplings. Constraints from
low-energy flavor observables on flavor-violating fermion-Higgs
couplings have been derived for example in~\cite{Blankenburg:2012ex,
Harnik:2012pb, Gorbahn:2014sha}. Constraints from EDMs on CP
violating top-Higgs and photon-Higgs couplings are discussed for
example in~\cite{McKeen:2012av, Brod:2013cka, Altmannshofer:2013zba}.
In this section we investigate indirect constraints on a modified
Higgs-electron coupling. We will see that the strongest constraints
arise from the electric and magnetic dipole moments of the electron,
whereas rare $B$ decays into $e^+ e^-$ final states do not yield
competitive bounds. Note that the indirect constraints derived in this
section hold barring accidental cancellations with additional
contributions to the low energy observables that might arise in
explicit models that give rise to the higher-dimensional operators
modifying the Higgs couplings. Here we assume that all couplings other
than the Higgs-electron coupling are SM-like.
\subsection{Electric dipole moment of the electron} \label{sec:EDM}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.3\textwidth]{figs/BZtop.pdf}~~~~~
\includegraphics[width=0.3\textwidth]{figs/BZgauge.pdf}~~~~~
\includegraphics[width=0.3\textwidth]{figs/CGgauge.pdf}
\end{center}
\caption{Sample two-loop Feynman diagrams inducing an EDM of the
electron through a CP-violating Higgs coupling to the electron (here
denoted by the black square). \label{fig:2loopEDM}}
\end{figure}
The imaginary part of the Higgs boson coupling to electrons in
Eq.~\eqref{eq:LagYebroken} induces an EDM of the electron\footnote{We
define $\sigma_{\mu\nu} = i[\gamma_\mu, \gamma_\nu]/2$. }
\begin{equation}\label{eq:edmlag}
{\mathcal L}_\text{eff}^e = - \frac{d_e}{2} \, \bar \psi_e \,
\sigma_{\mu\nu} \, i \gamma_5 \, \psi_e \, F^{\mu\nu}
\end{equation}
via two-loop electroweak diagrams\footnote{One-loop contributions are
suppressed by additional powers of the electron Yukawa and electron
mass, and are therefore negligibly small~\cite{Barr:1990vd}.} (see
Fig.~\ref{fig:2loopEDM} for sample Feynman diagrams).
We have calculated the full set of relevant two-loop contributions
that contain exactly one power of the Higgs-electron coupling. The
analytic expressions can be found in App.~\ref{sec:twoloopEDM}. Taking
the numerical values of the input parameters ($\alpha$, $M_W$, $M_Z$,
$M_h$, $m_t$) from Ref.~\cite{Agashe:2014kda} we obtain
\begin{equation}
\left|\frac{d_e}{e}\right| \simeq 5.1 \times |\text{Im} \kappa_e|
\times 10^{-27}~\text{cm} \,.
\end{equation}
Using the most recent bound on the electron EDM obtained by the ACME
collaboration~\cite{Baron:2013eja},
\begin{equation}
\left|\frac{d_e}{e}\right|_\text{exp} < 8.7 \times 10^{-29}
~\text{cm} ~~@ ~90\%~\text{C.L.} \,,
\end{equation}
we find the very stringent constraint
\begin{equation}
|\text{Im}\,\kappa_e| < 1.7 \times 10^{-2} ~.
\end{equation}
If the new physics contribution to the Higgs-electron coupling
contains an O(1) phase, this bound translates into a very strong
constraint on the new physics scale $M$. Setting $c_0 = i$ (in the
basis where the electron mass term is real) we find $M \gtrsim
1000$~TeV. It is expected that the experimental sensitivity to the
electron EDM can be improved by up to two orders of magnitude in the
future~\cite{Hewett:2012ns}. Such sensitivities would allow one to probe
$|\text{Im}\,\kappa_e|$ at the level of $10^{-4}$ and new physics
scales as high as $10^4$~TeV.
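For orientation, the connection between $|\text{Im}\,\kappa_e|$ and $M$
quoted here (and in Table~\ref{tab:summary}) is reproduced by the
identification $\text{Im}\,\kappa_e \simeq v^3/(\sqrt{2}\, m_e M^2)$
for $c_0=i$; we infer this normalization from the quoted numbers rather
than restating the operator definition. With $v=246$~GeV and
$m_e=0.511$~MeV,
\begin{equation*}
M \gtrsim \left( \frac{v^3}{\sqrt{2}\, m_e \times 1.7 \times 10^{-2}}
\right)^{1/2} \simeq 1.1 \times 10^{3}~\text{TeV} \,.
\end{equation*}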
\subsection{Anomalous magnetic dipole moment of the electron}
The real part of $\kappa_e$ modifies the SM contribution to the
anomalous magnetic dipole moment of the electron,
\begin{equation}
{\mathcal L}_\text{eff}^m = - \frac{e}{4} \frac{a_e}{m_e} \, \bar \psi_e \,
\sigma_{\mu\nu} \, \psi_e \, F^{\mu\nu} \,,
\end{equation}
via the same two-loop diagrams that also induce an EDM (see
Fig.~\ref{fig:2loopEDM} and App.~\ref{sec:twoloopEDM}). Denoting the
contributions of the two-loop diagrams with an anomalous Higgs
coupling by $\Delta a_e$, we find
\begin{equation}
|\Delta a_e| \simeq 2.6 \times |\text{Re}\, \kappa_e - 1| \times 10^{-16} \,.
\end{equation}
The anomaly in the gyromagnetic ratio of the electron, $a_e \equiv
(g-2)_e/2$, is conventionally used to determine the fine-structure
constant $\alpha$~\cite{Hanneke:2008tm,Aoyama:2014sxa}. However, as
pointed out in Ref.~\cite{Giudice:2012ms}, the recent precise
independent measurements of the fine-structure constant in atomic
physics experiments can be used to obtain a SM prediction for $a_e$
with an uncertainty that is only a factor of a few larger than the
experimental measurement. Therefore, the anomalous magnetic moment of
the electron can be used as a probe of new physics.
We employ the value $\alpha^{-1} = 137.035999037(91)$ from the most
recent determination of the fine-structure constant using a
measurement of the ratio between the Planck constant and the mass of
the $^{87}$Rb atom~\cite{Bouchendira:2010es}. Using the corresponding
uncertainty induced on $a_e$ around the SM value, we obtain the
allowed range for the new physics contribution to $a_e$
\begin{equation}
|\Delta a_e| < 8.1 \times 10^{-13} \,.
\end{equation}
This translates
into the allowed range for $\kappa_e$,
\begin{equation}
|\text{Re}\,\kappa_e| < 3.1 \times 10^{3} \,.
\end{equation}
This is a factor of five above the direct bound derived from the CMS
search for $h \to e^+e^-$. Note, however, that this bound scales
linearly with $\text{Re}\,\kappa_e$, in contrast to the quadratic
dependence of the collider constraints. The bound from the anomalous
magnetic moment can be improved in the near future by an order of
magnitude~\cite{Giudice:2012ms}, making it competitive to the expected
sensitivities from $h \to e^+e^-$ at run II of the LHC.
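It may be useful to make the different scaling behaviors explicit.
Collider bounds follow from $N_\text{sig}/\sqrt{N_\text{bkg}} = 1$ with
$N_\text{sig} \propto |\kappa_e|^2 {\cal L}$ and $N_\text{bkg} \propto
{\cal L}$, so that
\begin{equation*}
|\kappa_e|^2_\text{bound} \propto \frac{\sqrt{N_\text{bkg}}}{{\cal L}}
\propto \frac{1}{\sqrt{\cal L}}
\qquad\Longrightarrow\qquad
|\kappa_e|_\text{bound} \propto {\cal L}^{-1/4} \,,
\end{equation*}
whereas the bound from $\Delta a_e$ improves in direct proportion to the
experimental precision.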
\subsection{Rare \texorpdfstring{$B$}{B} decays}
In the Standard Model, the rare decays $B_q \to \ell^+ \ell^-$ are
mediated by $Z$-penguin and box diagrams and require a helicity flip
of the final state leptons due to the pseudo-scalar nature of the
$B_q$ meson. Therefore, the branching ratios are proportional to the
lepton mass squared and extremely small. Higgs mediated contributions
to these decays do not, in general, suffer from the strong helicity
suppression. However, in the SM they are suppressed by the tiny lepton
Yukawa couplings and are negligible. One might therefore hope that
experiments searching for the $B_q \to e^+ e^-$ decays are sensitive
to an enhanced Higgs-electron coupling. Here we show that the current
and expected sensitivities are not competitive with the direct and
indirect bounds discussed so far.
The SM predictions for the time integrated $B_q \to e^+ e^-$ branching
ratios read~\cite{Bobeth:2013uxa}
\begin{equation}
\text{Br}(B_s \to e^+e^-)_\text{SM} = (8.54\pm0.55)\times 10^{-14} ~,
\end{equation}
\begin{equation}
\text{Br}(B_d \to e^+e^-)_\text{SM} = (2.48\pm0.21)\times 10^{-15} ~.
\end{equation}
These values are many orders of magnitude below the current
experimental constraints set by CDF~\cite{Aaltonen:2009vr} at 95\% C.L.
\begin{equation}
\text{Br}(B_s \to e^+e^-) < 2.8 \times 10^{-7} ~,
\end{equation}
\begin{equation}
\text{Br}(B_d \to e^+e^-) < 8.3 \times 10^{-8} ~.
\end{equation}
While the experimental constraints are likely to be improved at LHCb
and Belle~II by one or two orders of magnitude, sensitivities to the
SM predictions will not be reached within the foreseeable future.
In the presence of an enhanced Higgs-electron coupling, we find for
the Higgs-mediated correction to the branching ratios\footnote{Here we
assume that $\kappa_e$ does not contain a CP violating phase. As
discussed in section~\ref{sec:EDM}, such a phase is strongly
constrained by the electron EDM.}
\begin{equation}
\frac{\text{Br}(B_q \to e^+e^-)}{\text{Br}(B_q \to e^+e^-)_\text{SM}}
- 1 ~\propto~ \frac{m_{B_q}^4}{M_h^4} \kappa_e^2 \,,
\end{equation}
with a proportionality factor that is parametrically of order 1. This
implies that significant enhancements of the branching ratios are only
possible for $\kappa_e \gg M_h^2/m_{B_q}^2 \sim 550$. The current
experimental constraints on $B_q \to e^+e^-$ probe couplings of the
order of $\kappa_e \sim O(10^6)$ that are already excluded by orders
of magnitude by the LHC results on $h \to e^+e^-$.
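As a rough numerical illustration of this statement (assuming an
order-one proportionality factor), the current CDF bound corresponds to
$\text{Br}/\text{Br}_\text{SM} \sim 2.8 \times 10^{-7} / 8.5 \times
10^{-14} \simeq 3 \times 10^{6}$, so that
\begin{equation*}
\kappa_e \sim \frac{M_h^2}{m_{B_s}^2} \sqrt{3 \times 10^{6}}
\simeq 550 \times 1.8 \times 10^{3} \simeq 10^{6} \,.
\end{equation*}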
\section{Discussion and Conclusions} \label{sec:conclusions}
The question ``what do we know about the electron Yukawa'' is both
interesting and non-trivial to answer.
New physics (NP) effects could lead to significant changes to the Higgs
coupling to electrons precisely because it is predicted to be tiny in the SM.
Enhancements of the coupling by orders of magnitude above the SM value
are theoretically possible, however only at the cost of significant
fine tuning of the electron mass. Order one changes to both the
real and imaginary parts of the coupling could be completely natural.
As a side effect, direct verification of an enhanced coupling of the
Higgs to electrons would also lead to stronger indirect constraints on
CP violating couplings of the Higgs boson to top
quarks~\cite{Brod:2013cka}.
In this article, we considered which experiments currently provide the
most stringent bounds on anomalous Higgs-electron couplings. We find
that the strongest bound on the magnitude of the coupling comes from a
CMS search for the $h \to e^+ e^-$ decay. The CP-violating imaginary
part of the Higgs-electron coupling is strongly constrained by the
current upper bound on the electron EDM. The indirect constraint on
the CP-conserving real part of the coupling from the electron $g-2$,
on the other hand, is currently relatively weak; it can, however, be
improved by a new generation of precision experiments and could be
competitive with the bounds derived from future LHC data. Finally, we
showed that rare $B$ decays are not competitive in setting bounds on
deviations from the SM Higgs-electron coupling.
Potentially the best future bounds on the magnitude of the coupling
could be obtained from an electron-positron collider running on the
Higgs resonance. With optimistic assumptions a measurement of the
Higgs-electron coupling only an order of magnitude above its SM value
seems possible. Sensitivity to the SM value itself would require huge
amounts of statistics collected at the Higgs resonance, very precise
knowledge of the Higgs mass of the order of the Higgs width, and
exquisite control of the beam energy at the same level. It does not
seem that precision measurements of the magnitude of the SM electron
Yukawa coupling will ever be possible.
We summarize the current constraints and future expected sensitivities to a
modified Higgs-electron coupling $\kappa_e$ and the corresponding new
physics scale $M$ in Table~\ref{tab:summary}.
\renewcommand{\arraystretch}{1.6}
\setlength\tabcolsep{12pt}
\begin{table}[t]
\begin{center}
\begin{tabular}{clll}
\hline\hline
\multirow{4}{*}{$h \to e^+e^-$} & LHC8 (25/fb) & $|\kappa_e| \lesssim 600$ & $M \gtrsim 6$~TeV \\
& LHC14 (300/fb) & $|\kappa_e| \sim 260$ & $M \sim 9$~TeV \\
& LHC14 (3/ab) & $|\kappa_e| \sim 150$ & $M \sim 12$~TeV \\
& 100 TeV (3/ab) & $|\kappa_e| \sim 75$ & $M \sim 17$~TeV \\
\hline
\multirow{3}{*}{$e^+e^- \to h$} & LEP~II & $|\kappa_e| \lesssim 2000$ & $M \gtrsim 3$~TeV\\
& TLEP (1/fb) & $|\kappa_e| \sim 50$ & $M \sim 20$~TeV \\
& TLEP (100/fb) & $|\kappa_e| \sim 10$ & $M \sim 50$~TeV \\
\hline
\multirow{2}{*}{$d_e$} & current & Im\,$\kappa_e \lesssim 0.017$ & $M \gtrsim 1000$~TeV \\
& future & Im\,$\kappa_e \sim 0.0001$ & $M \sim 10^4$~TeV \\
\hline
\multirow{2}{*}{$(g-2)_e$} & current & Re\,$\kappa_e \lesssim 3000 $ & $M \gtrsim 2.5$~TeV \\
& future & Re\,$\kappa_e \sim 300 $ & $M \sim 8$~TeV \\
\hline\hline
\end{tabular}
\end{center}
\caption{\small Summary of current constraints and future expected
sensitivities to a modified Higgs-electron coupling $\kappa_e$ and
the corresponding new physics scale $M$. }
\label{tab:summary}
\end{table}
\phantomsection
\addcontentsline{toc}{section}{Acknowledgments}
\section*{Acknowledgments}
This work was initiated at the Aspen Center for Physics with partial
support from the National Science Foundation, Grant No. PHYS-1066293.
J.B. acknowledges insightful discussions with Felix Yu, and support by
the U.S. National Science Foundation under CAREER Grant PHY-1151392,
the ERC Advanced Grant EFT4LHC of the European Research Council, and
the Cluster of Excellence Precision Physics, Fundamental Interactions
and Structure of Matter (PRISMA-EXC 1098). M.S. would like to thank
Andy Cohen for helpful discussions and the US Department of Energy
Office of Science for support under Award DE-SC-0010025. Research at
Perimeter Institute is supported by the Government of Canada through
Industry Canada and by the Province of Ontario through the Ministry of
Economic Development \& Innovation.
\begin{appendix}
\section{Two-loop contributions to dipole moments}\label{sec:twoloopEDM}
In this appendix we give the analytic expressions for the complete set
of relevant two-loop contributions to the electron dipole moments that
are induced by a modified Higgs-electron coupling. For the case
$\kappa_e = 1$ we reproduce exactly the part of the bosonic
contributions in Ref.~\cite{Gribouk:2005ee} that involves the exchange
of a virtual Higgs boson. To our knowledge, this constitutes the first
independent (partial) check of their calculation. For an imaginary
value of $\kappa_e $ our results for the top-loop diagrams with an
internal photon are in agreement with the classic calculation by Barr
and Zee~\cite{Barr:1990vd}, while the corresponding analytic results
with an internal $Z$ boson are new. Results for the considered bosonic
diagrams, in terms of parametric integrals, can in principle be
extracted from Refs.~\cite{Leigh:1990kf, Chang:1990sf}, which give
results for two-loop contributions to EDMs in multi-Higgs doublet
models (see also~\cite{Abe:2013qla} for a recent reevaluation of the
Barr-Zee type contributions in two-Higgs doublet models). We find
small numerical discrepancies with the results of~\cite{Leigh:1990kf,
Chang:1990sf} of the order of 10\%.
To obtain our results we performed an off-shell matching calculation,
along the lines of Ref.~\cite{Bobeth:1999mk}, to an effective theory
where all heavy particles (the top quark and the $W$, $Z$, and Higgs
bosons) are integrated out. The two physical operators, yielding the
magnetic and electric dipole moments in the non-relativistic limit,
can be chosen as
\begin{equation}
{\cal O}_m = e \bar \psi_e \sigma^{\mu\nu} \psi_e F_{\mu\nu} \,, \qquad
{\cal O}_e = e \bar \psi_e \sigma^{\mu\nu} i \gamma_5 \psi_e F_{\mu\nu} \,.
\end{equation}
In order to project on the physical matrix elements, we also need the
following two operators that vanish via the electron equations of
motion:
\begin{equation}
{\cal O}_{m}^\text{e.o.m.} = \bar \psi_e \slashed{D} \slashed{D} \psi_e \,, \qquad
{\cal O}_{e}^\text{e.o.m.} = \bar \psi_e \slashed{D} \slashed{D} i \gamma_5 \psi_e \,.
\end{equation}
In our calculation we set the electron mass to zero while keeping the
electron Yukawa nonzero. Therefore, no other off-shell operators can
contribute at this order, and it is sufficient to expand the
integrands to first order in the external momenta.
We have calculated all Feynman diagrams employing the background field
gauge for the electroweak interactions~\cite{Denner:1994xt}. The
two-loop integrals were computed using the recursion relations
in~\cite{Davydychev:1992mt, Bobeth:1999mk}. We decompose our result
for the two-loop electron EDM in the following way
(cf. Eq.~\eqref{eq:edmlag})
\begin{equation} \label{EDM}
d_e^\text{2loop} = d_e^{t\gamma} + d_e^{tZ} + d_e^{W\gamma} + d_e^{WZ} + d_e^{W} + d_e^{Z} ~.
\end{equation}
The first four terms denote contributions from Barr-Zee type
diagrams~\cite{Barr:1990vd} containing top-quark loops and a photon
($d_e^{t\gamma}$), top-quark loops and a $Z$ boson ($d_e^{tZ}$),
$W$ boson loops and a photon ($d_e^{W\gamma}$), and $W$ boson loops
and a $Z$ boson ($d_e^{WZ}$) (see the left and center diagrams in
Fig.~\ref{fig:2loopEDM} for examples). The last two terms in
\eqref{EDM} denote the remaining two-loop contributions that contain
either $W$ bosons ($d_e^W$), or $Z$ bosons ($d_e^Z$) (see the right
diagram in Fig.~\ref{fig:2loopEDM} for an example). We obtain for the
individual contributions
\begin{eqnarray}
\frac{d_e^{t\gamma}}{e} &=& \frac{16 e^2}{3 (16\pi^2)^2}
\frac{y_e^\text{SM}}{\sqrt{2} v} ~\text{Im}\,\kappa_e x_{th} \left[
\left(2x_{th} - 1\right) \Phi\left(\frac{1}{4x_{th}}\right) - 2\left( 2
+ \log x_{th} \right)
\right] \,, \\[16pt]
\frac{d_e^{tZ}}{e} &=& \frac{e^2}{(16\pi^2)^2s_w^2} \frac{y_e^\text{SM}}{\sqrt{2}v}
~\text{Im}\,\kappa_e~ \frac{1}{2c_w^2} \left( 1 - 4 s_w^2\right)
\left( 1 - \frac{8}{3} s_w^2\right) \left(1 - x_{hZ} \right)^{-1} x_{tZ} \nonumber \\[2mm]
&& \times \bigg[\left(1 - 2x_{th} \right)
\Phi\left(\frac{1}{4x_{th}}\right) - 2\log x_{hZ} - \left(1 - 2x_{tZ} \right)
\Phi\left(\frac{1}{4x_{tZ}}\right)\bigg] ~, \\[16pt]
\frac{d_e^{W\gamma}}{e} &=& \frac{2 e^2}{(16\pi^2)^2} \frac{y_e^\text{SM}}{\sqrt{2}v}
~\text{Im}\,\kappa_e \bigg[ \left( 1 + 6x_{Wh} \right) \left( 2 +
\log x_{Wh} \right) \nonumber \\ && \hspace{4cm} - \left(6 x_{Wh} - 7 \right)
x_{Wh} \Phi\left(\frac{1}{4x_{Wh}}\right) \bigg] \,, \\[16pt]
\frac{d_e^{WZ}}{e} &=& \frac{e^2}{(16\pi^2)^2s_w^2} \frac{y_e^\text{SM}}{4\sqrt{2}v}
~\text{Im}\,\kappa_e \left( 1 - 4 s_w^2\right) \left(1 - x_{Zh}
\right)^{-1} \nonumber \\
&& \times \bigg[ \left( 2 + 12x_{Wh} - \frac{1}{c_w^2} - 2x_{Zh} \right) \log x_{Zh}
\nonumber \\ && \qquad + \left(14 -12x_{Wh} - \frac{3}{c_w^2} + 2x_{Zh}
\right) x_{Wh} \Phi\left(\frac{1}{4x_{Wh}}\right) \nonumber \\
&& \qquad + \left(2 + 12x_{Wh} - \frac{1}{c_w^2} + \frac{4x_{Zh}}{c_w^2} - 18x_{Zh}
\right) c_w^2 \Phi\left(\frac{1}{4c_w^2} \right) \bigg] ~.
\end{eqnarray}
\begin{eqnarray}
\frac{d_e^{W}}{e} &=& \frac{e^2}{(16\pi^2)^2}
\frac{y_e^\text{SM}}{18\sqrt{2} v}\frac{1}{s_w^2} ~ \text{Im}
\kappa_e ~ x_{hW} \nonumber \\
&& \times \bigg\{6 \big(x_{Wh}^2 + 4x_{Wh} - 2\big)
\Phi\left(\frac{1}{4x_{Wh}}\right) - 6 \big(4x_{Wh}^3 + 3x_{Wh}^2 - 4\big)
\text{Li}_2(1-x_{Wh}) \nonumber \\
&& \qquad - \pi^2 x_{Wh}^2 \big(3 + 4x_{Wh} \big) + 24x_{Wh}
\big(x_{Wh}-1\big) + 24x_{Wh} \big(x_{Wh}+1\big) \log x_{Wh} \nonumber \\
&& \qquad - 3 (4 x_{Wh}^3 + 3x_{Wh}^2 - 4)\log^2 x_{Wh} \bigg\}~,
\end{eqnarray}
\begin{eqnarray}
\frac{d_e^{Z}}{e} &=& \frac{e^2}{(16\pi^2)^2}
\frac{y_e^\text{SM}}{36\sqrt{2}v} \frac{1}{s_w^2 c_w^2} ~\text{Im}
\kappa_e~ x_{hZ} (8c_w^4-12c_w^2+5) \nonumber \\ &&
\times \bigg \{ - 6 \big( 4x_{Zh}^3 + 3x_{Zh}^2 - 1 \big) \text{Li}_2 (1-x_{Zh})
- 3 \big( 1 - 2 x_{Zh} - 8 x_{Zh}^2 \big) \Phi\left(\frac{1}{4x_{Zh}}\right) \nonumber \\ &&
\qquad - \pi^2 x_{Zh}^2 \big( 3 + 4x_{Zh} \big) - 6 x_{Zh} \big( 1 - 4x_{Zh}
\big) + 6 x_{Zh} \big( 1 + 4x_{Zh} \big) \log x_{Zh} \nonumber \\
&& \qquad - 3 \big( 4x_{Zh}^3 + 3x_{Zh}^2 - 1 \big) \log^2 x_{Zh} \bigg \}
\nonumber \\
&& + \frac{e^2}{(16\pi^2)^2} \frac{y_e^\text{SM}}{6\sqrt{2}v} \frac{1}{c_w^2} ~\text{Im}
\kappa_e~ (s_w^2-c_w^2) x_{hZ}^3 \nonumber \\ &&
\times \bigg \{12 \big( 1 - 4x_{Zh} + x_{Zh}^2 \big)
\text{Li}_2 (1-x_{Zh}) - 3 \big( 1 - 6 x_{Zh} + 8 x_{Zh}^2 \big)
\Phi\left(\frac{1}{4x_{Zh}}\right) \nonumber \\ &&
\qquad - \pi^2 \big( 1 - 4x_{Zh} \big) - 6 x_{Zh}^2 + 12 x_{Zh}^2 \log x_{Zh} \nonumber \\
&& \qquad + 3 \big( 2x_{Zh}^2 - 4x_{Zh} +1 \big) \log^2 x_{Zh} \bigg \} \,.%
\end{eqnarray}
To simplify the expressions we defined the mass ratios $x_{ij} \equiv
M_i^2/M_j^2$, $c_w=M_W/M_Z$, and $s_w = \sqrt{1-c_w^2}$.
The function $\Phi(z)$ is given by~\cite{Davydychev:1992mt}
\begin{equation}
\begin{split}
\Phi(z) & = 4 \bigg( \frac{z}{1-z} \bigg)^{1/2} \text{Cl}_2 \big(2
\arcsin(z^{1/2})\big) \,, \\
\text{Cl}_2 (\theta) & = - \int_0^\theta dx \log |2 \sin (x/2)| \,,
\end{split}
\end{equation}
for $z<1$ and by
\begin{equation}
\begin{split}
\Phi(z) & = \bigg( \frac{z}{z-1} \bigg)^{1/2} \bigg\{ -4 \text{Li}_2
(\xi) + 2 \log^2 \xi - \log^2 (4z) + \frac{\pi^2}{3} \bigg\} \,, \\
\xi & = \frac{1 - \big(\frac{z-1}{z}\big)^{1/2}}{2} \,,
\end{split}
\end{equation}
for $z>1$, where ${\rm Li}_2 (x) = -\int_0^x du \, \ln (1-u)/u$ is the
usual dilogarithm.
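For reference, a convenient numerical check of any implementation of
these expressions is the special value
\begin{equation*}
\Phi\left(\tfrac{1}{2}\right) = 4\, \text{Cl}_2\left(\tfrac{\pi}{2}\right)
= 4G \simeq 3.664 \,,
\end{equation*}
where $G$ is the Catalan constant.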
The numerical size of the individual contributions is
\begin{multline}
\frac{d_e^\text{2loop}}{e} = \frac{d_e^{t\gamma}}{e} +
\frac{d_e^{tZ}}{e} + \frac{d_e^{W\gamma}}{e} + \bigg( \frac{d_e^{WZ}}{e} +
\frac{d_e^{W}}{e} + \frac{d_e^{Z}}{e} \bigg) \\
= \text{Im}\,\kappa_e \times \big( - 6.44 - 0.12 + 13.85 - 2.22 \big)
\times 10^{-27} \text{cm} \,.
\end{multline}
Note that the Barr-Zee contributions involving $Z$ bosons are
suppressed by the small vector coupling of the $Z$ boson to leptons
proportional to $(1 - 4 s_w^2)$.
We checked explicitly that the corresponding contributions to the
anomalous magnetic moment of the electron can be obtained via
\begin{equation}\label{eq:translate}
\Delta a_e = \frac{(\text{Re}\,\kappa_e - 1)}{\text{Im}\,\kappa_e} 2
m_e \left( \frac{d_e}{e} \right) \,.
\end{equation}
The results of this appendix can also easily be adapted to obtain
expressions for flavor violating dipole transitions such as $\mu \to e
\gamma$ in the presence of flavor-violating Higgs couplings.
\section{Enhanced Higgs production through a loop hole?} \label{sec:loophole}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.3\textwidth]{figs/eeh.pdf}~~~~~~~~
\includegraphics[width=0.3\textwidth]{figs/eeZZh.pdf}
\end{center}
\caption{Sample one-loop Feynman diagrams which naively look like they
might give a SM $s$-channel Higgs production cross section that is not suppressed by
the small electron Yukawa. As shown in the text, chiral symmetry implies
that the amplitude for this process is suppressed by the electron mass for
on-shell external fermions. \label{fig:1loopeeh}}
\end{figure}
Naively one might expect that, in the SM, higher-order Feynman graphs
(for example Fig.~\ref{fig:1loopeeh}) could lead to enhanced
$s$-channel Higgs production, not suppressed by the small Yukawa
coupling. Here we show that this expectation is wrong. To see this
note that in the limit of vanishing electron Yukawa coupling (and
ignoring neutrino masses), the SM Lagrangian has an exact enhanced
(chiral) symmetry rotating the left- and right-handed components of
the electron field. Thus any non-vanishing amplitude of electrons
coupling to the Higgs must either be proportional to the electron
Yukawa coupling (as we want to show) or preserve chiral symmetry. But
any chiral symmetry preserving coupling of electrons has the electrons
combined into a vector which must be dotted into an electron momentum
to form a Lorentz invariant amplitude. Then the electron equation of
motion can be used to turn the momentum into the electron mass.
To make this argument more explicit, consider the amplitude for the
transition of an on-shell electron-positron pair into a (not
necessarily on-shell) Higgs boson (the argument for the reverse
process is very similar). Its most general Lorentz spinor structure is
of the form
\begin{equation}
\bar v_e({\mathbf p}',\sigma') [ A + B \gamma_5 + C_\mu \gamma^\mu +
D_\mu \gamma^\mu \gamma_5 + E_{\mu\nu} \sigma^{\mu\nu} ]
u_e({\mathbf p},\sigma) \,,
\end{equation}
where $A, B, C_\mu, D_\mu, E_{\mu\nu}$ are coefficients which must be
constructed out of the Lorentz invariants $p^2=p'^2=m_e^2$, $p\cdot
p'$ and the two independent Lorentz vectors $p_\mu$ and $p'_\mu$.
The amplitudes proportional to $A,B,E_{\mu\nu}$ violate chiral
symmetry, thus they can only be generated proportional to the electron
Yukawa coupling. The amplitudes with $C_\mu$ and $D_\mu$ preserve
chiral symmetry and need not be suppressed. However, by Lorentz
symmetry $C_\mu$ and $D_\mu$ must be proportional to either $p_\mu$ or
$p'_\mu$. Thus we obtain amplitudes of the form $\bar v_e({\mathbf
p'})\, \slashed{p} \,u_e({\mathbf p})$ and $\bar v_e({\mathbf p'})\,
\slashed{p'} \,u_e({\mathbf p})$ which are proportional to $m_e$ by
the equations of motion for on-shell external electrons. Therefore,
$s$-channel Higgs production in electron-positron collisions is always
suppressed by at least one power of $m_e$ (and possible loop factors).
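Explicitly, the equations of motion invoked above are the on-shell Dirac
equations,
\begin{equation*}
\slashed{p}\, u_e({\mathbf p},\sigma) = m_e\, u_e({\mathbf p},\sigma) \,,
\qquad
\bar v_e({\mathbf p}',\sigma')\, \slashed{p}' = - m_e\,
\bar v_e({\mathbf p}',\sigma') \,,
\end{equation*}
so that each chirality-preserving structure reduces to a term explicitly
proportional to $m_e$.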
This general argument based on spin and Lorentz invariance
continues to apply for amplitudes with additional {\it soft} photons
which cannot carry angular momentum. However, amplitudes with
additional hard photons need not be suppressed by the electron mass.
For example, the process $e^+e^- \rightarrow \gamma^* \rightarrow
\gamma\, h$ does arise in the SM and is not suppressed by the electron
mass. However, it is suppressed by the small loop-induced coupling of the Higgs to
photons and was therefore not relevant for Higgs production at LEP~II.
The presence of the hard photon in the final state would of course
allow experimenters to distinguish this process from the $s$-channel
Higgs production process in attempts to measure the Higgs-electron
coupling.
\end{appendix}
\newpage
\phantomsection
\addcontentsline{toc}{section}{References}
\section{Introduction and motivation}\label{sec:introduction}
\IEEEPARstart{T}{he} increasing demand for ubiquitous, reliable, fast, and scalable wireless services is pushing today's radio technology toward its ultimate limits.
In this context, it is natural to continue searching for more bandwidth, which in turn pushes the operation towards higher frequencies~\cite{RappaportAccess2019}. 5G is designed to operate in bands up to $71$\,GHz \cite{rel17}. \Ac{THz} communications in the band from 0.1 to 10 THz are considered a highly promising technology for 6G and beyond~\cite{RappaportAccess2019}. The use of high frequencies translates into higher path losses per antenna, which can be compensated for by antenna arrays. This combination undermines a fundamental assumption of multiple antenna communications: \emph{the wavefronts of radiated waves are locally planar over antenna arrays}~\cite{heath_lozano_2018}.
When an antenna radiates a wireless signal in free space, the wavefront of the electromagnetic waves has a different shape depending on the observation distance. Traditionally, two regions have been distinguished \cite{Selvan2017a}: the Fresnel and the far-field regions. Wireless communications have almost exclusively operated in the antenna (array) far field, which is conventionally characterized by propagation distances beyond the Fraunhofer distance. When arrays between $10$ cm and $1$ m are utilized, the typical communication ranges up to $100$ m are almost entirely in the Fresnel region when using a carrier frequency in the range $30$--$300\,\mathrm{GHz}$~\cite{LozanoMag2021}. Thus, the plane wave approximation does not hold anymore, and spherical wavefront propagation must be considered instead~\cite{LozanoMag2021}. This offers the opportunity for spatial-multiplexing in low-rank single-user \ac{MIMO} systems~\cite{bohagen09,Madhow2011} and for high-accuracy estimation of source position~\cite{Friedlander2019}. However, this line of research constitutes a minor fraction of the vast literature that relies on the plane-wave approximation.
\begin{figure}[t!]\vspace{-0.3cm}
\begin{center}
\begin{overpic}[width=\columnwidth]{figures/fig1}
\put(45,50){\footnotesize{$L_\mathsf{H}$}}
\put(81,25){{\footnotesize{$L_\mathsf{V}$}}}
\put(66,06.5){\footnotesize{$d_\mathsf{H}$}}
\put(71.5,06.0){\footnotesize{$\sqrt{A}$}}
\put(15,33.5){\rotatebox{+90}{\footnotesize{$d_\mathsf{V}$}}}
\put(15,41){\rotatebox{+90}{\footnotesize{$\sqrt{A}$}}}
\put(31,07){\footnotesize{$\mathbf{s}_k$}}
\put(25,02){\footnotesize{$\varphi_k$}}
\put(39,05){\footnotesize{$\theta_k$}}
\put(13,12){\footnotesize{$\mathbf{r}_1$}}
\put(82,49){\footnotesize{$\mathbf{r}_N$}}
\put(93,30){\footnotesize{$X$}}
\put(45,54){\footnotesize{$Y$}}
\put(19,05){\footnotesize{$Z$}}
\end{overpic}
\caption{Diagram of the 2D planar array located in the $XY$-plane.}
\label{fig1}
\end{center}\vspace{-0.7cm}
\end{figure}
Our objective is to show that, in the bands above $6$~GHz, the classical far-field approximation may profoundly underestimate the achievable performance of multi-user \ac{MIMO} communication systems equipped with planar arrays of practical size, i.e., on the order of half a meter. The underestimation is already large in the mmWave band around $30\,\mathrm{GHz}$, and is further exacerbated when higher frequencies are considered. Our numerical analysis also shows that, when the radiative near-field channel model is used, \ac{MMSE} combining vastly outperforms \ac{MR}, thanks to the sub-wavelength spatial resolution that largely increases its interference suppression capabilities. Particularly, \ac{MMSE} combining enables serving very many \acp{UE}; on the order of $1500$ \acp{UE}/km$^2$ per channel use (in line with 5G requirements \cite{imt2020}), while ensuring fairness across them (and thus significantly increasing the performance at the cell edge). This makes the combination of \ac{MMSE} combining and electrically large arrays a promising candidate to meet the stringent capacity requirements of next-generation networks.
\section{System and signal model}\label{sec:model}
We consider a planar array centered around the origin of the $XY$-plane, as shown in
\figurename~\ref{fig1}. The array consists of $N_\mathsf{V}$ horizontal rows and $N_\mathsf{H}$ antennas per row, for a total of $N=N_\mathsf{H}N_\mathsf{V}$
antennas. Each antenna has an area $A$
and the spacing is $d_\mathsf{H}$ and $d_\mathsf{V}$ along the horizontal and vertical directions, respectively. Thus, the horizontal and vertical lengths of the array are $L_\mathsf{H}= N_\mathsf{H}\sqrt{A}+\left(N_\mathsf{H}-1\right)d_\mathsf{H}$ and $L_\mathsf{V}= N_\mathsf{V}\sqrt{A}+\left(N_\mathsf{V}-1\right)d_\mathsf{V}$, respectively.
The antennas are numbered from left to right and from the bottom row to the top row so that
antenna $n$ is located at $\mathbf{r}_n=\left[x_n, y_n, 0\right]^\mathsf{T}$, where $x_n = \Delta_\mathsf{H}
\left(-\frac{N_\mathsf{H}-1}{2}
+ \textrm{mod}\left(n-1, N_\mathsf{H}\right)
\right)$ and $y_n = \Delta_\mathsf{V}
\left(-\frac{N_\mathsf{V}-1}{2} + \left\lfloor{(n-1)}/{N_\mathsf{H}}\right\rfloor
\right)$
with $\Delta_\mathsf{H}=\sqrt{A}+d_\mathsf{H}$ and
$\Delta_\mathsf{V}=\sqrt{A}+d_\mathsf{V}$.
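As an illustration of this indexing, antenna $n=1$ sits at the
bottom-left corner and antenna $n=N$ at the top-right corner of the
array, i.e.,
\begin{equation*}
\mathbf{r}_1 = \left[-\Delta_\mathsf{H}\tfrac{N_\mathsf{H}-1}{2},\,
-\Delta_\mathsf{V}\tfrac{N_\mathsf{V}-1}{2},\, 0\right]^\mathsf{T},
\qquad
\mathbf{r}_N = \left[+\Delta_\mathsf{H}\tfrac{N_\mathsf{H}-1}{2},\,
+\Delta_\mathsf{V}\tfrac{N_\mathsf{V}-1}{2},\, 0\right]^\mathsf{T},
\end{equation*}
consistent with the geometry in \figurename~\ref{fig1}.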
We assume that $K$ single-antenna \acp{UE} communicate with the planar array depicted in \figurename~\ref{fig1} and transmit signals with
polarization in the $Y$ direction when traveling in the $Z$ direction~\cite{bjornson2020}.{\footnote{{The analysis can be extended to other polarization dimensions (e.g., linear combination of $X$ and $Y$ polarizations).}}} \Ac{LoS} propagation is considered, as it becomes predominant when considering high frequencies (and hence shrinking the transmission range) \cite{LozanoMag2021}. We denote by $\mathbf{s}_k=\left[x_k, y_k, z_k\right]^\mathsf{T}$ the arbitrary location
for source $k$ so that
the signal impinges on the planar array with azimuth and elevation angles given by
$\varphi_k=\tan^{-1}({x_k}/{z_k})$ and
$\theta_k=\tan^{-1}({y_k}/{\sqrt{x_k^2+z_k^2}})$, respectively.
We let $\mathbf{h}_k=\left[h_{k1},\dots,h_{kN}\right]^\mathsf{T}\in\mathbb{C}^N$ denote the channel of
\ac{UE} $k$. In particular,
$h_{kn}=\left|h_{kn}\right|e^{-\mathsf{j}\phi_{kn}}$ is the channel from source $k$ to receive
antenna $n$, with $\left|h_{kn}\right|^2$ being the channel gain and $\phi_{kn}\in\left[0,2\pi\right)$
denoting the phase shift. In the remainder, perfect channel state information is assumed, since the channels $\left\{\mathbf{h}_k\right\}_{k=1}^{K}$ can be estimated arbitrarily well from pilot signals, thanks to the \ac{LoS} propagation.
\subsection{Channel model}\label{model:channel}
To model $\mathbf{h}_k$, we extend the prior work~\cite{bjornson2020}, which only considers the case $d_\mathsf{H} = d_\mathsf{V}= \sqrt{A}$. The extension to the case $d_\mathsf{H} \ne d_\mathsf{V}$ with $d_\mathsf{H},d_\mathsf{V} \ge \sqrt{A}$ is as follows.
\begin{figure*}[t!]\vspace{-0.9cm}
\setcounter{equation}{0}
\begin{align}\label{eq:channelGain}
\zeta_{kn} =
\frac{1}{12\pi} \sum_{i=0}^{1}{
\sum_{j=0}^{1}{ \frac{ g_i\left( x_{kn} \right) g_j\left( y_{kn} \right) \left|z_k\right| }
{\left( g_j^2\left( y_{kn} \right) + z_k^2 \right)
\sqrt{ g_i^2\left( x_{kn} \right) + g_j^2\left( y_{kn} \right) + z_k^2 } } }
}
+& \frac{1}{6\pi} \sum_{i=0}^{1}{
\sum_{j=0}^{1}{
\tan^{-1}
\left(
\frac{ g_i\left( x_{kn} \right) g_j\left( y_{kn} \right) }
{ \left|z_k\right|
\sqrt{ g_i^2\left( x_{kn} \right) + g_j^2\left( y_{kn} \right) + z_k^2 } }
\right) } }
\end{align}\vspace{-0.2cm}
\hrule
\end{figure*}
\begin{corollary}\label{lm:channelGain}
Consider a lossless isotropic antenna located at $\mathbf{s}_k$ that transmits a signal that has
polarization in the $Y$ direction when traveling in the $Z$ direction. The free-space channel gain
$\zeta_{kn}$ at the $n$th receive antenna, located at $\mathbf{r}_n$, is given by \eqref{eq:channelGain}, shown at the top of the page,
where
\setcounter{equation}{1}
\begin{align}
g_i\left( \alpha \right) \triangleq \sqrt{A}/2+\left(-1\right)^i \alpha
\end{align}
while $x_{kn} = x_k-x_n$ and $y_{kn} = y_k-y_n$.
\hfill$ \blacksquare$
\end{corollary}
From Corollary~\ref{lm:channelGain}, the following model is obtained.
\begin{corollary}[Exact model]\label{cor:nearField}
The channel entry $h_{kn}=\left|h_{kn}\right|e^{-\mathsf{j}\phi_{kn}}$ is obtained as
\begin{align}\label{eq:nearField_modulus}
\left|h_{kn}\right|&=\sqrt{\zeta_{kn}}\\
\label{eq:nearField_phase}
\phi_{kn}&=2\pi\,\textrm{mod}\left(\frac{\left\|\mathbf{d}_{kn}\right\|}{\lambda}, 1\right)
\end{align}
where $\zeta_{kn}$ is defined in
\eqref{eq:channelGain} and $\mathbf{d}_{kn}={\mathbf{s}_k}-{\mathbf{r}_n}$.
\hfill$ \blacksquare$
\end{corollary}
The above model provides a general expression for $h_{kn}$ that allows one to quantify its channel gain in the so-called
\emph{radiative near-field} of the array~\cite{dardari2020,bjornson2020}.\footnote{Throughout this
letter, we assume $\left\|\mathbf{d}_{kn}\right\|\gg\lambda$, so that
the system, although in the near-field region of the array, does not operate in the reactive near-field of the transmit antenna (see \cite{bjornson2020, dardari2020} for details).}
Since it captures the fundamental properties of wave propagation, we call it the \emph{exact model}. Notice that it is substantially different from the classical \emph{far-field model}, e.g.,~\cite{heath_lozano_2018}, that assumes locally planar wavefronts over arrays and is valid for distances beyond the Fraunhofer distance $d_F=2\left(L_\mathsf{H}^2+L_\mathsf{V}^2\right)/\lambda$ \cite[Eq.~(3)]{demir2021}.
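To give a concrete sense of the scales involved (using the array
dimensions adopted in the numerical results below,
$L_\mathsf{H}=0.5\,\mathrm{m}$ and $L_\mathsf{V}=1.0\,\mathrm{m}$),
\begin{equation*}
d_F = \frac{2\left(L_\mathsf{H}^2+L_\mathsf{V}^2\right)}{\lambda}
= \frac{2.5\,\mathrm{m}^2}{\lambda}
\simeq
\begin{cases}
50\,\mathrm{m} & f_0 = 6\,\mathrm{GHz} \\
233\,\mathrm{m} & f_0 = 28\,\mathrm{GHz} \\
2.5\,\mathrm{km} & f_0 = 300\,\mathrm{GHz} \,.
\end{cases}
\end{equation*}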
\begin{figure}[t]\vspace{-0.25cm}
\begin{center}
\includegraphics[width=\columnwidth]{figures/fig2}\vspace{-0.2cm}
\caption{Difference in the amplitude (left axis) and phase (right axis) between the exact model and the far-field approximation.}
\label{fig2}\vspace{-0.6cm}
\end{center}
\end{figure}
\begin{corollary}[Far-field approximation]\label{cor:farField}
If \ac{UE} $k$ is in the far-field region of the array, i.e.,
$d_k\cos\varphi_k\gg\max\left(L_\mathsf{H}, L_\mathsf{V}\right)$ with $d_k=\left\|\mathbf{s}_k\right\|$ denoting its distance from the array center, then
$h_{kn}\approx h_{kn}^{\rm FF}$ with $h_{kn}^{\rm FF} = \left|h_{kn}^{\rm FF}\right|e^{-\mathsf{j}\phi_{kn}^{\rm FF}}$ being modeled as
\begin{align}\label{eq:farField_modulus}
\left|h_{kn}^{\rm FF}\right|=\sqrt{
\frac{A\cos\varphi_k}
{4\pi d_k^2}}\\
\label{eq:farField_phase}
\phi_{kn}^{\rm FF}=\mathbf{k}^\mathsf{T}\left(\varphi_k, \theta_k\right)\mathbf{r}_n
\end{align}
where $\mathbf{k}\left(\varphi_k, \theta_k\right)=\frac{2\pi}{\lambda}
\left[\cos\theta_k\sin\varphi_k, \sin\theta_k, \right.$
$\left.\cos\theta_k\cos\varphi_k\right]^\mathsf{T}$
is the wave vector, e.g.,~\cite{bjornson2017}.
\hfill$ \blacksquare$
\end{corollary}
We notice that the propagation channel in \ac{MIMO} systems has been almost exclusively modeled as in Corollary~\ref{cor:farField}. For modern arrays of cellular networks\footnote{For instance, the Ericsson AIR 6419 product that contains $64$ antenna-integrated radios in a box that is roughly $1 \times 0.5\,\mathrm{m}^2$ \cite{air6419}.} of size $1 \times 0.5\,\mathrm{m}^2$, this is a justified assumption when sub-$6$~GHz bands are used. In this case, $d_F \le 50\,\mathrm{m}$ and, thus, most receivers are in the far-field of the transmitter.
The situation changes substantially in the frequency range $30$-$300\,\mathrm{GHz}$, in which $d_F \ge 250\,\mathrm{m}$, and typical operating distances are entirely below it. This implies that the far-field approximation cannot be used, and the exact propagation model derived in Corollary~\ref{lm:channelGain} must be considered instead. As is known, the radiative near-field can create both noticeable amplitude variations and phase variations over the wavefront. To measure the impact of such variations, \figurename~\ref{fig2} reports the results for $L_\mathsf{H}=0.5\,\mathrm{m}$, $L_\mathsf{V}=1.0\,\mathrm{m}$, $A=\left(\lambda/4\right)^2$, $d_\mathsf{H}=0.5\lambda$, and $d_\mathsf{V}=2\lambda$, considering a \ac{UE} located at $30\,\mathrm{m}$ from the \ac{BS}, which is elevated by $10\,\mathrm{m}$. Amplitude variations are reported with dotted lines (using the left axis), whereas phase variations are represented by the solid lines (using the right axis).
While the amplitude variations are negligible, the phase variations are significant, particularly when the carrier frequency increases.
Note that the model in Corollary~\ref{lm:channelGain} is also accurate in the far-field, thus there is no need to determine beforehand if the communication scenario is in the radiative near-field region or not. We conclude by noticing that the above discussion does not require the use of \emph{physically large arrays} (cf.~\cite{lu2021}), but holds true for commercially-sized arrays, e.g., in the order of half-a-meter wide and height. What matters is the size relative to the wavelength, the so-called \emph{electromagnetic size}.
\subsection{System model}\label{model:signal}
We consider the uplink.
The received signal is modeled as
$\mathbf{y}=\sum_{k=1}^{K}{\mathbf{h}_k s_k} + \mathbf{n}\in\mathbb{C}^N$,
where $s_k\sim\mathcal{N}_\mathbb{C}\left(0,p_k\right)$
is the data from \ac{UE} $k$ and
$\mathbf{n}\in\mathbb{C}^N$ is the thermal noise with i.i.d. elements distributed as $\mathcal{N}_\mathbb{C}\left(0,\sigma^2\right)$. To decode $s_k$, $\mathbf{y}$ is processed with the combining vector
$\mathbf{v}_k\in\mathbb{C}^N$. By treating the interference as noise, the \ac{SE} for \ac{UE} $k$ is
$\log_2\left(1+\gamma_k\right)$, where
\begin{align}\label{eq:sinr}
\gamma_k = \frac{p_k \left|\mathbf{v}_k^\mathsf{H} \mathbf{h}_k\right|^2}
{ \sum_{i\neq k}{p_i \left|\mathbf{v}_k^\mathsf{H} \mathbf{h}_i\right|^2} + \sigma^2\left\|\mathbf{v}_k\right\|^2}
\end{align}
is the \ac{SINR}. We consider both \ac{MR} and \ac{MMSE} combining. {MR has low computational complexity and maximizes the power of the desired signal, but neglects interference. \ac{MMSE} has higher complexity but it maximizes the SINR in \eqref{eq:sinr}. Other suboptimal schemes, e.g., zero-forcing, are not considered due to space limitations.} In the first
case, $\mathbf{v}_k^{\rm MR}=\mathbf{h}_k/\left\|\mathbf{h}_k\right\|$, while in the second case
\begin{align}\label{eq:mmse}
\mathbf{v}_k^{\rm MMSE} = \left( \sum_{i=1}^{K}{{p_i \mathbf{h}_i \mathbf{h}_i^\mathsf{H} }} + {\sigma^2}\mathbf{I}_N\right)^{-1} \mathbf{h}_k
\end{align}
with $\mathbf{I}_N$ being the identity matrix of order $N$.
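It is perhaps worth recalling the standard identity (a direct
consequence of the matrix inversion lemma) that the \ac{MMSE} combiner
in \eqref{eq:mmse} is, up to a scaling that does not affect the
\ac{SINR}, equivalent to
\begin{equation*}
\mathbf{v}_k^{\rm MMSE} \propto \bigg( \sum_{i\neq k}{p_i \mathbf{h}_i
\mathbf{h}_i^\mathsf{H}} + \sigma^2 \mathbf{I}_N \bigg)^{-1}
\mathbf{h}_k \,,
\end{equation*}
which makes its interference-rejection mechanism explicit: the desired
channel is whitened against the covariance of interference plus noise.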
The vast majority of \ac{MIMO} literature for high frequencies (e.g., in the mm-Wave frequency bands) relies on the far-field approximation in Corollary~\ref{cor:farField} and, instead of estimating $\mathbf{h}_k$ directly, estimates the three parameters $\{d_k, \theta_k, \varphi_k\}$. The latter are used to obtain estimates of $\{\mathbf{h}^{\rm FF}_k; k=1,\ldots,K\}$ through~\eqref{eq:farField_modulus} and~\eqref{eq:farField_phase}, which are eventually used to compute the combiner. If the communication scenario is in the radiative near-field region, then the system operates inevitably in a \emph{mismatched mode}, no matter how accurately $\{d_k, \theta_k, \varphi_k\}$ have been estimated. From the above discussion, it thus follows that the combining vectors $\left\{\mathbf{v}_k\right\}_{k=1}^{K}$ can in practice follow either the \emph{exact model} defined in Corollary~\ref{cor:nearField} or the far-field approximation defined in Corollary~\ref{cor:farField}. The aim of this letter is to quantify the impact of such inaccurate channel modeling.
\section{The impact of a mismatched design}\label{sec:oneUser}
We assume that the \ac{BS} is located at a height of $b=10\,\mathrm{m}$. We further assume the following parameters, in line with the form factor of current 5G arrays: $L_\mathsf{H}=0.5\,\mathrm{m}$, $L_\mathsf{V}=1.0\,\mathrm{m}$, $A=\left(\lambda/4\right)^2$, $d_\mathsf{H}=0.5\lambda$ and $d_\mathsf{V}=2\lambda$. The communication takes place over a bandwidth of $B=100\,\mathrm{MHz}$, with the total receiver noise power $\sigma^2=-87\,\mathrm{dBm}$. Each \ac{UE} transmits with power $p_k=20\,\mathrm{dBm}$ $\forall k$. We assume a carrier frequency of $f_0=28\,\mathrm{GHz}$ such that $\lambda=10.71\,\mathrm{mm}$, $N_\mathsf{H}=62$, and $N_\mathsf{V}=42$, to focus on a 5G hot-spot scenario. When relevant, throughout the letter, we also consider higher carrier frequencies that cover future use cases and scenarios.
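For completeness, the reported antenna counts follow from the aperture
expressions in Sec.~\ref{sec:model}: solving $L = N\sqrt{A}+(N-1)d$ for
$N$ and taking the largest integers compatible with the target aperture
(our reading of the setup),
\begin{equation*}
N_\mathsf{H} = \left\lfloor \frac{L_\mathsf{H}+d_\mathsf{H}}{\sqrt{A}+d_\mathsf{H}} \right\rfloor
= \left\lfloor \frac{0.5\,\mathrm{m}+0.5\lambda}{0.75\lambda} \right\rfloor = 62 \,, \qquad
N_\mathsf{V} = \left\lfloor \frac{L_\mathsf{V}+d_\mathsf{V}}{\sqrt{A}+d_\mathsf{V}} \right\rfloor
= \left\lfloor \frac{1.0\,\mathrm{m}+2\lambda}{2.25\lambda} \right\rfloor = 42 \,,
\end{equation*}
with $\lambda=10.71\,\mathrm{mm}$.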
\subsection{Channel gain}
We consider \ac{UE} 1 and assume that it is located along the $Z$ axis with coordinates
$\mathbf{s}_1=\left[0, -b, d\right]^\mathsf{T}$. In \figurename~\ref{fig3}, the dashed black line reports the normalized channel gain $\left|\mathbf{v}_1^\mathsf{H} \mathbf{h}_1\right|^2/\left\|\mathbf{v}_1\right\|^2$ achieved with the exact model, whereas the solid lines correspond to the channel gain measured with the combiners based on the far-field model. Particularly, the cyan and the blue lines refer to the cases $f_0=5\,\mathrm{GHz}$ and $f_0=28\,\mathrm{GHz}$, respectively, whereas the red and the green lines refer to $f_0=71\,\mathrm{GHz}$ and $f_0=300\,\mathrm{GHz}$, respectively.
Markers correspond to the Fraunhofer distances, obtained as
$d_F=2\left(L_\mathsf{H}^2+L_\mathsf{V}^2\right)/\lambda$ \cite[Eq.~(3)]{demir2021}. For $f_0=300\,\mathrm{GHz}$, $d_F=2.5\,\mathrm{km}$ falls outside the selected range.
{We see that, thanks to the large values of $N$, the channel gain with the exact model depends very weakly on the carrier frequency (a difference of at most $0.3\,\mathrm{dB}$ between $5$ and $300$ GHz at the local maximum of the curve), and we thus only report one line, for clarity}. The same channel gain is achieved with the mismatched model only for sub-$6\,\mathrm{GHz}$ frequencies, irrespective of the distance. This validates the accuracy of the far-field approximation for such frequency bands. On the contrary, if higher frequencies are considered, then large differences are observed for transmission ranges below the Fraunhofer distance. This is a direct consequence of the inaccuracy of the far-field approximation in the Fresnel region. The gap increases as $f_0$ increases. Interestingly, we observe that, for transmission ranges of practical interest (up to a hundred meters), it is significant already for $f_0 \ge 71\,$GHz.
{Finally, note that the normalized channel gain is not monotonically decreasing as the distance from the \ac{BS} increases, but shows a local maximum. This is due to the specific choice of the \ac{BS} height $b$: when the \ac{UE} is too close to the \ac{BS}, the smaller path loss is overwhelmed by the loss due to the array directivity at larger elevations. Thus, the local maximum increases as $b$ increases.}
\begin{figure}[t]\vspace{-0.25cm}
\begin{center}
\begin{overpic}[width=1.1\columnwidth]{figures/fig3}
\put(74.5,27.5){\footnotesize{$d_F$}}
\put(73.5,27){\vector(-2,-1){7}}
\put(78.0,26){\vector(1,-3){3.5}}
\put(72.5,28){\vector(-3,1){34}}
\end{overpic}
\caption{Normalized channel gain $\left|\mathbf{v}_1^\mathsf{H} \mathbf{h}_1\right|^2/\left\|\mathbf{v}_1\right\|^2$ as a function of the distance of \acs{UE} $1$ along $Z$ axis.}
\label{fig3}
\end{center}\vspace{-0.6cm}
\end{figure}
\subsection{Interference gain}\label{sec:twoUsers}
We now analyze the normalized interference gain $\left|\mathbf{v}_1^\mathsf{H} \mathbf{h}_2\right|^2/\left\|\mathbf{v}_1\right\|^2$ with the exact and mismatched models. We assume \ac{UE} $1$ is placed at a fixed position along the $Z$ axis $\mathbf{s}_1=\left[0, -10\,\mathrm{m}, +20\,\mathrm{m}\right]^\mathsf{T}$, whereas the interfering \ac{UE} $2$ is transmitting from different locations $\mathbf{s}_2=\left[x_2, -10\,\mathrm{m}, z_2\right]^\mathsf{T}$
over the $XZ$-plane (i.e., at the same height as \ac{UE} $1$). We assume $f_0=28\,\mathrm{GHz}$ and consider both \ac{MR} and \ac{MMSE}. Note that, at $f_0=28\,\mathrm{GHz}$, $d_F\approxeq233.3\,\mathrm{m}$, and hence both \acp{UE} are in the near-field region.
\figurename~\ref{fig4} reports the normalized interference gain with the exact model. \ac{MMSE} combining is used in \figurename~\ref{fig4}\subref{fig4a}, whereas \ac{MR} is considered in \figurename~\ref{fig4}\subref{fig4b}. Each figure contains a magnification around \ac{UE} $1$'s location, in which the relative distance of \ac{UE} $2$ from \ac{UE} $1$ is measured in wavelengths (for a total span of around $1.07\times1.07\,\mathrm{m}^2$). \figurename~\ref{fig4}\subref{fig4a} shows that the interference with \ac{MMSE} is high only in a small region around \ac{UE} $1$, whose semi-axes (along both directions) are fractions of the wavelength. This means that \ac{MMSE} can efficiently reject any interfering signal that comes from a location that is at least a few centimeters away. On the contrary, \figurename~\ref{fig4}\subref{fig4b} shows that \ac{MR} experiences high interference from locations that are either along the $Z$ direction or along a semi-circle with radius equal to the distance of \ac{UE} $1$.
\begin{figure}[t]\vspace{-0.7cm}
\begin{center}
\subfigure[\acs{MMSE} combining.]{
\begin{overpic}[width=1.1\columnwidth]{figures/fig4a}
\put(56,34){\scriptsize{\textcolor{white}{\ac{UE} $1$'s position}}}
\put(55,07){\frame{\includegraphics[scale=.16]{figures/fig4a_zoom}}}
\put(59.5,33){\color{white}\vector(+1,-2){8}}
\put(55.7,33){\color{white}\vector(-1,-1){6}}
\end{overpic}
\label{fig4a}}\vspace{-0.2cm}
\\
\subfigure[\acs{MR} combining.]{
\begin{overpic}[width=1.1\columnwidth]{figures/fig4b}
\put(56,34){\scriptsize{\textcolor{white}{\ac{UE} $1$'s position}}}
\put(55,07){\frame{\includegraphics[scale=.16]{figures/fig4b_zoom}}}
\put(59.5,33){\color{white}\vector(+1,-2){8}}
\put(55.7,33){\color{white}\vector(-1,-1){6}}
\end{overpic}
\label{fig4b}}
\caption{Interference gain behavior using the \emph{exact} model for a fixed \acs{UE} as a function of different locations of an interfering \acs{UE}.}
\label{fig4}
\end{center}\vspace{-0.6cm}
\end{figure}
\begin{figure}[t]\vspace{-0.7cm}
\begin{center}
\subfigure[\acs{MMSE} combining.]{
\begin{overpic}[width=1.1\columnwidth]{figures/fig5a}
\put(56,34){\scriptsize{\textcolor{white}{\ac{UE} $1$'s position}}}
\put(55,07){\frame{\includegraphics[scale=.16]{figures/fig5a_zoom}}}
\put(59.5,33){\color{white}\vector(+1,-2){8}}
\put(55.7,33){\color{white}\vector(-1,-1){6}}
\end{overpic}
\label{fig5a}}\vspace{-0.2cm}
\\
\subfigure[\acs{MR} combining.]{
\begin{overpic}[width=1.1\columnwidth]{figures/fig5b}
\put(56,34){\scriptsize{\textcolor{white}{\ac{UE} $1$'s position}}}
\put(55,07){\frame{\includegraphics[scale=.16]{figures/fig5b_zoom}}}
\put(59.5,33){\color{white}\vector(+1,-2){8}}
\put(55.7,33){\color{white}\vector(-1,-1){6}}
\end{overpic}
\label{fig5b}}
\caption{Interference gain behavior using the \emph{mismatched} model for a fixed \acs{UE} as a function of different locations of an interfering \acs{UE}.}
\label{fig5}
\end{center}\vspace{-0.6cm}
\end{figure}
\figurename~\ref{fig5} plots the results obtained with the mismatched model. Unlike \figurename~\ref{fig4}, we now see that the interference gain with the two combining strategies exhibits a similar behavior. The impact of the mismatched model is particularly evident with the \ac{MMSE} combiner. In particular, we observe that the lower values of the interference gain are at least three orders of magnitude higher than those in \figurename~\ref{fig4} (note that the same colorbar scale is used in all figures, including the magnifications). The conclusion is that \ac{MMSE} can suppress interference much more efficiently when used with the exact model, whereas \ac{MR} is greatly suboptimal in both cases.
\subsection{Spectral efficiency analysis}\label{sec:multipleUsers}
We now evaluate the \ac{SE} of a single-cell network, assuming that $K$ \acp{UE} are randomly deployed in the sector $\left[-\pi/3, +\pi/3\right)$ with a minimum distance of $15\,\mathrm{m}$ from the \ac{BS}.
\figurename~\ref{fig6} reports the \ac{CDF} of the \ac{SE} achieved by \ac{MMSE} and \ac{MR} when the carrier frequency is $f_0=5\,\mathrm{GHz}$, $28\,\mathrm{GHz}$, and $71\,\mathrm{GHz}$. The number of \acp{UE} is $K=100$, and the cell radius is $R=230\,\mathrm{m}$, which corresponds approximately to the Fraunhofer distance at $28\,\mathrm{GHz}$, and is in line with current 5G cell sizes. The dashed and solid lines refer to the results obtained with the exact and mismatched models, respectively. Let us focus on the results of \figurename~\ref{fig6}\subref{fig6a}, obtained with the \ac{MMSE} combiner. For all frequencies, the performance using the exact model to build the combiner is much better than the one obtained with the mismatched model. However, increasing the carrier frequency has a two-fold beneficial impact on the \ac{SE}. On the one hand, the average \ac{SE} increases as $f_0$ increases. On the other hand, the \ac{CDF} with the exact model exhibits a steeper behavior, meaning that more fairness across the \ac{UE} positions is guaranteed. The conclusion is that the use of the exact model dramatically improves the interference suppression capabilities of \ac{MMSE} combining. This is not the case with \ac{MR} combining. Indeed, \figurename~\ref{fig6}\subref{fig6b} shows that only marginal differences exist between exact and mismatched models, including the fairness properties.
A similar conclusion can be drawn in \figurename~\ref{fig7}\subref{fig7a}, which collects the \acp{CDF} for the \ac{SE} achieved by \ac{MMSE} and \ac{MR} combiners for a variable number of \acp{UE} ($K=50$, $100$, and $150$) when $f_0=28\,\mathrm{GHz}$ and $R=230\,\mathrm{m}$. We notice that the results are only marginally affected by the number of \acp{UE} when the exact model is used to build the combiner (dashed lines). This is not true for the combiners based on the mismatched model (solid lines) and/or based on \ac{MR} combining (not reported here for the sake of brevity, as, similarly to \figurename~\ref{fig6}\subref{fig6b}, both models provide very similar results when using \acs{MR} combining). The same conclusion applies on the fairness performance, which is guaranteed by the usage of \ac{MMSE} combining based on the exact model.
\begin{figure}[t]\vspace{-0.7cm}
\begin{center}
\subfigure[\acs{MMSE} combining.]{
{\includegraphics[width=\columnwidth]{figures/fig6a}}
\label{fig6a}}\vspace{-0.3cm}
\\
\subfigure[\acs{MR} combining.]{
{\includegraphics[width=\columnwidth]{figures/fig6b}}
\label{fig6b}}
\caption{\acs{CDF} of the \acs{SE} as a function of the carrier frequency when $K=100$ and $R = 230$\,m, corresponding to the Fraunhofer distance at $28\,\mathrm{GHz}$.}
\label{fig6}
\end{center}\vspace{-0.5cm}
\end{figure}
To further quantify the benefits brought by \ac{MMSE} combining with the exact model, \figurename~\ref{fig7}\subref{fig7b} provides the average \ac{SE} per \ac{UE} as a function of $K$. The cell sector radius $R$ is $230\,\mathrm{m}$, and the carrier frequency is $f_0=28\,\mathrm{GHz}$. As can be seen, the gap between the exact and mismatched models is very significant. Moreover, the \ac{SE} remains nearly flat as $K$ increases up to $K=100$, which translates into a density of around $1500$ devices/km$^2$ per channel use, as required in 5G \cite{imt2020}. Note that the excellent interference suppression capabilities allow \ac{MMSE} based on the exact model to maintain its gap over the other schemes when $K$ reaches very large values. The same trend is confirmed when measuring the average \ac{SE} per \ac{UE} as a function of $R$ (not reported for the sake of brevity): the performance gap using current 5G frequencies is very significant for $R\le50\,\mathrm{m}$, and even more significant when considering sub-THz frequencies and smaller cell sizes.
\section{Conclusion}\label{sec:conclusion}
{The main conclusion of this letter is that it is time for multi-user \ac{MIMO} communication theorists to abandon the far-field approximation when considering carrier frequencies above $6$~GHz. We instead need to consider more complicated channel models that capture the radiative near-field characteristics, in particular concerning the spherical phase variations. This also affects beamforming codebooks.} We showed that interference-aware combining schemes based on the radiative near-field model can effectively exploit the extra degrees of freedom offered by the propagation channel to deal with interference so as to enhance the scalability (in terms of number of \acp{UE}) and fairness of the system. This applies already to 5G multi-user \ac{MIMO} communications above $6$\,GHz (e.g., in the range of mmWave bands), and will become even more critical in beyond-5G communications operating in the sub-THz spectrum.
\begin{figure}[t]\vspace{-0.7cm}
\begin{center}
\subfigure[\acs{CDF} of the \acs{SE} with \acs{MMSE} combining for different values of $K$.]{
{\includegraphics[width=\columnwidth]{figures/fig7a}}
\label{fig7a}}\vspace{-0.3cm}
\\
\subfigure[\acs{SE} per \acs{UE} of both \acs{MMSE} and \acs{MR} as a function of $K$.]{
{\includegraphics[width=\columnwidth]{figures/fig7b}}
\label{fig7b}}
\caption{Impact of the number of active \acp{UE} on the \acs{SE} performance.}
\label{fig7}
\end{center}\vspace{-0.6cm}
\end{figure}
\vspace{-0.4cm}
\bibliographystyle{IEEEtran}
\subsection*{Introduction}
This paper presents a proof of the following theorem:
\bte
\label{main}
\footnote
{\rm\ It follows from an e-mail discussion between G.~Hjorth
and the author in May -- July 1995 that G.~Hjorth may have proved
an equal or similar theorem independently.}
\ Let\/ $\mathbin{\relf{E}}$ be a\/ $\fs11$
equivalence on reals. Assume that\its
\begin{itemize}
\item[$(\dag)$] each real belongs to a ``virtual''
generic extension~\footnote
{\rm\ By a generic extension of some $M$ we always mean
a set generic extension via a forcing notion $P\in M.$
Here the extensions could be different for different reals.}
of the constructible universe~$\rbox{L}.$\its
\end{itemize}
Then at least one~\footnote
{\rm\ If all reals are constructible from one of them then
the statements are compatible.}
of the following two statements holds$:$\its
\begin{enumerate}
\def\labelenumi{{\rm\hskip2pt(\Roman{enumi})\hskip2pt}}
\def\theenumi{\labelenumi}
\itla{1} \hspace{-1\mathsurround}
$\mathbin{\relf{E}}$ admits a\/ $\fdh1$ reduction~\footnote
{\rm\ By $\fdh1$ we denote the class of all subsets of $\rbox{HC}$
(the family of all hereditarily countable sets) which are
$\id{}1$ in $\rbox{HC}$ by formulas which may contain
\underline{reals and countable ordinals} as parameters.}
to the equality on the set\/ $2^{<\om_1}$ of all countable
binary sequences$.$\its
\itla{2} \hspace{-1\mathsurround} $\mathbin{\relf{E}_0}\sqq\mathbin{\relf{E}}$ continuously$.$
\end{enumerate}
\ete
\subsubsection*{Remarks on the theorem}
By a {\it ``virtual'' generic extension of\/ $\rbox{L}$\/} we mean a
set generic extension, say, $\rbox{L}[G],$ which is not necessarily
an inner class in the basic universe $\rbox{V}$ (in other words,
$G\in\rbox{V}$ is not assumed).~\footnote
{\rm\ The assumption that a set $S\sq\ord$ belongs to a
``virtual'' set generic extension of $\rbox{L}$ can be adequately
formalized as follows: {\it there exists a Boolean valued extension
of $\rbox{L}[S]$ in which it is true that the universe is a set generic
extension of the constructible universe\/}, see Lemma~\ref{44}
below.}
Notice that the assumption $(\dag)$ of the theorem follows, e.\,g.,
from the hypothesis that the universe is a set generic extension
of $\rbox{L}.$ In fact the theorem remains true under the weaker
assumption that each real $x$ belongs to a ``virtual'' generic
extension of $\rbox{L}[z_0]$ for one and the same real $z_0,$ which
does not depend on $x$.
We refer the reader to Harrington, Kechris, and Louveau~\cite{hkl}
on matters of the early history of ``Glimm -- Effros'' theorems ---
those of type: {\it each equivalence of certain class either
admits a reduction to equality or embeds\/ $\mathbin{\relf{E}_0}$} --- and relevant
problems in probability and measure theory. (However
Section~\ref{ulm} contains the basic notation.)
The modern history of the topic began in the
paper \cite{hkl} where it is proved that each Borel equivalence
on reals either admits a Borel reduction to the equality on reals
or embeds $\mathbin{\relf{E}_0}.$ The proof is based on an advanced tool in
descriptive set theory, the
{\it Gandy -- Harrington topology\/} on reals, generated by
$\is11$ sets.
Hjorth and Kechris~\cite{hk} found that the case of $\fs11$
relations is much more complicated. Some examples have shown
that one cannot find a reasonable ``Glimm -- Effros'' result for
$\fs11$ relations by simply taking a nonBorel reduction in \ref{1}
or a discontinuous embedding in \ref{2}; it seems that the equality
on {\it reals\/} rather than on countable binary sequences in \ref{1}
does not completely match the nature of $\fs11$ relations.
Hjorth and Kechris \cite{hk} suggested the adequate approach: one
has to take $2^{<\om_1}$ as the domain of the equality in
\ref{1}. (This approach is referred to as the {\it Ulm -- type
classification\/} in \cite{hk}, in connection with a
classification theorem of Ulm in algebra.)
Along these lines they proved that the dichotomy \ref{1} vs. \ref{2}
holds for each $\fs11$ equivalence relation on reals, in the
assumption of the ``sharps'' hypothesis (and the latter can be
dropped provided the $\fs11$ relation happens to have only Borel
equivalence classes).
Theorem~\ref{main} of this paper establishes the same result (not
paying attention to the possible compatibility of \ref{1} and
\ref{2}) in an assumption completely different from sharps: each
real belongs to a generic extension of $\rbox{L}.$ Of course the
principal problem (we may refer to the list of open problems
in~\cite{hk}) is to eliminate the ``forcing'' assumption and prove
the result in $\ZFC$.
One faces many more problems in higher projective classes. In
fact there exists a sort of upper bound for ``Glimm -- Effros''
theorems in $\ZFC.$ Indeed, in a nonwellfounded (of ``length''
$\om_1\times\mathord{{\sf Z}\hspace{-4.5pt}{\sf Z}},$ i. e. $\om_1$ successive copies of the integers)
iterated Sacks extension~\footnote
{\rm\ See Groszek~\cite{g94} or Kanovei~\cite{k-sacks} on matters
of nonwellfounded Sacks iterations.}
of $\rbox{L}$ the $\is12$ equivalence
$$
x\mathbin{\relf{E}} y\hspace{6mm}\hbox{iff}\hspace{6mm}\rbox{L}[x]=\rbox{L}[y]
$$
neither continuously embeds $\mathbin{\relf{E}_0}$ nor admits a real--ordinal
definable reduction to the equality on ${\skri P}(\kappa)$ for any
cardinal $\kappa$.
Thus interest centers on the classes $\fp11,$ $\fd12,$ and
$\fp12.$ One may expect that $\fd12$ relations admit a theorem
similar to Theorem~\ref{main}.~\footnote
{\rm\
G.~Hjorth informed the author that he had partial results in this
domain.}
More complicated relations can be investigated in strong
extensions of $\ZFC$ or in special models. Hjorth~\cite{h-det}
proved that in the assumption of ${\bf AD}$ and
$\rbox{V}=\rbox{L}[{\rm reals}]$ every equivalence on reals either admits
a reduction (here obviously a real--ordinal definable reduction)
to the equality on a set $2^\kappa,$ $\kappa\in\ord,$ or
continuously embeds $\mathbin{\relf{E}_0}.$ Kanovei~\cite{k-sm} proved even
a stronger result (reduction to the equality on $2^{<\om_1}$) in
Solovay model for $\ZF+\DC$.
\subsubsection*{The organization of the proof}
Theorem~\ref{main} is the main result of this paper. The proof is
arranged as follows.
First of all, we shall consider only the case when $\mathbin{\relf{E}}$ is a
lightface $\is11$ relation; if in fact $\mathbin{\relf{E}}$ is $\is11(z)$ in some
$z\in{\skri N}$ then this $z$ simply enters the reasoning in a uniform
way, without substantially influencing any of the arguments.
The splitting point between the statements \ref{1} and \ref{2}
of Theorem~\ref{main} is determined in Section~\ref{ulm}.
It occurs that we have \ref{1} in the assumption that
\begin{itemize}
\item[$(\ddag)$] each real $x$ belongs to a ``virtual'' \dd\la
collapsing generic extension of $\rbox{L}$ (for some ordinal $\la$)
in which $\mathbin{\relf{E}}$ is closed in a
topology generated by $\rbox{OD}$ sets on the set ${\cD\cap{\sf Weak}_\la(\rbox{L})}$
of all reals \dd\la weak over $\rbox{L}.$ (We say that $x\in \cD$ is
{\it\dd\la weak over\/} $\rbox{L}$ iff it belongs to a \dd\al
collapsing extension of $\rbox{L}$ for some $\al<\la$.)
\end{itemize}
On the opposite side, we have \ref{2} provided the assumption
$(\ddag)$ fails.
Both sides of the proof depend on properties of reals in collapsing
extensions close to those of Solovay model. The facts we need are
reviewed in Section~\ref{clos}.
Section~\ref{ssif} proves assertion \ref{1} of Theorem~\ref{main}
assuming $(\ddag).$ The principal idea resembles the
corresponding parts of \cite{hkl} and especially
\cite{hk}~\footnote
{\rm\ Yet we use a technique different from the approach of
\cite{hk}, completely avoiding any use of recursion theory.}
: in the assumption
of $(\ddag),$ each \dd\la weak over $\rbox{L}$ real in the relevant
``virtual'' \dd\la collapsing extension belongs to a set (one and
the same
for all \dd\mathbin{\relf{E}} equivalent reals) which admits a characterization
in terms of an element of $2^{<\om_1}.$ An absoluteness argument
allows us to extend this fact to the universe of Theorem~\ref{main}.
Sections \ref{prod} and \ref{or} prove \ref{2} of
Theorem~\ref{main} in the assumption that $(\ddag)$ {\it fails\/}
(but $(\dag)$ still holds, as Theorem~\ref{main} assumes). In fact
in this case $\mathbin{\relf{E}}$ is {\it not\/} closed on the set
$\cD\cap{\sf Weak}_\la(\rbox{L})$ in a ``virtual'' \dd\la collapsing extension
of $\rbox{L}$ for some $\la.$ This suffices to see that $\mathbin{\relf{E}}$ embeds
$\mathbin{\relf{E}_0}$ continuously in the ``virtual'' universe; moreover, $\mathbin{\relf{E}}$
embeds $\mathbin{\relf{E}_0}$ in a certain special sense which can be expressed by
a $\is12$ formula (unlike the existence of an embedding in general
which needs $\is13$). We conclude that $\mathbin{\relf{E}}$ embeds $\mathbin{\relf{E}_0}$ in the
universe of Theorem~\ref{main} as well by Shoenfield.
The construction of the embedding of $\mathbin{\relf{E}_0}$ into $\mathbin{\relf{E}}$ follows the
principal idea of Harrington, Kechris, and Louveau~\cite{hkl}, yet
associated with another topology and arranged in a different way.
(In particular we do not play the strong Choquet game to define
the necessary sequence of open sets.) \vspace{4mm}
\noi
{\bf Important remark} \\[1mm]
It will be more convenient to consider $\cD=2^\om,$ the {\em
Cantor space\/}, rather than ${\skri N}=\om^\om,$ as the basic Polish
space for which Theorem~\ref{main} is being proved.
\newpage
\subsection{Approach to the proof of the main theorem}
\label{ulm}
First of all, we shall prove only the ``lightface'' case of the
theorem, so that $\mathbin{\relf{E}}$ will be supposed to be a $\is11$ equivalence
on reals. The case when $\mathbin{\relf{E}}$ is $\is11[z]$ for a real $z$ does not
differ much: the $z$ uniformly enters the reasoning.
By ``reals'' we shall understand points of the
{\it Cantor set\/} $\cD=2^\om$ rather than the {\it Baire space\/}
${\skri N}=\om^\om;$ this choice is dictated by technical reasons.
The purpose of this section is to describe how the two cases of
Theorem~\ref{main} will appear. This needs to recall some
definitions.
\subsubsection{Collapsing extensions}
\label{ce}
Let $\al$ be an ordinal. Then $\col\al$ is the forcing to collapse
$\al$ down to $\om.$ If $G\sq\col\al$ is \dd{\col\al}generic over
a transitive model $M$ ($M$ is a set or a class) then $f=\bigcup G$
is a function from $\om$ onto $\al,$ so that $\al$ is countable in
$M[G]=M[f].$ Functions $f:\om\,\lra\,\al$ obtained this way will be
called \dd{\col\al}{\em generic over\/ $M$.}
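For definiteness, we recall the standard presentation of this
forcing, which the notation used below ($u\in\col\al,$ $f\res m,$
$u\sq v,$ etc.) tacitly assumes:
$$
\col\al\;=\;\al^{<\om}\;=\;
\ans{u:u\,\hbox{ is a finite sequence of ordinals }<\al}\,,
$$
ordered so that $u\<v$ ($v$ is a stronger condition) iff $u\sq v.$
The sets $\ans{u:n\in\rbox{dom}\,u}$ $(n\in\om)$ and
$\ans{u:\ba\in\rbox{ran}\,u}$ $(\ba<\al)$ are dense, which is why
$f=\bigcup G$ is a function defined on the whole of $\om$ and
onto $\al$.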
By {\em \dd\la collapse universe hypothesis\/}, \cuh\la{} in
brief, we shall mean the following assumption: $\rbox{V}=\rbox{L}[f_0]$
for a \dd{\col\la}generic over $\rbox{L}$ collapse function
$f_0\in\la^\om$.
By the assumption of Theorem~\ref{main}, each real $z$ belongs to
a ``virtual'' \dd{\col\la}generic extension of $\rbox{L},$ the
constructible universe, for some ordinal $\la.$ Such an extension
satisfies \cuh\la.
\begin{remark}\rmt\
\label{rr}
The extension is not necessarily supposed to be an inner class in
the universe of Theorem~\ref{main}, see Introduction.\qed
\erem
A set is \dd\la{\em weak over $M$} ($\la$ an ordinal in a model
$M$) iff it belongs to a ``virtual'' \hbox{\dd{\col\al}generic}
extension of $M$ for some $\al<\la.$
We define
$$
{\sf Weak}_\la(M)=\ans{x:x\,\hbox{ is \dd\la weak over }\,M}\,.
$$
In the assumption \cuh\la{}, reals in ${\sf Weak}_\la(\rbox{L})$ behave
approximately like all reals in Solovay model.
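For orientation, two elementary examples (not used later). First,
every constructible real is \dd\la weak over $\rbox{L}:$ for $\al=0$
the only \dd{\col\al}generic extension of $\rbox{L}$ is $\rbox{L}$ itself.
Second, assuming \cuh\la{} for a limit cardinal $\la$ of $\rbox{L}$
(as will be the case from Section~\ref{clos} on), the collapse
function $f_0$ is {\it not\/} \dd\la weak over $\rbox{L}:$ a forcing
of \dd\rbox{L} cardinality smaller than $\la$ preserves the cardinal
$\la$ by a chain condition argument, so $\la$ remains uncountable
in every \dd{\col\al}generic extension of $\rbox{L}$ with $\al<\la,$
whereas $\rbox{L}[f_0]$ sees $\la$ countable.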
\subsubsection{The $\protect\rbox{OD}$ topology}
\label{top}
In $\ZFC,$ let ${\cal T}$ be the topology generated on a given set
$X$ (for instance, $X=\cD=2^\om,$ the Cantor set) by all $\rbox{OD}$
subsets of $X.$ ${\cal T}^2$ is the product of two copies of ${\cal T},$
a topology on $\cD^2$.
This topology plays the same role in our consideration as the
Gandy -- Harrington topology in the proof of the classical
Glimm -- Effros theorem (for Borel relations) in
Harrington, Kechris, and Louveau~\cite{hkl}. In particular, it
has similar (although not completely similar: some special
\dd{\is11}details vanish) properties.
We define $\mathbin{\overline{\relf{E}}}$ to be the \dd{{\cal T}^2}closure of $\mathbin{\relf{E}}$ in $\cD^2.$
Thus ${x\mathbin{\not{\hspace{-2pt}\overline{\relf{E}}}} y}$ iff there exist $\rbox{OD}$ sets $X$ and $Y$ containing
resp. $x$ and $y$ and such that ${x'\mathbin{\not{\hspace{-2pt}\relf{E}}} y'}$ for all ${x'\in X,}$
${y'\in Y}.$ Obviously $X$ and $Y$ can be chosen as \dd\mathbin{\relf{E}}
invariant (simply replace them by their \dd\mathbin{\relf{E}} saturations), and
then $Y$ can be replaced by the complement of $X,$ so that
$$
x\mathbin{\overline{\relf{E}}} y\;\;\llra\;\;\forall\,X\;
[\,X\hbox{ is }\rbox{OD} \cj X\hbox{ is \dd\mathbin{\relf{E}} invariant}\;\,
\lra\;\,(x\in X\;\llra\;y\in X)\,]\,.
$$
Therefore $\mathbin{\overline{\relf{E}}}$ is an $\rbox{OD}$ equivalence on $\cD$ (the right-hand
side of this criterion is clearly reflexive, symmetric, and
transitive).
\subsubsection{The cases}
\label{cases}
In \cite{hkl}, the two cases are determined by the equality
$\mathbin{\relf{E}}=\mathbin{\overline{\relf{E}}}:$ if it holds then $\mathbin{\relf{E}}$ admits a Borel reduction to
$\Da(\cD),$ otherwise $\mathbin{\relf{E}}$ embeds $\mathbin{\relf{E}_0}.$ Here the splitting
condition is a little bit more complicated. First of all, we
have to consider the equality in different universes. Second,
the essential domain of the equivalence is now a proper subset of
$\cD,$ the set of all weak reals.\vspace{2mm}
\noi
{\bf Case\ 1.}\ \ For each real $z,$ there exist an ordinal
$\la$ and a ``virtual'' \dd{\col\la}generic extension $V$ of the
constructible universe $\rbox{L}$ containing $z$ such that the
following is true in $V:$ $\mathbin{\relf{E}}$ coincides with $\mathbin{\overline{\relf{E}}}$ on
$\cD\cap{\sf Weak}_\la(\rbox{L})$ and $z$ is \dd\la weak over $\rbox{L}$.\vspace{2mm}
(Notice that, for a $\is11$ binary relation $\mathbin{\relf{E}},$ the assertion
that $\mathbin{\relf{E}}$ is an equivalence is $\ip12,$ therefore absolute for
all models with the same ordinals, in particular for $\rbox{L}$ and
all generic extensions of $\rbox{L}$.)\vspace{2mm}
\noi
{\bf Case\ 2.}\ \ Not Case 1.
\bte
\label{mt}
Suppose that each real belongs to a ``virtual'' generic extension
of\/ $\rbox{L}.$ Then, for the given\/ $\is11$ equivalence relation\/
$\mathbin{\relf{E}},$ we have\/ \its
\begin{itemize}
\item[--] assertion
\ref{1} of Theorem~\ref{main} in Case 1,\hfill and\hfill\its
\item[--] assertion \ref{2} of Theorem~\ref{main} in Case 2.
\end{itemize}
\ete
This is how Theorem~\ref{main} will be proved.
\newpage
\newcommand{{\underline S}}{{\underline S}}
\subsection{On collapsing extensions}
\label{clos}
In this section, we fix a limit constructible cardinal $\la.$ The
purpose is to establish some properties of \dd\la collapsing
generic extensions (= the universe under the hypothesis
\hbox{\cuh\la}). It will be shown that weak points (introduced in
Section~\ref{ulm}) behave approximately like all reals in
Solovay model.
\subsubsection{Basic properties}
\label{bp}
We recall that a set $S$ is \dd\la{\em weak over $M$} iff $S$
belongs to an \dd{\col\al}generic extension of the model $M$
for some $\al<\la$.
The hypothesis \cuh\la{} (the one which postulates that the
universe is a \dd\la generic extension of $\rbox{L}$) will be assumed
throughout the reasoning, but we shall nevertheless specify \cuh\la{}
in all formulations of theorems.
\bpro
\label{col}
Assume\/ \cuh\la. Let\/ $S\sq \ord$ be\/ \dd\la weak over\/ $\rbox{L}.$
Then\its
\begin{enumerate}
\def\labelenumi{{\arabic{enumi}}}
\def\theenumi{{\rm\labelenumi}.}
\itla{sm1}
The universe\/ $\rbox{V}$ is a\/ \dd{\col\la}generic extension of
$\rbox{L}[S]$.\its
\itla{sm2}
If\/ $\Phi$ is a sentence containing only sets in\/ $\rbox{L}[S]$ as
parameters then\/ $\La$ {\rm(}the empty sequence\/{\rm)} decides\/
$\Phi$ in the sense of\/ $\col\la$ as a forcing notion over
$\rbox{L}[S]$.\its
\itla{sm4}
If a set\/ $X\sq\rbox{L}[S]$ is\/ $\rbox{OD}[S]$ then\/ $X\in\rbox{L}[S]$.
\end{enumerate}
\epro
($\rbox{OD}[S]=S$\hspace{-1\mathsurround}{}--{\it ordinal definable\/}, that is, definable
by an \dd\in formula having $S$ and ordinals as parameters.)
The proof (a copy of the proof of Theorem 4.1 in Solovay~\cite{sol})
is based on several lemmas, including the following crucial lemma:
\ble
\label{44}
Suppose that\/ $P\in\rbox{L}$ is a p.o. set, and a set\/ $G\sq P$ is\/
\dd Pgeneric over\/ $\rbox{L}.$ Let\/ $S\in \rbox{L}[G],$ $S\sq\rbox{Ord}.$
Then there exists a set $\Sg\sq P,$ $\Sg\in \rbox{L}[S]$ such that\/
$G\sq\Sg$ and\/ $G$ is\/ \dd{\Sg}generic over\/ $\rbox{L}[S]$.
\ele
\noi{\bft Proof\hspace{2mm} }{}of the lemma. We extract the result from the proof of
Lemma 4.4 in \cite{sol}.
{\it We argue in $\rbox{L}[S]$}.
Let ${\underline S}$ be the name for $S$ in the language of the forcing $P$.
Define a sequence of sets
$A_\al\sq P\;\;(\al\in\ord)$ by induction on $\al$.\its
\begin{enumerate}
\def\labelenumi{{\rm\hskip1pt(A\arabic{enumi})\hskip1pt}}
\def\theenumi{\labelenumi}
\itla{aa1}\hspace{-1\mathsurround}
$p\in A_0$ iff either $\sg\in S$ but $p$ forces (in $\rbox{L}$ and in
the sense of $P$ as the notion of forcing) $\sg\not\in{\underline S},$ or
$\sg\not\in S$ but $p$ forces $\sg\in{\underline S}$ \ \ --- \ \ for some
$\sg\in\ord$.\its
\itla{aa2}\hspace{-1\mathsurround}
$p\in A_{\al+1}$ iff there exists a dense set $D\sq P,$ $D\in \rbox{L}$
such that every $q\in D$ satisfying $p\<q$ (means:
$q$ is stronger than $p$) belongs to $A_\al$.\its
\itla{aa3}
If $\al$ is a limit ordinal then $A_\al=\bigcup_{\ba<\al}A_\ba$.\its
\end{enumerate}
The following properties of these sets are easily verifiable
(see Solovay \cite{sol}): first, if\/ $p\in A_\al$ and\/
$p\< q\in P$ then\/ $q\in A_\al$, second,
if\/ $\ba<\al$ then\/ $A_\ba\sq A_\al$.
Since each $A_\al$ is a subset of $P,$ it follows that
$A_\da= A_{\da+1}$ for some ordinal $\da.$ We put
$\Sg=P\setminus A_\da.$ Thus $\Sg$ intends to be the set of all
conditions $p\in P$ which do not force something about ${\underline S}$
which contradicts the factual information about $S$.
We prove, following \cite{sol}, that $\Sg$ is as required. We first
establish a pair of auxiliary facts.\vom
$(\Sg1)$ $G\sq\Sg$.\vom
\noi
Indeed assume on the contrary that
$G\cap A_\ga\not=\emptyset$ for some $\ga.$ Let $\ga$ be the least
such ordinal. Clearly $\ga$ is not a limit ordinal and $\ga\not=0;$ let
$\ga=\al+1.$ Let $p\in A_\ga\cap G.$ Since $G$ is generic,
definition~\ref{aa2} implies $G\cap A_\al\not=\emptyset,$
contradiction.\vom
$(\Sg2)$ {\it If\/ $D\in \rbox{L}$ is a dense subset of\/ $P$ then\/
$D\cap\Sg$ is a dense subset of\/ $\Sg$}.\vom
\noi
This is easy: if $p\in\Sg$ then $p\not\in A_{\da+1};$ hence
by~\ref{aa2} there exists $q\in D\setminus A_\da,$
$q\>p$.
We prove that $G$ is \dd\Sg generic
over $\rbox{L}[S].$ Let $D\in\rbox{L}[S]$ be a dense subset of $\Sg;$ we
have to check that $D\cap G\not=\emptyset.$ Suppose that
$D\cap G=\emptyset,$ and get a contradiction.
Since ${D\in\rbox{L}[S]},$ there exists an \dd\in formula $\Phi(x,y)$
containing only ordinals as parameters and such that $\Phi(S,y)$
holds in $\rbox{L}[S]$ iff $y=D$.
Let $\Psi(G')$ be the conjunction of the following formulas:\its
\begin{enumerate}
\def\labelenumi{\hskip2pt(\arabic{enumi})\hskip2pt}
\def\theenumi{\labelenumi}
\itla{1)}
\hspace{-1\mathsurround} $S'={\underline S}[G']$ (the interpretation of the ``term'' ${\underline S}$ via
$G'$)\hspace{1mm}{} is a set of ordinals, and there exists a unique
$D'\in\rbox{L}[S']$ such that $\Phi(S',D')$ holds in $\rbox{L}[S']$;\its
\itla{2)}\hspace{-1\mathsurround}
$D'$ is a dense subset of $\Sg'$ where $\Sg'$ is the set obtained
by applying our definition of $\Sg$ within $\rbox{L}[S']$;\its
\itla{3)}\hspace{-1\mathsurround}
$D'\cap G'=\emptyset$.\its
\end{enumerate}
Then $\Psi(G)$ is true in $\rbox{L}[G]$ by our assumptions. Let
$p\in G$ force $\Psi$ over $\rbox{L}.$ Then $p\in\Sg$ by $(\Sg1).$ By
the density there exists $q\in D$ with $p\< q.$ We can consider
a \dd\Sg generic over $\rbox{L}[S]$ set $G'\sq\Sg$ containing $q.$
Then $G'$ is also \dd Pgeneric over $\rbox{L}$ by $(\Sg1).$
We observe that ${\underline S}[G']=S$ because $G'\sq\Sg.$ It follows that
$D'$ and $\Sg'$ (as is the description of $\Psi$) coinside with
resp. $D$ and $\Sg.$ In particular $q\in D'\cap G',$ a
contradiction because $p$ forces \ref{3)}.
\vspace{3mm}\qed
\noi{\bft Proof\hspace{2mm} }{}of the proposition.
{\em Item \ref{sm1}\/}. Lemma~\ref{44} (for $P=\col\la$)
implies that the universe
is a \dd\Sg generic extension of $\rbox{L}[S]$ for a certain tree
$\Sg\sq\col\la,$ $\Sg\in\rbox{L}[S].$ Notice that $\la$ is a cardinal
in $\rbox{L}[S]$ because $S$ is \dd\al weak over $\rbox{L}$ where $\al<\la;$
on the other hand, $\la$ is countable in the universe by \cuh\la.
It follows that there exists a condition $u\in G$ such that the
set of all \dd\la branching points of $\Sg$ is cofinal over $u$
in $\Sg.$ In other words, the set $\ans{v\in\Sg:u\sq v}$ includes
in $\rbox{L}[S]$ a cofinal subset order isomorphic to $\col\la$.
{\em Items \ref{sm2} and \ref{sm4}\/}. It suffices to
refer to item \ref{sm1} and argue as in the proofs of Lemma 3.5 and
Corollary 3.5 in \cite{sol} for $\rbox{L}[S]$ as the initial model.\qed
\subsubsection{Coding of reals and sets of reals in the model}
\label{coding}
We let $\dF\al(M)$ be the set of all \dd{\col\al}generic over $M$
functions $f\in \al^\om$.
The following definitions introduce a useful coding
system for reals (i.\ e. points of $\cD=2^\om$ in this paper)
and sets of reals in collapsing extensions.
Let $\al\in\ord.$ By $\cont\al$ we denote the set of all indexed
sets $t=\ang{\al,\ang{t_n:n\in\om}}$ -- the ``terms'' -- such that
${t_n\sq\col\al}$ for each $n$.
We put $\cont{<\la}=\bigcup_{\al<\la}\cont\al$.
``Terms'' $t\in\cont\al$ are used to code functions
$\al^\om\;\lra\;\cD=2^\om;$ namely, for every $f\in\al^\om$
we define $x=t(f)\in\cD$ by: $x(n)=1$ iff $f\res m\in t_n$
for some $m$.
Assume that $t=\ang{\al,\ang{t_n:n\in\om}}\in\cont\al,$
$u\in\col\al,$ $M$ arbitrary. We introduce the sets
$t_u(M)=\ans{t(f):u\subset f\in\dF\al(M)}$ and
$t(M)=t_{\La}(M)=t{\hbox{\hspace{2pt}\rmt ''}}\dF\al(M)$.
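As a toy illustration (not needed below): the term
$t=\ang{\al,\ang{t_n:n\in\om}}$ with
$$
t_n\;=\;\ans{u\in\col\al:\,n\in\rbox{dom}\,u\;\cj\;u(n)=0}
$$
satisfies $t(f)(n)=1$ iff $f(n)=0,$ for each $f\in\al^\om;$ thus
$t(f)\in\cD$ records the zero set of $f$.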
\bpro
\label{solMb}
Assume\/ \cuh\la. Let $S\sq\ord$ be\/ \dd\la weak over\/
$\rbox{L}.$ Then$:$\its
\begin{enumerate}
\def\labelenumi{{\arabic{enumi}}}
\def\theenumi{{\rm\labelenumi}.}
\itla{sm6}
If\/ $\al<\la,$ $F\sq\dF\al(\rbox{L}[S])$ is\/ $\rbox{OD}[S],$ and\/
$f\in F,$ then there exists\/ $m\in\om$ such that each\/
$f'\in\dF\al(\rbox{L}[S])$ satisfying\/ $f'\res m= f\res m$ belongs
to\/ $F$.\its
\itla{sm5}
For each real\/ $x\in\cD\cap{\sf Weak}_\la(\rbox{L}[S]),$ there exist\/
$\al<\la,$ $t\in\cont\al\cap\rbox{L}[S],$ and\/
$f\in\dF\al(\rbox{L}[S])$ such that\/ $x=t(f)$.
\its
\itla{xl1}
Each\/ $\rbox{OD}[S]$ set $X\sq\cD\cap{\sf Weak}_\la(\rbox{L}[S])$ is a union of
sets of the form\/ $t(\rbox{L}[S]),$ where $t\in\cont{<\la}\cap\rbox{L}[S]$.
\its
\itla{xl2}
Suppose that\/ ${t\in\cont\al\cap\rbox{L}[S],\;\;\al<\la,}$ and\/
${u\in\col\al}.$ Then every\/ $\rbox{OD}[S]$ set\/
$X\sq\linebreak[3]{t_u(\rbox{L}[S])}$ is a union of sets of
the form\/ $t_v(\rbox{L}[S]),$ where\/ $u\sq v\in\col\al$.
\end{enumerate}
\epro
\noi{\bft Proof\hspace{2mm} } {\em Item \ref{sm6}\/}. We observe that
$F=\ans{f'\in\al^\om:\Phi(S,f')}$ for an \dd\in formula $\Phi.$
Let $\Psi(S,f')$ denote the formula: ``$\La$ \dd{\col\la}forces
$\Phi(S,f')$ over the universe'', so that
$$
F=\ans{f'\in\al^\om:\Psi(S,f')\,\hbox{ is true in }\,\rbox{L}[S,f']}
$$
by Proposition~\ref{col} (items \ref{sm1} and \ref{sm2}).
Therefore, since $f\in F\sq\dF\al[S],$ there exists $m\in\om$ such
that the restriction $u=f\res m$
\dd{\col\al}forces $\Psi(S,{\hat f})$ over $\rbox{L}[S]$
where $\hat f$ is the name of the \dd\al collapsing function.
{\em Item \ref{sm5}\/}. By the choice of $x,$ this real belongs to
a \dd{\col\al}generic extension of
$\rbox{L}[S]$ for some $\al<\la.$ Thus $x\in\rbox{L}[S,f]$ where
$f\in\dF\al(\rbox{L}[S]).$ Let
${\hat x}$ be the name of $x.$ It suffices to define
$t_n=\ans{u\in\col\al:u\,\hbox{ forces }\,{\hat x}(n)=1}$
and take $t=\ang{\al,\ang{t_n:n\in\om}}$.
{\em Item \ref{xl1}\/}. Consider a real $x\in X.$ We use item 2 to
obtain
$\al<\la,$ $f\in\dF\al(\rbox{L}[S]),$ and\/ $t\in\cont\al\cap\rbox{L}[S]$
such that\/ $x=t(f).$ Then we apply item 1 to the $\rbox{OD}[S]$ set
$F=\ans{f'\in\dF\al[S]:t(f')\in X}$ and the $f$ defined above.
This results in a condition $u=f\res m\in\col\al$ ($m\in\om$)
such that $x\in t_u[S]\sq X.$ Finally the set
$t_u[S]$ is equal to $t'[S]$ for some other
$t'\in \cont\al\cap\rbox{L}[S]$.
{\em Item \ref{xl2}\/}. Similar to the previous item.\qed
\newpage
\subsection{The case of closed relations: classifiable points}
\label{ssif}
In this section, we prove the ``Case 1'' part of Theorem~\ref{mt}.
Thus let $\mathbin{\relf{E}}$ be a $\is11$ equivalence relation.
\subsubsection{Classifiable points}
First of all, we introduce the notion of an \dd\mathbin{\relf{E}} classifiable
point.
As usual, $\rbox{HC}$ denotes the set of all hereditarily countable
sets. $\ish1$ will denote the collection of all subsets of $\rbox{HC}$
definable in $\rbox{HC}$ by a parameter-free $\is{}1$ formula. The class
$\iph1$ is understood the same way, and $\idh1=\ish1\cap\iph1$.
Let us fix a constructible $\idh1$ enumeration
$\cont{}\cap\rbox{L}=\ans{\tk\xi:\xi<\om_1}$ such that each
$\in\cont{}\cap\rbox{L}$ has uncountably many numbers
$\xi<\om_1$ satisfying $=\tk\xi.$
The following lemma gives a more special characterization for
$\mathbin{\overline{\relf{E}}},$ the \dd{{\cal T}^2}closure of $\mathbin{\relf{E}},$ based on this enumeration.
\ble
\label{har}
Assume\/ \cuh\la. Let\/ $x,\,y\in\cD\cap{\sf Weak}_\la(\rbox{L}).$ Then\/
${x\mathbin{\overline{\relf{E}}} y}$ if and only if for each\/ $\xi<\om_1$ we have\/
$x\in[{\tk\xi}(\rbox{L}_\xi)]_{\mathbin{\relf{E}}}\;\llra\;y\in[{\tk\xi}(\rbox{L}_\xi)]_{\mathbin{\relf{E}}}$.
\ele
\noi{\bft Proof\hspace{2mm} } The ``only if'' part follows from the fact that the sets
${\tk\xi}(\rbox{L}_\xi)$ are $\rbox{OD}.$ Let us prove the ``if'' direction.
Assume that ${x\mathbin{\not{\hspace{-2pt}\overline{\relf{E}}}} y}.$ There exists an $\rbox{OD}$ set $X$ such that
$x\in [X]_{\mathbin{\relf{E}}}$ but $y\not\in [X]_{\mathbin{\relf{E}}}.$ By
Proposition~\ref{solMb}, we obtain $x\in t(\rbox{L})\sq [X]_{\mathbin{\relf{E}}},$
where
$t=\ang{\al,\,\ang{t_n:n\in\om}}\in\cont\al\cap\rbox{L},$ $\al<\la.$
Since $\la$ is a limit cardinal in $\rbox{L},$ there exists a
constructible cardinal $\ga,$ $\al<\ga<\la,$ such that
$\dF\al(\rbox{L})=\dF\al(\rbox{L}_\ga).$ Then
$t'=\ang{\ga,\,\ang{t_n:n\in\om}}$ is $\tk\xi$ for some
$\xi,$ $\ga\<\xi<\om_1.$
Then $t(\rbox{L})={\tk\xi}(\rbox{L}_\xi)$.\qed
\vspace{4mm}
For each $x\in\cD,$ we define $\vpi_x\in 2^{\om_1}$ as
follows: $\vpi_x(\xi)=1$ iff $x\in [{\tk\xi}(\rbox{L}_\xi)]_{\mathbin{\relf{E}}}$.
Notice that $\vpi_x$ depends only on the \dd\mathbin{\relf{E}} class of $x:$ if
$x\mathbin{\relf{E}} y$ then $\vpi_x=\vpi_y,$ since each set
$[{\tk\xi}(\rbox{L}_\xi)]_{\mathbin{\relf{E}}}$ is \dd\mathbin{\relf{E}} invariant.
\bdf
\label{psi}
We introduce the notion of a \hbox{\dd\mathbin{\relf{E}} c}lassifiable
point.
We let $T_{\mathbin{\relf{E}}}$ be the set of all triples $\ang{x,\psi,t}$ such that
$x\in\cD,$ $\psi\in 2^{<\om_1},$
$t\in\cont\al\cap\rbox{L}_{\ga}[\psi],$ where
$\al<\ga=\rbox{dom}\,\psi<\om_1,$ and the following conditions \ref{aa}
through \ref{cc} are satisfied.
\begin{enumerate}
\def\labelenumi{\hskip2pt{\rm (\alph{enumi})}\hskip2pt}
\def\theenumi{\labelenumi}
\itla{aa} \hspace{-1\mathsurround}
$\rbox{L}_{\ga}[\psi]$ models $\ZFC^-$ (minus the Power Set axiom) so
that $\psi$ can occur as an extra class parameter in Replacement
and Separation.\its
\itla{bb}
It is true in $\rbox{L}_{\ga}[\psi]$ that $\ang{\La,\La}$
forces ${t({\hat{f}})\mathbin{\relf{E}} t({\hat{g}})}$ in the sense of
$\col\al\!\times\!\col\al$ as the forcing, where ${\hat{f}}$ and
${\hat{g}}$ are names for the generic functions in $\al^\om$.\its
\itla{xx}
For each $\xi<\ga,$ $\psi(\xi)=1$ iff
$x\in [{\tk\xi}(\rbox{L}_\xi)]_{\mathbin{\relf{E}}}$ --- so that
$\psi=\vpi_x\res\ga$.\its
\itla{cc}
\hspace{-1\mathsurround}
$x$ belongs to $[t(\rbox{L}_\ga[\psi])]_{\mathbin{\relf{E}}}$.
\end{enumerate}
A point $x\in\cD$ is {\em \dd\mathbin{\relf{E}} classifiable} iff there
exist $\psi$ and $t$ such that $\ang{x,\psi,t}\in T_{\mathbin{\relf{E}}}$.\qed
\edf
The author learned the key idea of this definition, forcing over
countable models to get a $\id{}1$ reduction function, from
Hjorth and Kechris~\cite{hk}.
\ble
\label{def}
$T_{\mathbin{\relf{E}}}$ is a\/ $\idh1$ set\/ {\rm(provided\/ $\mathbin{\relf{E}}$ is $\is11$)}.
\ele
\noi{\bft Proof\hspace{2mm} } Notice that conditions \ref{aa} and \ref{bb} in
Definition~\ref{psi} are $\idh1$ because they reflect truth
within $\rbox{L}_\ga[\psi]$ and the enumeration $\tk\xi$ was chosen
to be $\idh1$.
Condition \ref{cc} is obviously $\ish1$ (provided $\mathbin{\relf{E}}$ is at least
$\is12$), so it remains to convert it also to
a $\iph1$ form. Notice that in the assumption of \ref{aa} and
\ref{bb}, the set $X=t(\rbox{L}_{\ga}[\psi])$ consists of pairwise
\dd\mathbin{\relf{E}} equivalent points.
(Indeed, consider a pair of \dd{\col\al}generic over
$\rbox{L}_{\ga}[\psi]$ functions $f,\,g\in\al^\om$ (not necessarily a
{\it generic pair\/}). Let $h\in\al^\om$ be an \dd{\col\al}generic
over both $\rbox{L}_{\ga}[\psi,f]$ and $\rbox{L}_{\ga}[\psi,g]$ function.
Then, by \ref{bb}, ${t(h)\mathbin{\relf{E}} t(f)}$ holds in
$\rbox{L}_{\ga}[\psi,f,h],$ therefore in the universe by Shoenfield.
Similarly, ${t(h)\mathbin{\relf{E}} t(g)}.$ It follows that
${t(f)\mathbin{\relf{E}} t(g)},$ as required.)
Therefore \ref{cc} is equivalent to
the formula
$\forall\,y\in t(\rbox{L}_{\ga}[\psi])\;(x\mathbin{\relf{E}} y)$ because
$t(\rbox{L}_{\ga}[\psi])$ is not empty. This is
clearly $\iph1$ provided $\mathbin{\relf{E}}$ is at least $\ip12$.
Let us consider \ref{xx}. The right--hand side of the equivalence
``iff'' in \ref{xx} is $\is11$ with inserted $\idh1$ functions,
therefore $\idh1.$ It follows that \ref{xx} itself is
$\idh1$.~\footnote
{\rm\ Here we do not see how to weaken the assumption that
$\mathbin{\relf{E}}$ is $\is11;$ even if the relation is $\ip11,$ \ref{xx} becomes
$\idh2$.}
\qed
\subsubsection{The classification theorem}
\label{ct}
The following lemma will allow us to define a $\idh1$ reduction of
the given $\is11$ equivalence relation $\mathbin{\relf{E}}$ to the equality on
$2^{<\om_1}$.
\ble
\label{key2}
In the assumption of Case 1 of Subsection~\ref{cases}, each
point\/ $x\in\cD$ is\/ \dd\mathbin{\relf{E}} classifiable.
\ele
\noi{\bft Proof\hspace{2mm} } Let $x\in\cD.$ By the assumption of Case 1, there exist an
ordinal $\la$ and a ``virtual'' \dd{\col\la}generic extension $V$
of the constructible universe $\rbox{L}$ containing $x$ such that $\mathbin{\relf{E}}$
coincides with $\mathbin{\overline{\relf{E}}}$ on $\cD\cap{\sf Weak}_\la(\rbox{L})$ in $V$ and $x$ is
\dd\la weak over $\rbox{L}$ in $V$.
Thus we have the two universes, $V$ and the universe of the lemma,
with one and the same class of ordinals. Since by Lemma~\ref{def}
``being \dd\mathbin{\relf{E}} classifiable'' is a $\ish1,$ therefore $\is12$
notion, it suffices to prove that $x$ is \dd\mathbin{\relf{E}} classifiable in the
``virtual'' universe $V$.
We observe that \cuh\la{} is true in $V$.
{\it We argue in $V$}.
Notice that $\vpi=\vpi_x$ is \dd\la weak over $\rbox{L}:$ indeed
$\vpi\in\rbox{L}[x]$ by Proposition~\ref{col} since $\vpi$ is $\rbox{OD}[x]$.
It follows that $[x]_{\mathbin{\relf{E}}}$ is $\rbox{OD}[\vpi]$ by Lemma~\ref{har},
because $\mathbin{\relf{E}}=\mathbin{\overline{\relf{E}}}$ on $\cD\cap{\sf Weak}_\la(\rbox{L}).$ Therefore by
Proposition~\ref{solMb}, $x\in t(\rbox{L}[\vpi])\sq[x]_{\mathbin{\relf{E}}}$
for some $t\in\cont\al\cap\rbox{L}[\vpi],$ $\al<\la.$
The model $\rbox{L}_{\om_1}[\vpi]$ has an elementary submodel
$\rbox{L}_\ga[\psi],$ where $\ga<\om_1$ and $\psi=\vpi\res\ga,$
containing $t$ and $\al.$ We prove that
$\ang{x,\psi,t}\in T_{\mathbin{\relf{E}}}.$ Since conditions \ref{aa} and
\ref{xx} of Definition~\ref{psi} obviously hold
for $\rbox{L}_\ga[\psi],$ let us check requirements \ref{bb} and
\ref{cc}.\vspace{1mm}
{\em We check\/ \ref{bb}.}
Indeed otherwise there exist conditions $u,\,v\in\col\al$ such
that $\ang{u,v}$ forces ${t({\hat{f}})\mathbin{\not{\hspace{-2pt}\relf{E}}} t({\hat{g}})}$ in
$\rbox{L}_{\ga}[\psi]$ in the sense of $\col\al\!\times\!\col\al$ as the
notion of forcing. Then $\ang{u,v}$ also forces
${t({\hat{f}})\mathbin{\not{\hspace{-2pt}\relf{E}}} t({\hat{g}})}$ in $\rbox{L}_{\om_1}[\vpi]$.
Let us consider an
\dd{\col\al\!\times\!\col\al}generic over $\rbox{L}[\vpi]$ pair
$\ang{f,g}\in \al^\om\times\al^\om$ such that $u\subset f$ and
$v\subset g.$ Then both $y=t(f)$ and $z=t(g)$ belong to
$t(\rbox{L}[\vpi]),$ so ${y\mathbin{\relf{E}} z}$ because
$t(\rbox{L}[\vpi])\sq [x]_{\mathbin{\relf{E}}}$.
On the other hand, ${y\mathbin{\relf{E}} z}$ is {\em false\/} in
$\rbox{L}_{\om_1}[\vpi,f,g],$ that is, in $\rbox{L}[\vpi,f,g],$ by the
forcing property of $\ang{u,v}.$ Therefore we have ${y\mathbin{\not{\hspace{-2pt}\relf{E}}} z}$
(in the ``virtual'' universe $V$) by Shoenfield,
contradiction.\vspace{1mm}
{\em We check\/ \ref{cc}.} Take any \dd{\col\al}generic
over $\rbox{L}[\vpi]$ function $f\in\al^\om.$ Then $y=t(f)$ belongs
to $t(\rbox{L}[\vpi]),$ hence ${y\mathbin{\relf{E}} x}.$ On the other hand, $f$ is
generic over $\rbox{L}_{\ga}[\psi],$ so that $y\in t(\rbox{L}_\ga[\psi])$
and \ref{cc} follows.\vspace{1mm}
Thus $\ang{x,\psi,t}\in T_{\mathbin{\relf{E}}}.$ This means that
$x$ is \dd\mathbin{\relf{E}} classifiable, as required.\qed
\bdf
\label{U}
Let $x\in\cD.$ It follows from Lemma~\ref{key2} that there
exists the least ordinal $\ga=\ga_x<\om_1$ such that
$T_{\mathbin{\relf{E}}}(x,\vpi_x\res\ga,t)$ for some $t.$ We put
$\psi_x=\vpi_x\res\ga$ and let $t_x$ denote the least, in the
sense of the $\rbox{OD}[\psi_x]$ wellordering of $\rbox{L}_{\ga}[\psi_x],$
``term'' $t\in\cont{}[\psi_x]\cap \rbox{L}_{\ga}[\psi_x]$ which
satisfies $T_{\mathbin{\relf{E}}}(x,\psi_x,t).$ We put
$U(x)=\ang{\psi_x,t_x}$.\qed
\edf
\ble
\label{inv}
If each\/ $x\in\cD$ is\/ \dd\mathbin{\relf{E}} classifiable then the map\/ $U$
is a\/ $\idh1$ reduction of\/ $\mathbin{\relf{E}}$ to equality.
\ele
\noi{\bft Proof\hspace{2mm} } First of all, $U$ is $\idh1$ by Lemma~\ref{def}.
If $x\mathbin{\relf{E}} y$ then $U(x)=U(y)$ because all conditions of
Definition~\ref{psi} are \dd\mathbin{\relf{E}} invariant with respect to $x$
(in particular $\vpi_x=\vpi_y$).
Let us prove the converse. Assume that $U(x)=U(y),$ that is, in
particular, $\psi_x=\psi_y=\psi\in 2^{<\om_1}$ and
${t_x=t_y=t\in\cont\al[\psi]\cap\rbox{L}_{\ga}[\psi],}$
where $\al<\ga=\rbox{dom}\,\psi<\om_1$.
By \ref{cc} we have ${t(f)\mathbin{\relf{E}} x}$ and ${t(g)\mathbin{\relf{E}} y}$ for
some \dd{\col\al}generic over $\rbox{L}_{\ga}[\psi]$ functions
$f,\,g\in\al^\om.$ However $t(f)\mathbin{\relf{E}} t(g)$ (see the proof
of Lemma~\ref{def}).\qed
\begin{corollary}\TF\
\label{case1}
{\rm[\hspace{1pt}The classification theorem\hspace{1pt}]}\\[1mm]
In the assumption of Case 1 of Subsection~\ref{cases}, $\mathbin{\relf{E}}$
admits a\/ $\idh1$ reduction to the equality on $2^{<\om_1}$.
\end{corollary}
\noi{\bft Proof\hspace{2mm} } The range of the function $U$ can be covered by a subset
$R\sq \rbox{HC}$ (all pairs $\ang{\psi,t}$ such that ...) which admits
a $1-1$ $\idh1$ correspondence with $2^{<\om_1}$.\qed\vspace{4mm}
This completes the proof of the ``Case 1'' part of Theorem~\ref{mt}.
\newpage
\subsection{$\protect\rbox{OD}$ forcing}
\label{prod}
This section starts the proof of the ``Case 2'' part of
Theorem~\ref{mt}. At the beginning, we reduce the problem to a
more elementary form.
\subsubsection{Explanation}
\label{expl}
Thus let us suppose that each real $x$ belongs to a ``virtual''
generic extension of $\rbox{L},$ but the assumption of Case 1 in
Subsection~\ref{cases} fails.
This means the following. There exists a real $z\in\cD$ such that
for every ordinal $\la$
and a ``virtual'' \dd{\col\la}generic extension $V$ of the
constructible universe $\rbox{L}$ containing $z,$ the following is true
in $V:$ if $z$ is \dd\la weak over $\rbox{L}$ then $\mathbin{\relf{E}}$
{\it does not\/} coincide with $\mathbin{\overline{\relf{E}}}$ on $\cD\cap{\sf Weak}_\la(\rbox{L})$.
We know indeed that
$z$ belongs to a
``virtual'' generic extension of $\rbox{L}.$ Therefore there
exists a limit constructible cardinal $\la$ such that $z$ belongs
to a \dd{\col\la}generic extension $V$ of $\rbox{L}$ in which $z$ is
\dd\la weak over $\rbox{L}.$ (Simply take $\la$ sufficiently large.)
Let us fix $\la$ and $V$.
Summarizing this reasoning, we obtain
\begin{itemize}
\item\hspace{-1\mathsurround}
$V$ is a ``virtual'' \dd{\col\la}generic extension of $\rbox{L},$
$\la$ is a limit cardinal in $\rbox{L},$ and $\mathbin{\relf{E}}\not=\mathbin{\overline{\relf{E}}}$
on $\cD\cap{\sf Weak}_\la(\rbox{L})$ in $V$.
\end{itemize}
This is the description of the starting position of the proof of
the ``Case 2'' part of Theorem~\ref{mt}. The aim is to see that in
this case $\mathbin{\relf{E}}$ continuously embeds $\mathbin{\relf{E}_0}$ in the universe of
Theorem~\ref{mt}.
The general plan will be first to prove that $\mathbin{\relf{E}}$ continuously
embeds $\mathbin{\relf{E}_0}$ {\it in the auxiliary ``virtual'' universe\/} $V,$
and second, to get the result in the universe of Theorem~\ref{mt}
by Shoenfield.
After a short examination, one can see a problem in this plan:
the existence of a continuous embedding of $\mathbin{\relf{E}_0}$ into $\mathbin{\relf{E}}$ is in fact
a $\is13$ statement:
$$
\exists\,\hbox{ continuous }1-1\,\;U:\cD\,\lra\,\cD\;
\forall\,x,\,y\in\cD\;
\left[
\begin{array}{cccl}
x\mathbin{\relf{E}_0} y & \lra & U(x)\mathbin{\relf{E}} U(y), & \hbox{ and }\\[2mm]
x\mathbin{\not{\hspace{-2pt}\relf{E}_0}} y & \lra & U(x) \mathbin{\not{\hspace{-2pt}\relf{E}}} U(y) &
\eay
\right]
$$
The lower implication in the square brackets is $\ip11,$ which
would match the total $\is12,$ but the upper one is $\is11,$ so
that the total result is $\is13,$ worse than one needs for
Shoenfield.
\subsubsection{Special embeddings and proof of the ``Case 2''
part of Theorem~\protect\ref{mt}}
\label{ne}
To overcome this obstacle, we strengthen the upper implication
to convert it to a $\ip11$ (actually $\id11$) statement. We recall
that the $\is11$ set $\mathbin{\relf{E}}\sq\cD^2$ admits a partition
$\mathbin{\relf{E}}=\bigcup_{\al<\om_1}\mathbin{\relf{E}}_{\al}$ into Borel sets $\mathbin{\relf{E}}_\al$ -- the
{\it constituents\/}, uniquely defined as soon as we have fixed a
$\ip01$ set $F\sq\cD^2\times{\skri N}$ which projects onto $\mathbin{\relf{E}}.$
\bdf
\label{nice}
A $1-1$ function $\phi:\cD\,\lra\,\cD$ is a {\it special
embedding\/} of $\mathbin{\relf{E}_0}$ into $\mathbin{\relf{E}}$ iff\its
\begin{enumerate}
\def\labelenumi{\rm({\arabic{enumi}})}
\def\theenumi{\labelenumi}
\itla{cp}
there exists an ordinal $\al<\om_1$ such that
$\ang{\phi(0^k\we 0\we z),\phi(0^k\we 1\we z)}\in\mathbin{\relf{E}}_\al$\\ for all
$z\in\cD$ and $k\in\om$, \hfill and \its
\item for all $x,\,y\in\cD,$ if $x\mathbin{\not{\hspace{-2pt}\relf{E}_0}} y$ then $\phi(x)\mathbin{\not{\hspace{-2pt}\relf{E}}}\phi(y)$.
\qed
\end{enumerate}
\edf
($0^k$ is the sequence of $k$ zeros.)
First of all, let us see that a special embedding is an embedding in
the usual sense. We have to prove that $x\mathbin{\relf{E}_0} y$ implies
$\phi(x)\mathbin{\relf{E}} \phi(y).$ We say that a pair of points $x,\,y\in\cD$
is a {\it neighbouring pair\/} iff there exist $k\in\om$ and
$z\in\cD$ such that $x=0^k\we 0\we z$ and $y=1^k\we 1\we z$ or vice
versa. Obviously a neighbouring pair is \dd\mathbin{\relf{E}_0} equivalent. Conversely,
if $x\mathbin{\relf{E}_0} y$ then $x$ and $y$ can be connected by a finite chain
of neighbouring pairs in $\cD.$ Therefore condition~\ref{cp} actually
suffices to guarantee that $x\mathbin{\relf{E}_0} y\,\lra\,\phi(x)\mathbin{\relf{E}}\phi(y)$.
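To make the last step fully explicit (an elementary computation
spelled out here for the reader's convenience), every neighbouring
pair is itself linked by a chain of pairs of exactly the form
occurring in \ref{cp}: flipping the ones from right to left, we
pass from $1^{k+1}\we z$ to $0^{k+1}\we z$ through
$$
1^{k+1}\we z\,,\;\;0\we 1^{k}\we z\,,\;\;0^2\we 1^{k-1}\we z\,,
\;\dots\,,\;0^{k}\we 1\we z\,,\;\;0^{k+1}\we z\,,
$$
where each pair of consecutive points is of the form
$\ang{0^j\we 1\we w,\,0^j\we 0\we w}$ with $w=1^{k-j}\we z$.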
Obviously the existence of a {\it special\/} embedding of $\mathbin{\relf{E}_0}$
into
$\mathbin{\relf{E}}$ is a $\is12$ property. Thus, by Shoenfield, to complete the
proof of the ``Case 2'' part of Theorem~\ref{mt}, it suffices to
prove the following theorem (and apply it in the auxiliary
``virtual'' universe $V$).
\bte
\label{mtv}
Assume\/ \cuh\la.
Let\/ $\mathbin{\relf{E}}$ be a\/ $\is11$ relation and\/ $\mathbin{\relf{E}}\not=\mathbin{\overline{\relf{E}}}$ on\/
$\cD\cap{\sf Weak}_\la(\rbox{L}).$ Then\/ $\mathbin{\relf{E}_0}$ admits a special continuous
embedding into\/ $\mathbin{\relf{E}}$.
\ete
This theorem will be proved in this and the next section. During
the course of the proof, we assume \cuh\la{} and fix a $\is11$
equivalence $\mathbin{\relf{E}}$ satisfying $\mathbin{\relf{E}}\not=\mathbin{\overline{\relf{E}}}$ on the set
$\cD\cap{\sf Weak}_\la(\rbox{L})$
(although the last assumption will not be used at the beginning).
In this section, we consider important interactions between $\mathbin{\relf{E}}$
and $\mathbin{\overline{\relf{E}}}.$ The next section defines the required embedding. This
will complete the proof of Theorems~\ref{mtv} and \ref{mt}, and of
Theorem~\ref{main} -- the main theorem.
\subsubsection{$\rbox{OD}$ topology and the forcing}
\label{tforc}
We recall that ${\cal T}$ is the topology generated by all $\rbox{OD}$
sets.
A set $X$ will be called \dd{\cal T}{\it separable\/} if the $\rbox{OD}$
power set ${{\skri P}^{\hbox{\tiny\rm OD}}(X)={\skri P}(X)\cap\rbox{OD}}$ has only
countably many different $\rbox{OD}$ subsets.
\ble
\label{dizl}
Assume \cuh\la. Let\/ $\al<\la$ and\/ $t\in\cont\al\cap\rbox{L}.$ Each
set\/ $X=t(\rbox{L})$ satisfying\/ $X\sq\cD\cap{\sf Weak}_\la(\rbox{L})$
is\/ \dd{\cal T} separable.
\ele
\noi{\bft Proof\hspace{2mm} } By Proposition~\ref{solMb} every $\rbox{OD}$ subset of $X$ is
uniquely determined by an $\rbox{OD}$ subset of $\col\al.$ Since each
$\rbox{OD}$ set $S\sq\col\al$ is constructible, we obtain an $\rbox{OD}$ map
$h:\al^+\,\hbox{ onto }\,{\skri P}^{\hbox{\tiny\rm OD}}(X),$ where $\al^+$ is the least
cardinal in $\rbox{L}$ bigger than $\al.$ Therefore ${\skri P}^{\hbox{\tiny\rm OD}}(X)$ has
\dd{\<\al^{++}}many $\rbox{OD}$ subsets. It remains to notice that
$\al^{++}<\la$ because $\la$ is a limit cardinal in $\rbox{L},$ but
$\la$ is countable in the universe.\qed\vspace{4mm}
Let $\mathord{{\rm X}\hspace{-7pt}{\rm X}}=\ans{X\sq\cD:X\,\hbox{ is }\,\rbox{OD}\;\hbox{ and nonempty}\,}$.
Let us consider $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ as a forcing notion (smaller sets are
stronger conditions) for generic extensions of $\rbox{L}.$ Of course
formally $\mathord{{\rm X}\hspace{-7pt}{\rm X}}\not\in\rbox{L},$ but $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ is $\rbox{OD}$ order isomorphic to
a partially ordered set in $\rbox{L}$. (Indeed it is known that there
exists an $\rbox{OD}$ map $\ell:$ ordinals onto the class of all $\rbox{OD}$
sets. Since $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ itself is $\rbox{OD},$ $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ is a 1--1 image of an
$\rbox{OD}$ set $\mathord{{\rm X}\hspace{-7pt}{\rm X}}'$ of ordinals via $\ell.$ By Proposition~\ref{col}
both $\mathord{{\rm X}\hspace{-7pt}{\rm X}}'$ and the \dd{\ell\hspace{0.5pt}}preimage of the order
on $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ belong to $\rbox{L}$.)
\pagebreak[3]
It also is true that a set $G\sq\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ is \dd\mathord{{\rm X}\hspace{-7pt}{\rm X}} generic over $\rbox{L}$
iff it intersects every dense $\rbox{OD}$ subset of $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$.
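As a routine example (included for orientation): for each $\rbox{OD}$
set $Y\sq\cD,$ the set
$$
D_Y\;=\;\ans{X\in\mathord{{\rm X}\hspace{-7pt}{\rm X}}:X\sq Y\;\hbox{ or }\;X\cap Y=\emptyset}
$$
is a dense $\rbox{OD}$ subset of $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ (given $X\in\mathord{{\rm X}\hspace{-7pt}{\rm X}},$ either
$X\cap Y\in\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ refines $X$ inside $Y,$ or $X\in D_Y$ already).
Sets of this form decide every $\rbox{OD}$ property of the point $a_G$
introduced in Lemma~\ref{choq-cor} below.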
\begin{corollary}\TF\
\label{exis}
Assume \cuh\la. If\/ a set ${X\in\mathord{{\rm X}\hspace{-7pt}{\rm X}}}$ satisfies\/
$X\sq\cD\cap{\sf Weak}_\la(\rbox{L})$ then there exists a\/ \dd\mathord{{\rm X}\hspace{-7pt}{\rm X}} generic
over\/ $\rbox{L}$ set\/ $G\sq\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ containing $X$.
\end{corollary}
\noi{\bft Proof\hspace{2mm} } We can suppose, by Proposition~\ref{solMb}, that
$X=t(\rbox{L})$ where $t\in\cont\al\cap\rbox{L}$ and $\al<\la.$ Now
apply Lemma~\ref{dizl}.\qed
\ble
\label{choq-cor}
Assume \cuh\la. If\/ $G\sq\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ is a generic over\/ $\rbox{L}$ set
containing the set\/ $\cD\cap{\sf Weak}_\la(\rbox{L})$ then the intersection\/
$\bigcap G$ is a singleton $\ans{a}=\ans{a_G}$.
\ele
\noi{\bft Proof\hspace{2mm} } Assume that this is not the case. Let $\mathord{{\rm X}\hspace{-7pt}{\rm X}}'\in\rbox{L}$ be a
constructible p. o. set order isomorphic to $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ via an $\rbox{OD}$
function $\ell:\mathord{{\rm X}\hspace{-7pt}{\rm X}}'\,\hbox{ onto }\,\mathord{{\rm X}\hspace{-7pt}{\rm X}}.$ Then $G'=\ell^{-1}(G)$ is
\dd{\mathord{{\rm X}\hspace{-7pt}{\rm X}}'}generic over $\rbox{L}.$ We assert that the statement that
$\bigcap G$ is not a singleton can be converted to a sentence
relativized to $\rbox{L}[G']$.
(Indeed, it follows from the reasoning in the proof of
Lemma~\ref{dizl} that $\rbox{L}[G']$ is in fact a \dd Pgeneric
extension of $\rbox{L}$ for a certain set $P\in\rbox{L},$ $P\sq\mathord{{\rm X}\hspace{-7pt}{\rm X}}'$ of
a cardinality $\al<\la$ in $\rbox{L}.$ The next \dd\rbox{L} cardinal
$\al^+$ is $<\la$ since $\la$ is a limit cardinal in $\rbox{L}.$
Therefore $G'$ belongs to a \dd{\col{\al^+}}generic extension of
$\rbox{L},$ so $G'$ is \dd\la weak over $\rbox{L}.$ Then by Proposition \ref{col} the universe
$\rbox{V}=\rbox{L}[f_0]$ is a \dd{\col\la}generic extension of $\rbox{L}[G'].$
This is enough to convert any statement about $G'$ in $\rbox{V}$ --
like the statement: $\bigcap \ell{\hbox{\hspace{2pt}\rmt ''}} G'$ is not a singleton --
to a sentence relativized to $\rbox{L}[G']$.)
Then there exists ${X\in \mathord{{\rm X}\hspace{-7pt}{\rm X}}},$ ${X\sq\cD\cap{\sf Weak}_\la(\rbox{L})},$ such
that $\bigcap G$ is not a singleton for {\em every\/} generic over
$\rbox{L}$ set $G\sq \mathord{{\rm X}\hspace{-7pt}{\rm X}}$ containing $X.$ We can assume that
$X=t(\rbox{L}),$ where ${t\in\cont\al\cap\rbox{L}},$ $\al<\la.$ Then $X$
is \dd{\cal T} separable; let $\ans{{\skri X}_n:n\in\om}$ be an enumeration
of all $\rbox{OD}$ dense subsets of ${\skri P}^{\hbox{\tiny\rm OD}}(X).$ Using
Proposition~\ref{solMb} (item~\ref{sm6}), we obtain an increasing
\dd{\col\al}generic over $\rbox{L}$ sequence
$u_0\sq u_1\sq u_2\sq...$ of $u_n\in\col\al$ such that
$X_n=t_{u_n}(\rbox{L})\in{\skri X}_n.$ Obviously this
gives an \dd\mathord{{\rm X}\hspace{-7pt}{\rm X}} generic over $\rbox{L}$ set $G\sq\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ containing $X$
and all $X_n$.
Now let $f=\bigcup_{n\in\om}u_n;$ $f\in\al^\om$ and $f$ is
\dd{\col\al}generic over $\rbox{L}.$ Then $x=t(f)\in X_n$ for all
$n,$ so $x\in\bigcap G.$ Since $\bigcap G$ obviously cannot
contain more than one point, it is a singleton, so we get a
contradiction with the choice of $X$.\qed\vspace{4mm}
Reals $a_G$ will be called \dd\rbox{OD}{\it generic over\/}
$\rbox{L}.$
\subsubsection{The product forcing}
We recall that $\mathbin{\relf{E}}$ is assumed to be a $\is11$ equivalence on
$\cD;$ $\mathbin{\overline{\relf{E}}}$ is the closure of $\mathbin{\relf{E}}$ in
the topology ${\cal T}^2$ (the product of two copies of ${\cal T}$).
For a set ${P\sq\cD^2,}$ we put
${{{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P=\ans{x:\exists\,y\;P(x,y)}}$ and
${{{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P=\ans{y:\exists\,x\;P(x,y)}.}$ Notice that if $P$ is
$\rbox{OD},$ so are ${{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P$ and ${{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P$.
The classical reasoning in Harrington, Kechris, and
Louveau~\cite{hkl} plays on interactions between $\mathbin{\relf{E}}$ and $\mathbin{\overline{\relf{E}}}.$
In the forcing setting, we have to fix a restriction by $\mathbin{\overline{\relf{E}}}$
directly in the definition of the product forcing. Thus we
consider
$$
\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}=\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}(\mathbin{\overline{\relf{E}}})=\ans{P\sq\mathbin{\overline{\relf{E}}}:P\,\hbox{ is }\rbox{OD}\;\hbox{ and
nonempty and }\,P=({{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P\times {{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P)\cap\mathbin{\overline{\relf{E}}}}
$$
as a forcing notion. As above for $\mathord{{\rm X}\hspace{-7pt}{\rm X}},$ the fact that formally
$\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ does not belong to $\rbox{L}$ does not cause essential problems.
The following assertion connects $\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ and $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$.
\vbox{
\bass
\label{proe}
Assume \cuh\la. Then\its
\begin{enumerate}
\def\labelenumi{\rm{\arabic{enumi}}}
\item If\/ $P\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ then\/ ${{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P$ and\/ ${{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P$ belong to
$\mathord{{\rm X}\hspace{-7pt}{\rm X}}$.\its
\item If\/ $X,\,Y\in\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ and\/ $P=(X\times Y)\cap \mathbin{\overline{\relf{E}}}\not=\emptyset$
then $P\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$.\its
\itla{i3}
If\/ $P\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}},$ $X\in\mathord{{\rm X}\hspace{-7pt}{\rm X}},$ $X\sq{{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P,$ then
there exists\/ $Q\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}},$ $Q\sq P,$ such that $X={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} Q.$
Similarly for ${{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}}$.
\end{enumerate}
\end{assertion}
}\its\its
\noi{\bft Proof\hspace{2mm} } Set $Q=\ans{\ang{x,y}\in P:x\in X\cj y\mathbin{\overline{\relf{E}}} x}$ in
item~\ref{i3}.\qed\vspace{4mm}
A set $P\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ is {\em \dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}} separable} if
the set $\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}_{\sq P}=\ans{Q\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}:Q\sq P}$ has only countably many
different $\rbox{OD}$ subsets.
\ble
\label{dizl2}
Assume \cuh\la. Let\/ $t,\,t'\in\cont{<\la}\cap\rbox{L}.$ Suppose
that the sets\/ $X=t(\rbox{L})$ and\/ $Y=t'(\rbox{L})$ satisfy\/
$X\cup Y\sq\cD\cap{\sf Weak}_\la(\rbox{L}),$ and finally that\/
${P=(X\times Y)\cap\mathbin{\overline{\relf{E}}}}$
is nonempty. Then\/ $P\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ and\/ $P$ is\/ \dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}} separable.
\ele
\noi{\bft Proof\hspace{2mm} } $P\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ by Assertion~\ref{proe}. A proof of the
\dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}} separability can be obtained by a minor modification of
the proof of Lemma~\ref{dizl}.\qed
\ble
\label{dp2oe}
Assume \cuh\la. Let\/ $G\sq \mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ be a\/ \dd{\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}}generic over\/
$\rbox{L}$ set containing\/ ${(\cD\cap{\sf Weak}_\la(\rbox{L}))^2\cap\mathbin{\overline{\relf{E}}}}.$
Then the intersection\/ $\bigcap G$ contains a single point\/
$\ang{a,b}$ where\/ $a$ and\/ $b$ are\/ \dd\rbox{OD} generic
over $\rbox{L}$ and $a\mathbin{\overline{\relf{E}}} b$.
\ele
\noi{\bft Proof\hspace{2mm} } By Assertion~\ref{proe}, both $G_1=\ans{{{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P:P\in G}$ and
$G_2=\ans{{{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P:P\in G}$ are \dd\rbox{OD} generic over $\rbox{L}$ subsets of
$\mathord{{\rm X}\hspace{-7pt}{\rm X}},$ so that there exist unique \dd\rbox{OD} generic over $\rbox{L}$ points
$a=a_{G_1}$ and $b=a_{G_2}.$ It remains to show that
$\ang{a,b}\in\mathbin{\overline{\relf{E}}}$.
Suppose not. There exists an \dd\mathbin{\relf{E}} invariant $\rbox{OD}$ set
$A$ such that we have $a\in A$ and $b\in B=\cD\setminus A.$ Then
$A\in G_1$ and $B\in G_2$ by the genericity. There exists a
condition $P\in G$ such that ${{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P\sq A$ and ${{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P\sq B,$
therefore ${P\sq (A\times B)\cap\mathbin{\overline{\relf{E}}}=\emptyset},$ which is
impossible.\qed\vspace{4mm}
Pairs $\ang{a,b}$ as in Lemma~\ref{dp2oe} will be called
\dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}{\it generic\/} and denoted by $\ang{a_G,b_G}$.
For sets $X$ and $Y$ and a binary relation ${\rm R}\,,$ let us write
${X\,{\rm R}\,Y}$ if and only if
$\forall\,x\in X\;\exists\,y\in Y\;(x\,{\rm R}\,y)$ \ and \
$\forall\,y\in Y\;\exists\,x\in X\;(x\,{\rm R}\,y)$.
\ble
\label{1for2}
Assume \cuh\la. Let\/ $P_0\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}},$ $P_0\sq(\cD\cap{\sf Weak}_\la(\rbox{L}))^2,$
points\/ $a,\,a'\in X_0={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P_0$ be\/
\dd\rbox{OD} generic over\/ $\rbox{L},$ and\/ ${a\mathbin{\overline{\relf{E}}} a'.}$ There exists a
point\/ $b$ such that both\/ $\ang{a,b}$ and $\ang{a',b}$ belong
to\/ $P_0$ and are\/ \dd{\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}}generic pairs.
\ele
\noi{\bft Proof\hspace{2mm} } By Lemma \ref{dizl2} and Proposition~\ref{solMb} there
exists a \dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}} separable set $P_1\sq P_0$ such that
$a\in X_1={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P_1.$ We put $Y_1={{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P_1;$ then $X_1\mathbin{\overline{\relf{E}}} Y_1,$
and $P_1=(X_1\times Y_1)\cap\mathbin{\overline{\relf{E}}}$.
We let $P'=\ans{\ang{x,y}\in P_0:y\in Y_1}.$ Then $P'\in \mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$
and $P_1\sq P'\sq P_0.$ Furthermore $a'\in X'={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P'.$ (Indeed,
since ${a\in X_1}$ and ${X_1\mathbin{\overline{\relf{E}}} Y_1},$ there exists $y\in Y_1$
such that $a\mathbin{\overline{\relf{E}}} y;$ then $a'\mathbin{\overline{\relf{E}}} y$ as well because $a\mathbin{\overline{\relf{E}}} a',$
therefore $\ang{a',y}\in P'$.) By Lemma \ref{dizl2} and
Proposition~\ref{solMb}
there exists a \dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}} separable set $P'_1\sq P'$ such that
$a'\in X'_1={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P'_1.$ Then $Y'_1={{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P'_1\sq Y_1$.
It follows from the choice of $P_1$ and $P'_1$ that $\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ admits only
countably many different dense $\rbox{OD}$ sets below $P_1$ and below
$P'_1.$ Let $\ans{{\skri P}_n:n\in\om}$ and $\ans{{\skri P}'_n:n\in\om}$
be\pagebreak[3] enumerations of both families of dense
sets. We define sets $P_n,\,P'_n\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}\;\;(n\in\om)$
satisfying the following conditions:\its
\begin{enumerate}
\def\labelenumi{(\roman{enumi})}
\def\theenumi{\labelenumi}
\itla{i}
$a\in X_n={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P_n$ \ and \ $a'\in X'_n={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P'_n$;\its
\itla{ii}
$Y'_n={{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P'_n \sq Y_n={{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P_n$ \ and \ $Y_{n+1}\sq Y'_n$;\its
\itla{iii}
$P_{n+1}\sq P_n\,,\,$ $P'_{n+1}\sq P'_n\,,\,$
$P_n\in {\skri P}_{n-2}\,,\,$ and \ $P'_n\in {\skri P}'_{n-2}$.\its
\end{enumerate}
By \ref{iii} both sequences $\ans{P_n:n\in\om}$ and
$\ans{P'_n:n\in\om}$ are \dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}} generic over $\rbox{L},$ so by
Lemma~\ref{dp2oe} they result in two generic pairs,
$\ang{a,b}\in P_0$ and $\ang{a',b}\in P_0, $ having the first
terms equal to $a$ and $a'$ by \ref{i} and second terms equal to
each other by \ref{ii}. Thus
it suffices to carry out the construction of $P_n$ and $P'_n$.
The construction proceeds by induction on $n$.
Assume that $P_n$ and $P'_n$ have been defined. We define
$P_{n+1}.$ By~\ref{ii} and Assertion~\ref{proe}, the set
${P=(X_n\times Y'_n)\cap\mathbin{\overline{\relf{E}}}\sq P_n}$ belongs to $\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ and
$a\in X={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P.$ (Indeed, $\ang{a,b}\in P,$ where $b$ satisfies
$\ang{a',b}\in P'_n,$ because ${a\mathbin{\overline{\relf{E}}} a'}$.) However ${\skri P}_{n-1}$
is dense in $\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ below $P\sq P_0;$ therefore
${{{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} {\skri P}_{n-1}=\ans{{{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P':P'\in {\skri P}_{n-1}}}$ is dense in $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$
below\pagebreak[3]
$X={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P.$ Since $a$ is generic, we have $a\in {{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P'$ for
some $P'\in {\skri P}_{n-1},$ $P'\sq P.$ It\pagebreak[3]
remains to put $P_{n+1}=P',$
and then $X_{n+1}={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P_{n+1}$ and $Y_{n+1}={{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P_{n+1}$.
\pagebreak[3]
After this, to define $P'_{n+1}$ we let
$P=(X'_n\times Y_{n+1})\cap\mathbin{\overline{\relf{E}}},$ etc.\qed
\subsubsection{The key set}
\label{second}
We recall that \cuh\la{} is assumed, $\mathbin{\relf{E}}$ is a $\is11$
equivalence on $\cD,$ and $\mathbin{\overline{\relf{E}}}$ is the \hbox{\dd{{\cal T}^2}closure}
of $\mathbin{\relf{E}}$ in $\cD^2.$ By the assumption of Theorem~\ref{mtv},
$\mathbin{\relf{E}}\subsetneqq\mathbin{\overline{\relf{E}}}$ on $\cD\cap{\sf Weak}_\la(\rbox{L}).$ This means that there
exist \dd\mathbin{\overline{\relf{E}}} classes of elements of $\cD\cap{\sf Weak}_\la(\rbox{L})$ which
include more than one \dd\mathbin{\relf{E}} class. We define
the union of all those \dd\mathbin{\overline{\relf{E}}} classes,
$$
H=\ans{x\in\cD\cap{\sf Weak}_\la(\rbox{L}):\exists\,y\in
\cD\cap{\sf Weak}_\la(\rbox{L})\;(x\mathbin{\overline{\relf{E}}} y\cj x\mathbin{\not{\hspace{-2pt}\relf{E}}} y)}\,.
$$
Obviously $H$ is $\rbox{OD},$ nonempty, and \dd\mathbin{\relf{E}} invariant {\em inside\/}
$\cD\cap{\sf Weak}_\la(\rbox{L}),$ and moreover $H'=H^2\cap\mathbin{\overline{\relf{E}}}\not=\emptyset,$
so that in particular $H'\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ by Assertion~\ref{proe}.
\ble
\label{noE}
Assume \cuh\la. If\/ $a,b\in H$ and\/ $\ang{a,b}$ is\/
\dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}} generic over $\rbox{L}$ then ${a\mathbin{\not{\hspace{-2pt}\relf{E}}} b}\hspace{1.5pt}.$
\ele
\noi{\bft Proof\hspace{2mm} } Otherwise there exists a set $P\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}},$ $P\sq H\times H$ such
that $a\mathbin{\relf{E}} b$ holds for {\it all\/} \dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}} generic $\ang{a,b}\in P.$
We conclude that then $a\mathbin{\overline{\relf{E}}} a'\;\lra\;a\mathbin{\relf{E}} a'$ for all \dd\rbox{OD} generic
points $a,\,a'\in X={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P;$ indeed, take $b$ such that both
$\ang{a,b}\in P$ and $\ang{a',b}\in P$ are \dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}} generic,
by Lemma~\ref{1for2}. In other words the relations $\mathbin{\relf{E}}$ and
$\mathbin{\overline{\relf{E}}}$ coincide on the set
${Y=\ans{x\in X:x\,\hbox{ is \dd\rbox{OD} generic over }\,\rbox{L}}\in\mathord{{\rm X}\hspace{-7pt}{\rm X}}.}$
($Y$ is nonempty by corollaries \ref{exis} and \ref{choq-cor}.)
Moreover, $\mathbin{\relf{E}}$ and $\mathbin{\overline{\relf{E}}}$ coincide on the set
${Z=[Y]_{\mathbin{\relf{E}}}\cap\cD\cap{\sf Weak}_\la(\rbox{L}).}$ Indeed if $z,\,z'\in Z,$
${z\mathbin{\overline{\relf{E}}} z'},$
then let ${y,\,y'\in Y}$ satisfy ${z\mathbin{\relf{E}} y}$ and ${z'\mathbin{\relf{E}} y'}.$
Then ${y\mathbin{\overline{\relf{E}}} y'},$ therefore ${y\mathbin{\relf{E}} y'},$ which implies $z\mathbin{\relf{E}} z'.$
We conclude that $Y\cap H=\emptyset$.
(Indeed, suppose that $x\in Y\cap H.$ Then by definition there
exists $y\in\cD\cap{\sf Weak}_\la(\rbox{L})$
such that ${x\mathbin{\overline{\relf{E}}} y}$ but ${x\mathbin{\not{\hspace{-2pt}\relf{E}}} y}.$ Then ${y\not\in Z}$ because
$\mathbin{\relf{E}}$ and $\mathbin{\overline{\relf{E}}}$ coincide on $Z.$ Thus the pair $\ang{x,y}$ belongs
to the $\rbox{OD}$ set $P=Y\times [(\cD\cap{\sf Weak}_\la(\rbox{L}))\setminus Z].$
Notice that $P$
does not intersect $\mathbin{\relf{E}}$ by definition of $Z.$ Therefore
$\ang{x,y}$ cannot belong to the closure $\mathbin{\overline{\relf{E}}}$ of $\mathbin{\relf{E}},$
contradiction.)
But $\emptyset\not=Y\sq X\sq H,$ contradiction.\qed\vspace{4mm}
Lemma~\ref{noE} is a counterpart of the proposition in
Harrington, Kechris, Louveau~\cite{hkl} that $\mathbin{\relf{E}}\res H$ is
meager in $\mathbin{\overline{\relf{E}}}\res H.$ In fact the main content of this
argument in~\cite{hkl} has already been absorbed by Lemma~\ref{1for2}.
\ble
\label{E}
Assume \cuh\la. Let\/ $X,\,Y\sq H$ be nonempty\/ $\rbox{OD}$ sets
and\/ ${X\mathbin{\overline{\relf{E}}} Y}.$ There
exist nonempty\/ $\rbox{OD}$ sets\/ $X'\sq X$ and\/ $Y'\sq Y$ such
that\/ $X'\cap Y'=\emptyset$ but still\/ $X'\mathbin{\overline{\relf{E}}} Y'$.
\ele
\noi{\bft Proof\hspace{2mm} } There exist points $x_0\in X$ and
$y_0\in Y$ such that $x_0\not= y_0$ but ${x_0\mathbin{\overline{\relf{E}}} y_0}.$
(Otherwise $X=Y,$ and $\mathbin{\overline{\relf{E}}}$ is the equality on $X,$ which is
impossible, see the previous proof.) Let $U$ and $V$ be disjoint
Baire intervals in $\cD$ containing resp. $x_0$ and $y_0.$
The sets $X'= X\cap U \cap [Y\cap V]_{\mathbin{\overline{\relf{E}}}}$ and
$Y'= Y\cap V \cap [X\cap U]_{\mathbin{\overline{\relf{E}}}}$ are as required.\qed
\newpage
\subsection{Embedding $\protect\mathbin{\relf{E}_0}$ into $\protect\mathbin{\relf{E}}$}
\label{or}
In this section we end the proof of Theorem~\ref{mtv}. Thus
we prove, assuming \cuh\la{} and $\mathbin{\relf{E}}\subsetneqq\mathbin{\overline{\relf{E}}}$ on
$\cD\cap{\sf Weak}_\la(\rbox{L}),$ that $\mathbin{\relf{E}}$ embeds $\mathbin{\relf{E}_0}$ via a
continuous special (see Definition~\ref{nice}) embedding.
\subsubsection{The embedding}
\label{embed}
By the assumption the set $H$ of Subsection~\ref{second} is
nonempty; obviously $H$ is $\rbox{OD}.$ By lemmas \ref{dizl},
\ref{dizl2}, and Proposition~\ref{solMb} there exists a nonempty
\dd{\cal T} separable $\rbox{OD}$ set $X_0\sq H$ such that the set
${P_0=(X_0\times X_0)\cap\mathbin{\overline{\relf{E}}}}$ belongs to $\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ and is \dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}
separable. We observe that
${{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P_0={{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P_0=X_0\sq H\sq\cD\cap{\sf Weak}_\la(\rbox{L})$.
We define a family of sets $X_u\;\;(u\in 2^{<\om})$ satisfying\its
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\def\theenumi{\labelenumi}
\itla{a}
$X_u\sq X_0,$ $X_u$ is nonempty and $\rbox{OD},$ and $X_{u\we i}\sq X_u,$
for all $u$ and $i$.\its
\end{enumerate}
In addition to the sets $X_u,$ we shall define binary relations
$R_{uv}\sq\cD^2$ for {\em some} pairs $\ang{u,v},$ to provide
important interconnections between branches in $2^{<\om}$.
Let $u,\,v\in 2^n.$ We say that $\ang{u,v}$ is a {\em neighbouring
pair\/} iff $u=0^k\we 0\we r$ and $v=0^k\we 1\we r$ for some $k<n$
($0^k$ is the sequence of $k$ terms each equal to $0$) and some
$r\in 2^{n-k-1}$ (possibly $k=n-1,$ that is, $r=\La$).
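For example, for $n=2$ the neighbouring pairs are
$\ang{\ang{0,0},\ang{1,0}}$ and $\ang{\ang{0,1},\ang{1,1}}$ (both with
$k=0$), together with $\ang{\ang{0,0},\ang{0,1}}$ (with $k=1,$ $r=\La$);
the pair of $\ang{1,0}$ and $\ang{1,1}$ is {\em not\/} neighbouring,
but these two sequences are tied by the chain
$\ang{1,0},\;\ang{0,0},\;\ang{0,1},\;\ang{1,1}$ of neighbouring pairs.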
Thus we define sets $R_{uv}\sq X_u\times X_v$ for all neighbouring pairs
$\ang{u,v},$ so that the following requirements \ref{b} and
\ref{d} will be satisfied.\its
\begin{enumerate}
\def\labelenumi{(\alph{enumi})}
\def\theenumi{\labelenumi}
\setcounter{enumi}{1}
\itla{b} \hspace{-1\mathsurround}
$R_{uv}$ is $\rbox{OD},$ ${{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} R_{uv}=X_u,$ ${{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} R_{uv}=X_v,$ and
$R_{u\we i\,,\,v\we i}\sq R_{uv}$ for every neighbouring pair
$\ang{u,v}$ and each $i\in\ans{0,1}$.\its
\itla{d}
For any $k,$ the set $R_k=R_{0^k\we 0\,,\,0^k\we 1}$ is
\dd{\cal T} separable, and $R_k\sq \mathbin{\relf{E}}_\al$ for some ordinal
$\al=\al(k)<\om_1$.\its
\end{enumerate}
Notice that if $\ang{u,v}$ is neighbouring then
$\ang{u\we i,v\we i}$ is neighbouring, but $\ang{u\we i,v\we j}$
is not neighbouring for $i\not=j$ (unless $u=v=0^k$ for some $k$).
It follows that $X_u \,R_{uv}\, X_v,$ therefore
$X_u\mathbin{\relf{E}} X_v,$ for all neighbouring pairs $u,\,v.$~\footnote
{\ We recall that $X\,R\,Y$ means that
$\forall\,x\in X\;\exists\,y\in Y\;(x\,R\, y)$ and
$\forall\,y\in Y\;\exists\,x\in X\;(x\,R\, y)$.}
\begin{remark}\rmt\
\label{newrem}
Every pair of $u,\,v\in 2^n$ can be tied in $2^n$ by a
finite chain of neighbouring pairs. It follows that
${X_u\mathbin{\relf{E}} X_v}$ and ${X_u\mathbin{\overline{\relf{E}}} X_v}$ hold for {\em all} pairs
$u,\,v\in 2^n$.\qed
\erem
Three more requirements will concern genericity.
Let $\ans{{\skri X}_n:n\in\om}$ be a fixed (not necessarily $\rbox{OD}$)
enumeration of all dense in $\mathord{{\rm X}\hspace{-7pt}{\rm X}}$ below $X_0$ subsets of $\mathord{{\rm X}\hspace{-7pt}{\rm X}}.$
Let $\ans{{\skri P}_n:n\in\om}$ be a fixed enumeration of all dense in
$\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ below $P_0$ subsets of $\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}.$ It is assumed that
${\skri X}_{n+1}\sq{\skri X}_n$ and ${\skri P}_{n+1}\sq{\skri P}_n.$ Note that
${{\skri X}'=\ans{P\in\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}: P\sq P_0\cj {{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P\cap{{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P=\emptyset}}$
is dense in $\mathord{{\rm I}\hspace{-2.5pt}{\rm P}}$ below $P_0$ by Lemma~\ref{E}, so we can suppose
in addition that ${\skri P}_0={\skri X}'$.
In general, for any \dd{\cal T} separable set $S$ let
$\ans{{\skri X}_n(S):n\in\om}$ be a fixed enumeration of all dense
subsets in the algebra ${\skri P}^{\hbox{\tiny\rm OD}}(S)\setminus\ans{\emptyset}$ such
that ${\skri X}_{n+1}(S)\sq{\skri X}_n(S)$.
We now formulate the three additional requirements.
\its
\begin{enumerate}
\def\labelenumi{({\rm g}\arabic{enumi})}
\def\theenumi{\labelenumi}
\itla{g1} \hspace{-1\mathsurround}
$X_u\in {\skri X}_n$ whenever $u\in 2^n$.\its
\itla{g2}
If $u,\,v\in 2^n$ and $u(n\hspace{-1pt}-\hspace{-1pt} 1)\not=v(n\hspace{-1pt}-\hspace{-1pt} 1)$ (that is, the
last terms of $u,\,v$ are different), then
$P_{uv}=(X_u\times X_v)\cap\mathbin{\overline{\relf{E}}}\in {\skri P}_n$.\its
\itla{g3}
If $\ang{u,v}=\ang{0^k\we 0\we r,0^k\we 1\we r}\in (2^n)^2$
then $R_{uv}\in {\skri X}_n(R_k)$.\its
\end{enumerate}
In particular \ref{g1} implies by Corollary~\ref{choq-cor} that
for any $a\in 2^\om$ the intersection
$\bigcap_{n\in\om}X_{a\res n}$ contains a single point, denoted
by $\phi(a),$ which is \dd\rbox{OD} generic over $\rbox{L},$ and the map
$\phi$ is continuous in the Polish sense.
\bass
\label{embe}
Assume\/ \cuh\la.
$\phi$ is a special continuous 1--1 embedding of\/ $\mathbin{\relf{E}_0}$ into $\mathbin{\relf{E}}$.
\end{assertion}
\noi{\bft Proof\hspace{2mm} }
Let us prove that $\phi$ is 1--1. Suppose that
${a\not=b\in 2^\om.}$ Then ${a(n\hspace{-1pt}-\hspace{-1pt} 1)\not=b(n\hspace{-1pt}-\hspace{-1pt} 1)}$ for
some $n.$ Let ${u=a\res n},$ ${v=b\res n},$
so that we have $x=\phi(a)\in X_u$ and $y=\phi(b)\in X_v.$ But
then the set ${P=(X_u\times X_v)\cap \mathbin{\overline{\relf{E}}}}$ belongs to ${\skri P}_n$ by
\ref{g2}, therefore to ${\skri P}_0.$ This implies
$X_u\cap X_v=\emptyset$ by definition of ${\skri P}_0,$
hence $\phi(a)\not=\phi(b)$ as required.
Furthermore if $a\mathbin{\not{\hspace{-2pt}\relf{E}_0}} b$ (which means that $a(k)\not=b(k)$ for
infinitely many numbers $k$) then $\ang{\phi(a),\phi(b)}$ is
\dd\mathord{{\rm I}\hspace{-2.5pt}{\rm P}} generic by \ref{g2}, so $\phi(a)\mathbin{\not{\hspace{-2pt}\relf{E}}} \phi(b)$ by
Lemma~\ref{noE}.
Let us finally verify that
$\ang{\phi(0^k\we 0\we c),\phi(0^k\we 1\we c)}\in\mathbin{\relf{E}}_\al$ for all
$c\in 2^\om$ and $k\in \om,$ where $\al=\sup_k\al(k)<\om_1.$
The sequence of sets
$W_m=R_{0^k\we 0\we (c\res m)\,,\,0^k\we 1\we (c\res m)}\;\;\,(m\in\om)$
is then generic over $\rbox{L}$ by \ref{g3} in the sense of the forcing
${\skri P}^{\hbox{\tiny\rm OD}}(R_k)\setminus\ans{\emptyset}$ (we recall that
$R_k=R_{0^k\we 0\,,\,0^k\we 1}$), which is simply a copy of $\mathord{{\rm X}\hspace{-7pt}{\rm X}},$
so that by Corollary~\ref{choq-cor} the intersection of all sets
$W_m$ is a singleton. Obviously the singleton can be only equal to
$\ang{\phi(0^k\we 0\we c)\,,\,\phi(0^k\we 1\we c)}.$ We conclude
that $\phi(0^k\we 0\we c)\mathbin{\relf{E}}_\al \phi(0^k\we 1\we c),$ as
required.\qed
\subsubsection{Two preliminary lemmas}
Thus the theorem is reduced to the construction of sets $X_u$
and relations $R_{uv}.$ Before the construction starts, we prove a couple
of important lemmas.
\ble
\label{impo}
Assume\/ \cuh\la.
Let\/ $X,\,Y\sq\cD\cap{\sf Weak}_\la(\rbox{L})$ be\/ $\rbox{OD}$ sets such that\/
$(X\times Y)\cap\mathbin{\relf{E}}$ is nonempty. Then\/ $(X\times Y)\cap\mathbin{\relf{E}}$ contains a
weak over\/ $\rbox{L}$ point $\ang{x,y}$.
\ele
\noi{\bft Proof\hspace{2mm} }
First of all, by Proposition~\ref{solMb} we can
assume that $X=t(\rbox{L})$ and $Y=t'(\rbox{L}),$ where $t$ and
$t'$ belong to some $\cont\al\cap\rbox{L},$ $\al<\la.$ Then, since
$\la$ is a limit \dd\rbox{L} cardinal, we have $X=t(\rbox{L}_\ba)$ and
$Y=t'(\rbox{L}_\ba)$ for a suitable $\ba,$ $\al\<\ba<\la.$ Take
an arbitrary \dd{\col\ba}generic over $\rbox{L}$ function
$f\in\ba^\om.$ Then the statement ${(X\times Y)\cap\mathbin{\relf{E}}\not=\emptyset}$ turns
out to be a $\is11$ formula with reals in $\rbox{L}[f]$ (those coding
$f,\;t,\;t'$) as parameters.
Notice that all sets in $\rbox{L}[f]$ are weak over $\rbox{L},$ so it
remains to apply Shoenfield.\qed
\ble
\label{comb}
Assume\/ \cuh\la.
Let\/ $n\in\om,$ and\/ $X_u$ be a nonempty\/ $\rbox{OD}$ set for each\/
$u\in 2^n.$ Assume that an\/ $\rbox{OD}$ set\/ $R_{uv}\sq {\skri N}^2$ is
given for every neighbouring pair of\/ $u,\,v\in 2^n$ so that
$X_u \,R_{uv}\, X_v$.\its
\begin{enumerate}
\def\labelenumi{{\rm\arabic{enumi}.}}
\def\theenumi{\labelenumi}
\item If\/ $u_0\in 2^n$ and\/ $X'\sq X_{u_0}$ is\/ $\rbox{OD}$ and
nonempty then there exists a system of\/ $\rbox{OD}$ nonempty sets\/
$Y_u\sq X_u\;\;(u\in 2^n)$ such that\/ $Y_u \,R_{uv}\, Y_v$ holds for
all neighbouring pairs\/ $u,\,v,$ and in addition $Y_{u_0}=X'$.\its
\item
Suppose that\/ $u_0,\,v_0\in 2^n$ is a neighbouring pair and
nonempty\/ $\hspace{-1pt}\rbox{OD}\hspace{-1pt}$
sets\/ ${X'\sq X_{u_0}}$ and $X''\sq X_{v_0}$ satisfy\/
$X' \,R_{u_0v_0}\, X''.$ Then there exists a system of\/ $\rbox{OD}$
nonempty sets\/ ${Y_u\sq X_u}$ $(u\in 2^n)$ such that\/
${Y_u \,R_{uv}\, Y_v}$ holds for all neighbouring pairs\/ $u,v,$ and
in addition\/ $Y_{u_0}=X',\,\;Y_{v_0}=X''.$
\end{enumerate}
\ele
\noi{\bft Proof\hspace{2mm} } Notice that 1 follows from 2. Indeed take arbitrary $v_0$
such that either $\ang{u_0,v_0}$ or $\ang{v_0,u_0}$ is neighbouring,
and put respectively
${X''=\ans{y\in X_{v_0}: \exists\,x\in X'\;(x \,R_{u_0v_0}\, y)}},$ or
${X''=\ans{y\in X_{v_0}: \exists\,x\in X'\;(y \,R_{v_0u_0}\, x)}}$.
To prove item 2, we use induction on $n.$
For $n=1$ --- then $u_0=\ang{0}$ and $v_0=\ang{1}$ ---
we take $Y_{u_0}=X'$ and $Y_{v_0}=X''$.
The step. We prove the lemma for $n+1$ provided it has been proved
for $n;\,\,n\>1.$ The principal idea is to divide $2^{n+1}$ into two
copies of $2^n,$ minimally connected by neighbouring pairs, and handle
them more or less separately using the induction hypothesis. The
two ``copies'' are $U_0=\ans{s\we 0:s\in 2^n}$ and
$U_1=\ans{s\we 1:s\in 2^n}$.
The only neighbouring pair that connects $U_0$ and $U_1$ is the pair
of ${\hat u}=0^n\we 0$ and ${\hat v}=0^n\we 1.$ If in fact $u_0={\hat u}$
and $v_0={\hat v}$ then we apply the induction hypothesis (item~1)
independently for the families ${\ans{X_u:u\in U_0}}$ and
${\ans{X_u:u\in U_1}}$ and the given sets ${X'\sq X_{u_0}}$ and
${X''\sq X_{v_0}.}$ Assembling the results, we get nonempty $\rbox{OD}$
sets ${Y_u\sq X_u\,\;(u\in 2^{n+1})}$ such that ${Y_u \,R_{uv}\, Y_v}$
for all neighbouring pairs\/ $u,\,v,$ perhaps with the exception of the
pair of $u=u_0={\hat u}$ and $v=v_0={\hat v},$ and in addition
$Y_{u_0}=X'$ and $Y_{v_0}=X''.$
Thus finally $Y_{\hat u} \,R_{{\hat u}{\hat v}}\,Y_{\hat v}$ by the choice of $X'$ and
$X''$.
It remains to consider the case when both $u_0$ and $v_0$ belong
to one and the same domain, say to $U_0.$ Then we first apply the
induction hypothesis (item 2) to the family
${\ans{X_u:u\in U_0}}$ and the sets ${X'\sq X_{u_0}}$ and
${X''\sq X_{v_0}.}$ This results in a system of nonempty $\rbox{OD}$
sets ${Y_u\sq X_u\;\,(u\in U_0);}$ in particular we get an $\rbox{OD}$
nonempty set $Y_{\hat u}\sq X_{\hat u}.$ We put
${Y_{\hat v}=\ans{y\in X_{\hat v}:\exists\,x\in Y_{\hat u}\,
(x\,R_{{\hat u}{\hat v}}\,y)},}$ so that ${Y_{\hat u}\,R_{{\hat u}{\hat v}}\,Y_{\hat v},}$
and apply the induction hypothesis (item 1) to the family
${\ans{X_u:u\in U_1}}$ and the set $Y_{\hat v}\sq X_{\hat v}$.\qed
\subsubsection{The construction}
We put $X_\La=X_0.$
Now assume that the sets $X_s\,\;(s\in 2^n)$
and relations $R_{st}$ for all neighbouring pairs of $s,\,t\in 2^{\<n}$
have been defined, and expand the construction at level $n+1.$
We first put $A_{s\we i}=X_s$ for all $s\in 2^n$ and
$i\in\ans{0,1}.$ We also define $R_{uv}=R_{st}$ for any neighbouring
pair of $u=s\we i,\,\,v=t\we i$ in $2^{n+1}$ other than the pair
${\hat u}=0^n\we 0,\,\,{\hat v}=0^n\we 1.$ For the latter one (notice
that $A_{{\hat u}}=A_{{\hat v}}=X_{0^n}$) we put $R_{{\hat u}{\hat v}}=\mathbin{\overline{\relf{E}}},$
so that $A_u\,R_{uv}\, A_v$ holds for all neighbouring pairs of
$u,\,v\in 2^{n+1}$ including the pair $\ang{{\hat u},{\hat v}}$.
The sets $A_u$ and relations $R_{uv}$ will be reduced in several
steps to meet requirements \ref{a}, \ref{b}, \ref{d} and
\ref{g1}, \ref{g2}, \ref{g3} of Subsection~\ref{embed}.\vom
{\em Part 1}. After $2^{n+1}$ steps of the procedure of
Lemma~\ref{comb} (item 1) we obtain a system of nonempty $\rbox{OD}$
sets $B_u\sq A_u\;\,(u\in 2^{n+1})$ such that still
$B_u\,R_{uv}\, B_v$ for all neighbouring pairs $u,\,v$ in $2^{n+1},$ but
$B_u\in {\skri X}_{n+1}$ for all $u.$ Thus \ref{g1} is fixed.\vom
{\em Part 2}. To fix \ref{g2}, consider an arbitrary pair of
$u_0=s_0\we 0,$ $v_0=t_0\we 1,$ where $s_0,\,t_0\in 2^n.$ By
Remark~\ref{newrem} and density of the set ${\skri P}_{n+1}$ there
exist nonempty $\rbox{OD}$ sets $B'\sq B_{u_0}$ and $B''\sq B_{v_0}$
such that ${P=(B'\times B'')\cap\mathbin{\overline{\relf{E}}}\in {\skri P}_{n+1}}$ and
${{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} P=B',$ ${{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} P=B'',$ so in particular ${B'\mathbin{\overline{\relf{E}}} B''}.$ Now we
apply Lemma~\ref{comb} (item 1) separately for the two systems
of sets,
${\ans{B_{s\we 0}:s\in 2^n}}$ and ${\ans{B_{t\we 1}:t\in 2^n}}$
(compare with the proof of Lemma~\ref{comb}~!), and the sets
$B'\sq B_{s_0\we 0},$ $B''\sq B_{t_0\we 1}$ respectively.
This results in a system of nonempty
$\rbox{OD}$ sets ${B'_u\sq B_u}$ ${(u\in 2^{n+1})}$ satisfying
${B'_{u_0}=B'}$ and ${B'_{v_0}=B'',}$ so that we have
${(B'_{u_0}\times B'_{v_0})\cap\mathbin{\overline{\relf{E}}}\in {\skri P}_{n+1},}$ and still
$B'_u\,R_{uv}\, B'_v$ for all neighbouring pairs $u,\,v\in 2^{n+1},$
perhaps with the exception of the pair of
${\hat u}=0^n\we 0,\,\,{\hat v}=0^n\we 1,$ which is the only one that
connects the two domains. To handle this exceptional pair,
note that ${B'_{{\hat u}} \mathbin{\overline{\relf{E}}} B'_{u_0}}$ and ${B'_{{\hat v}} \mathbin{\overline{\relf{E}}} B'_{v_0}}$
(Remark~\ref{newrem} is applied to each of the two domains),
so that ${B'_{\hat u}\mathbin{\overline{\relf{E}}} B'_{\hat v}}$ since ${B'\mathbin{\overline{\relf{E}}} B''}.$ Finally
we observe that $R_{{\hat u}{\hat v}}$ is so far equal to $\mathbin{\overline{\relf{E}}}$.
After $2^{n+1}$ steps (the number of pairs $u_0,\,v_0$ to be
considered) we get a system of nonempty $\rbox{OD}$ sets
$C_u\sq B_u\;\,(u\in 2^{n+1})$ such that
$(C_u\times C_v)\cap\mathbin{\overline{\relf{E}}}\in {\skri P}_{n+1}$ whenever $u(n)\not=v(n),$
and still $C_u\,R_{uv}\, C_v$ for all neighbouring pairs
$u,\,v\in 2^{n+1}.$ Thus \ref{g2} is fixed.\vom
{\em Part 3}. We fix \ref{d} for the exceptional neighbouring
pair of ${{\hat u}=0^n\we 0},$ ${{\hat v}=0^n\we 1}.$ Since $\mathbin{\relf{E}}$ is
\dd{{\cal T}^2}dense in $\mathbin{\overline{\relf{E}}},$ and ${C_{\hat u}\mathbin{\overline{\relf{E}}} C_{\hat v},}$ the set
${Q=(C_{\hat u} \times C_{\hat v})\cap\mathbin{\relf{E}}}$ is nonempty. We observe that
the $\rbox{OD}$ set
$$
Q'=\ans{\ang{x,y}\in Q:\ang{x,y}\,\hbox{ is weak over }
\,\rbox{L}}
$$
is nonempty, too, by Lemma~\ref{impo}. Then, since
$Q'\sq Q\sq\mathbin{\relf{E}},$ the intersection
$Q''=Q'\cap\mathbin{\relf{E}}_\al$ is nonempty for some $\al<\om_1.$
($\mathbin{\relf{E}}_\al$ is the \dd\al th constituent of the \dd{\is11}set
$\mathbin{\relf{E}}$.) Finally some nonempty $\rbox{OD}$
set $Q_0\sq Q''$ is \dd{\cal T} separable by Lemma~\ref{dizl}.
Consider the $\rbox{OD}$ sets $C'={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} Q_0\,\,(\sq C_{\hat u})$ and
$C''={{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} Q_0\,\,(\sq C_{\hat v});$ obviously $C'\,Q_0\, C'',$ so that
$C'\,R_{{\hat u}{\hat v}}\, C''.$ (We recall that at the moment
$R_{{\hat u}{\hat v}}=\mathbin{\overline{\relf{E}}}.$) Using Lemma~\ref{comb} (item 2) again, we
obtain a system of nonempty $\rbox{OD}$ sets
$Y_u\sq C_u\;\,(u\in 2^{n+1})$ such that still $Y_u\,R_{uv}\, Y_v$
for all neighbouring pairs $u,\,v$ in $2^{n+1},$ and $Y_{\hat u}=C',$
$Y_{\hat v}=C''.$ We re--define $R_{{\hat u}{\hat v}}$ by setting $R_{{\hat u}{\hat v}}=Q_0$
(then $R_{{\hat u}{\hat v}}\sq \mathbin{\relf{E}}_\al$),
but this keeps $Y_{\hat u}\,R_{{\hat u}{\hat v}}\, Y_{\hat v}$.\vom
{\em Part 4}. We fix \ref{g3}. Consider a neighbouring pair
$u_0,\,v_0$ in $2^{n+1}.$
Then we have $u_0=0^k\we 0\we r,$ $v_0=0^k\we 1\we r$ for
some ${k\<n}$ and ${r\in 2^{n-k}}.$ It follows that
${Q=R_{u_0v_0}\cap(Y_{u_0}\times Y_{v_0})}$ is a nonempty
(since ${Y_{u_0}\,R_{u_0v_0}\, Y_{v_0}}$) $\rbox{OD}$ subset of
$R_k=R_{0^k\we 0\,,\,0^k\we 1}$ by the construction. Let
$Q_0\sq Q$ be a nonempty $\rbox{OD}$ set in ${\skri X}_{n+1}(R_k).$ We
now define $Y'={{\bbb\rbox{pr}\bbb}_1\hspace{1.5pt}} Q_0$ and $Y''={{\bbb\rbox{pr}\bbb}_2\hspace{1.5pt}} Q_0$ (then ${Y'\,Q_0\,Y''}$
and ${Y'\,R_{u_0v_0}\,Y''}$) and run Lemma~\ref{comb} (item 2) for the
system of sets $Y_u\;\,(u\in2^{n+1})$ and the sets
${Y'\sq Y_{u_0}},$ ${Y''\sq Y_{v_0}}$. After this define
the ``new'' $R_{u_0v_0}$ by $R_{u_0v_0}=Q_0$.
Do this consecutively for all neighbouring pairs; the finally
obtained sets -- let them be $X_u\,\;(u\in 2^{n+1})$ -- are as
required. The final relations $R_{uv}\;\,(u,\,v\in 2^{n+1})$ can
be obtained as the restrictions of the sets $R_{uv}$ to
$X_u\times X_v$.\vom
This ends the construction.
\vspace{4mm}
This also ends the proof of theorems \ref{mtv} and \ref{mt},
and Theorem~\ref{main} (the main theorem), see
Subsection~\ref{ne}.\qed
\newpage
Single-image super-resolution (SR) algorithms aim to construct a high-quality
high-resolution (HR) image from a single low-resolution (LR) input.
Numerous single-image SR
algorithms have been recently proposed for generic images
that exploit priors based on
edges~\cite{DBLP:conf/cvpr/SunXS08}, gradients~\cite{DBLP:journals/tog/ShanLJT08, Kim08_PAMI}, neighboring interpolation \cite{DBLP:journals/cvgip/IraniP91,DBLP:conf/accv/TimofteSG14},
regression~\cite{DBLP:conf/eccv/DongLHT14},
and patches \cite{Glasner2009,DBLP:journals/tip/YangWHM10,DBLP:journals/tip/DongZSW11,DBLP:journals/tip/SunSXS11,DBLP:conf/cvpr/YangLC13,Yang13_ICCV_Fast,DBLP:journals/imst/FarsiuREM04,DBLP:conf/iccv/TimofteDG13,DBLP:conf/cvpr/SchulterLB15}.
Most SR methods focus on generating sharper edges with richer
textures, and are usually evaluated by measuring the similarity
between super-resolved HR and ground-truth images through full-reference
metrics such as the mean squared error (MSE), peak signal-to-noise ratio
(PSNR) and structural similarity (SSIM)
index~\cite{DBLP:journals/tip/WangBSS04}.
In our recent SR benchmark study~\cite{Yang14_ECCV}, we show that the information
fidelity criterion (IFC)~\cite{DBLP:journals/tip/SheikhBV05} performs
favorably among full-reference metrics for SR performance evaluation.
However, full-reference metrics are originally designed to account for image signal and noise
rather than human visual perception~\cite{DBLP:book/Girod}, a limitation shared even by several recently proposed metrics.
We present 9 example SR images generated from the same LR image in Figure~\ref{fig:SRimage}. Table \ref{tb:srscore} shows that these full-reference metrics fail to match the visual perception of human subjects well for SR performance evaluation.
In addition, full-reference metrics require ground-truth images for evaluation
which are often unavailable in practice.
The question of how to effectively evaluate the quality of SR images based on visual perception thus remains open.
In this work, we propose to learn a no-reference metric
for evaluating the performance of single-image SR algorithms.
This is because no-reference metrics are designed to mimic visual perception (i.e., they are learned from large-scale perceptual scores) without requiring ground-truth images as references.
As the amount of training data increases, no-reference metrics have greater potential to match visual perception for SR performance evaluation.
\begin{figure}
\centering
\setlength{\tabcolsep}{.2em}
\small
\begin{tabular}{ccc}
\includegraphics[width=.32\textwidth]{figure/SRimage/Bicubicsf5.pdf}&
\includegraphics[width=.32\textwidth]{figure/SRimage/BPsf5.pdf} &
\includegraphics[width=.32\textwidth]{figure/SRimage/Shan08sf5.pdf}\\
(a) Bicubic interpolation &
(b) Back projection (BP)~\cite{DBLP:journals/cvgip/IraniP91} &
(c) Shan08~\cite{DBLP:journals/tog/ShanLJT08} \\
\includegraphics[width=.32\textwidth]{figure/SRimage/Glasner09sf5.pdf}&
\includegraphics[width=.32\textwidth]{figure/SRimage/Yang10sf5.pdf}&
\includegraphics[width=.32\textwidth]{figure/SRimage/Dong11sf5.pdf}\\
(d) Glasner09~\cite{Glasner2009} &
(e) Yang10~\cite{DBLP:journals/tip/YangWHM10} &
(f) Dong11~\cite{DBLP:journals/tip/DongZSW11} \\
\includegraphics[width=.32\textwidth]{figure/SRimage/Yang13sf5.pdf}&
\includegraphics[width=.32\textwidth]{figure/SRimage/Timofte13sf5.pdf}&
\includegraphics[width=.32\textwidth]{figure/SRimage/SRCNNsf5.pdf}\\
(g) Yang13~\cite{Yang13_ICCV_Fast} &
(h) Timofte13~\cite{DBLP:conf/iccv/TimofteDG13} &
(i) SRCNN~\cite{DBLP:conf/eccv/DongLHT14} \\
\end{tabular}
\caption{SR images generated from the same LR image using (\ref{eq:downsample}) ($s=4,\sigma=1.2$). The quality scores of these SR images are compared in Table \ref{tb:srscore}. The images are best viewed on a high-resolution display with an
adequate zoom level, where each SR image is shown with at least 320$\times$480 pixels
(full-resolution).
}
\label{fig:SRimage}
\end{figure}
\begin{table}
\centering
\includegraphics[width=\textwidth]{figure/comparescores.pdf} \\
\caption{Quality scores of SR images in Figure \ref{fig:SRimage} from human subjects,
the proposed metric, rescaled PSNR, SSIM and IFC (0 for worst and 10 for
best).
Note that human subjects favor Dong11 over Glasner09 as
the SR image in Figure \ref{fig:SRimage}(d) is over-sharpened (best viewed on a
high-resolution display).
%
However, the PSNR, SSIM and IFC metrics show opposite results
as the image in Figure \ref{fig:SRimage}(f) is misaligned to the reference
image by 0.5 pixel.
%
In contrast, the proposed metric matches visual perception well.}
\label{tb:srscore}
\end{table}
We first conduct human subject studies using a large set of SR
images to collect perceptual scores.
With these scores for training,
we propose a novel no-reference quality assessment algorithm that
matches visual perception well.
Our work, in essence, uses the same methodology as that of general image
quality assessment (IQA) approaches.
However, we evaluate the effectiveness of the signal reconstruction by SR algorithms
rather than analyzing noise and distortions (e.g., compression and fading) as in
existing IQA methods~\cite{DBLP:journals/spl/MoorthyB10,DBLP:journals/tip/MoorthyB11,DBLP:conf/cvpr/TangJK11, DBLP:journals/tip/SaadBC12,DBLP:conf/cvpr/YeKKD12,DBLP:conf/cvpr/TangJK14}.
We quantify SR artifacts based on their statistical properties in both spatial and frequency domains, and regress them to collected perceptual scores. Experimental results demonstrate the effectiveness of the proposed no-reference
metric in assessing the quality of SR images against existing IQA measures.
The main contributions of this work are summarized as follows.
First, we propose a novel no-reference IQA metric, which matches visual perception well, to evaluate the performance of SR algorithms. Second, we develop a large-scale dataset of SR images and conduct human subject studies on these images.
We make the SR dataset with collected perceptual scores publicly available at \url{https://sites.google.com/site/chaoma99/sr-metric}.
\section{Related Work and Problem Context}
The problem of how to evaluate SR performance can be posed as assessing the quality of super-resolved images.
Numerous metrics for general image quality assessment have been used to evaluate SR performance in the literature.
Depending on whether the ground-truth HR images are used as references, existing metrics fall into the following three classes.
\subsection{Full-Reference Metrics}
Full reference IQA methods such as the MSE, PSNR, and SSIM indices~\cite{DBLP:journals/tip/WangBSS04}
are widely used in the SR literature~\cite{DBLP:journals/tog/ShanLJT08,Kim08_PAMI,DBLP:journals/tip/YangWHM10,DBLP:journals/tip/DongZSW11,DBLP:journals/tip/SunSXS11,DBLP:conf/cvpr/YangLC13,Yang13_ICCV_Fast}.
However, these measures are developed for analyzing generic image
signals and do not match human perception (e.g., MSE)~\cite{DBLP:book/Girod}.
In~\cite{DBLP:conf/icip/ReibmanBG06}, Reibman et al. conduct subject
studies to examine the limitations of SR performance in terms of
scaling factors using a set of three images and existing metrics.
Subjects are given two SR images each time and asked to select the
preferred one.
The perceptual scores of the whole test SR images are
analyzed with the Bradley-Terry model~\cite{DBLP:conf/pics/Handley01}.
The results show that while SSIM performs better than others, it
is still not correlated with visual perception well.
In our recent SR benchmark work~\cite{Yang14_ECCV}, we conduct subject studies in a subset of generated SR images, and show that the IFC~\cite{DBLP:journals/tip/SheikhBV05} metric performs well among full-reference measures.
Since subject studies are time-consuming and expensive, Reibman et al. use only six ground-truth images to generate test SR images, while we use only 10 in~\cite{Yang14_ECCV}.
It is therefore of great importance to conduct a larger subject study to address the question of how to effectively evaluate the performance of SR
algorithms based on visual perception.
\subsection{Semi-Reference Metric}
In addition to the issues on matching visual perception,
full-reference metrics can only be used for assessment
when the ground-truth images are available.
Some efforts have been made to address this problem by
using the LR input images as references rather than the HR ground-truth
ones, which do not always exist in real-world applications.
Yeganeh et al.~\cite{DBLP:conf/icip/YeganehRW12} extract two-dimensional
statistical features in the spatial and frequency domains to
compute assessment scores from either a test LR image or a generated
SR image.
However, only 8 images and 4 SR algorithms are analyzed in their work.
Our experiments with a larger number of test images and SR algorithms
show that this method is less effective due to the lack of
holistic statistical features.
\subsection{No-Reference Metrics}
When the ground-truth images are not available, SR images can be
evaluated by the no-reference IQA methods~\cite{DBLP:journals/spl/MoorthyB10,DBLP:conf/cvpr/TangJK11,DBLP:journals/tip/MoorthyB11,DBLP:journals/tip/SaadBC12}
based on the hypothesis that natural images possess certain statistical properties,
which are altered in the presence of distortions (e.g., noise) and
this alteration can be quantified for quality assessment.
In~\cite{DBLP:conf/cvpr/YeKKD12, DBLP:conf/cvpr/TangJK14},
features learned from auxiliary datasets are used to quantify the natural
image degradations as alternatives of statistical properties.
Existing no-reference IQA methods are all learning-based, but the training images are
degraded by noise, compression or fast fading rather than super-resolution.
As a result, the state-of-the-art no-reference IQA methods are less
effective for accounting for the artifacts such as incorrect high-frequency details
introduced by SR algorithms.
On the other hand, since SR images usually contain blur and
ringing artifacts, the proposed algorithm bears some resemblance
to existing metrics for blur and sharpness
estimation~\cite{DBLP:journals/tip/FerzliK09,DBLP:conf/cvpr/ChoJZKSF10,DBLP:journals/tog/LiuWCFR13}.
The most significant difference is
that we focus on SR images, in which numerous artifacts are
introduced by more than one blur kernel.
In this work, we propose a novel
no-reference metric for SR image quality assessment by learning
from perceptual scores based on subject studies involving
a large number of SR images and algorithms.
\begin{figure}
\centering
\includegraphics[width=.28\textwidth]{figure/PSNRsort.pdf}
\includegraphics[width=.7\textwidth]{figure/psnr_select.pdf}\\
\caption{Ranked PSNR values on the BSD200 dataset and
the evenly selected three sets of images.
The PSNR values indicate the quality scores of the SR images
generated from the LR images using \eqref{eq:downsample}
with scaling factor ($s$) of 2 and Gaussian kernel width ($\sigma$) of 0.8
by the bicubic interpolation algorithm.}
\label{fig:psnrsort}
\end{figure}
\section{Human Subject Studies}
\label{sec:sujectstudy}
We use the Berkeley segmentation
dataset~\cite{ICCV01:BerklyDataSet} to carry out the experiments as
the images are diverse and widely used for SR
evaluation~\cite{Glasner2009,DBLP:journals/tip/SunSXS11,Yang13_ICCV_Fast}.
For an HR source image $I_h$, let $s$ be a scaling factor, and the width and height of $I_h$ be $s\times n$
and $s\times m$.
We generate a downsampled LR image $I_l$ as follows:
\begin{equation}
\label{eq:downsample}
I_l(u, v)=\sum_{x, y}k(x-su, y-sv)I_h(x, y),
\end{equation}
where $u \in \{1,\dots,n\}$ and $v \in \{1,\dots,m\}$ are
indices of $I_l$, and $k$ is a matrix of Gaussian kernel weight determined by
a parameter $\sigma$, e.g.,
$k(\Delta x, \Delta y) = \frac{1}{Z}e^{-(\Delta x^2 + \Delta
y^2)/2\sigma^2}$,
where $Z$ is a normalization term.
Compared to our benchmark work~\cite{Yang14_ECCV}, we remove the noise term from \eqref{eq:downsample} to reduce uncertainty.
The quality of the super-resolved images generated from these LR images is used to evaluate SR performance.
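As a concrete illustration, the following is a minimal sketch of the LR image formation in \eqref{eq:downsample}, assuming a single-channel image stored as a NumPy array; the function names are ours and purely illustrative:
\begin{verbatim}
import numpy as np

def gaussian_kernel(sigma):
    # Truncated Gaussian weights k; the normalization plays the role of Z.
    r = int(np.ceil(3 * sigma))
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def downsample(hr, s, sigma):
    # LR formation: Gaussian blur of the HR image, then s-fold subsampling.
    k = gaussian_kernel(sigma)
    r = k.shape[0] // 2
    pad = np.pad(hr.astype(np.float64), r, mode='reflect')
    m, n = hr.shape[0] // s, hr.shape[1] // s
    lr = np.empty((m, n))
    for u in range(m):
        for v in range(n):
            lr[u, v] = (k * pad[u*s:u*s + 2*r + 1, v*s:v*s + 2*r + 1]).sum()
    return lr
\end{verbatim}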
In this work,
we select 30 ground truth images from the BSD200 dataset~\cite{ICCV01:BerklyDataSet} according to the PSNR values.
In order to obtain a representative image set that covers a wide range of high-frequency details,
we compute the PSNR values as the quality scores of the SR images generated from the LR images using \eqref{eq:downsample}
with a scaling factor ($s$) of 2 and a Gaussian kernel width ($\sigma$) of 0.8 by the bicubic interpolation algorithm.
The selected 30 images are evenly divided into three sets
as shown in Figure~\ref{fig:psnrsort}.
\begin{table}
\small
\centering
\caption{The scaling factors ($s$) in our experiments with their corresponding kernel width values ($\sigma$).}
\label{fig:pair}
\setlength{\tabcolsep}{1.4em}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
$s$ & 2 & 3 & 4 & 5 & 6 & 8 \\ \hline
$\sigma$ & 0.8 & 1.0 & 1.2 & 1.6 & 1.8 & 2.0 \\ \hline
\end{tabular}
\end{table}
\begin{figure}[!t]
\centering
\small
\setlength{\tabcolsep}{.2em}
\begin{tabular}{ccc}
\includegraphics[width=.32\textwidth]{figure/sup/sk1.pdf} &
\includegraphics[width=.32\textwidth]{figure/sup/sk2.pdf} &
\includegraphics[width=.32\textwidth]{figure/sup/sk3.pdf} \\
\includegraphics[width=.32\textwidth]{figure/sup/sk4.pdf} &
\includegraphics[width=.32\textwidth]{figure/sup/sk5.pdf} &
\includegraphics[width=.32\textwidth]{figure/sup/sk6.pdf}\\
\end{tabular}
\caption{Distribution of mean PSNR on the selected images.
Note the increasing trend of the kernel width along the increase of the scale factor to generate the peak PSNR values.
%
The SR algorithm Dong11 does not converge when $\sigma<0.8$. The vertical dash line highlights the optimal kernel width for each scaling factor.
}
\label{fig:kw}
\end{figure}
The LR image formation of (\ref{eq:downsample}) can be viewed as a
combination of a downsampling
and a blurring operation which is determined by the scaling factor $s$ and
kernel width $\sigma$, respectively.
As subject studies are time-consuming and expensive, our current work focuses on large differences caused by scaling factors, which are critical to the quality assessment of SR images.
We focus on how to effectively quantify the upper bound of SR performance based on human perception.
Similar to~\cite{Yang14_ECCV}, we assume the kernel width is known, and
compute the mean PSNR values of the SR images
generated by 9 SR methods under various settings ($s\in\{2,3,4,5,6,8\}$ and
$\sigma \in \{0.4,0.6,\ldots,2\}$)
using 30 ground truth images.
Figure~\ref{fig:kw} shows that a larger subsampling factor requires a larger
blur kernel width for better performance.
We thus select an optimal $\sigma$ for each scaling factor ($s$)
as shown in Table~\ref{fig:pair}.
\begin{table}[t!]
\centering
\small
\caption{Empirical quality scores on SR images generated by bicubic interpolation. GT indicates the ground-truth HR images.}
\label{tb:inst}
\setlength{\tabcolsep}{.7em}
\begin{tabular}{ |c|c|c| c| c |c| c| c| }
\hline
$s$ & \ GT \ & 2 & 3 & 4 & 5 & 6 & 8 \\ \hline
Score ($\approx$) & 10 & $8\sim9$ & $5\sim7$ & $4\sim6$ & $3\sim5$ & $2\sim 4$ & $<2$ \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=.84\textwidth]{figure/screenshot.png}\\
\caption{One screenshot of human subject study. Subjects assign scores
between 0 to 10 to displayed SR images. Test images are randomly
presented in order to reduce bias caused by similarity of image
contents.}
\label{fig:screenshot}
\end{figure}
In the subject studies, we use
absolute rating scores rather than
pairwise comparison scores as we have 1,620 test images,
which would require millions of pairwise comparisons (i.e., $C_2^{1620}\approx1.3\text{M}$).
Although the sampling strategy \cite{CVPR14_PengYe} could alleviate this burden partially, pairwise comparison is infeasible given the number of subjects, images and time constraints.
We note that subject studies in \cite{DBLP:journals/tip/SheikhSB06,Yang14_ECCV} are also based on absolute rating.
In this work, we develop a user interface (See Figure~\ref{fig:screenshot})
to collect perceptual scores for these SR images.
At each time, we simultaneously show 9 images generated from one LR image
by different SR algorithms on a high-resolution display.
These images are displayed in random order to reduce
bias caused by correlation of image contents.
Subjects are asked to give scores from 0 to 10 to indicate
image quality based on visual preference.
We divide the whole test evenly into 3 sections such that subjects
can take a break after each section and maintain a high attention span in our studies.
To reduce the inconsistency among individual quality criteria,
we include a training process
at the beginning of each section,
i.e., we give subjects an overview of all the
ground-truth and SR images generated by bicubic interpolation, together with the reference scale of quality scores
shown in Table~\ref{tb:inst}.
\begin{table}
\caption{Data sets used for image quality assessment based on subject
studies.}
\centering
\small
\setlength{\tabcolsep}{0.6em}
\label{tb:dataset}
\begin{tabular}{|l|c|c|c|}
\hline
Dataset & \# Reference Images & \ \# Distortions \ & \# Subject Scores \\ \hline\hline
LIVE~\cite{DBLP:journals/tip/SheikhSB06} & 29 & 982 & 22,457 \\ \hline
ASQA~\cite{CVPR14_PengYe} & 20 & 120 & 35,700 \\ \hline
SRAB~\cite{Yang14_ECCV} & 10 & 540 & 16,200 \\ \hline
Our study & 30 & 1,620 & 81,000 \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\small
\setlength{\tabcolsep}{.5em}
\begin{tabular}{cc}
\includegraphics[width=.42\textwidth]{figure/compareECCV14.pdf} &
\includegraphics[width=.42\textwidth]{figure/outlierpruning.pdf} \\
(a) & (b)
\end{tabular}
\caption{(a) Deviation of 50 perceptual scores
on three pairs of SR images generated by bicubic interpolation from the same
test image in Figure~\ref{fig:SRimage}.
(b) Sorted mean perceptual scores and deviations before and after removing
outliers. }
\label{fig:meanerror}
\end{figure}
We collect 50 scores from 50 subjects for each image, and
compute the perceptual quality index as the mean of the median 40
scores to remove outliers.
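In effect, this is a trimmed mean; a short sketch of the computation (ours, assuming the 50 raw scores per image are stored in a list) is:
\begin{verbatim}
import numpy as np

def perceptual_index(scores):
    # Mean of the median 40 out of 50 scores:
    # drop the 5 smallest and 5 largest, then average.
    s = np.sort(np.asarray(scores, dtype=float))
    return s[5:-5].mean()
\end{verbatim}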
To the best of our knowledge, our subject study is the largest so far in
terms of SR images, algorithms, and subject scores (See Table~\ref{tb:dataset}).
In addition to using more images than~\cite{Yang14_ECCV}, we
present subjects with color SR images for evaluation, as we observe that monochrome SR images introduce larger individual bias, as demonstrated in Figure~\ref{fig:meanerror}(a).
This is reasonable since gray-scale images are rare in daily life and subjects hold different quality criteria for them.
Figure~\ref{fig:meanerror}(b) shows that
the mean perceptual scores are more stable after removing outliers.
\begin{figure}[!t]
\centering
\small
\begin{minipage}{\textwidth}
\includegraphics[width=.495\textwidth]{figure/perceptualscore1.pdf}
\includegraphics[width=.495\textwidth]{figure/perceptualscore2.pdf}\\
\includegraphics[width=.495\textwidth]{figure/perceptualscore3.pdf}
\includegraphics[width=.495\textwidth]{figure/perceptualscore4.pdf}\\
\includegraphics[width=.495\textwidth]{figure/perceptualscore5.pdf}
\includegraphics[width=.495\textwidth]{figure/perceptualscore6.pdf}\\
\caption{Perceptual scores of SR images under 6 pairs of scaling factor ($s$) and kernel width ($\sigma$).
The performance rank of SR algorithms remains relatively consistent,
even while score values change under different scaling factors and kernel widths.
The average perceptual scores of each SR algorithm are shown in the legend
(Shan08 with $s=3$, $\sigma=1.0$
is excluded as the SR images contain severe noise and their perceptual scores are close to 0)}.
\vspace{1em}
\label{fig:score}
\end{minipage}
\begin{minipage}{\textwidth}
\centering
\small
\setlength{\tabcolsep}{1em}
\begin{tabular}{cc}
\includegraphics[width=.42\textwidth]{figure/gt/35028.png}&
\includegraphics[width=.42\textwidth]{figure/gt/134049.png}\\
(a) Image ID 10 & (b) Image ID 23
\end{tabular}
\caption{The scores in Figure~\ref{fig:score} indicated by vertical dash lines for the SR images generated from (a) are much higher than that of (b).}
\label{fig:twoimages}
\end{minipage}
\end{figure}
Figure~\ref{fig:score} shows the computed mean perceptual quality indices
in terms of scaling factor and kernel width.
From the human subject studies, we have the following observations.
First, the performance rank of 9 SR algorithms remains the same (i.e.,
the curves are similar)
across all images in Figure \ref{fig:score}(a)-(f),
which shows consistency of perceptual scores on evaluating SR
algorithms.
Second, the performance rank changes with scaling factors,
e.g., Glasner09 outperforms Bicubic with higher perceptual scores in
Figure~\ref{fig:score}(a) while it is the opposite in Figure~\ref{fig:score}(c).
Since the image quality degradation caused by scaling factors is larger
than that by different SR methods,
the statistical properties for quantifying SR artifacts have to be discriminative
to both scaling variations and SR algorithms.
Third, SR results generated from LR images with smoother
contents have higher perceptual scores, e.g., the score of the image in
Figure~\ref{fig:twoimages}(a) is higher than that of Figure~\ref{fig:twoimages}(b).
This may be explained by the fact that visual perception is sensitive
to edges and textures and most algorithms do not perform well for
images such as Figure~\ref{fig:twoimages}(b).
\section{Proposed Algorithm}
We exploit three types of statistical properties as features,
including local and global frequency variations and spatial discontinuity,
to quantify artifacts and assess the quality of SR images.
Each set of statistical features is computed on a pyramid to alleviate the
scale sensitivity of SR artifacts.
Figure~\ref{fig:flowchart} shows the main steps of the proposed
algorithm for learning no-reference quality metric.
Figure~\ref{fig:feat} shows an overview of the statistical properties
of each type of features.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figure/flowchart.pdf}
\caption{Main steps of the proposed no-reference metric.
For each input SR image, statistics computed from the
spatial and frequency domains are used as features to represent SR
images. Each set of extracted features are trained in separate ensemble regression trees,
and a linear regression model is used to predict a
quality score by learning from a large number of visual perceptual scores.}
\label{fig:flowchart}
\end{figure}
\subsection{Local Frequency Features}
The statistics of coefficients from the discrete cosine transform (DCT)
have been shown to effectively quantify the degree and type of image
degradation~\cite{DBLP:journals/jmiv/SrivastavaLSZ03},
and used for natural image quality assessment~\cite{DBLP:journals/tip/SaadBC12}.
Since SR images are generated from LR inputs, the task can be
considered as a restoration of high-frequency components of LR images.
%
To quantify the high-frequency artifacts introduced by SR restoration,
we propose to transform SR images into the DCT domain and fit the DCT
coefficients by the generalized Gaussian distribution (GGD) as in \cite{DBLP:journals/tip/SaadBC12}.
\begin{equation}
\label{eq:GGD}
f(x|\mu,\gamma)= \frac{1}{2\Gamma(1+\gamma^{-1})}
e^{-(|x-\mu|^\gamma)},
\end{equation}
where $\mu$ is the mean of the random variable $x$,
$\gamma$ is the shape parameter and $\Gamma(\cdot)$
is the gamma function, e.g., $\Gamma(z)=\int_0^\infty t^{z-1}e^{-t}dt$.
We observe that the shape factor $\gamma$ is more
discriminative than the mean value $\mu$ to characterize the distribution
of DCT coefficients (See Figure~\ref{fig:feat}(a)).
We thus select the value of $\gamma$ as one statistical
feature to describe SR images.
Let $\sigma$ be the standard deviation of a DCT block;
we use $\bar{\sigma}=\frac{\sigma}{\mu}$
to describe the perturbation within one block.
We further group DCT coefficients of each block
into three sets (See Figure~\ref{fig:bb}(a)) and compute the
normalized deviation $\bar{\sigma}_i$ ($i=1,2,3$) of each set and their
variation $\Sigma$ of $\{\bar{\sigma}_i\}$ as features.
As all the statistics are computed on
individual blocks, large bias is likely to be introduced if these measures
are simply concatenated.
We thus pool those block statistics and use the mean values
to represent each SR image.
To increase their discriminative strength, we add the first and
last 10\% pooled variations as features.
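One standard way to estimate the shape parameter $\gamma$ of \eqref{eq:GGD} is moment matching, i.e., matching the ratio $(\mathrm{E}|x-\mu|)^2/\mathrm{Var}(x)$ against its closed form $\Gamma(2/\gamma)^2/(\Gamma(1/\gamma)\Gamma(3/\gamma))$. The sketch below is ours and only illustrative (it pools block means only, whereas the full feature set in Table~\ref{tb:feature} also uses percentiles):
\begin{verbatim}
import numpy as np
from scipy.fft import dctn
from scipy.special import gamma as G

def ggd_shape(x, grid=np.arange(0.2, 10.0, 0.01)):
    # Moment-matching estimate of the GGD shape parameter gamma.
    x = x - x.mean()
    rho_hat = np.mean(np.abs(x))**2 / (np.mean(x**2) + 1e-12)
    rho = G(2.0 / grid)**2 / (G(1.0 / grid) * G(3.0 / grid))
    return grid[np.argmin(np.abs(rho - rho_hat))]

def local_frequency_features(img, b=7):
    # Per-block DCT statistics, pooled over the image (means only).
    gammas, devs = [], []
    h, w = img.shape
    for i in range(0, h - b + 1, b):
        for j in range(0, w - b + 1, b):
            c = dctn(img[i:i+b, j:j+b], norm='ortho').ravel()[1:]  # drop DC
            gammas.append(ggd_shape(c))
            devs.append(np.std(c) / (abs(np.mean(c)) + 1e-12))
    return np.mean(gammas), np.mean(devs)
\end{verbatim}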
\begin{figure}
\small
\centering
\setlength{\tabcolsep}{0.1em}
\begin{tabular}{ccc}
\includegraphics[width=.33\textwidth]{figure/DCT.pdf} &
\includegraphics[width=.332\textwidth]{figure/GSM.pdf} &
\includegraphics[width=.318\textwidth]{figure/PCA.pdf} \\
(a) & (b) & (c)\\
\end{tabular}
\caption{(a) Estimated GGD distribution of the normalized DCT
coefficient in the first block of the images from Figure~\ref{fig:SRimage}.
Note that the shape parameter $\gamma$ effectively characterizes the
distribution difference between SR algorithms ($\mu$ is disregarded).
(b) Wavelet coefficient distribution in one
subband.
The GSM makes the distribution of subband more Gaussian-like (blue).
(c) Distribution of patch singular values of SR images in Figure~\ref{fig:SRimage}.
For SR images generated by Bicubic and BP containing more edge blur
(smoothness), their singular values fall off more rapidly.
In contrast, Glasner09 strengthens the sharpness and the singular
values of its generated SR image decrease more slowly. }
\label{fig:feat}
\end{figure}
\begin{figure}
\small
\centering
\setlength{\tabcolsep}{2em}
\begin{tabular}{cc}
\includegraphics[width=.25\textwidth]{figure/blockdivision.pdf} &
\includegraphics[width=.4\textwidth]{figure/neighbor.pdf} \\
(a) & (b)
\end{tabular}
\caption{
(a) Three groups of DCT coefficients for one block are shown by
different colors. The DC coefficients are excluded.
(b) $N = 15$ neighboring filters. $3\times3$ adjacent positions
in the current band, 5 locations in the neighboring band
and 1 from the parent band.}
\label{fig:bb}
\end{figure}
\subsection{Global Frequency Features}
The global distribution of the wavelet coefficients
of one SR image might not be fitted well by a specific distribution (e.g., GGD).
We resort to the Gaussian scale mixture (GSM) model, which is effective
in describing the marginal and joint statistics of natural
images~\cite{NIPS/Wainwright99b,DBLP:journals/tip/MoorthyB11}
using a set of neighboring wavelet bands.
An $N$-dimensional random vector $Y$ belongs to a GSM if $Y\equiv
z\cdot U$, where $\equiv$ denotes equality in probability
distribution,
and $U$ is a zero-mean Gaussian random vector with covariance $Q$.
The variable $z$ is a non-negative mixing multiplier.
The density of $Y$ is given by an integral as
\begin{equation}
p_Y(y)=\int\frac{1}{(2\pi)^{N/2}|z^2Q|^{1/2}} e^{\left
(-\frac{Y^TQ^{-1}Y}{2z^2}\right )}p_z(z)dz,
\end{equation}
where $p_z(\cdot)$ is the probability density of the mixing variable $z$.
We first apply the steerable pyramid
decomposition~\cite{DBLP:journals/tit/SimoncelliFAH92}
on an SR image to generate neighboring wavelet coefficients.
Compared to~\cite{NIPS/Wainwright99b,DBLP:journals/tip/MoorthyB11},
we apply the decomposition in both the real and imaginary domains
rather than only in the real domain.
We observe that the wavelet coefficients in the complex domain have
better discriminative strength.
As shown in Figure~\ref{fig:bb}(b), we assume that the $N$ (e.g.,
$N=15$) filters in a neighborhood share a mixer estimated by
$\hat{z}=\sqrt{Y^TQ^{-1}Y/N}$.
Such estimation is identical to divisive normalization
\cite{DBLP:journals/tip/WangBSS04,DBLP:journals/tip/MoorthyB11}
and makes the probability distribution of wavelet band more
Gaussian-like (See Figure~\ref{fig:feat}(b)).
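A minimal sketch of this shared-mixer (divisive) normalization, assuming the $N$ neighboring responses are stacked into an $N\times M$ matrix (one column per spatial location) and $Q$ is estimated by the sample covariance, is:
\begin{verbatim}
import numpy as np

def divisive_normalize(Y):
    # Y: N x M, one row per neighboring filter response.
    Q = np.cov(Y)                  # covariance of the Gaussian component U
    Qinv = np.linalg.pinv(Q)
    # z_hat = sqrt(y^T Q^{-1} y / N) at each of the M locations.
    z = np.sqrt(np.einsum('nm,nk,km->m', Y, Qinv, Y) / Y.shape[0])
    return Y / (z + 1e-12)         # Gaussianized coefficients
\end{verbatim}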
Let $d_\alpha^\theta$ be the normalized wavelet subband
with scale $\alpha$ and orientation $\theta$.
We estimate the shape parameter $\gamma$ using (\ref{eq:GGD}) on
$d_\alpha^\theta$ and concatenated bands $d^\theta$ across scales.
In addition,
we compute the structural
correlation~\cite{DBLP:journals/tip/WangBSS04,DBLP:journals/tip/MoorthyB11}
between high-pass response and their band-pass counterparts to measure
the global SR artifacts.
Specifically, the band-pass and high-pass responses
are filtered across-scale by a $15\times15$ Gaussian window with
kernel width $\sigma=1.5$.
The structural correlation is computed by
$\rho=\frac{2\sigma_{xy}+c_0}{\sigma_x^2+\sigma_y^2+c_0}$,
where $\sigma_{xy}$ is the cross-covariance between the windowed regions;
$\sigma_x$ as well as $\sigma_y$ are their windowed variances;
and $c_0$ is a constant for stabilization.
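A sketch of $\rho$ with Gaussian-windowed moments follows; the truncation radius is chosen so that the support is $15\times15$, and pooling by the mean is our assumption:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def structural_correlation(x, y, sigma=1.5, c0=1e-3):
    # rho = (2*sigma_xy + c0) / (sigma_x^2 + sigma_y^2 + c0),
    # with all (co)variances computed under a Gaussian window.
    g = lambda a: gaussian_filter(a, sigma, truncate=7/1.5)  # 15x15 support
    mx, my = g(x), g(y)
    sxx = g(x * x) - mx * mx
    syy = g(y * y) - my * my
    sxy = g(x * y) - mx * my
    return np.mean((2 * sxy + c0) / (sxx + syy + c0))
\end{verbatim}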
\subsection{Spatial Features}
Since the spatial discontinuity of pixel intensity is closely related
to perceptual scores for SR images in subject studies (See
Figure~\ref{fig:score}),
we model this property in a way similar
to~\cite{DBLP:conf/icip/YeganehRW12}.
We extract features from patches rather than pixels to increase
discriminative strength.
We apply principal component analysis (PCA) on patches
and use the corresponding singular values to describe the spatial
discontinuity.
Singular values of images with smooth contents are squeezed to zero
more rapidly than those of images with sharp contents (as they correspond
to less significant eigenvectors).
Figure~\ref{fig:feat}(c) shows the singular values of SR images
generated from Bicubic and BP fall off more rapidly
as the generated contents tend to be smooth.
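A sketch of this patch-PCA computation, assuming non-overlapping $5\times5$ patches (the patch sampling scheme is our assumption), is:
\begin{verbatim}
import numpy as np

def patch_singular_values(img, p=5):
    # Stack patches as rows and take singular values of the centered
    # patch matrix; fast decay indicates smooth image content.
    h, w = img.shape
    P = np.stack([img[i:i+p, j:j+p].ravel()
                  for i in range(0, h - p + 1, p)
                  for j in range(0, w - p + 1, p)])
    P = P - P.mean(axis=0)
    return np.linalg.svd(P, compute_uv=False)
\end{verbatim}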
\subsection{Two-stage Regression Model}
We model the features of local frequency,
global frequency and spatial discontinuity with three independent
regression forests~\cite{DBLP:journals/ml/Breiman01,MSR-TR-2011-114}.
Their outputs are linearly regressed on perceptual scores to predict the quality
of evaluated SR images.
Let $x_n$ ($n=1,2,3$) denote one type of low-level features,
and $y$ be the perceptual scores of SR images.
The $j$-th node of the $t$-th decision tree ($t=1,2,\ldots, T$)
in the forest is learned as:
\begin{equation}
\theta_j^{n*}=\argmax_{\theta_j^n\in \mathcal{T}_j}I_j^n,
\end{equation}
where $\mathcal{T}_j$
controls the size of a random subset of training data to train node $j$.
The objective function $I_j^n$ is defined as:
\begin{equation}
I_j^n=\sum_{x_n\in\mathcal{S}_j}\log(|\Lambda_y(x_n)|)-\sum_{i\in
\{L,R\}}\big(\sum_{x_n\in \mathcal{S}_j^i}\log(|\Lambda_y(x_n)|)\big)
\end{equation}
with $\Lambda_y$ the conditional covariance matrix computed from probabilistic linear fitting,
where $\mathcal{S}_j$ denotes the set of training data arriving at node $j$,
and $\mathcal{S}_j^L$, $\mathcal{S}_j^R$ the left and right split sets.
We refer readers to \cite{MSR-TR-2011-114} for more details about regression forest.
The predicted score $\hat{y}_n$ is thus
computed by averaging the outputs of $T$ regression trees as:
\begin{equation}
\hat{y}_n=\frac{1}{T}\sum_{t=1}^Tp_t(x_n|\Theta).
\end{equation}
Consequently, we linearly regress the outputs from all three types of features to perceptual scores,
and estimate the final quality score as $\hat{y}=\sum_n\lambda_n\cdot\hat{y}_n$,
where the weight $\lambda$ is learned by minimizing
\begin{equation}
\lambda^*=\argmin_{\lambda}(\sum_n\lambda_n\cdot\hat{y}_n-y)^2.
\end{equation}
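The two-stage model can be sketched as follows, with scikit-learn's \texttt{RandomForestRegressor} standing in for the regression forest above (its split criterion differs from the objective $I_j^n$, so this is illustrative rather than exact):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_two_stage(X_list, y, T=2000):
    # Stage 1: one forest per feature type (local frequency,
    # global frequency, spatial discontinuity).
    forests = [RandomForestRegressor(n_estimators=T).fit(X, y)
               for X in X_list]
    # Stage 2: least-squares weights lambda on the forest outputs.
    Yhat = np.column_stack([f.predict(X)
                            for f, X in zip(forests, X_list)])
    lam, *_ = np.linalg.lstsq(Yhat, y, rcond=None)
    return forests, lam

def predict_two_stage(forests, lam, x_list):
    yhat = np.array([f.predict(x.reshape(1, -1))[0]
                     for f, x in zip(forests, x_list)])
    return float(yhat @ lam)
\end{verbatim}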
\section{Experimental Validation}
\label{sec:experiment}
In the human subject studies, we generate 1,620 SR images from 180 LR
inputs using 9 different SR algorithms, and collect their perceptual scores
from 50 subjects. The mean of the median 40 subject scores is used as perceptual score.
We randomly split the dataset into 5 sets, and
select each set in turn for testing and use the remaining sets for training.
After this loop, we obtain the quality scores estimated by the proposed metric for all SR images.
We then compare the Spearman rank correlation coefficients between the
predicted quality scores and perceptual scores.
In addition to the 5-fold cross validation, we split the training and test sets according to
the reference images and SR methods
to verify the generality of the proposed metric.
Given that there are 30 reference images and 9 SR methods,
we leave $6$ reference images or
2 methods out in each experiment.
Several state-of-the-art no-reference IQA methods and 4 most
widely used full-reference metrics for SR images are included for
experimental validation.
More results and the source code of the proposed metric can be found at \url{https://sites.google.com/site/chaoma99/sr-metric}.
\begin{table}
\caption{List of features used in this work.}
\label{tb:feature}
\centering
\small
\setlength{\tabcolsep}{1em}
\begin{tabular}{ |c|c|c| }
\hline
Feature domain & Feature Description & \# \\
\hline\hline
\multirow{3}*{Local frequency} & $\gamma$ (mean, first 10\% percentile) & 6 \\
\cline{2-3}
& $\bar{\sigma}$ (mean, last 10\% percentile) & 6 \\
\cline{2-3}
& $\Sigma$ (mean, last 10\% percentile) & 6 \\
\hline
\multirow{3}*{Global frequency} & $\gamma$ for each band $d_\alpha^\theta$ and $d^\theta$ & 18 \\
\cline{2-3}
& Across-scale correlation & 12 \\
\cline{2-3}
& Across-band correlation & 15 \\
\hline
Spatial discontinuity & Singular values of patches & 75 \\
\hline
Total & & 138 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\small
\includegraphics[width=0.8\textwidth]{figure/RMSE_random.pdf} \\
(a) 5-fold cross validation \\
\includegraphics[width=0.8\textwidth]{figure/RMSE_leaveimage.pdf} \\
(b) Leaving 6 reference images out \\
\includegraphics[width=0.8\textwidth]{figure/RMSE_leavemethod.pdf} \\
(c) Leaving 2 SR methods out \\
\caption{Root-mean-square error between the estimated score and
the subjective score
(measures with smaller values are closer to human
visual perception) using 3 validation schemes.
%
Note that the proposed two-stage regression model (orange bar) built on
three types of low-level features (blue bar) significantly reduces the
error with respect to perceptual scores. }
\label{fig:RMSE}
\end{figure}
\subsection{Parameter Settings}
We compute local frequency features from $7\times7$ blocks of DCT
coefficients on a three-level pyramid.
For steerable pyramid wavelet decomposition, we set
$\alpha$ and $\theta$ to be 2 and 6, respectively.
The resulting 12 subbands are denoted by $s_\alpha^\theta$, where
$\alpha\in\{1,2\}$ and
$\theta\in\{0^\circ,30^\circ,60^\circ,90^\circ,120^\circ,150^\circ\}$.
We set the number $N$ of neighboring filters to 15,
i.e., $3\times3$ adjacent positions in the current band,
5 adjacent locations in the neighboring band and 1 from the parent
band share a mixer (See Figure~\ref{fig:bb}(b)).
For spatial discontinuity, we compute singular values on $5\times 5$
patches on a three-level pyramid.
We list the detailed feature information in Table~\ref{tb:feature}.
We vary the number $T$ of regression trees
from 100 to 5000 with a step of 50 and
find that the proposed algorithm performs best when $T$ is set to 2000.
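For reference, extracting the $7\times7$ block DCT coefficients that feed the local frequency statistics can be sketched as follows (a minimal illustration; the subsequent statistical fitting of $\gamma$, $\bar{\sigma}$ and $\Sigma$ is omitted):
\begin{verbatim}
import numpy as np
from scipy.fft import dctn

def block_dct_coeffs(img, block=7):
    # 2-D DCT coefficients of non-overlapping block x block tiles.
    h, w = img.shape
    h, w = h - h % block, w - w % block
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, block, block)
    return dctn(tiles, axes=(-2, -1), norm='ortho')
\end{verbatim}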
\begin{table}
\caption{Spearman rank correlation coefficients~\cite{hogg2005introduction}
(metric with higher coefficient matches perceptual score better). The random forest regression (RFR) uniformly performs better than the support vector regression (SVR) for each type of features as well as for the concatenation (-con) of the three types of features. The proposed two-stage regression approach (-all) combining three types of features improves the accuracy for both RFR and SVR. Bold: best; underline: second best.}
\label{tb:rfrsvr}
\resizebox{\textwidth}{!}{
\setlength{\tabcolsep}{.0em}
\centering
\small
\begin{tabular}{ | p{5em} || *{5}{p{4.5em}<{\centering}|} | *{5}{p{4.5em}<{\centering}|} }
\hline
& Ours & RFR-con & RFR-DCT & RFR-GSM & RFR-PCA & SVR-all & SVR-con & SVR-DCT & SVR-GSM & SVR-PCA \\\hline
~ Bicubic & \textbf{0.933} & 0.922 & 0.910 & 0.898 & {\ul 0.923} & 0.851 & 0.772 & 0.630 & 0.713 & 0.862 \\
~ BP & \textbf{0.966} & {\ul 0.962} & 0.956 & 0.952 & \textbf{0.966} & 0.881 & 0.876 & 0.776 & 0.838 & 0.889 \\
~ Shan08 & \textbf{0.891} & {\ul 0.887} & 0.830 & 0.870 & 0.874 & 0.504 & 0.373 & 0.499 & 0.522 & 0.044 \\
~ Glasner09 & \textbf{0.931} & {\ul 0.926} & 0.911 & 0.897 & 0.878 & 0.841 & 0.717 & 0.766 & 0.685 & 0.599 \\
~ Yang10 & {\ul 0.968} & 0.961 & 0.954 & 0.948 & \textbf{0.969} & 0.929 & 0.905 & 0.874 & 0.834 & 0.877 \\
~ Dong11 & {\ul 0.954} & 0.946 & 0.922 & 0.929 & \textbf{0.960} & 0.885 & 0.892 & 0.792 & 0.883 & 0.874 \\
~ Yang13 & \textbf{0.958} & {\ul 0.955} & 0.937 & 0.932 & \textbf{0.958} & 0.898 & 0.855 & 0.801 & 0.770 & 0.874 \\
~ Timofte13 & \textbf{0.930} & {\ul 0.928} & 0.911 & 0.880 & 0.927 & 0.883 & 0.814 & 0.859 & 0.628 & 0.839 \\
~ SRCNN & \textbf{0.949} & 0.938 & 0.917 & 0.936 & {\ul 0.945} & 0.866 & 0.853 & 0.778 & 0.816 & 0.843 \\\hline
~ Overall & \textbf{0.931} & {\ul 0.921} & 0.909 & 0.913 & {\ul 0.921} & 0.752 & 0.696 & 0.711 & 0.616 & 0.663 \\\hline
\end{tabular}
}
\end{table}
\begin{table}
\caption{Spearman rank correlation coefficients~\cite{hogg2005introduction}
(metric with higher coefficient matches perceptual score better). The compared no-reference metrics are re-trained on our SR dataset using the 5-fold cross validation. The proposed metric performs favorably against state-of-the-art methods. Bold: best; underline: second best.}
\label{tb:cv}
\resizebox{\textwidth}{!}{
\setlength{\tabcolsep}{0em}
\centering
\small
\begin{tabular}{ | p{5.5em} || *{6}{p{4.2em}<{\centering}|} | *{6}{p{4.2em}<{\centering}|}}
\hline
& Ours & BRISQUE & BLIINDS & CORNIA & CNNIQA & NSSA & DIVINE & BIQI & IFC & SSIM & FSIM & PSNR\\
& & \cite{DBLP:journals/tip/MittalMB12}
& \cite{DBLP:journals/tip/SaadBC12}
& \cite{DBLP:conf/cvpr/YeKKD12}
& \cite{DBLP:conf/cvpr/KangYLD14}
& \cite{DBLP:conf/icip/YeganehRW12}
& \cite{DBLP:journals/tip/MoorthyB11}
& \cite{DBLP:journals/spl/MoorthyB10}
& \cite{DBLP:journals/tip/SheikhBV05}
& \cite{DBLP:journals/tip/WangBSS04}
& \cite{DBLP:journals/tip/ZhangZMZ11} & \\
\hline\hline
~ Bicubic & \textbf{0.933} & 0.850 & 0.886 & 0.889 & {\ul 0.926} & -0.007 & 0.784 & 0.770 & 0.884 & 0.588 & 0.706 & 0.572 \\
~ BP & \textbf{0.966} & 0.917 & 0.931 & 0.932 & {\ul 0.956} & 0.022 & 0.842 & 0.740 & 0.880 & 0.657 & 0.770 & 0.620 \\
~ Shan08 & {\ul 0.891} & 0.667 & 0.664 & \textbf{0.907} & 0.832 & -0.128 & 0.653 & 0.254 & 0.934 & 0.560 & 0.648 & 0.564 \\
~ Glasner09 & \textbf{0.931} & 0.738 & 0.862 & {\ul 0.918} & 0.914 & 0.325 & 0.426 & 0.523 & 0.890 & 0.648 & 0.778 & 0.605 \\
~ Yang10 & \textbf{0.968} & 0.886 & 0.901 & 0.908 & {\ul 0.943} & 0.036 & 0.525 & 0.556 & 0.866 & 0.649 & 0.757 & 0.625 \\
~ Dong11 & \textbf{0.954} & 0.783 & 0.811 & 0.912 & {\ul 0.921} & 0.027 & 0.763 & 0.236 & 0.865 & 0.649 & 0.765 & 0.634 \\
~ Yang13 & \textbf{0.958} & 0.784 & 0.864 & 0.923 & {\ul 0.927} & 0.168 & 0.537 & 0.646 & 0.870 & 0.652 & 0.768 & 0.631 \\
~ Timofte13 & \textbf{0.930} & 0.843 & 0.903 & 0.911 & {\ul 0.924} & 0.320 & 0.122 & 0.563 & 0.881 & 0.656 & 0.756 & 0.620 \\
~ SRCNN & \textbf{0.949} & 0.812 & 0.843 & 0.898 & {\ul 0.908} & 0.165 & 0.625 & 0.617 & 0.885 & 0.660 & 0.780 & 0.645 \\\hline
~ Overall & \textbf{0.931} & 0.802 & 0.853 & {\ul 0.919} & 0.904 & 0.076 & 0.589 & 0.482 & 0.810 & 0.635 & 0.747 & 0.604 \\\hline
\end{tabular}
}
\end{table}
\subsection{Quantitative Validations}
We run the proposed metric 100 times in each validation and use
the mean values as the estimated quality scores.
We compare the contribution of each feature type
using root-mean-square errors (RMSEs) in Figure~\ref{fig:RMSE}.
The small overall error values, 0.87 in (a) and less than 1.4 in (b)
and (c), relative to the score range (0 to 10), indicate the
effectiveness of the proposed method
in linearly combining three types of statistical features.
In addition,
we carry out an ablation study replacing the random forest regression (RFR) with the support vector regression (SVR) on each type of features. The SVR model is widely used in existing no-reference image quality metrics \cite{DBLP:journals/tip/MittalMB12, DBLP:journals/tip/SaadBC12, DBLP:journals/tip/MoorthyB11, DBLP:journals/spl/MoorthyB10,
DBLP:conf/cvpr/YeKKD12}.
Table \ref{tb:rfrsvr} shows that RFR is more robust to outliers than SVR on each type of features as well as on a simple concatenation of the three types of features. The proposed two-stage regression model effectively exploits the three types of features and performs best.
\begin{table}
\caption{Spearman rank correlation coefficients~\cite{hogg2005introduction}
(metric with higher coefficient matches perceptual score better). The compared metrics are retrained on our SR dataset under the leave-image-out validation. Bold: best; underline: second best.}
\label{tb:leaveimage}
\centering
\resizebox{.82\textwidth}{!}{
\setlength{\tabcolsep}{0em}
\small
\begin{tabular}{| p{6em} || p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}|}
\hline
& Ours & BRISQUE & BLIINDS & CORNIA & CNNIQA & NSSA \\
& & \cite{DBLP:journals/tip/MittalMB12}
& \cite{DBLP:journals/tip/SaadBC12}
& \cite{DBLP:conf/cvpr/YeKKD12}
& \cite{DBLP:conf/cvpr/KangYLD14}
& \cite{DBLP:conf/icip/YeganehRW12} \\\hline\hline
~ Bicubic & \textbf{0.805} & 0.423 & 0.522 & {\ul 0.761} & 0.736 & 0.093 \\
~ BP & \textbf{0.893} & 0.539 & 0.476 & {\ul 0.873} & 0.853 & -0.046 \\
~ Shan08 & {\ul 0.800} & 0.442 & 0.474 & \textbf{0.832} & 0.742 & 0.048 \\
~ Glasner09 & \textbf{0.867} & 0.277 & 0.399 & {\ul 0.859} & 0.803 & 0.023 \\
~ Yang10 & \textbf{0.904} & 0.625 & 0.442 & 0.843 & {\ul 0.867} & 0.012 \\
~ Dong11 & \textbf{0.875} & 0.527 & 0.411 & 0.819 & {\ul 0.849} & -0.101 \\
~ Yang13 & \textbf{0.885} & 0.575 & 0.290 & {\ul 0.843} & 0.841 & 0.108 \\
~ Timofte13 & {\ul 0.815} & 0.500 & 0.406 & \textbf{0.828} & 0.740 & -0.035 \\
~ SRCNN & \textbf{0.904} & 0.563 & 0.383 & 0.827 & {\ul 0.850} & 0.042 \\\hline
~ Overall & \textbf{0.852} & 0.505 & 0.432 & {\ul 0.843} & 0.799 & 0.017 \\\hline
\end{tabular}
}
\end{table}
\begin{table}
\caption{Spearman rank correlation coefficients~\cite{hogg2005introduction}
(metric with higher coefficient matches perceptual score better). The compared metrics are retrained on our SR dataset under the leave-method-out validation. Bold: best; underline: second best.}
\label{tb:leavemethod}
\centering
\resizebox{.82\textwidth}{!}{
\setlength{\tabcolsep}{0em}
\small
\begin{tabular}{| p{6em} || p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}|}
\hline
& Ours & BRISQUE & BLIINDS & CORNIA & CNNIQA & NSSA \\
& & \cite{DBLP:journals/tip/MittalMB12}
& \cite{DBLP:journals/tip/SaadBC12}
& \cite{DBLP:conf/cvpr/YeKKD12}
& \cite{DBLP:conf/cvpr/KangYLD14}
& \cite{DBLP:conf/icip/YeganehRW12} \\
\hline\hline
~ Bicubic & {\ul 0.932} & 0.850 & 0.929 & 0.893 & \textbf{0.941} & 0.036 \\
~ BP & {\ul 0.967} & 0.934 & 0.953 & 0.938 & \textbf{0.971} & 0.021 \\
~ Shan08 & \textbf{0.803} & 0.534 & 0.471 & {\ul 0.799} & 0.767 & -0.087 \\
~ Glasner09 & \textbf{0.913} & 0.677 & 0.805 & 0.817 & {\ul 0.883} & 0.393 \\
~ Yang10 & \textbf{0.965} & 0.834 & 0.895 & 0.914 & {\ul 0.930} & -0.054 \\
~ Dong11 & \textbf{0.932} & 0.774 & 0.780 & 0.917 & {\ul 0.920} & -0.062 \\
~ Yang13 & \textbf{0.944} & 0.716 & 0.845 & {\ul 0.911} & 0.906 & 0.147 \\
~ Timofte13 & 0.774 & 0.760 & {\ul 0.849} & \textbf{0.898} & 0.845 & 0.382 \\
~ SRCNN & \textbf{0.933} & 0.771 & 0.806 & {\ul 0.908} & 0.890 & 0.149 \\\hline
~ Overall & \textbf{0.848} & 0.644 & 0.763 & {\ul 0.809} & 0.797 & 0.053 \\\hline
\end{tabular}
}
\end{table}
For fair comparisons, we generate the IQA indices from 11 state-of-the-art methods
including: (1) six no-reference metrics:
BRISQUE~\cite{DBLP:journals/tip/MittalMB12}, BLIINDS~\cite{DBLP:journals/tip/SaadBC12},
DIVINE~\cite{DBLP:journals/tip/MoorthyB11},
BIQI~\cite{DBLP:journals/spl/MoorthyB10},
CORNIA~\cite{DBLP:conf/cvpr/YeKKD12}, and CNNIQA~\cite{DBLP:conf/cvpr/KangYLD14};
(2) one semi-reference metric: NSSA~\cite{DBLP:conf/icip/YeganehRW12};
and (3) four full-reference metrics:
IFC~\cite{DBLP:journals/tip/SheikhBV05},
SSIM~\cite{DBLP:journals/tip/WangBSS04}, FSIM~\cite{DBLP:journals/tip/ZhangZMZ11}, and PSNR.
As the no-reference metrics are originally designed
to measure image degradations, e.g., noise, compression and fading, rather than for SR evaluation, we retrain them on our SR dataset using the same validation schemes.
Note that both the DIVINE and BIQI metrics apply intermediate steps to estimate
specific types of image degradations \cite{DBLP:journals/tip/SheikhSB06}
for image quality assessment.
However, SR degradation is not among the degradation
types considered in \cite{DBLP:journals/tip/SheikhSB06}.
We directly regress the features generated by the DIVINE and BIQI methods
onto the perceptual scores, but this approach is not effective as
the predicted quality scores for different SR images are almost the same.
We thus report the original results using the DIVINE and BIQI indices
without retraining on our dataset.
For the remaining metrics, we empirically tune the parameters to obtain the best performance during retraining.
The NSSA metric is designed for
evaluating SR images. The other four full-reference metrics are widely
used in SR evaluation although they are not designed for SR.
Figure~\ref{fig:corrDistribution} shows the correlation between subjective scores and
IQA indices.
Tables~\ref{tb:cv}, \ref{tb:leaveimage} and \ref{tb:leavemethod} quantitatively compare the Spearman rank
correlation coefficients.
In addition, we compare the original results of BRISQUE, BLIINDS, CORNIA, and CNNIQA in Table \ref{tb:notrain} and Figure \ref{fig:corr_notrain}.
Without retraining on our SR dataset, these metrics generally perform worse.
This demonstrates the contribution of this work in developing a large-scale SR image dataset
and carrying out large-scale subject studies on these SR images.
Note that we do not present the results of NSSA in Table \ref{tb:notrain} and Figure \ref{fig:corr_notrain}
as the learned data file of the NSSA metric is not publicly available.
\begin{figure}
\centering
\small
\setlength{\tabcolsep}{0em}
\begin{tabular}{ccc}
\includegraphics[width=.33\textwidth]{figure/corr_rd_Ours.pdf}&
\includegraphics[width=.33\textwidth]{figure/corr_rd_BRISQUE.pdf}&
\includegraphics[width=.33\textwidth]{figure/corr_rd_BLIINDS.pdf} \\
\includegraphics[width=.33\textwidth]{figure/corr_rd_CORNIA.pdf} &
\includegraphics[width=.33\textwidth]{figure/corr_rd_CNNIQA.pdf} &
\includegraphics[width=.33\textwidth]{figure/corr_rd_NSSA.pdf} \\
\includegraphics[width=.33\textwidth]{figure/corr_rd_DIVINE.pdf} &
\includegraphics[width=.33\textwidth]{figure/corr_rd_BIQI.pdf} &
\includegraphics[width=.33\textwidth]{figure/corr_rd_IFC.pdf}\\
\includegraphics[width=.33\textwidth]{figure/corr_rd_SSIM.pdf} &
\includegraphics[width=.33\textwidth]{figure/corr_rd_FSIMc.pdf} &
\includegraphics[width=.33\textwidth]{figure/corr_rd_PSNR.pdf} \\
\end{tabular}
\caption{Quality indices generated by different methods versus perceptual scores.
The proposed metric and the other no-reference baseline methods (except DIVINE and BIQI) are learned under 5-fold cross validation.
%
A metric matches visual perception well if the distribution is compact
and spreads out along the diagonal.
}
\label{fig:corrDistribution}
\end{figure}
\begin{table}
\caption{Spearman rank correlation coefficients~\cite{hogg2005introduction}
(metric with higher coefficient matches perceptual score better). The compared no-reference metrics are not retrained on our SR dataset. Bold: best; underline: second best.}
\label{tb:notrain}
\centering
\resizebox{.9\textwidth}{!}{
\setlength{\tabcolsep}{0em}
\small
\begin{tabular}{| p{6em} || p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}||p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}|p{5em}<{\centering}|}
\hline
& Ours & Ours & Ours & BRISQUE & BLIINDS & CORNIA & CNNIQA \\
& \textit{5-fold CV} & \textit{image-out} & \textit{method-out} & \cite{DBLP:journals/tip/MittalMB12}
& \cite{DBLP:journals/tip/SaadBC12}
& \cite{DBLP:conf/cvpr/YeKKD12}
& \cite{DBLP:conf/cvpr/KangYLD14}
\\
\hline\hline
~ Bicubic & \textbf{0.933} & 0.805 & {\ul 0.932} & 0.850 & 0.929 & 0.893 & 0.927 \\
~ BP & {\ul 0.966} & 0.893 & \textbf{0.967} & 0.934 & 0.953 & 0.938 & 0.931 \\
~ Shan08 & \textbf{0.891} & 0.800 & {\ul 0.803} & 0.534 & 0.471 & 0.799 & 0.842 \\
~ Glasner09 & \textbf{0.931} & 0.867 & 0.913 & 0.677 & 0.805 & 0.817 & {\ul 0.896} \\
~ Yang10 & \textbf{0.968} & 0.904 & {\ul 0.965} & 0.834 & 0.895 & 0.914 & 0.938 \\
~ Dong11 & \textbf{0.954} & 0.875 & 0.932 & 0.774 & 0.780 & 0.917 & {\ul 0.936} \\
~ Yang13 & \textbf{0.958} & 0.885 & {\ul 0.944} & 0.716 & 0.845 & 0.911 & 0.934 \\
~ Timofte13 & \textbf{0.930} & 0.815 & 0.774 & 0.760 & 0.849 & 0.898 & {\ul 0.906} \\
~ SRCNN & \textbf{0.949} & 0.904 & 0.933 & 0.771 & 0.806 & {\ul 0.908} & 0.924 \\\hline
~ Overall & \textbf{0.931} & {\ul 0.852} & 0.848 & 0.644 & 0.763 & 0.809 & 0.833 \\\hline
\end{tabular}
}
\end{table}
\subsection{Discussion}
As shown in Tables~\ref{tb:cv}-\ref{tb:leavemethod} and Figure~\ref{fig:corrDistribution},
the proposed method performs favorably against the state-of-the-art
IQA methods, e.g., the overall quantitative correlation with perceptual
scores is 0.931 under 5-fold cross validation.
The leave-image-out and leave-method-out validations are more challenging since
they take into account the independence of image contents and SR algorithms.
In the leave-image-out setting, the training and test sets do not
contain SR images generated from the same reference image.
In the leave-method-out setting, the SR images in training and test sets
are generated by different SR algorithms.
Table~\ref{tb:leaveimage} and \ref{tb:leavemethod} show that the proposed metric
performs well against existing IQA methods in these two
challenging validations.
Note that the proposed metric performs best in the 5-fold cross validation
as it learns from perceptual scores and benefits from prior information about
image contents and SR algorithms during training.
\begin{figure}[!t]
\centering
\small
\setlength{\tabcolsep}{0em}
\begin{tabular}{ccc}
\includegraphics[width=.33\textwidth]{figure/corr_notrain_Proposed2.pdf}&
\includegraphics[width=.33\textwidth]{figure/corr_notrain_Proposed3.pdf}&
\includegraphics[width=.33\textwidth]{figure/corr_notrain_BRISQUE.pdf} \\
(a) & (b) & (c) \\
\includegraphics[width=.33\textwidth]{figure/corr_notrain_BLIINDS.pdf} &
\includegraphics[width=.33\textwidth]{figure/corr_notrain_CORNIA.pdf} &
\includegraphics[width=.33\textwidth]{figure/corr_notrain_CNNIQA.pdf} \\
(d) & (e) & (f) \\
\end{tabular}
\caption{Quality indices generated by different methods versus perceptual scores.
(a)-(b): Proposed metric under \textit{leave-image-out} and \textit{leave-method-out} validation schemes. (c)-(f): Original baseline no-reference algorithms without retraining on our SR dataset. The proposed metric under these two challenging validations still performs well against state-of-the-art metrics.
%
A metric matches visual perception well if the distribution is compact
and spread out along the diagonal.
}
\label{fig:corr_notrain}
\end{figure}
The six evaluated no-reference IQA metrics, BRISQUE, BLIINDS, DIVINE,
BIQI, CORNIA, and CNNIQA, are not originally designed for SR. We retrain them (except DIVINE and BIQI) on our own SR dataset.
For DIVINE and BIQI, we present
the reported results as the performance of these methods by retraining on our dataset is significantly worse.
The reason is that these two metrics apply intermediate steps to quantify specific image distortions in \cite{DBLP:journals/tip/SheikhSB06} rather than SR.
Table~\ref{tb:cv} shows that for most SR algorithms, the DIVINE or BIQI metrics do not match human perception well.
The retrained BRISQUE and BLIINDS metrics perform well against DIVINE and BIQI.
We note that some of the features used by the BRISQUE and BLIINDS metrics are similar to
the proposed DCT and GSM features.
However, both the BRISQUE and BLIINDS metrics
are learned from a single support vector regression (SVR) model \cite{libsvm},
which is less robust to outliers in the perceptual scores than the
random forest regression (RFR) model. Figure~\ref{fig:corrDistribution} shows that their quality scores scatter widely rather than clustering close to the
diagonal.
The CORNIA method learns a codebook from an auxiliary
dataset~\cite{DBLP:journals/jei/LarsonC10}
containing various image distortions.
The coefficients of densely sampled patches from a test image are computed
based on the codebook as features.
Table \ref{tb:cv} shows that the CORNIA metric achieves
second best results among all the baseline algorithms.
The proposed metric performs favorably against CORNIA
due to the effective two-stage regression model based on RFRs,
while CORNIA relies on a single SVR.
The CNNIQA metric uses a convolutional neural network to assess image quality; however,
it does not perform as well as the proposed method.
This can be explained by the insufficient amount of training examples.
Overall, the proposed method exploits both global and local
statistical features specifically designed to account for SR artifacts.
Equipped with a novel two-stage regression model, i.e., three independent random forests trained on the three types of extracted
features whose outputs are linearly regressed onto perceptual
scores, our metric is more robust to outliers than the compared IQA
methods, which are based on a single regression model (e.g., SVR or CNN).
Although the semi-reference NSSA method is designed for evaluating SR images
and extracts both frequency and spatial features, it does not perform well
as shown in Figure~\ref{fig:corrDistribution} and Table~\ref{tb:cv}-\ref{tb:leavemethod}.
This is because the features used in the NSSA method are two-dimensional coefficients and
its regressor is based on a simple linear model.
The quality indices computed by weight-averaging two coefficients are
less effective for evaluating the quality of SR images generated by the state-of-the-art SR
methods.
\begin{figure}[!t]
\centering
\setlength{\tabcolsep}{.1em}
\begin{tabular}{cccc}
\includegraphics[width=.24\textwidth]{figure/sup/SR_BW/Dong11_sf6_sigma18_134049.png} &
\includegraphics[width=.24\textwidth]{figure/sup/SR_BW/Glasner09_sf4_sigma12_187058.png} &
\includegraphics[width=.24\textwidth]{figure/sup/SR_BW/Yang13_sf2_sigma08_335094.png} &
\includegraphics[width=.24\textwidth]{figure/sup/SR_BW/Dong11_sf4_sigma12_94095.png} \\
Shan08~\cite{DBLP:journals/tog/ShanLJT08} &
Glasner09~\cite{Glasner2009} &
Yang13~\cite{Yang13_ICCV_Fast} &
Dong11~\cite{DBLP:journals/tip/DongZSW11}\\
2.68 / 2.68 & 4.70 / 4.70 & 8.65 / 8.65 & 5.17 / 5.18\\
$s=6$, $\sigma=1.8$ & $s=4$, $\sigma=1.2$ & $s=2$, $\sigma=0.8$ & $s=4$, $\sigma=1.2$\\
\end{tabular}
\caption{Four best cases using the proposed metric to evaluate the quality of SR images under the 5-fold cross validation.
The left / right values under each image are the predicted score and the perceptual score respectively.}
\label{fig:bw1}
\end{figure}
\begin{figure}[!t]
\centering
\setlength{\tabcolsep}{.1em}
\begin{tabular}{cccc}
\includegraphics[width=.24\textwidth]{figure/sup/SR_BW/Glasner09_sf5_sigma16_157032.png} &
\includegraphics[width=.24\textwidth]{figure/sup/SR_BW/Shan08_sf2_sigma08_29030.png} &
\includegraphics[width=.24\textwidth]{figure/sup/SR_BW/Shan08_sf2_sigma08_48017.png} &
\includegraphics[width=.24\textwidth]{figure/sup/SR_BW/Yang13_sf3_sigma10_103006.png} \\
Glasner09~\cite{Glasner2009} & Shan08~\cite{DBLP:journals/tog/ShanLJT08} & Shan08~\cite{DBLP:journals/tog/ShanLJT08} & Yang13~\cite{Yang13_ICCV_Fast} \\
0.95 / 4.95 & 8.15 / 5.30 & 0.93 / 6.85 & 2.55 / 4.48 \\
$s=5$, $\sigma=1.6$ & $s=2$, $\sigma=0.8$ & $s=2$, $\sigma=0.8$ & $s=3$, $\sigma=1.0$ \\
\end{tabular}
\caption{Four worst cases using the proposed metric to evaluate the quality of SR images under the 5-fold cross validation.
The left / right values under each image are the predicted score and the perceptual score respectively.}
\label{fig:bw2}
\end{figure}
For the cases when ground truth HR images are available,
the proposed method performs favorably against four widely used
full-reference quality metrics including
PSNR, SSIM~\cite{DBLP:journals/tip/WangBSS04}, IFC~\cite{DBLP:journals/tip/SheikhBV05}, and FSIM~\cite{DBLP:journals/tip/ZhangZMZ11}.
The PSNR metric performs poorly since the pixel-wise difference measurement
does not effectively account for the difference in visual perception (See
Table~\ref{tb:cv} and Figure~\ref{fig:corrDistribution}).
For example, an SR image with slight misalignment from the ground truth
appears similar in terms of visual perception, but its PSNR
value decreases significantly.
The SSIM method performs better than PSNR as it aims to mimic human
vision and computes perceptual similarity between SR and ground truth images by using patches instead of pixels.
However, the SSIM metric
favors the false sharpness on the SR images generated by Shan08 and
Glasner09 and overestimates the corresponding quality scores as shown in
Figure~\ref{fig:corrDistribution}.
The FSIM metric is also less effective in evaluating SR performance.
The IFC method is also designed to match visual perception and
generally performs well for SR images~\cite{Yang14_ECCV}.
Nonetheless, its indices are less accurate for some SR images
(Figure~\ref{fig:corrDistribution}).
This can be explained by the fact that the IFC metric is limited by
local frequency features.
In other words, the IFC metric does not take global frequency and
spatial properties into account, and thus fails to capture the corresponding distortions.
It may therefore underestimate the quality of SR images
(see the dots clustered below the diagonal in the last subfigure of
Figure~\ref{fig:corrDistribution}).
We present the four best and worst cases using our metric with 5-fold cross validation
to predict the quality of SR images in Figure~\ref{fig:bw1} and Figure~\ref{fig:bw2}.
The worst cases in Figure~\ref{fig:bw2} can be explained by several factors.
For the first, third and fourth SR images, the proposed metric gives low quality scores due to the fact that human subjects do not always favor oversharp SR images (see also the discussion of Table 1).
For the second image, the richer high-frequency contents lead
the proposed metric to predict a high score.
Overall, the proposed metric performs favorably against the
state-of-the-art methods, which can be attributed to two reasons.
First, the proposed metric uses three sets of discriminative low-level
features from the spatial and frequency domains to describe SR
images.
Second, an effective two-stage regression model is more robust to outliers
for learning from perceptual scores collected in our large-scale
subject studies.
In contrast, existing methods neither learn from perceptual
scores nor design effective features with focus on representing SR
images.
The proposed metric is implemented in Matlab on a machine with
an Intel i5-4590 3.30 GHz CPU and 32 GB RAM.
We report the average run time (in seconds) as follows:
ours: 13.31, BRISQUE: 0.14, BLIINDS: 23.57, DIVINE: 9.51, BIQI: 1.21, CORNIA: 3.02, CNNIQA: 12.68, NSSA: 0.28, IFC: 0.61, SSIM: 0.13, FSIM: 0.18, and PSNR: 0.02.
\begin{figure}
\centering
\small
\setlength{\tabcolsep}{.2em}
\begin{tabular}{cccc}
\includegraphics[height=0.395\textwidth]{figure/SRimage/388006BicubicInterpolationsf4_color.png} &
\includegraphics[height=0.395\textwidth]{figure/SRimage/srimages.png} &
\includegraphics[trim = 0mm 2mm 0mm 0mm, clip, height=0.395\textwidth]{figure/SRimage/sf4naive.pdf} &
\includegraphics[trim = 0mm 2mm 0mm 0mm, clip, height=0.395\textwidth]{figure/SRimage/sf4fusion.pdf} \\
(a) 4.01 & (b) 4.82/4.70/4.69/4.50 & (c) 4.61 & (d) 4.78 \\
\end{tabular}
\caption{Perception guided SR results (best viewed on a high-resolution display)
with quality scores predicted by the proposed metric.
(a) Input LR image ($s=4,\sigma=1.8$).
(b) Selected best SR images with the Dong11, Yang13, Timofte13
and Yang10 methods using the proposed metric.
(c) $3\times 3$ grid integration. (d) Pixel-level integration. }
\label{fig:srresults}
\end{figure}
\begin{figure}
\centering
\small
\setlength{\tabcolsep}{.05em}
\begin{tabular}{ccc}
\includegraphics[width=.32\textwidth]{figure/SR_result/orignal_16068.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/BicubicInterpolation_16068.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/ours_16068.png} \\
Ground truth & Input LR image & Ours (6.02) \\
\includegraphics[width=.32\textwidth]{figure/SR_result/Shan08_16068.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/Glasner09_16068.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/Yang13_16068.png} \\
Shan08~\cite{DBLP:journals/tog/ShanLJT08}~(5.72) & SRCNN~\cite{DBLP:conf/eccv/DongLHT14}~(5.32) & Yang13~\cite{Yang13_ICCV_Fast}~(5.16) \\
\includegraphics[width=.32\textwidth]{figure/SR_result/Yang10_16068.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/Timofte13_16068.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/Dong11_16068.png} \\
Glasner09~\cite{Glasner2009}~(4.78) & Timofte13~\cite{DBLP:conf/iccv/TimofteDG13}~(4.65) & Dong11~\cite{DBLP:journals/tip/DongZSW11}~(4.61) \\
\end{tabular}
\caption{Visual comparison of SR results. The input low-resolution image is generated using (1) with $s=4$ and $\sigma=1.2$. We show the 6 best results, with the quality scores predicted by the proposed metric given in parentheses, and select the best 4 algorithms to integrate our SR result.}
\vspace{1em}
\label{fig:sr007}
\end{figure}
\begin{figure}[!t]
\centering
\small
\setlength{\tabcolsep}{.1em}
\begin{tabular}{ccc}
\includegraphics[width=.32\textwidth]{figure/SR_result/orignal_2018.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/BicubicInterpolation_2018.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/ours_2018.png} \\
Ground truth & Input LR image & Ours (5.81) \\
\includegraphics[width=.32\textwidth]{figure/SR_result/Shan08_2018.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/Glasner09_2018.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/Yang10_2018.png} \\
Shan08~\cite{DBLP:journals/tog/ShanLJT08}~(5.48) & Yang10~\cite{DBLP:journals/tip/YangWHM10}~(5.35) & Glasner09~\cite{Glasner2009}~(5.02) \\
\includegraphics[width=.32\textwidth]{figure/SR_result/Timofte13_2018.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/Dong11_2018.png} &
\includegraphics[width=.32\textwidth]{figure/SR_result/Irani91_2018.png} \\
Timofte13~\cite{DBLP:conf/iccv/TimofteDG13}~(4.64) & Dong11~\cite{DBLP:journals/tip/DongZSW11}~(4.63) & BP~\cite{DBLP:journals/cvgip/IraniP91}~(4.58) \\
\end{tabular}
\caption{Visual comparison of SR results. The input low-resolution image is generated using (1) with $s=4$ and $\sigma=1.2$. We show the 6 best results, with the quality scores predicted by the proposed metric given in parentheses, and select the best 4 algorithms to integrate our SR result.}
\label{fig:sr010}
\end{figure}
\section{Perception Guided Super-Resolution}
Given an LR input image, we can apply different SR algorithms to
reconstruct HR images and use the proposed metric to
automatically select the best result.
Figure~\ref{fig:SRimage} shows such an example where
the SR image generated by the Timofte13 method has the highest quality
score using the proposed metric (See Figure~\ref{fig:SRimage}(i))
and is thus selected as the HR restoration output.
Equipped with the proposed metric, we can also select the best
local regions from multiple SR images and integrate them into a
new SR image.
Given a test LR image, we apply the aforementioned 9 SR algorithms
to generate 9 SR images.
We first divide each of them into a $3\times3$ grid of regions.
We compute their quality scores based on the proposed metric and stitch
the best regions to generate a new SR image (See Figure~\ref{fig:srresults}(c)).
For better integration, we densely sample overlapping patches of
$11\times11$ pixels.
We then apply the proposed metric to each patch and
compute an evaluation score for each pixel of the SR image.
For each patch location, we select the patch with the highest quality score
among all SR results and
stitch the selected patches together using the graph cut and
Poisson blending~\cite{DBLP:journals/tog/PerezGB03} methods
(See Figure~\ref{fig:srresults}(d)).
It is worth noting that the proposed metric can be used to select SR
regions with high perceptual scores from which a high-quality HR image
is formed.
Figure~\ref{fig:sr007} and Figure~\ref{fig:sr010} show two more pixel-level integrated SR results, which retain most edges and render smooth contents as well.
The integrated SR results effectively exploit the merits of state-of-the-art SR algorithms,
and show better visual quality.
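A simplified version of the patch-level integration can be sketched as follows (the \texttt{quality} callable is a hypothetical stand-in for the proposed metric applied to grayscale patches, and simple averaging of overlapping patches replaces the graph cut and Poisson blending steps of the actual pipeline):
\begin{verbatim}
import numpy as np

def integrate_sr(sr_images, quality, patch=11, stride=4):
    # Average overlapping patches, each taken from the SR result that
    # scores highest under the quality metric at that location.
    h, w = sr_images[0].shape
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            crops = [im[i:i+patch, j:j+patch] for im in sr_images]
            best = crops[int(np.argmax([quality(c) for c in crops]))]
            out[i:i+patch, j:j+patch] += best
            weight[i:i+patch, j:j+patch] += 1.0
    return out / np.maximum(weight, 1e-8)
\end{verbatim}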
\section{Conclusion}
In this paper, we propose a novel no-reference IQA algorithm
to assess the visual quality of SR images by learning
perceptual scores collected from large-scale
subject studies.
The proposed metric regresses three types of low-level statistical
features extracted from SR images onto perceptual scores.
Experimental results demonstrate that the proposed metric performs
favorably against state-of-the-art quality assessment methods
for SR performance evaluation.
\section*{References}
\bibliographystyle{elsarticle-num}
\label{sec:intro}
The quantum phase transitions of two (spatial) dimensional systems
have been the focus of much study in the condensed matter community.
Prominent examples include the superfluid-insulator transition in
thin films \cite{gruner,kapi,shahar}, the transitions between
various quantum Hall states \cite{shahar2,engel}, and magnetic
ordering transitions of Mott insulators and superconductors which
have applications to the cuprate compounds
\cite{collin,dsz,younglee}. Of particular interest in this paper are
the transport properties of conserved quantities such as the
electrical charge or the total spin: these are characterized by a
(charge or spin) conductivity $\sigma$, which can in general be a
complicated function of frequency $\omega$, wavevector $k$,
temperature $T$, and various couplings characterizing the ground
state.
It is often the case that the quantum critical point is described by
a strongly interacting quantum field theory in 2+1 spacetime
dimensions $D$. Examples are ({\em i\/}) the superfluid-insulator
transition in the boson Hubbard model at integer filling
\cite{fwgf,bloch,spielman}, which is described by the $\varphi^4$
field theory with O(2) symmetry, and so is controlled by the
Wilson-Fisher fixed point in $D=2{+}1$; ({\em ii\/}) the spin-gap
paramagnet to N\'eel order transition of coupled spin
dimers/ladders/layers which is described by the O(3) $\varphi^4$
field theory \cite{wang,matsumoto}; and ({\em iii\/}) the
`deconfined' critical point of a $S=1/2$ antiferromagnet between a
N\'eel and a valence bond solid state \cite{senthil,anders}, which
is described by the ${\mathbb C}{\mathbb P}^1$ model with a
non-compact U(1) gauge field \cite{mv}. In all these cases the
critical point is described by a relativistic conformal field theory
(CFT). With an eye towards such experimentally motivated
applications, our purpose here is to explore the transport
properties of general interacting CFTs in $D=2{+}1$.
A crucial property of CFTs in $D=2{+}1$ (which actually applies more
generally to any critical theory in 2 spatial dimensions which obeys
hyperscaling) is that the conductivity is $1/\hbar$ times a
dimensionless number. For U(1) currents, there is also a prefactor of
$(e^{\ast})^2$ where $e^\ast$ is the unit of charge --- we will drop
this factor below. For non-Abelian Noether currents, the
normalization of charge is set by a conventional normalization of
the generators of the Lie algebra. We will be working with
relativistic theories, and therefore set $\hbar = k_B = c = 1$.
Initial discussions \cite{mpaf1,mpaf2,wenzee} of this dimensionless
conductivity at the quantum critical point were expressed in terms
of ground state correlations of the CFT. Let $J^a_{\mu}$ represent
the set of conserved currents of the theory; here $\mu =0,1,2$ is a
spacetime index, and $a$ labels the generators of the global
symmetry. In the CFT, $J^a_{\mu}(x)$ has dimension 2, and so current
conservation combined with Lorentz and scale invariance imply for
the Fourier transform of the retarded correlator
$C_{\mu\nu}^{ab}(x)$
at zero temperature%
\footnote{If needed, a ``diamagnetic'' or ``contact''
term has been subtracted to ensure current conservation.
In theories with Chern-Simons terms, an additional term
proportional to $\epsilon_{\mu\nu\lambda} p_\lambda$ is permitted in
Eq.~(\ref{j0}) and the $T>0$ generalization in Eq.~(\ref{j1}).
See Appendix~\ref{app:cs}.
}:
\begin{equation}
C_{\mu\nu}^{ab}(p)\,\big|_{T{=}0} = \sqrt{p^2} \left( \eta_{\mu\nu}
- \frac{p_\mu p_\nu}{p^2} \right) K_{ab}\,, \label{j0}
\end{equation}
where $\eta_{\mu\nu}={\rm diag}(-1,1,1)$, $p_\mu=(-\omega,{\bm k})$ is
spacetime momentum, and $p^2={\bm k}^2-\omega^2$. We define $\sqrt{p^2}$
so that
it is analytic in the upper-half-plane of $\omega$ and $\Im
\sqrt{p^2} \leq 0$ for $\omega>0$. The parameters $K_{ab}$
are a set of universal, momentum-independent dimensionless constants
characterizing the CFT, which are the analog of the central charge
of the Kac-Moody algebra of CFTs in $D=1{+}1$. Application of the
Kubo formula at $T{=}0$ shows that \cite{mpaf1,wenzee} the $K_{ab}$
are equal to the conductivities $\sigma_{ab} = K_{ab}$, thus setting
up the possibility of observing these in experiments.
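To see the identification $\sigma_{ab} = K_{ab}$ explicitly: at ${\bm k}=0$,
Eq.~(\ref{j0}) gives $C^{ab}_{xx}(\omega, 0) = \sqrt{-\omega^2}\, K_{ab} =
-i\omega K_{ab}$ with the branch of $\sqrt{p^2}$ specified above, and the Kubo
formula $\sigma_{ab} = \lim_{\omega \rightarrow 0} C^{ab}_{xx}(\omega,0)/(-i
\omega)$, in the conventions of Eq.~(\ref{j0}), then yields $\sigma_{ab} =
K_{ab}$ directly.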
It was also noted \cite{mpaf2,wenzee} that particle-vortex duality
\cite{peskin,dasgupta,mpaf3} of theories with Abelian symmetry
mapped the $T=0$ conductivities to their inverse (we review this
mapping in Section~\ref{sec:cp1}). In self-dual theories, this
imposes constraints on the values of the $K_{ab}$, possibly allowing
them to be determined exactly. However, the field theories
considered in these early works were not self-dual (see
Appendix~\ref{app:cs}). Duality, and possible self-duality, was also
considered in the context of theories containing Chern-Simons terms,
relevant to quantum Hall systems
\cite{leefisher,fradkin,pryadko,shimshoni,burgess,witten}. We comment
on these
works in Appendix~\ref{app:cs}, but the body of the paper considers
only theories without Chern-Simons terms. For our purposes, more
relevant is the self-dual field theory proposed recently by
Motrunich and Vishwanath \cite{mv}, and we discuss its charge
transport properties below.
It was subsequently pointed out \cite{damle,ssqhe,ssbook} that the
$K_{ab}$ are {\em not} the d.c.~conductivities observed at small but
non-zero temperature. The key point \cite{ssye,ssbook,hod} is that
at non-zero $T$, the time $1/T$ is a characteristic `collision' or
`decoherence' time of the excitations of the CFT. Consequently the
transport at $ \omega \ll T$ obeys `collision-dominated'
hydrodynamics, while that at $\omega \gg T$ involves `collisionless'
motion of excitations above the ground state. Therefore, the limits
$\omega \rightarrow 0$ and $T \rightarrow 0$ do not, in general,
commute, and must be taken with great care; the constants $K_{ab}$
above are computed in the limit $\omega /T \to\infty$, while the
d.c.~conductivities involve $\omega /T \to 0$.
This contrast between the collisionless and collision-dominated
behavior is most clearly displayed in the correlations of the
conserved densities. Taking the $tt$ component of Eq.~(\ref{j0}) we
obtain the response
\begin{equation}
C_{tt}^{ab}(\omega, k) = K_{ab} \frac{-k^2}{\sqrt{k^2 -
\omega^2}}~~~,~~~||\omega| - k| \gg T \,,\label{j0n}
\end{equation}
which characterizes the `collisionless' response of the CFT at
$T=0$. We have also noted above that we expect the same result to
apply at $T>0$ provided $\omega$ and $k = |{\bm k}|$ are large enough,
and away from the light cone. The $T>0$ correlations are the Fourier
transform of the retarded real time correlators. These are related
by analytic continuation to the Euclidean space correlations defined
at the Matsubara frequencies, which are integer multiples of $2 \pi
T$. The low frequency hydrodynamic regime $\omega \ll T$ is only
defined in real time (Minkowski space). In this regime, the
arguments of Ref.~\cite{damle} imply that the
`collision-dominated' response has the structure%
\begin{equation}
C_{tt}^{ab}(\omega, k) = \sum_{\lambda} \chi^\lambda_{ab}
\frac{-D_\lambda k^2}{-i \omega + D_\lambda k^2}~~~,~~~|\omega|, k
\ll T \,,\label{j0d}
\end{equation}
where $D_\lambda$ are the diffusion constants of a set of diffusive
eigenmodes labelled by $\lambda$, and $\chi^\lambda_{ab}$ are the
corresponding susceptibilities. Scaling arguments imply that
\cite{chubukov} $D_\lambda = \mathcal{D}_\lambda /T$ and
$\chi^\lambda_{ab} = \mathcal{C}^\lambda_{ab} T$, where the $
\mathcal{D}_\lambda, \mathcal{C}^\lambda_{ab}$ are a set of
universal numbers characterizing the hydrodynamic response of the
CFT. The d.c. conductivities can be obtained from the Kubo formula
by $\sigma_{ab} = \lim_{\omega \rightarrow 0} \lim_{k \rightarrow 0}
(i \omega/k^2) C_{tt}^{ab}$, where the order of limits is
significant. At any fixed $T>0$, the limits of small $k$ and
$\omega$ imply that this Kubo formula has to be applied to
Eq.~(\ref{j0d}), and leads to Einstein relations between the
$T$-independent universal conductivities and the diffusivities. The
distinct forms of Eqs.~(\ref{j0n}) and (\ref{j0d}) make it clear
that, in general, the universal d.c. conductivities bear no direct
relationship to the $K_{ab}$; the latter, as we will see below in
Eq.~(\ref{sigma0}), are related to the high frequency conductivity.
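To make the Einstein relation explicit, insert Eq.~(\ref{j0d}) into the Kubo
formula above and take the $k \rightarrow 0$ limit first; this gives
\begin{equation}
\sigma_{ab} = \sum_{\lambda} \chi^{\lambda}_{ab} D_{\lambda} = \sum_{\lambda}
\mathcal{C}^{\lambda}_{ab}\, \mathcal{D}_{\lambda},
\end{equation}
which is manifestly a $T$-independent universal number, as claimed.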
It is worth noting here in passing that the structure in
Eq.~(\ref{j0d}) does {\em not\/} apply to CFTs in $D=1{+}1$, where
a result analogous to
Eq.~(\ref{j0n}) holds also in the low frequency and low momentum
limit; see Appendix~\ref{app:cft2} for further discussion of this
important point.
Returning to consideration of all the components of the
$C_{\mu\nu}^{ab}$ in $D=2{+}1$, an alternative presentation of the
collisionless-to-hydrodynamic crossover is obtained by writing down
the generalization of Eq.~(\ref{j0}) to $T>0$. Current conservation
and spatial rotational invariance, without Lorentz invariance at
$T>0$, generalize Eq.~(\ref{j0}) to
\begin{equation}
C_{\mu\nu}^{ab}(\omega , {\bm k})
= \sqrt{p^2} \Bigl( P^T_{\mu\nu}\, K^T_{ab} (\omega, k)
+ P^L_{\mu\nu}\, K^L_{ab} (\omega, k) \Bigr)
\label{j1}
\end{equation}
where $k = |{\bm k}|$, and $P^T_{\mu\nu}$ and $P^L_{\mu\nu}$ are
orthogonal projectors defined by
\begin{equation}
P^T_{00} = P^T_{0i} = P^T_{i0}=0~~,~~P^T_{ij} = \delta_{ij} -
\frac{k_i k_j}{k^2}~~,~~P^L_{\mu\nu} =
\Big(\eta_{\mu\nu} - \frac{p_\mu p_\nu}{p^2}\Big) - P^T_{\mu\nu},
\end{equation}
with the indices $i,j$ running over the 2 spatial components. The
constants $K_{ab}$ have each been replaced by {\em two}
dimensionless, universal, temperature-dependent functions
$K_{ab}^{L,T}(\omega,k)$, characterizing the longitudinal and
transverse response. These functions are dimensionless, and
hence they can {\em only\/} depend upon the dimensionless ratios
$\omega/T$ and $k/T$, as is also the case for the conductivities.
Spatial rotational invariance, and the existence of finite
correlation length at $T>0$ which ensures analyticity at small ${\bm k}$,
imply that the longitudinal and transverse response are equal to
each other at ${\bm k}=0$, and, by the Kubo formula, are both equal to
the zero momentum, frequency dependent complex conductivity,
$\sigma_{ab} (\omega/T)$:
\begin{equation}
\sigma_{ab}(\omega/T) = K_{ab}^L(\omega,0) = K^T_{ab}(\omega,0).
\label{sigmak}
\end{equation}
Also at $T=0$, these functions reduce to the constants in
Eq.~(\ref{j0}):
\begin{equation}
\sigma_{ab}(\infty)= K_{ab} = K_{ab}^L(\omega,k)\,\big|_{T{=}0} =
K_{ab}^T(\omega,k)\,\big|_{T{=}0}. \label{sigma0}
\end{equation}
The functions $K_{ab}^{L,T}(\omega,k)$ are clearly of great physical
interest, and it would be useful to compute them for a variety of
CFTs. A number of computations have appeared
\cite{damle,ssqhe,eric,sondhi1,sondhi2,sondhi3}, and show
interesting structure in the conductivity as a function of
$\omega/T$, encoding the hydrodynamic-to-collisionless crossover for
a variety of tractable models. Here we will present some additional
results which shed light on the role duality can play on the form of
these functions.
In Section \ref{sec:cp1} we will consider the role of duality in
Abelian systems, by examining the self-dual non-compact, easy-plane,
$\mathbb{CP}^1$ field theory discussed by Motrunich and
Vishwanath~\cite{mv}. Closely related results apply to other Abelian
CFTs whose particle-vortex duals have been described in the
literature \cite{witten,intrili,strassler1,strassler2,balents}, some
of which are supersymmetric (in which case, particle-vortex
duality is known as `mirror symmetry'). The Lagrangian formulation of
the
${\mathbb C}{\mathbb P}^1$ theory involves two complex scalar fields
and one gauge field $A_\mu$, which is coupled to a gauge current
$J_{1\mu}$. The theory has a global U(1)$\times Z_2$ symmetry, and
we will denote by $J_{2\mu}$ the Noether current arising from the
U(1) global symmetry. There is another conserved current, the
topological current $J_{\rm
top}^\mu=\epsilon^{\mu\nu\lambda}\partial_\nu A_\lambda$, which is
conserved by the Bianchi identity. The topological and Noether
currents exchange under the self-duality. As we will see in
Section~\ref{sec:cp1}, the two-point correlator of $J_{\rm top}^\mu$
is the inverse of that of $J_{1\mu}$. We use the notations of
Eqs.~(\ref{j0}), (\ref{j1}) with $a,b=1,2$.
The $Z_2$ symmetry ensures that the cross-correlations of the
$J_{1\mu}$, $J_{2\mu}$ currents vanish, and consequently there are
only two constants $K_{1}\equiv K_{11}$ and $K_{2}\equiv K_{22}$ in
Eq.~(\ref{j0}), and similarly for the $T>0$ functions in
Eq.~(\ref{j1}). We examine the duality transformations of these
functions in Section~\ref{sec:cp1} and show that the existence
of a self-dual critical point leads to the functional relations%
\footnote{
We only keep the one-photon irreducible (1PI) part
in $K_1^{L,T}$, as explained in Section \ref{sec:cp1}.
}
\begin{subequations}
\label{cp1dual}
\begin{eqnarray}
K_1^L (\omega, k)\; K_2^T (\omega, k) &=& \frac{1}{\pi^2}\,,\\
K_2^L (\omega, k)\; K_1^T (\omega, k) &=& \frac{1}{\pi^2}\,,
\end{eqnarray}
\end{subequations}
which hold for general $T$, while for the constants in Eq.~(\ref{j0})
this implies $K_1 K_2 = 1/\pi^2$. Note that these relations are not
sufficient to determine the conductivities $\sigma_{1,2}(\omega/T)$;
from Eq.~(\ref{sigmak}), only their product obeys
$\sigma_1(\omega/T)\,\sigma_2(\omega/T ) = 1/\pi^2$, at all
$\omega/T$. Thus we expect that for this self-dual model, the
conductivities will remain non-trivial functions of $\omega/T$
exhibiting the hydrodynamic-collisionless crossover, and their
functional form has to be determined from the solution of a quantum
Boltzmann equation.
In Section~\ref{sec:m}, we turn to a field theory with non-Abelian
symmetries: the supersymmetric Yang Mills (SYM) gauge theory with a
SU($N$) gauge group and $\mathcal{N}{=}8$ supersymmetry
\cite{Seiberg}.
At long distances, the theory flows under the renormalization group
to a strongly coupled 2+1 dimensional $\mathcal{N}{=}8$
superconformal field theory (SCFT), which is believed to describe
degrees of freedom on a stack of $N$ M2-branes \cite{Sethi-Susskind,IMSY}.
In the
limit of large $N$, the SCFT can be analyzed by using the AdS/CFT
correspondence \cite{MAGOO}. The gravity description of the SCFT is
given by
M-theory on 3+1 dimensional anti-de Sitter space times a seven-sphere,
and in the large $N$ limit corresponds to 10+1 dimensional
supergravity on $\mathrm{AdS}_4\times S^7$. The AdS/CFT
correspondence provides a
method to compute real-time response functions at finite temperature
\cite{recipe,Herzog:2002pc}, in which case the gravity theory
contains a black hole in AdS$_4$. In the limit of low frequency and
momentum $\omega\ll T$, $k\ll T$ one finds hydrodynamic behavior
in the SCFT \cite{CH}.%
\footnote{
Hydrodynamic charge transport at small $\omega$ and $k$
is of course not specific to the $\mathcal{N}{=}8$ SCFT
in 2+1 dimensions.
Hydrodynamics from the supergravity description
was first found in strongly coupled $\mathcal{N}{=}4$ SYM
in 3+1 dimensions \cite{PSS-hydro}, and later in a variety
of other strongly coupled field theories
\cite{membrane,Buchel:2004hw,Benincasa:2006ei,Dp-Dq}.
In strongly coupled ${\cal N}{=}4$ SYM in $D{=}3{+}1$,
hydrodynamic to
collisionless crossover functions $K^{L,T}(\omega,k)$
were computed in \cite{photons-sym}. Note that in $D=3+1$ the
conductivity
is not dimensionless \cite{ssbook}, but is proportional to $T$ in
the hydrodynamic
limit $\omega \ll T$.
}
The surprising solvability in this limit therefore demands our
attention.%
\footnote{
Of course, there are other
well-known $D=2{+}1$ CFTs which are solvable in the large $N$ limit,
such as the $O(N)$ $\varphi^4$ field theory. However, all of these
are theories of particles which are infinitely long-lived at
$N=\infty$, and so do not exhibit hydrodynamic behavior in this
limit. Indeed,
an infinite-order resummation of the $1/N$ expansion is
invariably necessary \cite{ssbook} (via the quantum Boltzmann
equation) to obtain hydrodynamics. These solvable
theories become weakly coupled as $N\to\infty$, while the
$\mathcal{N}{=}8$ SYM remains strongly coupled even as $N\to\infty$.
}
The 2+1 dimensional SCFT has a global SO(8) R-symmetry (the
symmetry of the seven-sphere in the supergravity description), and
therefore has a set of conserved currents $J^a_{\mu}$, $a=1,\ldots,
28$. The SO(8) symmetry implies that $K_{ab} = K \delta_{ab}$, and
so there is only a single universal constant $K$ at zero
temperature. Similarly, in Eq.~(\ref{j1}) there are only two
independent functions $K^L (\omega, k)$ and $K^T (\omega, k)$ which
characterize the CFT response at finite temperature. In
Section~\ref{sec:m} we will compute these functions in the
$N{\to}\infty$ limit, for all values of $\omega/T$ and $k/T$. We
also prove that these functions obey the identity
\begin{equation}
K^L(\omega, k)\; K^T(\omega, k) = \frac{N^3}{18 \pi^2},
\label{mdual}
\end{equation}
at general $T$, which is strikingly similar to Eqs.~(\ref{cp1dual}).
Now this relation and Eq.~(\ref{sigmak}) do indeed determine $\sigma
(\omega/T)$ (and $K$) to be the frequency-independent constant which
is the square root of the right-hand side of Eq.~(\ref{mdual}). In
other words, for this model, the hydrodynamic and high-frequency
collisionless conductivities are equal to each other. Nevertheless,
the theory does have a hydrodynamic-to-collisionless crossover at
all nonzero $k$ (as we will review in Section~\ref{sec:m}), where
$K^L(\omega,k)\neq K^T(\omega,k)$, and so Eq.~(\ref{mdual}) is not
sufficient to fix the correlators at $k{\neq}0$. Thus the identity
Eq.~(\ref{mdual}) causes all signals of the
hydrodynamic-collisionless crossover to disappear {\em only\/} at
$k{=}0$.
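Explicitly, combining Eq.~(\ref{mdual}) at $k=0$ with Eq.~(\ref{sigmak}), we
obtain
\begin{equation}
\sigma(\omega/T) = K = \sqrt{\frac{N^3}{18 \pi^2}} =
\frac{N^{3/2}}{3\sqrt{2}\,\pi}
\end{equation}
for all values of $\omega/T$, in the large $N$ limit.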
The similarity of Eq.~(\ref{mdual}) to Eq.~(\ref{cp1dual})
suggests that the explanation of the frequency independence of the
conductivity of
the $\mathcal{N}=8$ SYM SCFT lies in a self-duality property.
Section~\ref{sec:m} demonstrates that this is indeed the case.
Under the AdS/CFT correspondence, the two-point
correlation function of the SO(8) R-currents in $D=2{+}1$ is
holographically
equivalent to the correlator of a SO(8) gauge field on an
asymptotically AdS$_4$ background. In the large $N$ limit, the
action of the SO(8) gauge field is Gaussian, and is easily shown to
possess electromagnetic (EM) self-duality under which the
electric and magnetic fields are interchanged. We demonstrate in
Section~\ref{sec:emduality} that it is precisely this EM self-duality
of the
3+1 dimensional gauge field
which leads to the constraint (\ref{mdual}) in the SCFT.
Thus the SYM theory obeys a self-duality which is not readily
detected in 2+1 dimensions,
but becomes explicit in the holographic theory in 3+1 dimensions.
The generalization of the particle-vortex duality of Abelian
CFTs in $D=2+1$ to non-Abelian CFTs is facilitated by the holographic
extension
to the theory on AdS$_4$.
There have been a few earlier studies connecting dualities in $D=4$
to those in $D=3$. Sethi \cite{sethi} considered the Kaluza-Klein
reduction
of S-duality from $D=4$ to $D=3$ by compactifying the $D=4$ theory
on a circle in one dimension. This is quite different from the
connection above,
using a holographic extension.
The work of
Witten \cite{witten} makes a connection which is the same as ours
above (see also
the work of Leigh and Petkou \cite{petkou1}).
He examined the connection between Abelian
particle-vortex duality (`mirror symmetry') of CFTs in $D=2+1$
to the action of SL(2,$Z$) on Abelian gauge theories
on AdS$_4$ at zero temperature.
We have considered a similar connection at non-zero temperature for the
${\cal N}{=}8$ SCFT, and shown that it is ``holographically self
dual'' in the large $N$ limit;
combined with
the non-Abelian SO(8) symmetry (which implies a single $K$), the
constraints
for the current correlators are stronger than those for Abelian
theories.
We will also consider in Appendix~\ref{app:d2}
other non-Abelian theories with known gravity descriptions.
In particular, we will show that for a theory on a stack of D2
branes, a non-trivial dilaton profile prevents EM self-duality.
In this case, we do not have the constraint (\ref{mdual}), and so find
a frequency dependent conductivity.
\section{Abelian, non-compact ${\mathbb C}{\mathbb P}^1$ model}
\label{sec:cp1}
\noindent
This section will consider duality properties and current
correlations of the Abelian, easy-plane ${\mathbb C}{\mathbb P}^1$
model of Ref.~\cite{mv}. This is a theory of two complex scalars
$z_{1,2}$ and a non-compact U(1) gauge field $A_\mu$; the
non-compactness is necessary to suppress instantons (monopoles), and
we indicate below Eq.~(\ref{j2w}) the modifications required when
monopoles are present.
More generally, one can consider dualities of the
non-compact ${\mathbb C}{\mathbb P}^{N-1}$ model
where the global SU($N$) flavor symmetry has been explicitly
broken down to U(1)$^{N-1} \times G_N$,
with $G_N$ some subgroup of the permutation group of $N$ objects \cite
{balents}.
The $N=1$ case, which is better known as
the Abelian Higgs model, will be described in Appendix~\ref{app:cs}. The
$N=2$ case (with $G_2 = Z_2$) is described below.
The $T>0$ results below have a generalization to all $N > 2$, with
the mappings spelled out in Ref.~\cite{balents}. Only the $N=2$
case is self-dual, and this is our reason for focusing on it.
It is interesting to note that the duality properties of the
non-compact ${\mathbb C}{\mathbb P}^{N-1}$ models have strikingly
similar
counterparts in $D=2+1$ theories with $\mathcal{N}=4$
supersymmetry \cite{intrili,strassler1,strassler2}.
In particular, the correspondence is to the theories with one
U(1) vector (gauge) multiplet and $N$ matter hypermultiplets (SQED-$N$).
SQED-1 is dual to a theory
of a single hypermultiplet, with no vector multiplet%
\footnote{The theory of a single
hypermultiplet is free. This is because the Gaussian fixed point is
protected by $\mathcal{N}=4$ supersymmetry \cite{strassler2}. In the
non-supersymmetric
case, the Gaussian fixed point is unstable to the interacting Wilson-
Fisher fixed point.};
this corresponds
to the duality, reviewed in Appendix~\ref{app:cs},
of the Abelian Higgs model to the theory of a single complex scalar
with no gauge field
(also known as the XY model or the O(2) $\varphi^4$ field theory).
Next, SQED-2 is self-dual, as is our $N=2$ case.
For $N>2$, the dual of SQED-$N$ is a quiver gauge
theory, as is the case for the ${\mathbb C}{\mathbb P}^{N-1}$
models \cite{balents}.\footnote{%
A quiver gauge theory consists of a direct product of gauge group
factors along with matter fields transforming in the bifundamental
representation of pairs of group factors. The word quiver is used
because the bifundamental fields
are often represented as arrows.
}
Our results below for $T>0$
should have straightforward extensions to these $\mathcal{N}=4$
supersymmetric
theories.
\subsection{Conserved currents}
Let us now begin our analysis of the non-supersymmetric $N=2$ case.
The action of the non-compact ${\mathbb C}{\mathbb P}^1$ theory is
\begin{eqnarray}
\mathcal{S} &=& \int\!\! d^2\!x\, dt\; \Bigl[
\left|\left(\partial_\mu - i A_\mu\right) z_1 \right|^2 +
\left|\left(\partial_\mu - i A_\mu\right) z_2 \right|^2 + s \left(
|z_1|^2 + |z_2 |^2 \right) + u
\left(|z_1|^2 + |z_2|^2 \right)^2 \nonumber \\
&~&~~~~~~~~~~~~~~+ v |z_1|^2 |z_2 |^2 + \frac{1}{2e^2} \left(
\epsilon^{\mu\nu\lambda} \partial_\nu A_\lambda \right)^2 \Bigr],
\label{sz}
\end{eqnarray}
with $u>0$ and $-4u<v<0$. For these negative values of $v$, the
phase for $s$ sufficiently negative has $|\langle z_1 \rangle | =
|\langle z_2 \rangle | \neq 0$. We can also define a gauge-invariant
vector order parameter $ \vec{N} = z^\ast \vec{\sigma} z$, where
$\vec{\sigma}$ are the Pauli matrices, and the constraint $v<0$
implies that $\vec{N}$ prefers to lie in the $xy$ plane: hence
`easy-plane' (for $v>0$, $\vec{N}$ would be oriented along the $z$
`easy-axis', realizing an Ising order parameter). The ${\mathbb
C}{\mathbb P}^1$ model is usually defined with fixed length
constraint $|z_1|^2 + |z_2|^2 = 1$, but here we have only
implemented a soft constraint by the quartic term proportional to
$u$; we expect that the models with soft and hard constraints have
the same critical properties. We are interested in the nature of the
quantum phase transition accessed by tuning the value of $s$ to a
critical value $s=s_c$. For $s>s_c$, we have a `Coulomb' phase
$\langle \vec{N} \rangle = 0$ with a gapless photon, while for
$s<s_c$ there is a `Higgs' phase with $\langle \vec{N} \rangle \neq
0$. The phase diagram \cite{mv} of the model in the $s$, $T$ plane
is shown in Fig.~\ref{phasediag}.
\begin{figure}
\includegraphics[width=5in]{phasediag}
\caption{Phase diagram \cite{mv} of the easy-plane non-compact ${\mathbb
C}{\mathbb P}^1$ model (Eq.~(\ref{sz})) in 2 spatial dimensions as a
function of the coupling $s$ and temperature $T$. The quantum
critical point is at $s=s_c$, $T=0$. The finite $T$ correlations of
the CFT describe the shaded quantum critical region; the boundary of the
shaded region is a crossover into a different physical region, not
a phase transition. The full lines
are Kosterlitz-Thouless (KT) phase transitions. The
KT line for $s<s_c$ describes the disappearance of quasi-long-range
$xy$ order
of $\vec{N}$. The KT transition for $s>s_c$
describes the deconfinement of $z$ quanta, which in the low
temperature phase are logarithmically bound into
particle-anti-particle pairs by the Coulomb interaction. The phase
diagram can also be described in terms of the dual
$w$ theory in Eq.~(\ref{sw}). Duality interchanges the
two sides of $s=s_c$ ($T$ remains invariant under duality), and the $z
$ Coulomb phase
is interpreted as a $w$ Higgs phase and vice versa.} \label{phasediag}
\end{figure}
Both the Higgs and Coulomb phases have phase transitions as the
temperature is raised: for the former it is driven by the loss of
the Higgs (quasi)-long-range order, while for the latter it is a
``confinement-deconfinement'' transition of the $z$
particle-anti-particle pairs formed from the logarithmic Coulomb
force. Neither of these transitions is of interest to us in this
paper. Rather, we will compute $T>0$ correlations of the CFT
associated with the quantum critical point, and these describe the
physical properties of the shaded quantum critical
region in Fig.~\ref{phasediag}.
The theory has a discrete $Z_2$ symmetry which exchanges $z_1$ and
$z_2$. The continuous symmetries are a gauge U(1) symmetry
\begin{equation}
z_1 \rightarrow z_1 e^{i \phi}~~;~~ z_2 \rightarrow z_2 e^{i
\phi}~~;~~A_\mu \rightarrow A_\mu + \partial_\mu \phi
\end{equation}
and a global U(1) symmetry
\begin{equation}
z_1 \rightarrow z_1 e^{i \varphi}~~;~~ z_2 \rightarrow z_2 e^{-i
\varphi}.
\end{equation}
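In terms of the gauge-invariant order parameter, the global U(1) is
precisely the easy-plane rotation: from $\vec{N} = z^\ast \vec{\sigma} z$
one finds
\begin{equation}
N_x + i N_y = 2 z_1^\ast z_2 \;\rightarrow\; e^{-2i\varphi} \left( N_x +
i N_y \right), \qquad N_z \rightarrow N_z\,,
\end{equation}
so a global phase rotation rotates $\vec{N}$ within the $xy$ plane,
while the gauge U(1) leaves $\vec{N}$ invariant.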
Associated with these symmetries we can define two currents
\begin{equation}
J_{1\mu} = i \left( z_1^\ast (\partial_\mu - i A_\mu) z_1 - z_1
(\partial_\mu + i A_\mu) z_1^\ast \right) + i \left( z_2^\ast
(\partial_\mu - i A_\mu) z_2 - z_2 (\partial_\mu + i A_\mu) z_2^\ast
\right)
\end{equation}
and
\begin{equation}
J_{2\mu} = i \left( z_1^\ast \partial_\mu z_1 - z_1 \partial_\mu
z_1^\ast \right) - i \left( z_2^\ast \partial_\mu z_2 - z_2
\partial_\mu z_2^\ast \right) \ .
\end{equation}
Note that $J_1$ is even under the $Z_2$ symmetry, while $J_2$ is
odd. Current conservation implies that at $T>0$ these have two-point
correlators of the form in Eq.~(\ref{j1}), with the 4 distinct
functions $K_{1,2}^{L,T}$.
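For orientation, we record the structure this imposes: with
$P^{T,L}_{\mu\nu}$ the transverse and longitudinal projectors with
respect to the spatial momentum (as in Eq.~(\ref{eq:C-rotation-inv})
below), the two-point functions take the schematic form
\begin{equation}
\left\langle J_{a\mu}(p)\, J_{a\nu}(-p) \right\rangle = \sqrt{p^2}
\left( P^T_{\mu\nu}\, K_a^T(\omega,k) + P^L_{\mu\nu}\, K_a^L(\omega,k)
\right), \qquad a=1,2\,,
\end{equation}
in the normalization of Eq.~(\ref{j1}). At $T=0$, Lorentz invariance
forces $K_a^T = K_a^L$; at $T>0$ the four functions are independent.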
Now consider the correlators of the gauge field $A_\mu$. It is
useful to write this in terms of the leading quadratic terms in the
Coleman-Weinberg effective potential:
\begin{eqnarray}
W &=& \frac{1}{2} \int_{k,\omega} \Biggl\{ -(k_i A_0 + \omega A_i
)^2 \left[ \frac{1}{e^2} +
\frac{\Pi^L (k, \omega)}{-\omega^2 + k^2} \right] \nonumber \\
&~&~~+ A_i A_j \left(
\delta_{ij} - \frac{k_i k_j}{k^2} \right)\left[ \frac{k^2}{e^2} +
\Pi^T (k, \omega) + \frac{\Pi^L (k, \omega) \omega^2 }{-\omega^2 +
k^2} \right] \Biggr\} + \ldots
\end{eqnarray}
where $\Pi^{L,T}$ are the two components of the photon self energy
(the `polarization' operator); these are related to the current
correlations by $\Pi^{L,T} = \sqrt{p^2} K^{L,T}_1$.
A key point is that at the conformal fixed point describing the
phase transition at the quantum critical point $s=s_c$ we can safely
take the limit $e \rightarrow \infty$ in the above. This is because
$\mbox{dim}[\Pi] = 1$, and so the induced polarizations are more
singular than the bare Maxwell term. This is a very generic property
of CFTs with gauge fields in $D=2{+}1$. From the effective potential
we can obtain the form of the gauge-invariant two-point correlators
in the critical regime (it is easiest to work this out in the Coulomb
gauge $k_i A_i = 0$):
\begin{eqnarray}
\left\langle \epsilon_{ij} k_i A_j ~;~ \epsilon_{i'j'} k_{i'} A_{j'}
\right\rangle &=& \frac{k^2}{\Pi^T (k, \omega)}\,, \nonumber
\\
\left\langle \epsilon_{i'j'} k_{i'} A_{j'} ~;~ (k_i A_0 + \omega
A_i) \right\rangle &=& \epsilon_{i'i} \frac{\omega k_{i'}}{\Pi^T
(k,
\omega)}\,, \nonumber \\
\left\langle (k_i A_0 + \omega A_i)~;~ (k_j A_0 + \omega A_j)
\right\rangle &=& \left( \delta_{ij} - \frac{k_i k_j}{k^2} \right)
\frac{\omega^2}{\Pi^T (k, \omega)} - \frac{k_i k_j}{k^2}
\frac{(-\omega^2+k^2)}{\Pi^L (k, \omega)}\,. \label{aa}
\end{eqnarray}
\subsection{Vortices and duality}
\label{sec:dual}
Here we will build a dual description of the ${\mathbb
C}{\mathbb P}^1$ model, treating the vortices of
the original model as complex scalar fields in the dual
description.
Consider the topological vortex excitations in the Higgs state of
the action (\ref{sz}). These are characterized \cite{babaev} by a
pair of winding
numbers $(n_1, n_2)$ associated with the phases of $z_1$ and $z_2$
out at spatial infinity. In general, such a vortex has a
logarithmically diverging energy because the currents are only
partially screened by the gauge field $A_\mu$. By an extension of the
Abrikosov-Nielsen-Olesen argument, it can be seen that
the coefficient of the logarithmically divergent energy is
proportional to
\begin{equation}
\left(2 \pi n_1 - \int\!d^2x\,\epsilon_{ij} \partial_i A_j\right)^2
+
\left(2 \pi n_2 - \int\!d^2x\,\epsilon_{ij} \partial_i A_j\right)^2,
\end{equation}
and this is minimized when the total $A_\mu$ flux is quantized as
\cite{mv,babaev,balents}
\begin{equation}
\int\!d^2x\,\epsilon_{ij} \partial_i A_j = \pi (n_1 + n_2).
\label{eq:flux}
\end{equation}
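Explicitly, writing $\Phi \equiv \int d^2x\, \epsilon_{ij} \partial_i
A_j$ for the total flux,
\begin{equation}
\frac{d}{d\Phi} \left[ \left( 2\pi n_1 - \Phi \right)^2 + \left( 2\pi
n_2 - \Phi \right)^2 \right] = 0 \quad \Rightarrow \quad \Phi = \pi
\left( n_1 + n_2 \right),
\end{equation}
reproducing Eq.~(\ref{eq:flux}).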
Let us now identify the $(1,0)$ vortex as the worldline of a dual
particle $w_1$, the $(0,1)$ vortex as the worldline of a dual
particle $w_2$, and try to construct a dual theory by
introducing complex scalar fields $w_1(x)$, $w_2(x)$.
Then from Eq.~(\ref{eq:flux}),
Lorentz covariance implies that the total $w$
current is related to the $A_\mu$ flux:
\begin{equation}
\frac{1}{\pi}\epsilon_{\mu \nu \lambda} \partial^\nu A^\lambda = i
\left( w_1^\ast \partial_\mu w_1 - w_1 \partial_\mu w_1^\ast \right)
+ i \left( w_2^\ast \partial_\mu w_2 - w_2
\partial_\mu w_2^\ast \right). \label{aw}
\end{equation}
A second key property is that there are forces with a logarithmic
potential between the $w_{1,2}$ particles. These are also easily
seen from the structure of the classical vortex solutions of
Eq.~(\ref{sz}). In particular, it is the {\em difference\/} of the $z_1$ and
$z_2$ currents, not screened by the $A_\mu$ field, that
contributes an {\em attractive\/} logarithmic potential between
the $w_{1}$ and $w_2$ particles. Another way to see this is to
consider the configuration of the gauge-invariant Higgs field $(N_x,
N_y)$ around each vortex: the $w_1$ has an anti-clockwise winding of
the $\mbox{arg}(N_x+iN_y)$, while the $w_2$ has a clockwise winding.
Because there is a finite stiffness associated with this Higgs
order, a $w_1$ particle will attract a $w_2$ particle, while two $w_1$
(or $w_2$) particles will repel each other.
We can now guess the form of the effective theory for the $w_{1,2}$
particles. We represent the logarithmic potential as the Coulomb
potential of a new `dual' gauge field $\widetilde{A}_\mu$. Then
general symmetry arguments and the constraints above imply the dual
theory \cite{mv}
\begin{eqnarray}
\widetilde{\mathcal{S}} &=& \int\!d^2x\, dt \Bigl[
\left|\left(\partial_\mu {-} i \widetilde{A}_\mu\right) w_1
\right|^2 + \left|\left(\partial_\mu {+} i \widetilde{A}_\mu\right)
w_2 \right|^2 + \widetilde{s} \left( |w_1|^2 + |w_2 |^2 \right) +
\widetilde{u}
\left(|w_1|^2 + |w_2|^2 \right)^2 \nonumber \\
&~&~~~~~~~~~~~~~~+ \widetilde{v} |w_1|^2 |w_2 |^2 +
\frac{1}{2\widetilde{e}^2} \left( \epsilon^{\mu\nu\lambda}
\partial_\nu \widetilde{A}_\lambda \right)^2 \Bigr]. \label{sw}
\end{eqnarray}
Note especially the difference in the charge assignments from
(\ref{sz})---now the $w_{1,2}$ particles have opposite charges under
$\widetilde{A}_\mu$. Apart from this, the theories have an identical
form, and so current correlation functions
$\widetilde{K}^{L,T}_{1,2}$, associated with the global and gauge
U(1) symmetries, will have the same dependence upon the couplings in
$\widetilde{\mathcal{S}}$ as the $K^{L,T}_{1,2}$ have on
$\mathcal{S}$. However, the explicit expressions for the current in
terms of the field operators have a sign interchanged:
\begin{equation}
\widetilde{J}_{1\mu} =
i \left( w_1^\ast (\partial_\mu - i \widetilde{A}_\mu) w_1 - w_1
(\partial_\mu + i \widetilde{A}_\mu)
w_1^\ast \right)
- i \left( w_2^\ast (\partial_\mu + i \widetilde{A}_\mu) w_2 - w_2
(\partial_\mu - i \widetilde{A}_\mu) w_2^\ast
\right)
\end{equation}
and
\begin{equation}
\widetilde{J}_{2\mu} = i \left( w_1^\ast \partial_\mu w_1 - w_1
\partial_\mu w_1^\ast \right) + i \left( w_2^\ast \partial_\mu w_2 -
w_2
\partial_\mu w_2^\ast \right).
\label{j2w}
\end{equation}
We note in passing the extension of the above analysis to a {\em
compact\/} ${\mathbb C}{\mathbb P}^1$ theory of the $z$ particles.
Following Polyakov \cite{polyakov}, we have to include monopoles
which change the $A_\mu$ flux by $2\pi$. This can be achieved by
adding the term $-y_m (w_1 w_2 + w_1^\ast w_2^\ast)$ to the $w$
action $\widetilde{\mathcal{S}}$, where $y_m$ is the monopole
fugacity. This monopole operator is neutral under
$\widetilde{A}_\mu$ charge and, from Eq.~(\ref{aw}), catalyzes the
required change in $A_\mu$ flux. This is a relevant perturbation:
the theories for the $z$ and $w$ particles are no longer equivalent
under
duality,
and the universality class of the transition is changed. We will not
consider the compact case further; for more details, see the review
\cite{ssmott}.
Returning to the non-compact theory, we note the duality mapping can
now also be carried backwards from the $w$ theory to the $z$ theory,
and from (\ref{aw}) we see that the theories $\mathcal{S}$ and
$\widetilde{\mathcal{S}}$ are connected by the relations
\begin{subequations}
\label{dualj}
\begin{eqnarray}
\frac{1}{\pi}\epsilon_{\mu \nu \lambda} \partial^\nu A^\lambda &=&
\widetilde{J}_{2\mu}, \\
\frac{1}{\pi}\epsilon_{\mu \nu \lambda} \partial^\nu
\widetilde{A}^\lambda &=& J_{2\mu}.
\end{eqnarray}
\end{subequations}
From these relations, Eq.~(\ref{aa}), and the definition (\ref{j1}),
we immediately obtain the relation between $K_1$ and $K_2$:
\begin{subequations}
\label{eq:KKdual}
\begin{eqnarray}
&& K_1^T(\omega,k)\, \widetilde K_2^L(\omega,k) = \frac{1}{\pi^2}
\,,\quad
\widetilde K_1^T(\omega,k)\, K_2^L(\omega,k) = \frac{1}{\pi^2}\,,
\\
&& K_1^L(\omega,k)\, \widetilde K_2^T(\omega,k) = \frac{1}{\pi^2}
\,,\quad
\widetilde K_1^L(\omega,k)\, K_2^T(\omega,k) = \frac{1}{\pi^2}\,.
\end{eqnarray}
\end{subequations}
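These relations follow by combining Eq.~(\ref{dualj}) with the
gauge-field correlators (\ref{aa}). Schematically (suppressing the
precise index structure and signature-dependent factors), the second
relation in (\ref{dualj}) gives
\begin{equation}
\left\langle J_{2\mu}\, J_{2\nu} \right\rangle = \frac{1}{\pi^2}\,
\epsilon_{\mu\alpha\beta}\, \epsilon_{\nu\gamma\delta}\, p^\alpha
p^\gamma \left\langle \widetilde{A}^\beta\, \widetilde{A}^\delta
\right\rangle,
\end{equation}
and the dual analog of Eq.~(\ref{aa}) places the dual polarizations
$\widetilde{\Pi}^{T,L} = \sqrt{p^2}\, \widetilde{K}_1^{T,L}$ in the
denominator on the right-hand side. The two $\epsilon$ tensors
interchange the transverse and longitudinal projectors, which is why
$K_2^{T}$ pairs with $\widetilde{K}_1^{L}$ and $K_2^{L}$ with
$\widetilde{K}_1^{T}$; the relations with and without tildes are
exchanged by running the same argument on the first relation in
(\ref{dualj}).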
Now, assuming a single second-order transition obtained by tuning
the parameter $s$, the above reasoning implies that this critical
point must be self-dual, $K_1^{T,L}=\widetilde K_1^{T,L}$, and
$K_2^{T,L}=\widetilde K_2^{T,L}$. Self-duality thus immediately
implies relation (\ref{cp1dual}), as claimed in the Introduction.
Monte Carlo simulations \cite{proko} of a current loop model related
to $\mathcal{S}$ observe a weak first-order transition. This is
possibly because they are using a particular lattice action which is
not within the domain of attraction of the self-dual point. In any
case, the duality mappings between the two phases on either side of
the transition apply, and the constraints on a possible CFT remain
instructive.
\section{The M2-brane theory}
\label{sec:m}
This section examines the transport properties of the non-Abelian
SU($N$) Yang-Mills
theory in $D=2{+}1$ with $\mathcal{N}=8$ supersymmetry. The weak-
coupling action and
field content
of this theory are most directly understood by dimensional reduction
of the $\mathcal{N}=1$ SYM theory in $D=9{+}1$ on the flat torus $T^7
$ \cite{tasi}.
This reduction shows that the $D=2{+}1$ theory has an explicit SO(7)
R-charge global symmetry. The $D=9{+}1$ SYM theory has only a single
gauge
coupling constant, and therefore, so does the $D=2{+}1$ theory. The
latter
coupling has a positive scaling dimension, and flows to strong coupling
in the infrared. It is believed \cite{Seiberg} that the flow
is to an infrared-stable fixed point that describes a SCFT.
It was also argued that this SCFT has
an emergent R-charge symmetry which is expanded to SO(8). We shall
be interested in the transport properties of this SO(8) R-charge
in the SCFT at $T>0$ in the present section.
We are faced with a strongly-coupled SCFT, and a perturbative analysis
of the field theory described above is not very useful. Instead,
remarkable progress is possible using the connection to string theory
and the AdS/CFT correspondence. The $D=2{+}1$ SYM theory is contained
in the low energy description of Type IIA string theory in the presence
of a stack of $N$ D2-branes. The flow to strong coupling of the SYM
theory corresponds in string theory to the lift of ten-dimensional
Type IIA strings to eleven-dimensional M-theory \cite{MAGOO}.
So we can directly access the $D=2{+}1$ SYM SCFT by considering M-theory
in the presence of a stack of $N$ M2-branes \cite{IMSY}. In the large
$N$ limit, M-theory can be described by the semiclassical
theory of eleven-dimensional supergravity, and this
will be our main tool in the analysis described below. This formulation
also makes the SO(8) R-charge symmetry explicit, because the M2-branes
curve the spacetime of
eleven-dimensional supergravity to AdS$_4 \times S^7$.
Another powerful feature of the supergravity formulation is that
it can be extended to $T>0$. We have to consider supergravity in a
spacetime
which is asymptotically AdS$_4$, but which also contains a black hole.
The Hawking temperature of the black hole then corresponds to the
temperature of the SCFT \cite{wittenm} (for example,
fluctuation-dissipation theorems are satisfied~\cite{Herzog:2002pc}).
Hydrodynamics of the SCFT
emerges from the semiclassical supergravity dynamics in the presence
of the
black hole.\footnote{%
Strictly speaking, the appearance of a black hole is dual to being
at finite
temperature \emph{and} being in a deconfined phase; it is possible
to have a finite temperature
gravitational description without a black hole \cite{wittenm,
HerzogPRL}.
}
Turning to our explicit computation of dynamics in M-theory, we
consider
the gravitational background associated with
a stack of $N$ M2-branes, with $N \gg 1$ \cite{andy,IMSY,CH},
\begin{equation}\label{metric}
ds^2 = \frac{r^4}{R^4} \left[ -f(r) dt^2 + dx^2 + dy^2\right]
+ \frac{R^2}{r^2}
\left[ \frac{dr^2}{f(r)} + r^2d\Omega_7^2 \right],
\end{equation}
where $f(r)=1-r_0^6/r^6$. It is more convenient for us to change
coordinates from $r$ to $u=(r_0/r)^2$, in terms of which
\begin{equation}
ds^2 = \frac{r_0^4}{R^4u^2}[-f(u)dt^2+dx^2+dy^2]
+ \frac{R^2}{4u^2f}du^2 + R^2 d\Omega_7^2
\end{equation}
and $f(u)=1-u^3$. The horizon of the black hole is located at $u=1$, and
the boundary of AdS$_4$ is at $u=0$.
The relationship between the quantities in the worldvolume SCFT
($N$ and temperature $T$) and those of the metric ($R$ and $r_0$)
are given by~\cite{IMSY,CH}
\begin{equation}
\pi^5 R^9 = \sqrt{2}\, N^{3/2}\kappa^2, \qquad
T = \frac3{2\pi} \frac{r_0^2}{R^3}\,,
\label{kelevennorm}
\end{equation}
where $\kappa$ is the gravitational coupling strength of
$D=10{+}1$ supergravity.
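As a check on the normalization, the temperature follows from the
standard surface-gravity formula applied to the metric (\ref{metric}):
with $-g_{tt} = (r^4/R^4) f(r)$, $g_{rr} = R^2/(r^2 f(r))$ and
$f'(r_0) = 6/r_0$,
\begin{equation}
T = \frac{1}{4\pi} \left. \frac{\partial_r \left( -g_{tt}
\right)}{\sqrt{-g_{tt}\, g_{rr}}} \right|_{r=r_0} = \frac{1}{4\pi}\,
\frac{6 r_0^3/R^4}{r_0/R} = \frac{3}{2\pi} \frac{r_0^2}{R^3}\,,
\end{equation}
in agreement with Eq.~(\ref{kelevennorm}).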
There is a precise correspondence between correlation functions
computed in the $D=2{+}1$ CFT and correlation functions of
supergravity fields computed in the metric
(\ref{metric}) \cite{MAGOO,recipe,Herzog:2002pc}.
We will use this to compute charge transport properties.
In the metric~(\ref{metric}) a 7-sphere factors out:
$R^2d\Omega_7^2$. The spacetime thus has an SO(8) symmetry. This
matches the global symmetry of the M2 worldvolume theory: there
is an R-charge which transforms under the same global symmetry. The
following subsections will compute the two-point correlations of the
R-charge currents, $J_{a\mu}$, with $a=1, \ldots, 28$.
The existence of a compact 7-sphere makes it possible to do
Kaluza-Klein reduction on this space. We expand all fields in terms
of spherical harmonics on the 7-sphere. The original fields of
M-theory are the metric tensor $g_{\mu\nu}$ and a three-index
antisymmetric tensor $A_{\mu\nu\lambda}$. Upon Kaluza-Klein
reduction, an SO(8) gauge field
appears from the components of the
metric and the three-form where only one index is in the AdS$_4$
directions ($t$, $x$, $y$, and $u$) and the others are in the $S^7$
directions (see Appendix~\ref{app:g} for details).
The action for this gauge field is
\begin{equation}\label{M2action}
S = -\frac 1{4\gfourD^2}
\int\!d^4x\,\sqrt{-g}\, g^{MA} g^{NB}
F^a_{MN} F^a_{AB},
\end{equation}
where uppercase Latin indices $A$, $B$, $M$, $N$ run over the four
values $t$, $x$, $y$, and $u$ (in contrast to Greek indices $\alpha$,
$\beta$, $\mu$, $\nu$, which run over $t$, $x$, and $y$). The
four-dimensional gauge coupling constant $\gfourD$ is {\em
dimensionless}, and its large $N$ value is computed in
Appendix~\ref{app:g} to be
\begin{equation}
\frac1{\gfourD^2} = \frac{\sqrt{2}}{6 \pi} N^{3/2}.
\end{equation}
Although we focus on the gravity background constructed from a stack
of $N$ M2-branes in flat 11-dimensional space, there are a number
of related examples which are
easily understood from considering (\ref{M2action}).
The key observation, which we discuss further in Section~\ref
{sec:emduality},
is that (\ref{M2action}) exhibits
classical electric-magnetic duality.
In the case of our M2-brane theory, this duality is close enough to a
self-duality to
enforce a relation on the current-current two-point functions and result
in a frequency-independent conductivity.
In fact, this self-duality holds in a more general context.
Consider
an eleven-dimensional space which factorizes into $\mathbb{R}^{2,1}$
and a Calabi-Yau four-fold which develops a local singularity. By
placing
a stack of M2-branes at the singularity, we should obtain a more exotic
2+1 dimensional conformal field theory which still has at least a U(1)
global R-symmetry. Kaluza-Klein reduction of the gravity theory
will yield
precisely (\ref{M2action}) and our results on holographic self-duality
will carry over to these more general cases.
There are two other interesting generalizations to consider in which
holographic self-duality fails.
After Kaluza-Klein reduction, the gauge fields $F_{AB}$ will support
electrically charged black holes \cite{duffliu}. These
black holes are dual to introducing an R-charge chemical potential to
the field
theory.
Another
interesting 2+1 dimensional field theory with a holographic
description is the theory living on a stack
of D2-branes in type IIA string theory.
In both cases, there is generically a nontrivial scalar
which
appears in a modification of (\ref
{M2action}) as a coupling
constant which depends on the holographic radial direction. The
relation between the two-point
functions will be between a theory with coupling $\gfourD(u)$ and one
with coupling $1/\gfourD(u)$.
For details concerning this more general perspective, see Appendix
\ref{app:d2}.
\subsection{Current-current correlators}
\noindent
We now proceed to the computation of the two-point correlators of
the $J_{a\mu}$ in the CFT at $T>0$. Here we will work in Minkowski
space (real frequencies and time), and so define the current
correlation as follows:
\begin{equation}
C_{\mu\nu}(x-y) \delta_{ab} =
-i\,\theta(x^0{-}y^0) \langle [J_{a\mu}(x),J_{a\nu}(y)]\rangle.
\label{defC}
\end{equation}
The $\delta_{ab}$ follows from SO(8) symmetry. The expectation value
is taken in a translation-invariant state, so we can Fourier
transform to $C_{\mu\nu}(p)$, where $p_\mu=(-\omega,{\bm k})$.
The spectral density is proportional to the imaginary part of the
retarded function,
\begin{equation}
\rho_{\mu\nu}(p)=-2\Im C_{\mu\nu}(p).
\end{equation}
It is an odd, real function of $p$, whose diagonal components are
positive (for positive frequency).
Expectation values of all global conserved charges are assumed to
vanish in the equilibrium state; in other words we consider systems
without chemical potentials.
Conservation of $J_{a\mu}(x)$ implies that the correlation functions
may be defined so that they satisfy the Ward identity\footnote{%
One may choose to define the correlation functions in such a way
that
local (in position space) counter-terms appear on the right-hand
side
of the Ward identities.
The correlation functions defined in this way will differ from
$C_{\mu\nu}(p)$ by analytic functions
of $\omega$ and ${\bm k}$.}
$p^\mu C_{\mu\nu}(p) = 0$.
Then, as in Section~\ref{sec:intro} and in Eq.~(\ref{j1}), we can
write $C_{\mu \nu}$ in the form
\begin{equation}
C_{\mu\nu}(p) = P_{\mu\nu}^T\, \Pi^T(\omega,k) +
P_{\mu\nu}^L\, \Pi^L(\omega,k)\ .
\label{eq:C-rotation-inv}
\end{equation}
(The relationship between $\Pi$ and $K$ is $\Pi^{T,L}= \sqrt{p^2} K^
{T,L}$.)
Without loss of generality one can take the spatial momentum
oriented along the $x$ direction, so that $p=(\omega,k,0)$. Then the
components of the retarded current-current correlation function are
\begin{equation}
C_{yy}(\omega,k) = \Pi^T(\omega,k)\ ,
\end{equation}
as well as
\begin{equation}
C_{tt}{=} \frac{k^2}{\omega^2{-}k^2}\, \Pi^L(\omega,k),\ \
C_{tx}{=}C_{xt}{=}\frac{-\omega k}{\omega^2{-}k^2}\, \Pi^L
(\omega,k),\ \
C_{xx}{=} \frac{\omega^2}{\omega^2{-}k^2}\, \Pi^L(\omega,k)\ .
\label{eq:Czz}
\end{equation}
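As a quick consistency check, these components satisfy the Ward
identity: with $p^\mu = (\omega, k, 0)$,
\begin{equation}
p^\mu C_{\mu t} = \omega\, \frac{k^2}{\omega^2 - k^2}\, \Pi^L - k\,
\frac{\omega k}{\omega^2 - k^2}\, \Pi^L = 0\,,
\end{equation}
and similarly $p^\mu C_{\mu x} = 0$, while $C_{yy}$ is transverse on
its own since $p^y = 0$.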
\subsection{Correlation functions from AdS/CFT}
\label{sec:ads-cft}
\noindent
In order to find the retarded function, one needs to study
fluctuations of vector fields on the background spacetime created by
a stack of M2-branes. At the linear order the fields satisfy the
equations
\begin{equation}
\partial_M (\sqrt{-g}\, g^{MA}g^{NB} F_{AB}) =0.
\end{equation}
These equations are to be solved with the boundary conditions
\begin{equation}
\lim_{u\to 0} A_\mu (u, x) = A_\mu^0(x),
\end{equation}
at $u{=}0$. Near $u{=}1$ one imposes the incoming-wave boundary
condition, which means that for $u$ slightly less than 1 the
solution is purely a wave that propagates toward the horizon. Due to
translational invariance with respect to $x$ one can solve for each
Fourier mode $e^{ip\cdot x}$ separately. The result can be
represented in the form
\begin{equation}\label{AFA0}
A_\mu(u, p) = {M_\mu}^\nu(u,p) A_\nu^0(p).
\end{equation}
Then, according to the AdS/CFT prescription formulated in
Ref.~\cite{recipe}, the current-current correlator can be found from
the formula%
\footnote{
Greek indices on $M_{\mu\nu}$ are raised using the flat space
Minkowski metric.
}
\begin{equation}\label{C-prescr}
C_{\mu\nu}(p) = -\chi \lim_{u\to0} M_{\mu\nu}'(u,p),
\end{equation}
where $\chi$ is the constant that appears in the normalization of
the action,
\begin{equation}
S= \frac{\chi}{2}\int \!du\, d^3 x\left(A_t^{\prime2}
- f A_x^{\prime2} - f A_y^{\prime2} +\dots \right)\,,
\end{equation}
(only terms with two derivatives with respect to $u$ are written).
In our case $\chi=4\pi T/(3\gfourD^2)$. It turns out that $\chi$
is precisely the charge susceptibility.\footnote{%
The hydrodynamic density-density response function found in
\cite{CH} is $ C_{tt} = (1/\gfourD^2){k^2}/(i\omega{-}D_c k^2)$.
Comparing this to the hydrodynamic form $C_{tt} = \chi D_c k^2/(i
\omega{-}D_c k^2)$, we find
the above value for charge susceptibility $\chi$.
}
The prescription given above might appear ad hoc. However,
it is a special case of a more general AdS/CFT prescription that
gives real-time correlators of any number of
operators~\cite{Herzog:2002pc}. For our task, however, the above
prescription is technically most straightforward to implement.
We work in the radial gauge $A_u=0$, and take all fields $A_\mu(x)$
to be proportional to $e^{-i\omega t+i{\bm k}\cdot{\bm x}}$. Taking
momentum ${\bm k}$ along the $x$ direction, ${\bm k}=(k,0)$, one finds that
the fluctuating vector fields satisfy the following equations
\cite{CH}
\begin{eqnarray}
w A_t'+q f A_x' =0\,, && \label{eq:AtAx}\\
A_t''-\frac{1}{f}(w q A_x + q^2 A_t)=0\,, &&
\label{eq:Atpp}\\
A_x'' + \frac{f'}{f} A_x' + \frac{1}{f^2} (w q A_t + w^2 A_x)
=0\,, &&
\label{eq:Axpp}\\
A_y'' + \frac{f'}{f} A_y' + \frac{1}{f^2} (w^2-q^2 f)A_y =0\,.&&
\label{eq:Ay}
\end{eqnarray}
Here prime denotes derivative with respect to $u$; $w$ and $q$
are the dimensionless frequency and momentum, $w\equiv
3\omega/(4\pi T)$, $q\equiv 3k/(4\pi T)$. Note that the equation
for the transverse potential $A_y$ decouples from the rest.
Moreover, Eq.~(\ref{eq:Axpp}) can be shown to follow from
Eqs.~(\ref{eq:AtAx}) and (\ref{eq:Atpp}) and so is not independent.
Combining Eqs.~(\ref{eq:AtAx}) and (\ref{eq:Atpp}) one can obtain an
equation that does not involve $A_x$,
\begin{equation}\label{eq:Atppp}
A_t''' + \frac{f'}f A_t'' + \frac1{f^2}(w^2-q^2f) A_t' =0.
\end{equation}
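For completeness, Eq.~(\ref{eq:Atppp}) follows by differentiating
Eq.~(\ref{eq:Atpp}) once,
\begin{equation}
A_t''' = -\frac{f'}{f^2} \left( w q\, A_x + q^2 A_t \right) +
\frac{1}{f} \left( w q\, A_x' + q^2 A_t' \right),
\end{equation}
and then using Eq.~(\ref{eq:Atpp}) to set $w q\, A_x + q^2 A_t = f
A_t''$ and Eq.~(\ref{eq:AtAx}) to set $A_x' = -w A_t'/(q f)$.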
One can think about this equation as a second-order equation for
$A_t'$. It was observed in \cite{CH} that {\it Eq.~(\ref{eq:Atppp})
has the same form as the equation for $A_y$}. Such degeneracy is
unusual, and we now proceed to explore its implications.
\subsubsection{Transverse channel}
\noindent
Let us start with the retarded function for transverse currents,
$C_{yy}(\omega,{\bm k})$. According to the AdS/CFT
prescription~(\ref{C-prescr}),
\begin{equation}
C_{yy}(p) = -\chi \lim_{u\to0} M_{yy}'(u,p).
\end{equation}
The function $M_{yy}(u,p)$ is the solution to Eq.~(\ref{eq:Ay})
which satisfies the incoming-wave boundary condition on the horizon
$u{=}1$, and $M_{yy}(0,p)=1$ at the boundary $u{=}0$.
Let us denote a solution to Eq.~(\ref{eq:Ay}) which satisfies the
incoming-wave boundary condition at the horizon as $\psi(u)$. The
normalization of $\psi(u)$ is left arbitrary. Near $u{=}0$,
Eq.~(\ref{eq:Ay}) allows two asymptotic solutions, which can be
expressed in terms of the Frobenius series,
\begin{eqnarray}
Z_I(u) &=& 1+h Z_{II}(u) \ln u + b_{I}^{(1)}u +\dots,\\
Z_{II}(u) &=& u(1+b_{II}^{(1)}u + b_{II}^{(2)}u^2+\dots).
\end{eqnarray}
The coefficient $b_{I}^{(1)}$ is arbitrary, and we set it to zero.
All other coefficients are determined by substituting these series
into the original equation (\ref{eq:Ay}). In particular,
we find that $h{=}0$, therefore
\begin{eqnarray}
Z_I(0) &=& 1, \qquad Z_I'(0) = 0,\nonumber\\
Z_{II}(0) &=& 0, \qquad Z_{II}'(0) = 1.\label{ZZp}
\end{eqnarray}
The incoming-wave solution $\psi(u)$ can be expressed as
\begin{equation}
\psi(u) = {\cal A} Z_{I}(u) + {\cal B} Z_{II}(u),
\label{eq:AB}
\end{equation}
where ${\cal A}$ and ${\cal B}$ depend on the parameters of the equation, in
particular on $w$ and $q$. From Eq.~(\ref{ZZp}) it follows that
$\psi(0)={\cal A}$ and $\psi'(0)={\cal B}$. The properly normalized mode
function is $M_{yy}(u,p)=\psi(u)/\psi(0)$, and therefore we find
\begin{equation}
C_{yy}(w,q) = - \chi\, \frac{{\cal B}(w,q)}{{\cal A}(w,q)}\,.
\end{equation}
\subsubsection{Longitudinal channel}
\noindent
Let us now look at the correlators in the longitudinal channel:
$C_{tt}$, $C_{tx}$, and $C_{xx}$. For that we need to solve
Eqs.~(\ref{eq:AtAx}) and (\ref{eq:Atpp}).
First, we know that $A_t'(u)$ satisfies the same equation as
$A_y(u)$. Therefore, we can write $A_t'(u)= c\psi(u)$, where $c$ is
some coefficient. This coefficient can be fixed from the boundary
conditions at $u=0$ by employing Eqs.~(\ref{eq:Atpp}) and
$\psi'(0)={\cal B}$. We find
\begin{equation}
A_t'(u) = \left[ \frac{\cal A}{\cal B} Z_I(u) + Z_{II}(u)\right]
(w q A_x^0 + q^2 A_t^0).
\end{equation}
From Eq.~(\ref{eq:AtAx}) we also find
\begin{equation}
A_x'(u) = -\frac1f \left[ \frac{\cal A}{\cal B} Z_I(u) + Z_{II}(u)\right]
(w^2 A_x^0 + w q A_t^0).
\end{equation}
These equations are to be compared with Eq.~(\ref{AFA0}), from which
one extracts $M_{\mu\nu}'(u,p)$. Putting $u=0$, one finds the
correlators
\begin{equation}
C_{tt}(w,q) = \chi q^2\, \frac{{\cal A}(w,q)}{{\cal B}(w,q)}\,,
\quad\quad
C_{xx}(w,q) = \chi w^2\, \frac{{\cal A}(w,q)}{{\cal B}(w,q)}\,.
\end{equation}
In Appendix \ref{app:soln} we show that at zero momentum, $q{=}0$,
the mode equation (\ref{eq:Ay}) can be solved analytically,
which allows one to determine $\Pi^T(w,0)=\Pi^L(w,0)$.
However, one can determine the conductivity without
explicitly solving the mode equation, as we now show.
\subsection{Conductivity}
\label{sec:conductivity}
\noindent
We see that both $C_{yy}$ and $C_{xx}$ are expressed in terms of the
same connection coefficients ${\cal A}$ and ${\cal B}$. Eliminating the
coefficients, we find
\begin{equation}
C_{xx}(w,q)C_{yy}(w,q) = -\chi^2 w^2\,,\quad\quad
C_{tt}(w,q)C_{yy}(w,q) = -\chi^2 q^2\,.
\label{eq:cc}
\end{equation}
Expressed in terms of the self-energies $\Pi^T$, $\Pi^L$ this reads
\begin{equation}
\Pi^T(w,q)\, \Pi^L(w,q) = -\chi^2 (w^2{-}q^2).
\label{eq:pipi}
\end{equation}
Note that this relation holds for all $w$ and $q$: we have not
made any small-frequency approximations anywhere. In fact, we did
not even have to solve the mode equations! Combining
Eqs.~(\ref{j1}), (\ref{eq:C-rotation-inv}), and (\ref{eq:pipi}), we
obtain our main result in Eq.~(\ref{mdual}).
As discussed in Section~\ref{sec:intro}, at zero momentum, rotation
invariance implies that $\Pi^T{=}\Pi^L$, therefore relation
(\ref{eq:pipi}) uniquely determines the self-energy%
\footnote{
Up to a sign, which can be fixed by requiring positivity
of the spectral function $\rho_{yy}=-2\,\Im \Pi^T$.
}
$\Pi^T(\omega,0)=\Pi^L(\omega,0)=-i\chi w$ for all $w$. The
conductivity is given by $\sigma(\omega/T)=i\Pi^T(\omega,0)/\omega$,
and we find
\begin{equation}
\sigma(\omega/T)=\chi \frac{3}{4\pi T} = \chi D_c = \frac{1}
{\gfourD^2},
\label{eq:sigma}
\end{equation}
where $D_c=3/(4\pi T)$ is the diffusion constant found in \cite{CH}.
Note that the Einstein relation between the conductivity and the
diffusion
constant is satisfied. Also, as noted earlier, it is surprising that
$\sigma(\omega/T)$ is actually independent of $\omega/T$.
[Dependence
upon $\omega/T$ is found at all non-zero $k$, as is shown below.]
This $\omega$-independence is a consequence of the relation
(\ref{eq:pipi}), which in turn follows from the fact that $A_t'$ and
$A_y$ satisfy the same equation in the bulk.
It can be traced back to the electromagnetic duality of the
classical action (\ref{M2action}), as we now show.
\subsection{Electric-magnetic duality}
\label{sec:emduality}
\noindent
Even though the origin of the relation (\ref{eq:pipi})
is puzzling from the point of view of the microscopic
degrees of freedom in the ${\cal N}{=}8$ SCFT,
its origin from the bulk point of view
can be traced to electric-magnetic (EM) duality
of an abelian gauge field.
Indeed, current-current correlators are computed from the
Maxwell equations in the four-dimensional bulk, and it is
precisely in four dimensions that Maxwell equations may possess
EM duality.
Although in general the R-symmetry may be non-abelian and hence
be dual to a non-abelian gauge field in the bulk, we work in the
classical
supergravity limit and must keep $N$ large.
At large $N$, the gauge coupling $\gfourD \propto N^{-3/4}$ is
very small, and our non-abelian gauge field factorizes into a number of
effectively abelian pieces to leading order in $1/N$.
If we write equations of motion in terms of the gauge-invariant
$F_{MN}$ (rather than the vector potential), then Maxwell equations
have to be supplemented by a Bianchi identity,
\begin{subequations}
\label{eq:ME}
\begin{eqnarray}
&& \partial_M(\sqrt{-g} \, F^{MN}) = 0 \\
&& \partial_M(\sqrt{-g} \,
\frac12 \varepsilon^{MNAB} F_{AB}) =0\,,
\end{eqnarray}
\end{subequations}
where $\varepsilon^{MNAB}$ is the totally antisymmetric tensor,
with $\varepsilon^{0123}=1/\sqrt{-g}$.
Now, one can introduce $G_{MN}$ defined as
$F^{MN}=\frac12\varepsilon^{MNAB}G_{AB}$,
which can be inverted to give
$G^{MN}=-\frac12 \varepsilon^{MNAB}F_{AB}$.
Expressed in terms of $G$, the equations of motion become
\begin{subequations}
\label{eq:ME-dual}
\begin{eqnarray}
&& \partial_M(\sqrt{-g} \,
\frac12 \varepsilon^{MNAB} G_{AB}) =0\,,\\
&& \partial_M(\sqrt{-g} \, G^{MN}) =0\,.
\end{eqnarray}
\end{subequations}
Maxwell equations for $F$ become a Bianchi identity for $G$,
and vice versa.
$G_{MN}$ is the dual field strength tensor, and we can also
define a dual vector potential $B_M$ by
$G_{MN}=\partial_M B_N-\partial_N B_M$.
Note that the validity of EM duality does not depend
on the background spacetime having any particular symmetries
such as Lorentz symmetry, or rotational symmetry.
From the point of view of AdS/CFT, the EM dual theory in the bulk
will correspond to some theory on the boundary, which is a dual
of the original SCFT.
In particular, the dual vector potential $B_\mu$ will couple to the
dual current $\tilde J_\mu$, and one can compute two-point functions
$C_{\mu\nu}^{\rm dual}(\omega,k)$ in the dual theory.
In components we have $F^{tu}=G_{xy}/\sqrt{-g}$.
This means that the equation for $\sqrt{-g}F^{tu}$
obtained from equations (\ref{eq:ME}) is the same as the equation
for $G_{xy}$, obtained from the dual equations (\ref{eq:ME-dual}).
In our particular example of the non-extremal M2 background metric,
we have $\sqrt{-g}F^{tu}\propto A_t'(u)$, and $G_{xy}\propto kB_y(u)$
(in the radial gauge).
Thus the equation for $A_t'(u)$ is the same as the equation for $B_y
(u)$.
Then, by the argument in section \ref{sec:ads-cft} we
find a relation between the self-energies $\Pi^{T,L}$
in the original theory, and the self-energies $\widetilde\Pi^{T,L}$
in the dual theory:
\begin{subequations}
\begin{eqnarray}
&& \Pi^T(w,q)\, \widetilde\Pi^L(w,q) = -\chi^2 (w^2{-}
q^2)\,,\\
&& \widetilde\Pi^T(w,q)\, \Pi^L(w,q) = -\chi^2 (w^2{-}
q^2) \ .
\end{eqnarray}
\end{subequations}
For our M2-branes, EM duality is a self-duality, and the EM dual
theory
is the same as the original theory, as is evident
from equations (\ref{eq:ME}), (\ref{eq:ME-dual}).
Therefore, $C_{\mu\nu}=C_{\mu\nu}^{\rm dual}$,
and $\widetilde\Pi^T=\Pi^T$, $\widetilde\Pi^L=\Pi^L$.
This gives back our main result (\ref{eq:pipi}).%
\footnote{
The present discussion assumes that the coupling constant
$\gfourD^2$ is not inverted
in the dual theory, which is justified for a free,
sourceless, abelian gauge field.
One could formally repeat the same steps leading to
Eq.~(\ref{eq:pipi}), assuming $\widetilde{g}_{\rm 4D}^2=1/
\gfourD^2$,
as is standard in EM duality.
However, in this case the coupling constant
$\widetilde{g}_{\rm 4D}^2\propto N^{3/2}$ becomes large,
invalidating the bulk description in terms of a classical
gauge field.
}
In the case when there are non-trivial background profiles
for scalar fields, the EM dual theory is not equivalent
to the original theory.
This is discussed in Appendix \ref{app:d2}.
\subsection{Full spectral functions}
\begin{figure}
\begin{picture}(0,0)(0,0)
\put(110,-10){$w$}
\put(320,-10){$w$}
\end{picture}
\includegraphics[width=2.8in]{imgryy}
\includegraphics[width=2.8in]{imgryyw}
\caption{
Imaginary part of the retarded function $C_{yy}(\omega,k)$, plotted
in units of $(-\chi)$,
as a function of dimensionless frequency $w\equiv 3\omega/(4\pi
T)$,
for several values of dimensionless momentum $q\equiv 3k/(4\pi T)$.
Curves from left to right correspond to $q=0,0.5,1.0,2.0,3.0$.
Left: $\Im C_{yy}(w,q)$,
Right: $\Im C_{yy}(w,q)/w$.
} \label{fig:ImCyy}
\end{figure}
\begin{figure}
\begin{picture}(0,0)(0,0)
\put(110,-10){$w$}
\put(320,-10){$w$}
\end{picture}
\includegraphics[width=2.8in]{imgrxxw2}
\includegraphics[width=2.8in]{imgrxxw2-3}
\caption{
Imaginary part of the retarded function $C_{tt}(w,q)/q^2$,
plotted in units of $(-\chi)$,
as a function of dimensionless frequency $w\equiv 3\omega/(4\pi
T)$,
for several values of dimensionless momentum $q\equiv 3k/(4\pi T)$.
Curves from left to right correspond to $q=0.2,0.5,1.0$
(left panel), and $q=1.0,2.0,3.0,4.0$ (right panel).
The dashed curves are plots of Eq.~(\ref{chigh}) divided by $k^2$.
} \label{fig:ImCtt}
\end{figure}
\noindent
We will now evaluate the spectral functions numerically, for all
$\omega$ and $k$. To do so, we find a solution $\psi(u)$ to the mode
equation (\ref{eq:Ay}) with the incoming-wave boundary conditions at the
horizon $u{=}1$. Then, as described in Section \ref{sec:ads-cft},
the retarded two-point function $C_{yy}(\omega,k)$ is proportional
to $\psi'(0)/\psi(0)$, while $C_{tt}(\omega,k)$ is proportional to
$\psi(0)/\psi'(0)$.
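For readers who wish to reproduce these curves, a minimal numerical
sketch is given below (in Python; it is not the code used for the
figures). It assumes SciPy is available, keeps only the leading-order
infalling behaviour $\psi \sim (1-u)^{-iw/3}$ near the horizon, and the
function name, the offset \texttt{eps} and the tolerances are our own
choices. It integrates Eq.~(\ref{eq:Ay}) from the horizon to the
boundary and returns ${\cal B}/{\cal A} = \psi'(0)/\psi(0)$, so that
$C_{yy} = -\chi\, {\cal B}/{\cal A}$; at $q=0$ the output should
approach the analytic value $i w$.
\begin{verbatim}
# Minimal sketch (not the authors' code): solve the transverse mode
# equation A_y'' + (f'/f) A_y' + (w^2 - q^2 f)/f^2 A_y = 0, f = 1 - u^3,
# with psi ~ (1-u)^(-i w/3) near the horizon u = 1, and return
# psi'(0)/psi(0) = B/A, so that C_yy = -chi * (B/A).
import numpy as np
from scipy.integrate import solve_ivp

def B_over_A(w, q, eps=1e-4):
    nu = w / 3.0                        # horizon exponent
    def rhs(u, y):
        psi, dpsi = y
        f  = 1.0 - u**3
        fp = -3.0 * u**2
        return [dpsi, -(fp / f) * dpsi - (w**2 - q**2 * f) / f**2 * psi]
    u0 = 1.0 - eps                      # start slightly off the horizon
    psi0  = eps ** (-1j * nu)           # leading infalling behaviour
    dpsi0 = 1j * nu * eps ** (-1j * nu - 1.0)
    sol = solve_ivp(rhs, [u0, eps],
                    np.array([psi0, dpsi0], dtype=complex),
                    rtol=1e-10, atol=1e-12)
    psi, dpsi = sol.y[:, -1]
    return dpsi / psi                   # accuracy is O(eps)

# check against Pi^T(w,0) = -i chi w, i.e. B/A = i w at q = 0:
for w in (0.5, 1.0, 2.0):
    print(w, B_over_A(w, 0.0))
\end{verbatim}
Scanning over $q$ with this routine and locating the maximum over $w$
of $\Im [-q^2\, {\cal A}/{\cal B}]$ reproduces the crossover curve of
Fig.~\ref{fig:wmaxq}.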
Figure \ref{fig:ImCyy} shows the imaginary part of the transverse
current-current correlation function, plotted in units of $(-\chi)$.
At zero momentum, $\Im C_{yy}$ is a linear function of
$w \equiv 3 \omega/(4\pi T)$ for
all $w$, as shown in the previous subsection. At large frequency,
the spectral function asymptotes to $\Im C_{yy}\sim(-\chi)w$,
regardless of the value of $q \equiv 3k/(4 \pi T)$.
The longitudinal correlators are directly related to the conserved
R-charge density, and so are more direct probes of hydrodynamic
behavior, and the hydrodynamic-to-collisionless crossover. Figure
\ref{fig:ImCtt} shows the imaginary part of the density-density
correlation function divided by $q^2$. At small momentum and
frequency, one clearly sees the diffusive peak, consistent with the
hydrodynamic expression in Eq.~(\ref{j0d})
\begin{equation}
\Im C_{tt}(\omega,k) = D_c\chi\frac{-\omega k^2}{\omega^2+(D_c
k^2)^2}~~~,~~~\mbox{$|\omega| \ll T$ and $k \ll T$.} \label{clow}
\end{equation}
At large frequency, the asymptotic form of the spectral function is
expected to be determined by the `collisionless' ground state
correlator. The latter was presented in Eq.~(\ref{j0n}), and here
has the form
\begin{equation}
\Im C_{tt}(\omega,k) = \frac{1}{\gfourD^2} \mbox{sgn}(\omega)
\frac{(-k^2)}{\sqrt{\omega^2 - k^2}}~~~,~~~|\omega|-k \gg T.
\label{chigh}
\end{equation}
Fig.~\ref{fig:ImCtt}, right, shows that this form is indeed well
obeyed. In fact, Eqs.~(\ref{clow}) and (\ref{chigh}) are exactly the
correlators expected across a hydrodynamic-to-collisionless
crossover in a generic system \cite{forster}: the prefactor of $k^2$
in Eq.~(\ref{chigh}) is required by charge conservation even at
large $\omega$, while the factor of $1/\sqrt{\omega^2 - k^2}$ is set
by the CFT current scaling dimension and Lorentz invariance.
\begin{figure}
\begin{picture}(0,0)(0,0)
\put(100,-10){$q$}
\end{picture}
\includegraphics[width=2.5in]{wmaxq}
\caption{The position of the peak of the spectral function in Fig.~
\ref{fig:ImCtt}.
The dashed line is $w=q$.}
\label{fig:wmaxq}
\end{figure}
In Fig.~\ref{fig:wmaxq}, we illustrate the crossover from the
hydrodynamic regime to the collisionless regime. For each value of
$q$ we find the value $w_{\rm max}$ where the function
$\Im C_{tt}(w, q)$ reaches its maximal value, and plot the resulting
function $w_{\rm max}(q)$. As we see in Fig.~\ref{fig:wmaxq}, at
small $q$ the location of the peak is $w_{\rm max}=q^2$, in
accordance with hydrodynamics. At large $q$ it slowly reaches the
asymptotic collisionless behavior $w_{\rm max}=q$.
What is unexpected is that the two prefactors in Eqs.~(\ref{clow})
and (\ref{chigh}), $D_c \chi$ and $\gfourD^{-2}$, happen to be equal
to each other, as we saw in Eq.~(\ref{eq:sigma}). We have also seen
that this surprising feature is a consequence of the general
functional relations in Eqs.~(\ref{eq:pipi}) and (\ref{mdual}). As
we have discussed, such functional relations are not expected to
apply to a typical $D=2{+}1$ CFT, but only those which enjoy special
self-duality symmetries. Here the self-duality of the gauge
theory on AdS$_4$ led to the identical form
of Eqs.~(\ref{eq:Ay}) and (\ref{eq:Atppp}), which was eventually shown
to lead to Eqs.~(\ref{eq:pipi}) and (\ref{mdual}). In Appendix~\ref
{app:d2},
we consider an R-symmetry gauge field action with a non-trivial dilaton
which spoils the holographic self-duality and the
frequency-independent conductivity. The field theory on a D2-brane
in type IIA string theory is an example with such a dilaton.
\section{Conclusions}
\label{sec:conc}
We considered finite temperature charge transport of quantum field
theories in
$D=2+1$ dimensions: the easy-plane $\mathbb{CP}^1$
model, and the CFT living on a stack of $N$ M2-branes in M-theory
(the $\mathcal{N}=8$, SU($N$) SYM theory).
In the former theory, Abelian particle-vortex
self-duality imposes a relationship (Eq.~(\ref{cp1dual})) between
different current correlators. In the latter theory, we found a
strikingly similar relationship (Eq.~(\ref{mdual}))
between longitudinal and transverse
components of the correlators of the SO(8) R-charge.
This relationship led to a frequency-independent conductivity
for the M2 worldvolume theory at zero wavevector, but hydrodynamic
behavior and the hydrodynamic-collisionless crossover did appear
at non-zero wavevectors. We also demonstrated that
for the D2-brane theory, our argument
for a frequency-independent conductivity fails because of a
nontrivial dilaton background.
We traced the origin of the SO(8) charge correlation constraint
of the SYM theory, and its frequency-independent conductivity,
to an electromagnetic self-duality of the holographic
theory on AdS$_4$. Thus, the generalization of
three-dimensional Abelian particle-vortex duality to non-Abelian
theories
becomes manifest only after a holographic extension to a four-
dimensional
theory. For Abelian theories, the AdS/CFT connection between particle-
vortex
duality in three dimensions and the SL(2,$Z$) invariance of four-
dimensional
Abelian gauge theories was explored earlier in \cite{witten,petkou1}.
Our results for the SU($N$) SYM theory were established at large $N$.
Does holographic self-duality,
and the relationship%
\footnote{Of course, the constant on the right-hand side of Eq.~(\ref
{mdual}) would
have finite $N$ corrections. The issue is whether the right-hand side
remains
independent of $\omega$ and $k$ for $T>0$ also at finite $N$.}
Eq.~(\ref{mdual}), hold also for finite $N$? The fact that the large
$N$ theory
has hydrodynamic behavior is evidence for
the ``generic'' nature of this limit. Furthermore, Eq.~(\ref{mdual})
has the same structure as Eq.~(\ref{cp1dual}), and the latter is
believed to be
an exact relationship, obtained without a large $N$ limit.
While these facts are encouraging, establishing self-duality
at finite $N$ requires looking at the full M-theory
on AdS$_4$. Its low energy limit is $\mathcal{N}=8$ supergravity
\cite{cremmer,dewit,duff,duffliu,freund}
(Section~\ref{sec:m} considered only the SO(8) gauge
fields of this theory), and its ``generalized E$_{7(7)}$ duality
invariance'' \cite{dewit} (which appears to include EM duality)
has remnants in M-theory \cite{hull}.
It would be very interesting
to find an Abelian field theory which obeyed a relationship
as simple as Eq.~(\ref{mdual}), found here for the SYM theory.
An unsuccessful attempt to find such a theory is
described in Appendix~\ref{app:cs}. The closest we could get
is Eq.~(\ref{cp1dual}), obeyed by the easy-plane
$\mathbb{CP}^1$ model \cite{mv} and its expected generalization to
the SQED-2 theory with $\mathcal{N}=4$
supersymmetry \cite{intrili,strassler1,strassler2}. A fundamental
feature
of Abelian particle-vortex duality is exchange of U(1)
`flavor' and `topological' currents, and we have not been able
to construct a theory in which these currents are equivalent to
each other (which would lead to a single $K$ in Eq.~(\ref{j0})).
However, non-Abelian theories can have additional symmetries which
rotate different U(1) currents into each other; this was important
for the simplicity of Eq.~(\ref{mdual}).
Finally, we would like to emphasize that the unexpected relation
between the self-energies found in this paper,
\begin{equation}
K^L(\omega,k)\, K^T(\omega,k) = {\rm const}\ ,
\label{eq:mm}
\end{equation}
holds beyond the ${\cal N}=8$ SYM theory.%
\footnote{
As described in Appendix \ref{app:g}, there is a whole class of
2+1 dimensional CFTs satisfying Eq.~(\ref{eq:mm}).
For large-$N$ field theories which are
dual to M-theory on $AdS_4 \times X$, where
$X$ is a seven dimensional Sasaki-Einstein manifold, with currents
normalized as in Appendix~\ref{app:g}, the value of
the constant in the right-hand side of Eq.~(\ref{eq:mm}) is
${N^3}/({2 \pi^{10}})\, \mbox{Vol}(X)^2$.
}
It applies to the CFTs whose electromagnetic response is described by the
Maxwell action (\ref{M2action}) in the 3+1 dimensional
asymptotically AdS space. Thus the relation (\ref{eq:mm}) should be
viewed as another example of universality that characterizes
finite-temperature response in the AdS/CFT correspondence. Previous
examples of such universality include the universal value of the
viscosity to entropy density ratio $\eta/s=1/4\pi$ \cite{KSS}, and a
possible universal value of the friction coefficient for a heavy
particle \cite{CH-R}. Unlike these other examples, the universal
relation (\ref{eq:mm}) applies only to 2+1 dimensional CFTs at
finite temperature. On the other hand, unlike these other examples,
the universal relation (\ref{eq:mm}) applies at arbitrary $\omega$
and $k$.
\acknowledgments We thank M.~Ernebjerg, J.~Liu, D.~Shih, M.~Strassler,
A.~Strominger,
C.~Vafa, and A.~Vishwanath for useful
discussions. This research was supported by the NSF under grants
PHY99-07949 (PK and CH) and DMR-0537077 (SS), and by the DOE under
grants
DE-FG02-96ER40956 (CH) and DE-FG02-00ER41132 (DTS).
C.H. thanks the organizers of the
String Phenomenology workshop at the KITP, UCSB.
P.K. thanks the organizers of the INT workshop
``From RHIC to LHC: achievements and opportunities''
at the University of Washington,
and the Harvard University Physics Department, where part
of this work was completed.
\section{Introduction}
The Skyrme model \cite{skyrme} is an effective low-energy action for QCD
\cite{witten},
where the primary ingredients are meson fields, whereas baryons appear
as solitonic excitations, and the baryon
number is identified with the topological charge.
\\
The original Skyrme Lagrangian has the following form
\begin{equation}
L=L_2 + L_4 + L_0,
\end{equation}
where
\begin{equation}
L_2=-\frac{f_{\pi}^2}{4} \; \mbox{Tr} \; (U^{\dagger} \partial_{\mu} U \;
U^{\dagger} \partial^{\mu} U )
\end{equation}
is the sigma model term, and
a quartic term, referred to as the Skyrme term, has to be added to
circumvent the standard Derrick argument for the non-existence of
static solutions,
\begin{equation}
L_4=-\frac{1}{32 e^2}\; \mbox{Tr} \; ([U^{\dagger} \partial_{\mu}
U,U^{\dagger} \partial_{\nu} U]^2 ).
\end{equation}
Here $U$ is a $2\times 2$ matrix-valued field with values in the group SU(2).
The last term, which is optional from the point of view of the Derrick
argument, is a potential
\begin{equation}
L_0= -\mu^2 V(U,U^{\dagger}),
\end{equation}
which explicitly breaks the chiral symmetry. Its particular form is
usually adjusted to a concrete physical
situation. The model has two constants, the pion decay constant
$f_{\pi}$ and the interaction parameter $e$. Additional
constants may appear from the potential.
\\
The modern point of view on the Skyrme model is to treat it as an
expansion in derivatives of the true non-perturbative low-energy
effective action of QCD, where higher terms in derivatives have
been neglected. However, as extended (solitonic) solutions have regions where
derivatives are not small, there is no reason for omitting such terms.
Therefore, one should take into account also
higher terms. In fact, many generalized Skyrme models have
been investigated \cite{modify marleau}, \cite{modify neto},
\cite{modify sk 2}, \cite{piette},
\begin{equation}
L=L_2 + L_4 + L_0+..., \label{skyrme full}
\end{equation}
where dots denote higher derivatives terms.
A simple and natural extension of the Skyrme model is the addition of
sextic terms, among which one is rather special. Namely, we will
consider the expression
\begin{equation}
L_6=\frac{\lambda^2}{24^2} \; \left( \mbox{Tr} \; (\epsilon^{\mu
\nu \rho \sigma} U^{\dagger} \partial_{\mu} U \; U^{\dagger}
\partial_{\nu} U \; U^{\dagger} \partial_{\rho} U) \right)^2. \label{6}
\end{equation}
In standard phenomenology, the addition of this term to the effective action
represents the inclusion of the interactions generated by the vector mesons
$\omega$. In fact, this term effectively appears if one considers a
massive vector field coupled to the chiral field via the baryon density
\cite{modify sk 1}. Further, this term is at most quadratic in time
derivatives (like the quartic Skyrme term) and allows for a standard time
dynamics and hamiltonian formulation. In addition, it leads to a significant
improvement in the Skyrme model phenomenology when applied to nucleons.
Indeed, as explained first in \cite{modify sk 2}, once the sextic
term is present, it becomes mainly responsible for stabilization,
and the quartic contribution then changes sign, as appropriate for
the scalar exchange it represents (solving an old puzzle). This
compensation with the quadratic term holds also for the moments of
inertia, when the rotation of all the mass is taken into account, as
it should be in the classical computation.
\\
In this letter we want to study the model restricted to the potential
and the sextic term, $L_{06} = L_6 + L_0$, because this submodel has some unique
properties. First of all, it has a huge amount of symmetry \cite{ab-dif}
and, therefore,
it is integrable in the sense of generalized integrability \cite{alvarez}
(its symmetries and integrability properties shall be discussed in detail
in a separate publication; its symmetries are also important for its rather
close relation to the liquid droplet model of nuclei, as we shall discuss
at length in
the last section). As a consequence, the model has infinitely many
exact solutions in all topological sectors, such that both energies and
profiles can be determined exactly. Finally, the model has a Bogomolny bound
which is saturated by all the exact solutions we construct below.
The existence of static solutions which saturate a Bogomolny bound is very
welcome for the description of nuclei, for the following reasons. Firstly,
the resulting soliton energies obey an exactly linear relation with the
baryon charges. Physical nuclei are well-known to obey this linear law with
a rather high precision. Secondly, binding energies of higher solitons are
zero, again as a consequence of their Bogomolny nature. This conforms rather
well with the binding energies of physical nuclei, which are usually quite
small (below the 1\% level). Thirdly, the forces between sufficiently
separated solitons are exactly zero. This result is a consequence of
another crucial feature of our solitons, namely their compact nature.
Again, this absence of interactions, although not exactly true, is a
rather reasonable approximation for physical nuclei, given the very
short range character of interactions between them.
\\
So we find a rather striking coincidence between some qualitative features
of nuclei, on the one hand, and properties of our {\em classical} soliton
solutions, on the other hand. One important question is, of course, whether
this coincidence can be maintained at the quantum level. A detailed
investigation of the quantization of the model is beyond the scope of this
letter, but we shall comment further on it in the discussion section.
In any case, the model seems to correspond to a rather non-trivial
``lowest order'' effective field theory approximation to nuclei which already
reproduces some of their features quite well.
We also want to remark that part of the pseudoscalar
meson dynamics is possibly taken into account already by the potential $L_0$,
which
breaks the chiral symmetry, as Goldstone boson condensation.
\\
All the unique properties of the model may be ultimately
traced back to the geometric properties of the proposed term $L_6$,
i.e., to the fact that it is
the square of the pullback of the volume form on the target space three-sphere
$S^3$ (we remind that as a manifold SU(2) $\simeq S^3$) or, equivalently,
the square of the topological (baryon) current.
We remark that models which are similar in some aspects, although with a
different target space geometry, have been
studied in \cite{Tchr 1}, \cite{Tchr 2}.
Further, the model studied in this letter, as well as its ``baby Skyrme''
version in 2+1 dimensions have already been introduced in \cite{Tchr 3}.
There,
the main aim was a study of more general properties of Skyrme models in
any dimension. Concretely,
the limiting behaviour of the full generalized Skyrme model for small
couplings of the quadratic and quartic terms $L_2$ and $L_4$ was studied
numerically. In addition, an exact solution for the simplest hedgehog
ansatz was constructed, both in 2 and in 3 dimensions. For the
three-dimensional solution, a rather complicated potential was chosen in
order to have exponentially localized solutions, whereas in this letter we
shall focus on the case of the simple standard Skyrme potential, which
naturally leads to compact solitons. Besides,
our main purpose is to make contact with the phenomenology of nuclei.
The 2+1 dimensional baby Skyrme version of the model has been further
investigated in
\cite{GP} and recently in \cite{restricted-bS}, with results which are
qualitatively similar to the ones we shall find in the sequel (e.g. compact
solitons, infinitely many symmetries, Bogomolny bounds).
\section{Exact solutions}
The lagrangian of the proposed restriction of the Skyrme model is
\begin{equation}
L_{06}=\frac{\lambda^2}{24^2 } \; \left( \mbox{Tr} \; ( \epsilon^{\mu \nu
\rho \sigma} U^{\dagger} \partial_{\mu} U \;
U^{\dagger} \partial_{\nu} U \;
U^{\dagger} \partial_{\rho} U) \right) ^2 - \mu^2 V(U,U^{\dagger}).
\end{equation}
We start from
the standard parametrization for $U$ by a real scalar field
$\xi$ and a three component unit vector $\vec{n}$ ($\vec \tau$ are the Pauli
matrices),
$$
U=e^{i \xi \vec{n} \cdot \vec{\tau}}.
$$
The vector field may be related to a complex scalar $u$ by the
stereographic projection
$$
\vec{n}=\frac{1}{1+|u|^2} \left( u+\bar{u}, -i ( u-\bar{u}),
|u|^2-1 \right)
$$
giving finally ($\tau_\pm = (1/2)(\tau_1 \pm i \tau_2) $)
$$ U^{\dagger} \partial_{\mu} U=
W^\dagger \left( -i\xi_{\mu} \tau_3+\frac{2\sin \xi}{1+|u|^2}
\left( e^{i\xi} u_{\mu} \tau_+-
e^{-i\xi} \bar{u}_{\mu} \tau_-\right) \right) W
$$
where the SU(2) matrix field $W$ is
$$
W= (1+u\bar u)^{- \frac{1}{2}} \left(
\begin{array}{cc}
1 & iu \\
i\bar u & 1
\end{array} \right)
$$
and obviously cancels in the lagrangian.
Using this parametrization we get ($u_\mu \equiv \partial_\mu u$, etc.)
\begin{equation}
L_{06}= -\frac{ \lambda^2 \sin^4 \xi}{(1+|u|^2)^4} \;\left( \epsilon^{\mu
\nu \rho \sigma} \xi_{\nu} u_{\rho} \bar{u}_{\sigma} \right)^2
-\mu^2 V(\xi)
\end{equation}
where we also assumed that the potential only depends on $\mbox{Tr} \, U$.
The Euler--Lagrange equations read ($V_\xi \equiv \partial_\xi V$)
$$ \frac{\lambda^2 \sin^2 \xi}{(1+|u|^2)^4} \partial_{\mu} ( \sin^2 \xi \;
H^{\mu}) - \mu^2 V_{\xi}=0,$$
$$ \partial_{\mu} \left( \frac{K^{\mu}}{(1+|u|^2)^2} \right)=0,$$
where
$$ H_{\mu} = \frac{\partial ( \epsilon^{\alpha \nu \rho \sigma} \xi_{\nu}
u_{\rho} \bar{u}_{\sigma})^2}{ \partial \xi^{\mu}}, \;\;\; K_{\mu} =
\frac{\partial ( \epsilon^{\alpha \nu \rho \sigma} \xi_{\nu} u_{\rho}
\bar{u}_{\sigma})^2}{\partial \bar{u}^{\mu}}.$$
These objects obey the useful formulas
$$
H_{\mu} u^{\mu}=H_{\mu} \bar{u}^{\mu}=0, \; K_{\mu}\xi^{\mu}=K_{\mu}
u^{\mu}=0,
\;\; H_{\mu} \xi^{\mu}=K_{\mu} \bar{u}^{\mu} = 2 ( \epsilon^{\alpha \nu \rho
\sigma} \xi_{\nu} u_{\rho} \bar{u}_{\sigma})^2.
$$
We are interested in static topologically non-trivial solutions. Thus $u$ must
cover the whole complex plane ($\vec{n}$ covers $S^2$ at least once)
and $\xi \in [0,\pi]$. The natural (hedgehog) ansatz is
$$ \xi = \xi (r), \;\;\; u(\theta, \phi) = g (\theta) e^{in \phi}.$$
Then, the field equation for $u$ reads
$$
\frac{1}{\sin \theta} \partial_{\theta} \left( \frac{ g^2g_\theta}{(1+g^2)^2
\sin \theta} \right) - \frac{gg_\theta^2}{(1+g^2)^2\sin^2 \theta}=0,
$$
and the solution with the right boundary condition is
$$
g(\theta) = \tan \frac{\theta}{2}.
$$
Observe that this solution holds for all values of $n$.
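One can verify this directly: with $g = \tan (\theta/2)$ one has
$g_{\theta} = (1+g^2)/2$ and $\sin \theta = 2g/(1+g^2)$, so that
$$
\frac{g^2 g_\theta}{(1+g^2)^2 \sin \theta} = \frac{g}{4}, \qquad
\frac{1}{\sin \theta}\, \partial_{\theta} \left( \frac{g}{4} \right)
= \frac{(1+g^2)^2}{16\, g} = \frac{g\, g_\theta^2}{(1+g^2)^2 \sin^2 \theta},
$$
and the two terms in the equation cancel.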
The equation for the real scalar field is
$$
\frac{n^2\lambda^2 \sin^2 \xi }{2r^2} \partial_r \left(\frac{\sin^2 \xi \;
\xi_r}{r^2} \right) - \mu^2 V_{\xi}=0.
$$
This equation can be simplified by introducing the new variable
$z=\frac{\sqrt{2}\mu r^3}{3 |n|\lambda}$,
\begin{equation} \label{xi-eq}
\sin^2 \xi \; \partial_z \left(\sin^2 \xi \; \xi_z\right) - V_{\xi}=0,
\end{equation}
and may be integrated to
\begin{equation}
\frac{1}{2} \sin^4 \xi \; \xi^2_z=V, \label{bps eq}
\end{equation}
where we chose vanishing integration constant to get finite energy solutions.
Now, we have to specify a concrete potential.
The most obvious choice is the standard Skyrme potential
\begin{equation}
V=\frac{1}{2}\mbox{Tr} (1-U) \;\; \rightarrow \;\; V(\xi)=1- \cos \xi.
\end{equation}
Thus,
$$
\sin^2 \xi \; \xi_z=\pm \sqrt{2(1-\cos \xi)}\;\; \Rightarrow \;\;
\int \frac{\sin^2 \xi}{\sin (\xi/2)}\, d\xi = \pm 2(z-z_0).
$$
The general solution reads
$$
\cos^3 \frac{\xi}{2} = \pm \frac{3}{4} (z-z_0).
$$
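Here we used the elementary integral
$$
\int \frac{\sin^2 \xi}{\sin (\xi/2)}\, d\xi =
\int 4 \sin\frac{\xi}{2}\cos^2\frac{\xi}{2}\, d\xi =
-\frac{8}{3}\cos^3 \frac{\xi}{2}\, ,
$$
with the integration constant absorbed into $z_0$.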
Imposing the boundary conditions for
topologically non-trivial solutions we get
\begin{equation}
\xi = \left\{
\begin{array}{lc}
2 \arccos \sqrt[3]{ \frac{3z}{4} } & z \in \left[0,\frac{4}{3} \right] \\
0 & z \geq \frac{4}{3}.
\end{array} \right.
\end{equation}
The corresponding energy is
\begin{equation}
E=\int d^3x \left( -\frac{\lambda^2 \sin^4 \xi}{(1+|u|^2)^4}
(\nabla_r \xi )^2 ( \nabla_{\theta} u \nabla_{\phi} \bar{u} - \nabla_{\phi} u
\nabla_{\theta} \bar{u})^2 +\mu^2 V \right).
\end{equation}
Inserting the solution for $u$ and (\ref{bps eq}) we find
\begin{eqnarray}
E &=& 4\pi \int r^2 dr \left( \frac{\lambda^2 n^2\sin^4 \xi}{4r^4} \xi^2_r
+\mu^2 V \right) \nonumber \\
&=& 4\pi \cdot 2\mu^2 \int r^2 dr V(\xi (r))= 4 \sqrt{2}\pi
\mu \lambda |n| \int dz V (\xi (z)) \nonumber \\
&=& 8 \sqrt{2}\pi \mu \lambda |n| \int_0^{4/3}
\left(1-\left( \frac{3}{4} \right)^{2/3} z^{2/3}\right)dz =
\frac{64\sqrt{2} \pi}{15} \mu \lambda |n| .
\end{eqnarray}
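In the last step we inserted $V(\xi(z)) = 1-\cos\xi = 2\sin^2 (\xi/2) =
2 (1-(3z/4)^{2/3})$ and evaluated the elementary integral
$$
\int_0^{4/3} \left( 1- \left( \frac{3}{4}\right)^{2/3} z^{2/3} \right) dz
= \frac{4}{3} - \frac{3}{5}\cdot\frac{4}{3} = \frac{8}{15}\, .
$$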
The solution is of
the compacton type, i.e., it has a finite support
(compact solutions of a similar
type in different versions of the
baby Skyrme models have been found in \cite{GP},
\cite{comp-bS}).
The function $\xi$ is continuous
but its first derivative is not. The jump of the derivative is, in fact,
infinite at the compacton boundary
$z=4/3$, as the left derivative at this point tends to minus infinity.
Nevertheless, the energy density and the topological charge density
(baryon number density) are continuous functions at the compacton boundary,
and the field equation (\ref{xi-eq}) is well-defined there, so the solution is
a strong solution. The reason is that $\xi_z$ always appears in the
combination $\sin^2 \xi \, \xi_z$, and this expression is finite (in fact, zero)
at the compacton boundary. We could make the discontinuity disappear
altogether by introducing a new variable $\tilde \xi$ instead of $\xi$
which satisfies
$
\tilde \xi_z = \sin^2 \xi \, \xi_z .
$
We prefer to work with $\xi$ just because this is the standard
variable in the Skyrme model.
\\
In order to extract the energy density it is useful to rewrite the energy with
the help of the rescaled radial coordinate
\begin{equation}
\tilde r = \left( \frac{\sqrt{2}\mu}{4 \lambda} \right)^\frac{1}{3} r =
\left(\frac{3|n|z}{4}\right)^\frac{1}{3}
\end{equation}
as
$$
E = 8 \sqrt{2} \mu \lambda \left( 4 \pi \int_0^{|n|^\frac{1}{3}} d\tilde r
\tilde r^2 (1- |n|^{-\frac{2}{3}}\tilde r^2) \right)
$$
such that the energy density per unit volume (with the unit of length set by
$\tilde r$) is
\begin{equation}
{\cal E}= 8 \sqrt{2} \mu \lambda (1- |n|^{-\frac{2}{3}} \tilde r^2 ) .
\end{equation}
In the same fashion we get for the topological charge (baryon number), see
e.g. chapter 1.4 of \cite{mak}
\begin{eqnarray} \label{Bcharge}
B &=& -\frac{1}{\pi^2} \int d^3 x \frac{\sin^2 \xi }{(1+|u|^2)^2}
i\epsilon^{mnl} \xi_m u_n \bar u_l \\
&=& \frac{2n}{\pi} \int dr \sin^2 \xi \, \xi_r =
\frac{4n}{\pi} \int_0^\frac{4}{3} dz
\left( 1-\left(\frac{3}{4}\right)^\frac{2}{3} z^\frac{2}{3}\right)^\frac{1}{2}
\nonumber \\
&=& \mbox{sign} (n) \frac{4}{\pi^2} \left(4\pi \int_0^{|n|^\frac{1}{3}}
d\tilde r \tilde
r^2 (1- |n|^{-\frac{2}{3}}\tilde r^2)^\frac{1}{2} \right) =n \nonumber
\end{eqnarray}
and for the topological charge density per unit volume
\begin{equation}
{\cal B} = \mbox{sign} (n) \frac{4}{\pi^2} (1- |n|^{-\frac{2}{3}}\tilde
r^2)^\frac{1}{2} .
\end{equation}
Both densities are, of course, zero outside the compacton radius $\tilde r
=|n|^\frac{1}{3}$.
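The $z$ integral in the second line of (\ref{Bcharge}) is again elementary:
substituting $w=(3z/4)^{1/3}$, i.e., $dz = 4 w^2 dw$, one finds
$$
\int_0^{4/3} dz \left( 1- \left( \frac{3}{4}\right)^{2/3} z^{2/3}
\right)^{1/2} = 4 \int_0^1 dw\, w^2 \sqrt{1-w^2} = \frac{\pi}{4}\, ,
$$
confirming $B=n$.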
We remark that the values of the densities at the center $\tilde r=0$ are
independent of the topological charge $B=n$, whereas the radii grow like
$n^\frac{1}{3}$. For $n=1$, we plot the two densities in Fig. (\ref{rys1}),
where we normalize both densities (i.e., multiply them by a constant) such
that their value at the center is one.
\begin{figure}[h!]
\includegraphics[angle=0,width=0.55 \textwidth]{sexticskyrme-energy.eps}
\includegraphics[angle=0,width=0.55 \textwidth]{sexticskyrme-charge.eps}
\caption{Normalized energy density (left figure) and topological charge
density (right figure) as a function of the
rescaled radius $\tilde r$, for topological charge $n=1$. For $|n|>1$, the
height of the densities remains the same, whereas their radius grows like
$|n|^\frac{1}{3}$. }\label{rys1}
\end{figure}
\\
We now want to compare the phenomenological parameters of our model
(masses and radii) to the corresponding values for physical nuclei.
One should keep in mind, of course, that the comparison is done at the purely classical level, and all quantum corrections are absent.
First, observe that the energy of the solitons is proportional to the
topological (baryon) charge
$$ E=E_0 |B|, $$
where $E_0=64\sqrt{2}\pi \mu\lambda / 15$. Such a linear dependence is a basic
feature in nuclear physics. Let us fix the energy scale by setting
$E_0 = 931.75 $ MeV. This is equivalent to the assumption that the mass
of the $B=4$ solution is equal to the mass of He$^4$, which is usually
assumed because the ground state of He$^4$ has zero spin and isospin.
Therefore, corrections to the mass from spin-isospin interactions
are absent. In table (\ref{table}) we compare the energies of the solitons
in our model with the experimental values.
We find that
the maximal deviation in our model is only about $0.7\%$.
For the numerical determination of soliton masses in current versions of the
Skyrme model we refer to \cite{massive skyrme} (the standard massive Skyrme
model) and to \cite{vec skyrme} (the vector Skyrme model, where a coupling of
the Skyrme field to vector mesons is used instead of the quartic Skyrme term
for the stabilisation of the Skyrmions). There, typically, the Skyrmions with
low baryon number are heavier by a few percent, whereas they reproduce the
linear growth of mass with baryon number for higher baryon number. In
\cite{kope} the Skyrmion masses have been determined with the help of the
rational map approximation \cite{rationalmap} for Skyrmions, with similar
results.
\\
Secondly, the sizes of the solitons can be easily computed and read
$$ R_B= R_0 \sqrt[3]{|B|}, \;\;\; R_0=\left( \frac{2\sqrt{2} \lambda }{\mu}
\right)^{\frac{1}{3}},$$
which again reproduces the well-known experimental relation.
The numerical value is fixed by assuming $R_0=1.25$ fm.
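As an aside, these two calibrations determine the coupling constants of the
model. From $E_0 = 64\sqrt{2}\pi\mu\lambda /15$ and $R_0^3 =
2\sqrt{2}\lambda/\mu$ we get
$$
\mu\lambda = \frac{15 E_0}{64 \sqrt{2}\, \pi} \simeq 49\ \mbox{MeV}\, , \qquad
\frac{\lambda}{\mu} = \frac{R_0^3}{2\sqrt{2}} \simeq 0.69\ \mbox{fm}^3\, ,
$$
i.e., $\lambda \simeq 5.8\ \mbox{MeV}^{1/2}\mbox{fm}^{3/2}$ and $\mu \simeq
8.4\ \mbox{MeV}^{1/2}\mbox{fm}^{-3/2}$; these rough numbers are quoted only
for orientation.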
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
B & $E_{06}$ & $E_{experiment}$ \\
\hline
1 & 931.75 & 939 \\
2 & 1863.5 & 1876 \\
3 & 2795.25 & 2809 \\
4 & 3727 & 3727 \\
6 & 5590.5 & 5601 \\
8 & 7454 & 7455 \\
10 & 9317.5 & 9327 \\
\hline
\end{tabular}
\caption{Energies of the soliton solutions in our model ($E_{06}$),
compared with
the experimental masses of physical nuclei.
All numbers are in MeV.} \label{table}
\end{center}
\end{table}
\section{Bogomolny bound}
Now, we show that our solitons are of the BPS type and saturate a Bogomolny
bound.
Let us mention here that a Bogomolny bound also exists for the original Skyrme
model $L_2 + L_4$, but it is easy to prove that non-trivial solutions of this
model cannot saturate the bound (see, e.g., \cite{mak}; this bound has been
found already by Skyrme himself \cite{skyrme}).
\\
The energy functional reads
$$
E=\int d^3 x \left( \frac{\lambda^2 \sin^4 \xi}{(1+|u|^2)^4}
(\epsilon^{mnl} i \xi_m u_n\bar{u}_l)^2 +\mu^2 V \right) =
$$
$$
= \int d^3 x \left( \frac{\lambda \sin^2 \xi}{(1+|u|^2)^2}
\epsilon^{mnl} i \xi_mu_n\bar{u}_l \pm \mu \sqrt{V} \right)^2 \mp \int d^3 x
\frac{2\mu \lambda \sin^2 \xi \sqrt{V}}{(1+|u|^2)^2} \epsilon^{mnl} i \xi_m
u_n \bar{u}_l
$$
$$
\geq \mp \int d^3 x \frac{2\mu \lambda \sin^2 \xi \sqrt{V}}{(1+|u|^2)^2}
\epsilon^{mnl} i \xi_m u_n \bar{u}_l =
$$
\begin{equation} \label{bobo}
\pm (2\lambda \mu \pi^2 )\left[ \frac{-i}{\pi^2}
\int d^3 x \frac{ \sin^2 \xi \sqrt{V}}{(1+|u|^2)^2}
\epsilon^{mnl} \xi_m u_n \bar{u}_l \right]
\equiv 2\lambda \mu \pi^2 C_1 |B|
\end{equation}
where $B$ is the baryon number (topological charge) and the sign has to
be chosen appropriately (upper sign for $B>0$). If we replace $\sqrt{V}$ by
one, then the result (i.e., the last equality in (\ref{bobo}))
follows immediately (and the constant $C_1=1$). Indeed, for $V=1$
the expression in brackets is just the topological charge (\ref{Bcharge}).
An equivalent derivation, which shall be
useful below, starts with the observation that this expression is just the
base space integral of the pullback of the volume form on the
target space $S^3$, normalized to one. Further, while the target space $S^3$
is covered once, the base space $S^3$ is covered $B$ times, which
implies the result.
The same argument continues to hold with the factor $\sqrt{V}$ present
(remember
that $V=V(\xi)$), up to a constant $C_1$. Indeed, we just have to introduce
a new target space coordinate $\bar\xi $ such that
\begin{equation} \label{xi-xi'}
\sin^2 \xi \sqrt{V(\xi)} \, d\xi = C_1 \sin^2 \bar \xi \, d \bar\xi .
\end{equation}
The constant $C_1$ and a second constant $C_2$, which is provided by
the integration of Eq. (\ref{xi-xi'}), are needed to impose the two conditions
$\bar\xi (\xi=0)=0$ and $\bar \xi (\xi =\pi) =\pi$, which have to hold if
$\bar\xi$ is a
good coordinate on the target space $S^3$. Obviously, $C_1$ depends on the
potential $V(\xi)$. Specifically, for the standard Skyrme potential
$V=1-\cos\xi$, $C_1 $ is
$$
C_1 = \frac{32\sqrt{2}}{15\pi}
$$
as may be checked easily by an elementary integration. We remark that an
analogous Bogomolny bound in one lower dimension has been derived
in \cite{deI-W}, \cite{restricted-bS} for the baby Skyrme model.
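For completeness, the elementary integration goes as follows. Integrating
Eq. (\ref{xi-xi'}) over the full range of $\xi$ and using the boundary
conditions above gives $\int_0^\pi \sin^2 \xi \sqrt{V}\, d\xi = C_1
\int_0^\pi \sin^2 \bar\xi\, d\bar\xi = C_1 \pi/2$. With $\sqrt{V} =
\sqrt{2}\sin (\xi/2)$ this yields
$$
C_1 = \frac{2}{\pi} \int_0^\pi \sin^2 \xi\, \sqrt{2} \sin\frac{\xi}{2}\,
d\xi = \frac{16\sqrt{2}}{\pi} \int_0^{\pi/2} \sin^3 s \cos^2 s\, ds =
\frac{16\sqrt{2}}{\pi}\cdot \frac{2}{15} = \frac{32\sqrt{2}}{15\pi}\, .
$$
Note that $2\lambda\mu\pi^2 C_1 = 64\sqrt{2}\pi\mu\lambda /15 = E_0$, so the
bound (\ref{bobo}) coincides exactly with the energy of our solutions.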
\\
The Bogomolny inequality is saturated by configurations obeying the first order
Bogomolny equation
$$ \frac{\lambda \sin^2 \xi}{(1+|u|^2)^2} \epsilon^{mnl} i \xi_mu_n\bar{u}_l
= \mp \mu \sqrt{V} ,$$
which, in the case of our ansatz, reduces to the square root of equation
(\ref{bps eq}). The saturation of the energy-charge inequality by our
solutions proves their stability. It is not possible to find configurations
with lesser energy in a sector with a fixed value of the
baryon charge.
\section{Discussion}
In this letter we proposed an integrable limit of the full Skyrme model which
consists of two terms: the square of the pullback of the target space volume
(topological density) and a non-derivative part, i.e., a potential.
Both terms are needed to circumvent the Derrick argument.
Then we explicitly solved the static model for a specific choice of the
potential (the standard Skyrme potential). The resulting solitons satisfy, in
fact, a Bogomolny equation.
These exact Bogomolny solutions provide a linear relation between
soliton energy (=nuclear mass) and topological charge (=baryon number $B$),
reproducing the experimental nuclear masses with a high precision.
Besides, these solitons have the remarkable property of being compact, which
allows one to define a strict value of the soliton size (=nuclear radius).
The resulting radii $R$, too,
follow the standard experimental relation $R\sim |B|^{1/3}$
with a high precision.
\\
These findings lead to the question of the nature and quality of the
approximation which our model provides for the properties of physical nuclei.
Obviously, the model as it stands cannot reproduce all features of nuclei,
even qualitatively,
because some essential ingredients are still missing.
First of all, the binding energy of higher nuclei is zero due to the Bogomolny
nature of the solutions. Although not entirely correct for physical nuclei,
this is, however, not such a bad approximation, because the nuclear binding
energies are known to be rather small. Their smallness is, in fact, one of the
motivations for the search for theories which saturate a Bogomolny bound.
Secondly, there are
no pionic excitations, because the corresponding term in the Lagrangian is
absent. This is related to the complete absence of forces between separated,
non-overlapping solitons. The absence of forces is a direct consequence of the
compact nature of these solitons, because
several non-overlapping solitons still represent an exact solution of the
field equations.
Physical nuclei are not strictly
non-interacting, but given the very short range character of forces between
nuclei, the absence of interactions in the model may, in fact,
be welcome from a phenomenological
point of view, within a certain approximation. Further, for physical nuclei a
finite radius may be defined with good accuracy, so the compact nature of
the solitons may be a virtue also from this point of view.
\\
For the energy density we find that it is of the core type
(i.e., larger in the center and decreasing towards the boundary), see Fig. 1.
The baryon density profile is again of the core type, but flatter near the
center, and with a smaller and more pronounced surface (=region where the
density decreases significantly).
For physical nuclei the density profile is quite flat (almost constant) and for
some nuclei even with a shallow valley in the center, so here the
phenomenological coincidence is reasonable but not perfect. Let us also
mention that the independence of the profile heights of the baryon number
conforms well with the known properties of nuclei.
\\
Our results for the profiles,
however, have to
be taken with some care. First of all, they depend on the form of the
potential term, in contrast to the linear mass-charge relation (which holds
for all potentials) or the compact nature of the solitons
(which holds for a wide class of
potentials). The second argument is related to the huge amount of symmetry of
the model. Indeed, for the energy functional for static field configurations,
the volume-preserving diffeomorphisms on the
three-dimensional base space are a subset of these symmetries. In
physical terms, all deformations of solitons
which correspond to these volume-preserving
diffeomorphisms may be performed without any cost in energy.
But these deformations are exactly
the allowed deformations for an ideal, incompressible droplet of liquid where
surface contributions to the energy are neglected.
These symmetries are not symmetries of a physical nucleus. A physical nucleus
has a definite shape, and deformations which change this shape cost energy.
Nevertheless, deformations which respect the local volume conservation (i.e.,
deformations of an ideal incompressible liquid) cost much less energy
than volume-changing deformations, as an immediate consequence
of the liquid droplet model of nuclear matter.
\\
This last observation also further explains the nature of the approximation our
model provides for physical nuclei. It reproduces some of the classical
features of the
nuclear liquid droplet model at least on a qualitative level, and the huge
amount of symmetries of the model is crucial for this fact.
Its soliton energies, e.g., correspond to the bulk (volume) contribution of
the liquid droplet model, with the additional feature that the energies are
quantized in terms of a topological charge.
\\
In other words, the model provides,
besides a conceptual understanding with exact solutions,
a new starting point or ``zero order''
approximation which is different from other approximations. It already covers
some nuclear droplet properties of nuclear matter, and is topological in
nature. For a more quantitative and phenomenological application to nuclei,
obviously both the inclusion of additional terms and the quantization of some
degrees of freedom are necessary.
\\
So let us briefly discuss the question of possible generalizations of the
model. A first,
simple generalization consists in the choice of different potentials. The
resulting solitons continue to saturate a Bogomolny bound, therefore the
linear relation $E\sim |B|$ between energy and baryon number continues to
hold. The energy and baryon charge densities for a spherically symmetric
ansatz (hedgehog), and even the compact or
non-compact nature of the solitons, however, will depend
on the specific form of the potential.
\\
A further generalization consists in including additional terms in
the Lagrangian (like the terms $L_2$ and $L_4$ of the standard Skyrme model)
which we have neglected so far.
From the effective field theory point of view
there is no reason not to include them.
If we omit, e.g., the kinetic term for the
chiral fields, then
there are no obvious pseudo-scalar degrees of freedom ($\eta, \vec{\pi}$).
These additional terms break the huge symmetry of the original model, such
that the solitons now have fixed shapes. In order to describe nuclei, these
shapes should be at least approximately spherically symmetric. A detailed
investigation of this issue is beyond the scope of the present letter, but
let us mention that at least under simple volume-preserving deformations from
a spherical to an ellipsoidal shape both the $E_2$ term and the $E_4$ term
energetically prefer the spherical shape.
Further, the reasonable qualitative success of the restricted model might
indicate that the additional terms should be small in some sense (e.g., their
contribution to the total energy should not be too big). This opens the
possibility
of an approximate treatment, where the solitons of the restricted model
$L_{06}$ provide the solutions to ``zeroth order'' (with all the topology and
reasonable energies already present), whereas the additional terms provide
corrections, which may be adapted to the experimental energies and
shapes of nuclei.
\\
Further, a more realistic treatment certainly requires the investigation of
the issue of quantization. We emphasize again that the rather good
phenomenological properties of
the model up to now are based exclusively on the classical solutions, and it
is a different question whether quantum corrections are sufficiently small or
well-behaved such that this success carries over to the quantized model. A
first step in this direction consists in applying the rigid rotor quantization
to the (iso-) rotational degrees of freedom,
as has been done already for the
standard Skyrme theory \cite{skyrme-quant-wit}, for some recent applications
to the spectroscopy of nuclei see e.g. \cite{skyrme-quant-sut}.
Some first calculations related to this rigid rotor quantization have
been done already, with encouraging results. A second issue is, of course,
the collective coordinate quantization of the (infinitely many) remaining
symmetries. This point certainly requires further study. A pragmatic
approach could assume that a more realistic application to nuclei requires,
in any case, the inclusion of more interactions (even if they are in some
sense small), breaking thereby the huge symmetry explicitly. Nevertheless,
the quantization of the volume-preserving diffeomorphisms may be of some
independent interest, although the solution of this problem might be
difficult.
Finally, the semi-classical quantization of the remaining degrees of freedom,
which are not symmetries, probably just amounts to a renormalization of the
coupling constants in the effective field theory. These are usually taken
into account implicitly by fitting the coupling constants to experimentally
measured quantities.
\\
In any case, we think that we have identified and solved
an important submodel in the
space of Skyrme-type effective field theories, which is singled out both by its
capacity to reproduce qualitative properties of the liquid droplet
approximation of nuclei, at least at the classical level, and by its unique
mathematical structure.
The model directly relates the nuclear mass to the topological charge, and it
naturally provides both a finite size for the nuclei and the liquid droplet
behaviour, which probably is not easy to get from an effective field
theory. So our model solves a conceptual problem by explicitly deriving said
properties from a (simple and solvable) effective field theory.
Last but not least, our exact solutions might provide a calibration for the
demanding
numerical computations in physical applications of more generalized Skyrme
models.
\\
Given this success, it is appropriate to discuss the circumstances
which make the model relevant.
First of all, from a fundamental QCD point
of view, there is no reason to neglect the sextic term, just like there is no
reason to ignore the quadratic and quartic terms $L_2$ and $L_4$.
So the good properties of the $L_{06}$ model seem to indicate that in certain
circumstances the sextic term could be more important than the terms $L_2$ and
$L_4$. The quadratic term is kinetic in nature, whereas the
quartic term provides, as a leading behaviour, two-body interactions. On the
other hand, the sextic term is essentially topological in nature, being the
square of the topological current (baryon current). So in circumstances where
our model is successful this seems to indicate that a {\em collective}
(topological)
contribution to the nucleus is more important than kinetic or two-body
interaction contributions. This behaviour is, in fact, not so surprising for a
system at strong coupling (or for a strongly non-linear system).
A detailed study of the generalizations mentioned above, or of the more
conceptual considerations of this paragraph, is beyond the scope of
this letter and will be presented in future publications.
\\
Finally, let us briefly mention a recent paper \cite{inf-vec}, which appeared
after completion of this letter.
There, a generalized Skyrme
model saturating a Bogomolny bound is derived along completely different lines.
The model of that paper consists of a Skyrme field coupled to an infinite
tower of vector mesons, where these vector mesons may be interpreted as the
expansion coefficients in a basis of eigenfunctions along a fourth spatial
direction. Simple Yang--Mills theory in four Euclidean dimensions is the
master theory which gives rise to the generalized Skyrme model
via the expansion into the eigenfunctions along the fourth direction, and the
Bogomolny equation for the latter is a simple consequence of the self-duality
equations for instantons in the former theory. If only a finite number of
vector mesons is kept, the topological bound is no longer saturated, but
already for just the first vector meson, the energies are quite close to their
topological bounds. This latter observation might be in some sense related to
the results for our model, because integrating out the vector meson produces
precisely the sextic Skyrme term in lowest order. One wonders whether it is
possible to integrate out all the vector mesons, which
should lead directly to a topological (Bogomolny) version of the Skyrme
model.
\section*{Acknowledgements}
C.A. and J.S.-G. thank MCyT (Spain), FEDER (FPA2005-01963) and
Xunta de Galicia (grant INCITE09.296.035PR and
Conselleria de Educacion) for financial support.
A.W. acknowledges support from the
Ministry of Science and Higher Education of Poland grant N N202
126735 (2008-2010). Further, A.W. thanks Prof M.A. Nowak for an interesting
discussion.
\section{Introduction}
Many of the ideas presented in this paper have also appeared
previously in \cite{us}. The intent of this article is to assemble
what we view as the important points in a short and coherent
summary, and to add some results concerning a relation between
Lee-Yang zeroes and Stokes phenomena.
Quantum field theories may be defined either by a path integral or
by a set of functional differential equations which follow from the
Schwinger action principle \cite{Schwinger},
\begin{equation}
\delta \ip{t_1}{t_2} = \ipop{t_1}{\delta S}{t_2} \; .
\end{equation}
For the sake of illustration, consider a zero dimensional ``quantum
field theory'' defined by the action $S(\phi)$. The generating
functional for correlation functions of $\phi$ is
\begin{equation}
\label{genf}
Z(J) = \int_{-\infty}^{+\infty} d\phi\, \exp\bigl(-S(\phi) + J\, \phi\bigr) \; ;
\end{equation}
where it is assumed that the integral is convergent. This is a
solution of the action principle equations
\begin{align}
\label{SD1}
&\bigl(S'(\partial_J) - J\bigr)\, Z(J) = 0\\
\label{AP1}
&\biggl(\partial_{g_i} + \frac{\partial S(\partial_J)}{\partial
g_i}\biggr)\, Z(g,J) = 0\; ;
\end{align}
where $g_i$ are the parameters of the theory. For the specific action
\begin{equation}
\label{phifour}
S(\phi) = \frac{1}{2}\, \mu\, \phi^2 + \frac{g}{4}\, \phi^4 \; ;
\end{equation}
equations \eqref{SD1} and \eqref{AP1} become
\begin{align}
\label{SD}
&(g\, \partial_J^3 + \mu\, \partial_J - J)\, Z(J) = 0 \\
\label{AP}
&(\partial_g + \frac{1}{4}\, \partial_J^4)\, Z(J) = (\partial_\mu +
\frac{1}{2}\, \partial_J^2)\, Z(J) = 0 \; .
\end{align}
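For completeness we recall how \eqref{SD1} and \eqref{AP1} arise from
\eqref{genf}: the vanishing of the integral of a total derivative,
$$
0 = \int_{-\infty}^{+\infty} d\phi\, \frac{d}{d\phi}\,
e^{-S(\phi) + J\,\phi} = \int_{-\infty}^{+\infty} d\phi\,
\bigl(J - S'(\phi)\bigr)\, e^{-S(\phi) + J\,\phi}\, ,
$$
gives \eqref{SD1} upon replacing $\phi \rightarrow \partial_J$ under the
integral, while differentiation of \eqref{genf} with respect to the
couplings $g_i$ gives \eqref{AP1}.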
Although all these equations are included in the
Schwinger action principle, it will be convenient to refer to
equations involving variations of the fields (e.g., \eqref{SD1}) as
Schwinger-Dyson equations, and those arising from variation of
parameters (e.g., \eqref{AP1}) as action principle equations.
In general the equations \eqref{SD1} and \eqref{AP1} have a
multi-parameter class of solutions. For the action \eqref{phifour}, the
corresponding equations \eqref{SD} and \eqref{AP} have a three
parameter class of solutions, which includes \eqref{genf}. The
Schwinger-Dyson equations are solved by
\begin{equation}
Z(J) = \sum_I c_I(g,\mu)\, \int_{\Gamma_I} d\phi\, \exp(-S(\phi) + J\, \phi)
\; ;
\end{equation}
where $\Gamma_I$ are inequivalent integration paths in the complex
$\phi$ plane over which the integral converges and $c_I$ are
arbitrary functions of the coupling constants $g$ and $\mu$. The
number of such contours matches the order of the differential
equation \eqref{SD1}. Figure 1 shows the domains of convergence,
$\cos(4\arg(\phi))>0$, and a basis set of contours for real positive
$g$.
\begin{center}
\includegraphics{ThreeSolns1-v2.eps}
\bigskip
\noindent Figure 1.
\end{center}
Roughly speaking, the action principle requires the coefficients
$c_I$ to be independent of the parameters $\mu$ and $g$. This
statement is imprecise since, if $g$ is is taken to be complex and
the argument of $g$ is varied sufficiently, the contours of
integration $\Gamma_i$ must be changed to maintain convergence. For
a general polynomial action, this statement holds for the ``top
coupling'' associated with the highest power of $\phi$ in the
action. For a given top coupling $g$, each contour $\Gamma_I$
belongs to an equivalence class of contours for which integration
gives the same result. The equivalence classes (see figure 2)
consist of contours for which $|\phi|\rightarrow\infty$ within the
same domains of convergence; the action is analytic in $\phi$ except
at infinity. There is always a choice of contour $\Gamma_I$ within
an equivalence class which can be held fixed while making
infinitesimal variations of the top coupling. The action principle
requires that the coefficients associated with these contours do not
vary as one makes infinitesimal changes in the couplings.
\begin{center}
\includegraphics{EquivalenceClasses2-v2.eps}
\bigskip
\noindent Figure 2. For $\arg(g_4) = 0$, (a) the contours $B$ and
$B^\prime$ are equivalent. However for $\arg(g_4) = 2\, \pi/3$, (b) the
integral over $B^\prime$ remains convergent while the integral over $B$
diverges.
\end{center}
To dispel doubt that the ``exotic'' solutions with complex
integration contours have physical relevance, consider the effective
action for the theory defined by \eqref{phifour}, with real $\mu \le 0$ and real
$g > 0$. The expectation value for $\phi$ is a solution of the equation
\begin{equation}
J = \frac{d\Gamma}{d\phi} \; ;
\end{equation}
where $\Gamma$ is the one particle irreducible effective potential. The tree
level effective potential (in this case the action) has
extrema at $\phi = 0,\; \pm \sqrt{-\mu/g}$. The minimum $\langle\phi\rangle =
\sqrt{-\mu/g} + \dotsb$ corresponds to a solution of the
Schwinger-Dyson equations with an integral representation involving
a sum of the contours $A$ and $C$ drawn in figure 1. These integrals
have contributions only from the saddle point at $\phi= \sqrt{-\mu/g}$. Hence
symmetry breaking vacua, whose existence is normally attributed to a
thermodynamic limit, exist even in zero dimensions when the full set of
solutions of the Schwinger-Dyson and action principle equations are considered.
Note that no symmetry breaking term has been added to the action; the symmetry
is broken by the choice of integration path\footnote{The standard way to
non-perturbatively describe symmetry breaking vacua is to introduce
a small symmetry breaking term which is taken to zero only after
taking an infinite volume limit.}.
Note that the solutions associated with the contours $A$ and $C$ in
figure 1 have complex parts which do not appear at any order in a
perturbative expansion about the saddle point. However, due to the
linearity of the Schwinger-Dyson equations, one can sum the contours
to get a solution for which the non-perturbative contribution is
real. The reader might be concerned that the ``exotic'' solutions
which are not integrals over real $\phi$ do not satisfy the axioms
of Euclidean quantum field theory. Although one can easily arrange
for the Green's functions to be real, one might still worry that even
Green's functions are not manifestly positive. In fact one should
postpone these questions for the higher dimensional case as
perversities of the exotic solutions could vanish in the
thermodynamic/continuum limit. Indeed, we expect this to be the
case, as symmetry breaking vacua and theta vacua are examples of
exotic solutions.
In the subsequent section, we argue that the appearance of phase
boundaries in quantum field theories is intimately related to a
collapse of the solution set in the thermodynamic limit. This
proposal can be made concrete in a zero dimensional analogue, for
which we demonstrate a correspondence between the collapse of the
solution set as a top coupling is set to zero and the accumulation
of Lee-Yang zeros, leading to a non-analyticity in a coupling
constant. In this context, the limit of a vanishing top coupling is
analogous to the thermodynamic limit. The complementary descriptions
of phase boundaries in terms of the accumulation of Lee-Yang zeroes
and the collapse of the solution set share a common origin in Stokes
phenomena.
In section 3, we prove the equivalence of Borel resummation of the
perturbative expansion about saddle points and exotic solutions of
the Schwinger-Dyson equations, for various singularity avoiding
contours in the Borel plane. Finally, in section 4, we argue that
the action principle may emerge from the Schwinger-Dyson equations
under suitable conditions, due to the collapse of the solution set
in the thermodynamic limit, rather than being an independent set of
equations.
\section{Collapse of the Solution Set and the Accumulation of Lee-Yang Zeros}
When complex values of the parameters of a quantum field theory are
considered, phase boundaries which appear in the thermodynamic limit
can be understood in terms of the accumulation of zeroes of the
partition function which pinch the real axis, known as Lee-Yang
zeroes \cite{LeeYang}. As we shall explicitly demonstrate below, the
accumulation of Lee-Yang zeroes is not confined to the thermodynamic
limit. In zero dimensional polynomial theories, Lee-Yang zeroes
also accumulate in the limit that the top coupling $g_T$ goes to
zero. In this limit, a non-analyticity in the coupling $g_{T-1}$
appears. In this context, we will propose an alternative (and
complementary) description of the appearance of phase boundaries in
terms of the collapse of the solution set of the Schwinger-Dyson and
action principle equations.
Consider the zero dimensional action $S= \sum_{l=1}^{T} g_l\, \phi^l$.
There is a branch point at $g_T = 0$, due to the necessity of rotating
the contours of integration in the integral representation, in order
to maintain convergence as the argument of $g_T$ is varied. It is
always possible to find a contour in an equivalence class which can
be held fixed under infinitesimal variations of the top coupling
(see figure 2), so as to satisfy the action principle. However, large
variations in the argument of the top coupling require changes in
the contour. In particular, under a rotation by $2\, \pi\, T$ the
solution transits among $T$ Riemann sheets. The solutions are
analytic in the couplings $g_{l}$ for $l < T$, except at infinity,
since these couplings do not affect the domains of convergence. In
the limit $g_T \rightarrow 0$, the solution develops a branch point
in the new top coupling, $g_{T-1}$. The limit $g_T \rightarrow 0$ is
analogous to a thermodynamic limit. For the case $T = 3$ we will
explicitly show that this limit is accompanied by the accumulation
of Lee-Yang zeroes in the complex $g_{T-1}$ plane.
The appearance of this branch point can be understood in terms of
the collapse of the solution set of the Schwinger-Dyson and action
principle equations. In the limit $g_T \rightarrow 0$, with fixed
$\arg(g_T)$ solutions of the Schwinger-Dyson and action principle
equations either diverge, vanish or have a finite limit. It is easy
to see which by considering the overlap of the domains of
convergence in the complex $\phi$ plane for the case in which $g_T$
is the top coupling or for which $g_{T-1}$ is the top coupling
($g_T=0$). If a convergent contour for $g_T \ne 0$ is equivalent to
one which lies within a single wedge of convergence for the case
$g_T=0$, then the integral will vanish in the $g_T\rightarrow 0$
limit with fixed $\arg(g_T)$. On the other hand if a convergent
contour for $g_T \ne 0$ is not equivalent to any contour lying
within the wedges of convergence for $g_T = 0$, then the
$g_T\rightarrow 0$ limit is divergent. A finite $g_T\rightarrow 0$
limit exists only if an equivalent contour lies within two different
wedges of convergence for the $g_T=0$ case. Figure 3 illustrates the
various possibilities for the case $T=4$.
\begin{center}
\includegraphics{Asymptotics2-v2.eps}
\bigskip
\noindent Figure 3. For $\arg(g_4)=0$ and $\arg(g_3)=3\, \pi/4$, the integral
over the contour $\Gamma_A$ or the equivalent contour $\Gamma_A^\prime$
vanishes in the $g_4\rightarrow 0$ limit, while the integral over $\Gamma_B$
is finite. For $\arg(g_4)=0$ and $\arg(g_3)=\pi$ the integral over $\Gamma_C$
diverges.
\end{center}
Suppose that the argument of $g_T$ is kept fixed as $g_T \rightarrow
0$. In this limit the behavior of the partition function, defined
by a particular contour of integration in the complex $\phi$ plane,
will change discontinuously as the argument of $g_{T-1}$ is varied
across certain critical values. For example the $g_T\rightarrow 0$
limit may go from finite to divergent upon crossing a Stokes line.
However it is possible to keep this limit finite by considering
variations in the contour of integration which violate the action
principle by terms which vanish in the $g_T \rightarrow 0$ limit.
For $g_T=0$, these variations are equivalent to analytic
continuation in $g_{T-1}$.
To illustrate how this works, consider the case $T=4$. As one varies
the argument of $g_3$ keeping that of $g_4$ fixed, a generic
solution of the Schwinger-Dyson and action principle equations will
become alternately divergent, finite, or vanishing in the
$g_4\rightarrow 0$ limit. One can keep the limit finite by
adding contours such as $\Gamma_A$ in figure 3 when the argument of
$g_3$ enters a wedge in which the contribution from this contour
vanishes as $g_4 \rightarrow 0$.
This process is illustrated in figure 4. The domains of convergence
for $g_4 \ne 0$ are indicated in light gray, while the domains of
convergence for $g_4=0, g_3\ne 0$ are indicated in dark gray.
Starting with a solution of the Schwinger-Dyson and action principle
equations corresponding to the contour $A$, $\arg(g_3)$ is varied from
$\pi$ to $\pi/2$ keeping $\arg(g_4) =0$. Initially the solution is
finite as $g_4\rightarrow 0$, since the contour $A$ lies
asymptotically within two domains of convergence for $g_4 = 0$. At
$\arg(g_3) = 3\, \pi/4$ the contour is modified to $C \equiv A+B$. Note
that the integration over $B$ vanishes in the $g_4 \rightarrow 0$
limit for a neighborhood of $\arg(g_3) = 3\,\pi/4$, since $B$ is
equivalent to a contour lying within a single domain of convergence
when $g_4=0$. One can continue varying $\arg(g_3)$ to $\pi/2$ without
a change in the asymptotic behavior as $g_4\rightarrow 0$; the
partition function remains finite in this limit. On the other hand,
had the contour been fixed as $A$, the integral would be divergent
as $g_4\rightarrow 0$ for $\arg(g_3)$ on the other side of the Stokes
line $\arg(g_3) = 5\,\pi/8$. For $\arg(g_3)=\pi/2$ and $g_4=0$, the
contour $A$ does not lie within the domains of convergence
asymptotically, unlike the contour $C$.
\begin{center}
\includegraphics{StokesSequence1-v2.eps}
\bigskip
\noindent Figure 4. As the argument of $g_3$ is varied, the contour is changed
to keep the integral finite in the $g_4\rightarrow 0$ limit.
\end{center}
Considering larger variations of $\arg(g_3)$ one can repeat this process to give
a solution of the Schwinger-Dyson equations which is finite and satisfies the
action principle in the $g_4 \rightarrow 0$ limit. For $g_4 = 0$ this process
corresponds to analytic continuation in $g_{3}$. The changes in contour
necessary to maintain a finite $g_4\rightarrow 0$ limit (and avoid Stokes
phenomena) give rise to the third order branch point at $g_3 = 0$.
We thus arrive at the picture that non-analyticity in a coupling
constant arises due to the collapse of the solution set of the
Schwinger-Dyson and action principle equations. In the example we
have given, non-analyticity in $g_{T-1}$ arises as the top coupling
$g_T \rightarrow 0$. In this limit, some solutions are finite,
while others diverge or vanish. Because the equations are linear
and certain solutions vanish in this limit, various classes of
solutions with inequivalent integral representations also coalesce
as $g_T\rightarrow 0$. This permits larger variations of the contour
which satisfy the action principle. At the same time larger
variations of the contour may become necessary to obtain a finite
partition function as coupling constants are varied. While we have
explicitly demonstrated this mechanism for the $g_T\rightarrow 0$
limit of a zero dimensional polynomial theory, we propose that the
same phenomena hold true for the thermodynamic limit,
$N\rightarrow\infty$ where $N$ is the number of degrees of freedom,
in a multi-dimensional field theory. In other words, non-analyticity
in a parameter of a quantum field theory should be related to the
collapse of the solution set of the Schwinger-Dyson and action
principle equations in the $N\rightarrow\infty$ limit.
The analogy we have proposed between a non-analyticity arising in
the $g_T\rightarrow 0$ limit of a zero dimensional theory and
non-analyticities arising in the thermodynamic limit is strengthened
by noting that the $g_T\rightarrow 0$ limit is accompanied by the
accumulation of Lee-Yang zeroes along Stokes lines in the complex
$g_{T-1}$ plane. One can explicitly see how this occurs for $T=3$.
Consider the partition function
\begin{equation}
\label{aint}
Z = \int_C d\phi\, e^{-(\frac{g}{3}\, \phi^3 + \frac{\mu}{2}\, \phi^2)} \, ;
\end{equation}
for positive real $g$ and a contour $C$ which goes to infinity with
$\arg(\phi) = \pm 2\, \pi/3$, as in figure 5. The integral \eqref{aint}
can be evaluated in terms of an Airy function;
\begin{equation}
\label{airy}
Z = 2\, \pi\, i\, e^{-\frac{1}{12}\, \frac{\mu^3}{g^2}}\, g^{-1/3}\,
Ai(\frac{\mu^2}{4\, g^{4/3}}) \; .
\end{equation}
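To verify \eqref{airy}, shift $\phi = \psi - \mu/(2g)$ to complete the cube,
$$
\frac{g}{3}\, \phi^3 + \frac{\mu}{2}\, \phi^2 = \frac{g}{3}\, \psi^3 -
\frac{\mu^2}{4g}\, \psi + \frac{\mu^3}{12 g^2}\, ,
$$
and then rescale $\psi = -g^{-1/3} t$, which maps the integral onto the
standard contour representation $Ai(z) = \frac{1}{2\pi i}\int dt\,
e^{t^3/3 - z t}$ with argument $z = \mu^2/(4 g^{4/3})$.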
\begin{center}
\includegraphics{Airy3-v2.eps}
\bigskip
\noindent Figure 5.
\end{center}
The zeroes of $Ai(z)$ lie along the negative real $z$ axis. Since the argument
of the Airy function in \eqref{airy} is $z= \mu^2/4\, g^{4/3}$, zeroes of the
partition function accumulate on the imaginary axis in the complex $\mu$ plane
as $g\rightarrow 0$, pinching the real axis at $\mu=0$. In fact, $\arg(\mu) =
\pm \pi/2$ are Stokes lines. The $g\rightarrow 0$ behavior of $Z$ in the
neighborhood of $\arg(\mu) = -\pi/2$ can be seen by inspecting figure 6, in which
the domains of convergence for $g\ne 0$ and $g = 0$ are indicated by the lighter
and darker shaded regions respectively. For $\arg(\mu) = -\pi/2 + \epsilon$, $Z$
diverges in the $g\rightarrow 0$ limit, with the leading term in a saddle point
expansion given by
\begin{equation}
\label{saddle1}
Z \sim \sqrt{\frac{2\pi}{\mu}}\exp(-\frac{1}{6}\, \frac{\mu^3}{g^2}) \; .
\end{equation}
On the other side of the Stokes line, $\arg(\mu) = -\pi/2 - \epsilon$,
$Z$ converges in the $g\rightarrow 0$ limit, with the leading term in a
saddle point expansion given by
\begin{equation}
Z \sim \sqrt{\frac{2\pi}{\mu}}\label{saddle2}
\end{equation}
\begin{center}
\includegraphics{AiryStokes1-v2.eps}
\bigskip
\noindent Figure 6. On either side of the Stokes line at
$\arg(\mu)= -\pi/2$, the integral is either finite (c) or divergent
(a) in the $g\rightarrow 0$ limit. Zeroes of the partition function
accumulate as $g\rightarrow 0$ for values of $\mu$ on the Stokes
line (b).
\end{center}
On the Stokes line, $\arg(\mu) =-\pi/2$, the two saddle
point expansions \eqref{saddle1} and \eqref{saddle2} become
comparable in magnitude as $g\rightarrow 0$, since the real part of
the exponential in \eqref{saddle1} vanishes. The integral over the
contour in figure 5 is equivalent to the sum of the integrals over
two constant phase (steepest descent) contours, $C_1$ and $C_2$ in
figure 7, which pass through classical solutions for which the real
part of the action is degenerate. The accumulation of zeroes on the
Stokes line is related to the fact that the relative phase of the
two integrals oscillates wildly with variations of $\mu$ in the
$g\rightarrow 0$ limit, due to the factor of $\exp(\mu^3/g^2) =
\exp(i\, |\mu|^3/g^2)$.
\begin{center}
\includegraphics{AirySaddles-v2.eps}
\bigskip
\noindent Figure 7. Steepest descent contours passing
through the classical solutions at $\phi = 0,\; -\mu/g$, on the Stokes
line $\arg(\mu) = -\pi/2,\; \arg(g) = 0$.
\end{center}
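This accumulation is also easy to check numerically. The following snippet
(our illustration, relying on the mpmath library) locates the zeroes of
\eqref{airy} at $\mu_k = 2 g^{2/3}\sqrt{a_k}$, where $a_k < 0$ are the
zeroes of $Ai$; these are purely imaginary values of $\mu$ which march into
the origin like $g^{2/3}$ along the Stokes line:
\begin{verbatim}
# Zeroes of Z in eq. (airy) occur where mu^2/(4 g^(4/3)) = a_k,
# a zero of Ai; since a_k < 0, mu_k = 2 g^(2/3) sqrt(a_k) is purely
# imaginary and shrinks like g^(2/3) as g -> 0.
from mpmath import mp, airyaizero, sqrt, arg

mp.dps = 15
a1 = airyaizero(1)              # first zero of Ai, approx -2.33811
for g in (1, 0.1, 0.01, 0.001):
    mu1 = 2 * mp.power(g, mp.mpf(2)/3) * sqrt(a1)
    print(g, mu1, arg(mu1))     # arg(mu1) = pi/2 for every g
\end{verbatim}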
Thus, the accumulation of Lee-Yang zeroes and the collapse of the
solution set of the Schwinger-Dyson and action principle equations
are complementary descriptions of the appearance of non-analyticity
in a parameter of a zero dimensional theory as the top coupling
$g_T\rightarrow 0$. The two descriptions have a common origin in
Stokes phenomena. We conjecture that these are also complementary
descriptions of non-analyticity in parameters of quantum field
theories which arise in the thermodynamic limit
$N\rightarrow\infty$. Upon completion of this work, we discovered
\cite{Pisani}, in which Lee-Yang zeroes in a 1D model with a wetting
transition were shown to lie along Stokes lines associated with the
asymptotic expansion in $N$. The correspondence between Stokes lines
and Lee-Yang zeroes has also been suggested in \cite{Itzykson}, in
the context of Ising and gauge models.
\section{Borel Resummation and Complexified Path Integrals}
It is well known that the loop expansion in quantum field theory is
an asymptotic rather than a convergent series. One approach to
obtain non-perturbative information from the loop expansion is the
Borel resummation, in which a convergent series (the Borel
transform) in a variable $t$ is obtained from the asymptotic series
in $\hbar$. The Borel transform is then inverted to give a function
having the correct asymptotic expansion, but which contains
non-perturbative information. Since there are actually an infinite
number of such functions, differing by essential singularities at
$\hbar \rightarrow 0$, it is a non-trivial statement that the Borel
transformation corresponds to the path integral. Moreover, there are
frequently singularities of the Borel transform on the positive real
$t$ axis, due to instantons and renormalons, which prevent an
unambiguous Borel resummation.
Under generic conditions, the number of classical solutions is equal
to the number of independent solutions of the Schwinger-Dyson
equations. For an arbitrary polynomial action in zero dimensions,
\begin{equation}
S(\phi) = \sum_{n=1}^T \frac{g_n}{n}\, \phi^n \; ;
\end{equation}
we will prove that various Borel resummations of perturbative expansions about
classical solutions satisfy both the Schwinger-Dyson equations and the action
principle equations, and therefore correspond to various complexified path
integrals.
Consider the partition function
\begin{equation}
\label{loopZ}
Z = \int_\Gamma d\phi\, e^{-\frac{1}{\hbar}\, S(\phi)} \; ;
\end{equation}
where the path $\Gamma$ is equivalent to a steepest descent path passing through
a dominant saddle point (classical solution) $\phi =\bar\phi_{\alpha}$. The loop
expansion yields a contribution to the generating function of the form;
\begin{equation}
\label{works}
Z_{\alpha} \approx \sqrt{ \frac{\pi \hbar}{S^{\prime\prime} (
\bar\phi_{\alpha} )} }\, e^{- \frac{1}{\hbar}\, S(\bar\phi_{\alpha})}
\sum_{n=0}^{\infty} c_n \hbar^n \; .
\end{equation}
This series is asymptotic, but its Borel transform defined by
\begin{equation}
\label{relbor}
B_{\alpha}(t) = \sqrt{ \frac{\pi}{S^{\prime\prime} (\bar\phi_{\alpha} ) } }
\sum_{n=0}^{\infty} \frac{c_n}{\Gamma (n+\frac{1}{2} ) } t^n\; ;
\end{equation}
converges to
\begin{equation}
\label{subrl}
B_{\alpha}(t) = \frac{\sqrt{t}}{2\pi i} \oint_C d\phi\frac{1}{t - ( S(\phi)-
S(\bar\phi_{\alpha}) ) } \; ;
\end{equation}
where in the vicinity of $t=0$ the contour $C$ encloses, in the opposite sense
(see figure 8), the two poles $\phi_{\alpha,j} (t)$, for $j=1,2$, which coalesce
to $\bar\phi_{\alpha}$ as $t\rightarrow 0$. All the other poles are taken to lie
outside the contour. The Borel transform has a singularity when one of the
exterior poles coalesces with one of the interior poles, which occurs when
$t=S(\bar\phi_{\alpha'}) - S(\bar\phi_{\alpha})$ where $\bar\phi_{\alpha'}$ is a
neighboring classical solution. Doing the $\phi$ integral gives
\begin{equation}
\label{als}
B_{\alpha}(t)= \sqrt{t}\sum_{j=1,2} (-1)^j \frac{1}{S^{\prime}(\phi_{\alpha,j}(t))} \; .
\end{equation}
\begin{center}
\includegraphics{BorelContours1-v2.eps}
\bigskip
\noindent{Figure 8.}
\end{center}
Thus far everything we have said is well known \cite{Bor}. We now
exhibit an exact relation between the Borel resummation and the
exotic solutions of the Schwinger-Dyson and action principle
equations. We invert the Borel transform by writing
\begin{equation}
\label{sqig}
Z_{\alpha} = e^{-\frac{1}{\hbar}S(\bar\phi_{\alpha}) } \int_\Sigma \, dt\, e^{
-\frac{t}{\hbar } } \frac{ B_{\alpha}(t)}{\sqrt{t} }
\end{equation}
with an as yet unspecified integration contour $\Sigma$ in the complex $t$
plane. Equivalence of \eqref{sqig} with a solution of the Schwinger-Dyson
equations requires
\begin{equation}
\label{equival}
e^{-\frac{1}{\hbar}S(\bar\phi_{\alpha}) } \int_\Sigma \, dt\, e^{ -\frac{t}{\hbar }
} \oint_C d\phi\frac{1}{t - ( S(\phi)- S(\bar\phi_{\alpha}) ) } = c_I
\int_{\Gamma_I} d\phi \oint_{\infty}\, dt\, e^{-t/\hbar} \frac{1}{t-S(\phi)}
\end{equation}
where $\Gamma_I$ indicate open paths in the complex $\phi$ plane which
asymptotically lie within the domains of convergence determined by the top
coupling $g_T$. Since $C$ and $c_I\Gamma_I$ are inequivalent, \eqref{equival}
involves a non-trivial exchange in the order of integration under which contours
are not preserved. Instead of directly proving \eqref{equival}, we will show
that \eqref{sqig} solves the Schwinger-Dyson equations and satisfies the action
principle.
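Before doing so, it is instructive to check the construction on the trivial
Gaussian case (our example): for $S = \frac{\mu}{2}\phi^2$ one has
$\bar\phi_{\alpha} = 0$, the poles are $\phi_{\alpha,1,2}(t) =
\mp\sqrt{2t/\mu}$ with $S'(\phi_{\alpha,1,2}) = \mp\sqrt{2\mu t}$, so
\eqref{als} gives the constant $B(t) = \sqrt{2/\mu}$, and \eqref{sqig} with
$\Sigma = [0,\infty)$ and $\hbar = 1$ reproduces the Gaussian integral,
$$
Z = \int_0^\infty dt\, e^{-t}\, \frac{B(t)}{\sqrt{t}} = \sqrt{\frac{2}{\mu}}
\; \Gamma\left(\frac{1}{2}\right) = \sqrt{\frac{2\pi}{\mu}}\, ,
$$
as it must.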
We set $\hbar = 1$ in what follows. If $Z_{\alpha}$ satisfies both
the Schwinger-Dyson equations and the action principle then it is
annihilated by the operators;
\begin{equation}
\label{sdandsac}
\hat L = \sum_{n=2}^T (n-1)g_n \frac{\partial}{\partial g_{n-1}} - g_1 \; ;
\end{equation}
and
\begin{equation}
\label{sdic}
\hat H_n = \frac{\partial}{\partial g_n} + \frac{(-1)^n}{n}\, \frac{\partial^n}{\partial g_1^n} \; .
\end{equation}
It is convenient to define the quantity
\begin{equation}
\label{fdef}
F_{\alpha} \equiv \int_\Sigma dt e^{-t} \frac{ B_{\alpha}(t)}{\sqrt{t} } =
\int_\Sigma dt e^{-t} \sum_{j=1,2} (-1)^j \frac{1}{S^{\prime}(\phi_{\alpha,j})} \; .
\end{equation}
To show that $\hat L Z_{\alpha} = 0$, for $Z_{\alpha}$ defined in \eqref{sqig},
it suffices to show that $ \hat{\cal L} F_{\alpha} =0$ where
\begin{equation}
\label{covar}
\hat{\cal L} \equiv \sum_{n=2}^T(n-1)g_n D_{g_{n-1}} - g_1 \; ; \qquad
D_{g_{n}} \equiv \frac{\partial}{\partial g_{n}} - \left(\frac{\partial S(\bar\phi_\alpha)}{\partial g_{n}}\right) \; .
\end{equation}
Before proceeding, we list several simple but useful identities. Due
to the equation of motion, $S^{\prime} (\bar\phi_{\alpha} ) = 0$,
one has
\begin{equation}
\label{erg}
\frac{\partial}{\partial g_n}S(\bar\phi_{\alpha}) = \frac{1}{n}{\bar\phi_{\alpha}}^n
\end{equation}
Identities obtained by differentiating the relation $t - S(\phi_{\alpha,j}(t)) +
S(\bar\phi_\alpha)=0$, which defines $\phi_{\alpha,j}(t)$, are
\begin{equation}
\label{jig}
\frac{\partial}{\partial t}\phi_{\alpha,j}= \frac{1}{S^{\prime}(\phi_{\alpha,j} ) }
\end{equation}
\begin{equation}
\label{wuc}
\frac{\partial}{\partial g_n}\phi_{\alpha,j} = \frac{ \frac{1}{n}(\bar\phi^n_{\alpha} -
\phi^n_{\alpha,j} )}{S^{\prime} ( \phi_{\alpha,j} ) } \; ;
\end{equation}
where the equations of motion have been used again in deriving the last equation.
Using these identities and the equations of motion, one can show
that
\begin{equation}
\label{qant}
\sum_{n=2}^T [(n-1)g_n D_{g_{n-1}} - g_1]\frac{1}{S^\prime(\phi_{\alpha,j})}
=0 \; ;
\end{equation}
implying $\hat{\cal L} F_\alpha =0$, or $\hat{\cal L} Z_\alpha =0$, for any
integration path in $t$.
We are not done yet, since the equation ${\hat{\cal L}}Z_\alpha=0$
is a combination of the Schwinger-Dyson and the action principle
equations. The integration path $\Sigma$ will be constrained further
by requiring the action principle to be separately satisfied. To
this end, consider the quantity
\begin{align}
\nonumber
{\cal A}&\equiv [klD_{g_l}D_{g_k} + (k+l)D_{g_ {k+l} }]F_\alpha \\
\label{yip}
&= \int_\Sigma dt e^{-t} \frac{\partial}{\partial t} \sum_j(-1)^j[klD_{g_l}D_{g_k} +
(k+l)D_{g_ {k+l} }]\phi_{\alpha,j}(t)
\end{align}
which will vanish if the action principle is also satisfied. Using
the same identities discussed above, ${\cal A}$ may be rewritten as
\begin{equation}
\label{hur}
{\cal A} = \int_\Sigma dt \frac{\partial}{\partial t} \left[ e^{-t} \sum_j (-1)^j
kl \frac{\partial}{\partial g_k}\frac{\partial}{\partial g_l}\phi_{\alpha,j}(t) \right] =
\left. \sum_j (-1)^j e^{-t} kl\frac{\partial}{\partial g_k} \frac{\partial}{\partial
g_l}\phi_{\alpha,j}(t) \right|_{\partial\Sigma} \; .
\end{equation}
Clearly ${\cal A}$ vanishes if the contour $\Sigma$ begins at $t=0$ and ends at
$Re(t) = +\infty$, avoiding singularities. The contribution from the
boundary at $t=0$ vanishes because $\phi_{\alpha,1}(t)$ and
$\phi_{\alpha,2}(t)$ coalesce at $t=0$, so that the factor of
$\sum_j (-1)^j$ in \eqref{hur} leads to a cancellation. Closed
contours encircling branch cuts in the complex $t$ plane also give
${\cal A}=0$. So long as a contour $\Sigma$ for which ${\cal A}=0$
is used, the Borel resummation gives a solution of the
Schwinger-Dyson and action principle equations.
It would be very interesting to see to what extent this analysis
extends to theories with non-zero dimension. The analysis is surely
more difficult since, among other possible complications, there are
singularities in the Borel plane due to renormalons as well as
finite action classical solutions (instantons).
The ``exotic'' solutions of the Schwinger-Dyson equations given by
$Z = c_I Z_I$, where the $Z_I$ are generated from classical
solutions by Borel resummation, may in some sense be thought of as a
generalized form of theta vacua, in which the $c_I$ play the role of
theta parameters. In the usual approach a particular theta vacuum is
selected by adjusting a surface term in the action and integrating
over real fields. The surface term term effects the Schwinger-Dyson
equations only at the space-time boundary, so in the infinite
volume limit its role is only to set a boundary condition. It does
this by putting a different weight on the contributions to $Z$
coming from perturbative expansions about different classical
solutions. We conjecture that when appropriately resummed, this is
equivalent to a weighted sum over complexified path integrals.
\section{Emergence of Action Principle in the Thermodynamic Limit}
So far we have treated the Schwinger-Dyson and action principle
equations as independent. This is certainly true for a finite
number of degrees of freedom. If the action principle is not
imposed, the manner in which a solution of the Schwinger-Dyson
equations changes as one varies a coupling is undetermined, as
there is a continuous set of solutions to the Schwinger-Dyson
equations for any value of the coupling. However, with certain
assumptions, the action principle arises due to the collapse of the
solution set of the Schwinger-Dyson equations in the thermodynamic
limit.
Suppose that the solution set collapses in the thermodynamic limit,
i.e., some solutions coalesce while others diverge, such that there are
solutions of the form $Z \sim \exp(-N {\cal F})$ as
$N\rightarrow\infty$ where there are only discrete possibilities for
the free energy ${\cal F}$. Discreteness of the solutions
automatically fixes the dependence on the couplings. Let us assume
that this dependence is described by a differential equation of the form
\begin{equation}
\label{opdef}
\hat O _{\alpha} Z \equiv ( \frac{\partial}{\partial g_{\alpha} } - \hat K_{\alpha}
)\, Z[g_\alpha,J_i]=0 \; ;
\end{equation}
where $\hat K$ is a linear operator\footnote{The operator $\hat K_\alpha$ is
necessarily linear for this equation to make sense in the thermodynamic
limit.}. We will show below that these equations are equivalent to the action
principle equations. The argument follows from the commutation relations of
operators associated with the action principle and the Schwinger-Dyson
equations.
Writing the action in the form $S\{\phi_i\} = g_\alpha
f_\alpha\{\phi_i\}$, the operators associated with the
Schwinger-Dyson equations are
\begin{equation}
\hat L_i \equiv g_\alpha \left.\frac{\partial f_\alpha}{\partial
\phi_i}\right|_{\{\phi_j\}\rightarrow \{\partial_{J_j}\}} - J_i
\end{equation}
while those associated with the action principle are
\begin{equation}
\hat H_\alpha \equiv \partial_{g_\alpha} +
\left.f_\alpha\right|_{\{\phi_j\}\rightarrow \{\partial_{J_j}\}} \; .
\end{equation}
Thus one obtains the commutation relations
\begin{equation}
[\hat L_i, \hat H_\alpha] = 0 \, , \qquad [\hat H_\alpha, \hat H_\beta] = 0 \; .
\end{equation}
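The first relation is a one-line computation: the contribution
$[g_\beta\, \partial f_\beta/\partial\phi_i\, ,\, \partial_{g_\alpha}] =
-\,\partial f_\alpha/\partial\phi_i$ (with $\{\phi_j\}\rightarrow
\{\partial_{J_j}\}$ understood) is cancelled by $[-J_i\, ,\, f_\alpha] =
[f_\alpha\, ,\, J_i] = +\,\partial f_\alpha/\partial\phi_i$, which follows
from the elementary commutator $[\partial_{J_i}^n, J_i] = n\,
\partial_{J_i}^{n-1}$ extended to polynomials. The second relation is
obvious, since the $f_\alpha$ do not depend on the couplings.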
Let us now write $\hat O_\alpha = \hat H_\alpha + \hat
\Delta_\alpha$. If $\hat L_i Z =0$ and $\hat O_\alpha Z=0$, then
$[\hat O_\alpha, \hat L_i]Z = 0$, or
\begin{equation}
\hat L_i \hat \Delta_\alpha Z = 0 \; .
\end{equation}
If $\hat\Delta_\alpha Z$ is non-zero, it is a solution of the
Schwinger-Dyson equations. Moreover there is only one discrete possibility:
$\hat\Delta_\alpha Z = c_\alpha\{g_\beta\}\, Z$, where $c_\alpha$ is some
function of the couplings, so that $\hat H_\alpha Z = -
c_\alpha\{g_\beta\}Z$. The coefficients $c_\alpha$ can be absorbed
by a coupling constant dependent re-scaling $Z\rightarrow
Z^{\prime}= e^{\Omega\{g_\beta\}}Z$, where
\begin{equation}
\hat H_\alpha Z^\prime = [\hat H_\alpha, e^\Omega] Z - e^\Omega c_\alpha Z =
\left((\partial_{g_\alpha}- c_\alpha )e^\Omega\right) Z = 0 \; .
\end{equation}
The existence of a solution to the equations $(\partial_{g_\alpha} -
c_\alpha)e^\Omega =0$ requires $\partial_{g_\alpha}c_\beta -
\partial_{g_\beta}c_\alpha =0$, which follows from $(\partial_{g_\alpha}c_\beta
- \partial_{g_\beta}c_\alpha)Z = -[\hat H_\alpha, \hat H_\beta] Z = 0$. The
re-scaled partition function satisfies both the Schwinger-Dyson and Schwinger
action principle equations, $\hat L_i Z^\prime = \hat H_\alpha Z^\prime =0$,
even though we started with just the Schwinger-Dyson equations.
\section{Non-Polynomial Actions}
Although we have focused on polynomial actions, our discussion of
the solution set of the Schwinger action principle equations can be
readily extended to non-polynomial actions. A simple non-polynomial
action is that of one-plaquette QED, with action $S= \beta
\cos\theta$. The generating functional
\begin{equation}
\label{standgen}
Z(J,\tilde J) = \int_{-\pi}^{\pi} d\theta e^{\beta\cos\theta
+ Je^{i\theta} + \tilde J e^{-i\theta}} \; ;
\end{equation}
is a solution of the differential equations
\begin{align}
\label{SDP}
&\left[\frac{\beta}{2}(\partial_J - \partial_{\tilde J}) + (J\partial_J
-\tilde J \partial_{\tilde J})\right]Z(J,\tilde J) = 0 \\
\label{PAP}
&\left[\partial_\beta -\frac{1}{2}(\partial_J + \partial_{\tilde
J})\right]Z(J,\tilde J) = 0\\
\label{cons}
&\partial_J \partial_{\tilde J} Z(J,\tilde J) = Z(J,\tilde J) \; ;
\end{align}
where \eqref{SDP} and \eqref{PAP} are the Schwinger-Dyson and action principle
equations, while \eqref{cons} is a constraint equation. In fact these equations
have a two parameter (one if you neglect the normalization) class of solutions
given by linear combinations of basis solutions
\begin{equation}
Z(J,\tilde J) = \int_\Sigma d\theta e^{\beta\cos\theta + Je^{i\theta} + \tilde
J e^{-i\theta}} \; ;
\end{equation}
for contours $\Sigma$ equivalent to either $\Sigma_1$ or $\Sigma_2$
in Figure 9 (assuming real positive $\beta$).
\begin{center}
\includegraphics{OnePlaquettePic-v2.eps}
\bigskip
\noindent{Figure 9.}
\end{center}
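The compact-contour solution \eqref{standgen} can be verified
directly. The following minimal numerical sketch (ours, not part of
the original analysis; it assumes only NumPy) checks that
\eqref{SDP}--\eqref{cons} hold at an arbitrary real test point, using
the trapezoidal rule for the periodic integral and central finite
differences for the source and coupling derivatives:
\begin{verbatim}
import numpy as np

def Z(J, tJ, beta, n=4096):
    # Z over the usual compact contour theta in [-pi, pi]; the
    # rectangle/trapezoid rule is spectrally accurate for periodic
    # integrands.  At J = tJ = 0 this reduces to 2*pi*I_0(beta).
    t = np.linspace(-np.pi, np.pi, n, endpoint=False)
    f = np.exp(beta*np.cos(t) + J*np.exp(1j*t) + tJ*np.exp(-1j*t))
    return (2*np.pi/n)*np.sum(f).real  # Im part cancels for real J, tJ

J0, tJ0, b0, h = 0.7, 0.4, 1.3, 1e-4   # arbitrary test point, FD step

dJ  = (Z(J0+h, tJ0, b0) - Z(J0-h, tJ0, b0))/(2*h)
dtJ = (Z(J0, tJ0+h, b0) - Z(J0, tJ0-h, b0))/(2*h)
db  = (Z(J0, tJ0, b0+h) - Z(J0, tJ0, b0-h))/(2*h)
dJdtJ = (Z(J0+h, tJ0+h, b0) - Z(J0+h, tJ0-h, b0)
         - Z(J0-h, tJ0+h, b0) + Z(J0-h, tJ0-h, b0))/(4*h*h)

print(0.5*b0*(dJ - dtJ) + J0*dJ - tJ0*dtJ)  # Schwinger-Dyson: ~ 0
print(db - 0.5*(dJ + dtJ))                  # action principle: ~ 0
print(dJdtJ - Z(J0, tJ0, b0))               # constraint:       ~ 0
\end{verbatim}
All three residuals vanish to finite-difference accuracy; repeating
the check with the integral taken along $\Sigma_1$ or $\Sigma_2$ alone
would test the exotic solutions in the same way.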
Note that integration over $\Sigma_1-\Sigma_2$ is equivalent to the
integral over the usual compact path $\theta =[-\pi,\pi]$. The
possible physical relevance of the exotic solutions, upon
generalizing to a theory in a finite number of dimensions, is not as
manifest as it was for polynomial theories. In the polynomial case,
symmetry breaking vacua were clearly related to exotic solutions.
Our experience with the polynomial theories leads us to speculate
that the exotic solutions for lattice gauge theories are related to
physically realizable phases of gauge theory. Certainly we expect
that the appearance of phase boundaries in gauge theories, via the
accumulation of Lee-Yang zeros, is closely related to the collapse
of the solution set in the thermodynamic limit.
\section{Conclusions and Outlook}
We have examined the complete set of solutions of the differential
equations which follow from the Schwinger action principle. While
only one of these solutions corresponds to the usual path integral,
the other solutions, which involve complexified path integrals, have
physical relevance. On the one hand the manner in which the full
solution set collapses in the thermodynamic (or analogous
$g_T\rightarrow 0$) limit is related, via Stokes phenomena, to the
accumulation of Lee-Yang zeros at phase boundaries. On the other
hand, exotic solutions may themselves be physical, with theta vacua
and symmetry breaking vacua as known examples. In the zero
dimensional case, we have proven that Borel resummations of
perturbation series, having various singularity avoiding contours of
integration in the complex Borel variable, solve the action
principle equations and therefore correspond to various complexified
``path'' integrals.
While we have explicitly discussed the solution set of the
Schwinger-Dyson and action principle equations for zero dimensional models,
one can readily generalize to lattice models in multi-dimensions, in
which case one finds a huge solution set. The basis set of
solutions to the Schwinger-Dyson equations for a scalar field theory
on a lattice can be written in terms of the zero dimensional
solutions $Z^{(0)}(J)$ as follows:
\begin{equation}
Z = \exp(K_{ij}\frac{\partial}{\partial J_i}\frac{\partial}{\partial
J_j})\prod_k Z^{(0)}_k(J_k)
\end{equation}
where $K_{ij}$ is the lattice kinetic term, and the zero dimensional solution
$Z^{(0)}_k$ may be different at each lattice site $k$. For a polynomial
potential, the number of independent solutions grows exponentially with the
number of lattice sites. Like internal symmetries, space-time symmetries may be
broken by the choice of integration paths. Determining the collapse of the
solution set in the thermodynamic and continuum limits is a difficult problem.
It would be very interesting if exotic solutions with different integration
paths at different sites have physical relevance.
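The exponential growth of the solution set can be made plausible by
the standard identity behind the formula above (a sketch; the precise
normalization of $K_{ij}$ depends on how the lattice action is split
into kinetic and ultralocal pieces):
\begin{equation}
\exp\Big(K_{ij}\frac{\partial}{\partial J_i}\frac{\partial}{\partial
J_j}\Big)\, e^{J_k\phi_k} = e^{K_{ij}\phi_i\phi_j}\, e^{J_k\phi_k} \; ,
\end{equation}
so acting with the kinetic exponential on a product of independent
one-site integrals $\prod_k\int_{\Gamma_k}d\phi_k\,
e^{S_0(\phi_k)+J_k\phi_k}$ reinstates the kinetic term in the exponent
for {\it any} choice of contour $\Gamma_k$ at each site; the
independent contour choices are what multiply the solutions.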
\section*{Acknowledgements}
We wish to thank Santiago Garcia for past collaboration related to the
present work. G.G. thanks D. Ferrante and C. Pehlevan for many enlightening
conversations.
\section{Abundances in Dwarf Galaxies and the Halo}
The idea that the stellar halo of the Milky Way (MW) formed {\it
predominantly} through the infall of smaller star systems ---
presumably dwarf galaxies --- has a long history (Searle \& Zinn
1978), strong observational evidence (e.g., Majewski 1993,
Majewski, Munn \& Hawley 1996), and currently a strong theoretical
backing by way of hierarchical, $\Lambda$CDM models (e.g., Bullock
\& Johnston 2005; Robertson et al. 2005; Abadi et al.\ 2006; Font
et al.\ 2006). But a longstanding puzzle in this picture is why,
if they are the seeds of halo formation, do MW satellite galaxies
have different stellar populations (e.g., Unavane, Wyse \& Gilmore
1996) and chemical abundance patterns (e.g., Fulbright 2002;
Shetrone et al.\ 2003; Tolstoy et al.\ 2003; Venn et al.\ 2004;
Geisler et al.\ 2005) than typical MW halo stars? One explanation
(Majewski et al.\ 2002; Font et al.\ 2006) is that prolonged tidal
disruption will naturally lead to evolution in the types of stars
a particular satellite contributes to a halo. Indeed, it has
become clear that abundance patterns (e.g., [$\alpha$/Fe]) among
the most metal-poor stars in dSphs --- possibly the residue of a
formerly much larger metal-poor population that may have been
predominantly stripped from the satellites over their lifetime ---
do overlap those of halo stars of the same metallicity (Shetrone
et al.\ 2003; Geisler et al.\ 2005; Tolstoy 2005). But the true
connection of these ancient dSph stars with Galactic halo stars
remains speculative, or at least non-definitive.
The Sagittarius (Sgr) dSph provides a striking example of a
satellite galaxy being disrupted and slowly assimilated into the
MW halo field population. It is the primary contributor of both
carbon stars and M giants to the upper ($|Z_{GC}| > 10$ kpc) halo
(Ibata et al.\ 2001; Majewski et al.\ 2003, hereafter Paper I) and
yields strong overdensity signatures of MSTO and RR Lyrae stars at
halo distances (Newberg et al.\ 2002; Vivas, Zinn \& Gallart
2005). Yet the current Metallicity Distribution Function (MDF) of
the Sgr core, with median [Fe/H] $\sim -0.4$ (Fig.\ 7), is quite
unlike that of the Galactic halo (median [Fe/H]$=-1.6$) and thus
the Sgr system would seem to present one of the most dramatic
examples of the apparent dSph/halo star abundance dichotomy. In
this paper we explore the possible origins of this dichotomy by
making high resolution, spectroscopic observations of stars not
only known to have been contributed to the Milky Way halo from a
{\it specific} dSph satellite, but also {\it when}. In the case
of the Sgr dSph we show that the origin of the abundance dichotomy
with the Galactic halo arises from preferential tidal stripping of
metal poor stars, which leads to divergent MDFs between lost and
retained Sgr stars, as well as a significant variation in the Sgr
MDF along its tidal tails from the core to debris lost from the
core several Gyr ago.
\section{Previous Abundance Studies of the Sgr System}
Initial photometric estimates indicated that Sgr is largely
dominated by a population of old to intermediate age stars
(Bellazzini et al.\ 1999; Layden \& Sarajedini 2000), but with an
MDF spanning from [Fe/H] $\sim -2.0$ to $\sim-0.5$ (see also
Cacciari, Bellazzini \& Colucci 2002). However, a more metal-rich
population with [Fe/H] $\geq -0.5$ was found with high resolution
spectra (Bonifacio et al.\ 2000, 2004; Smecker-Hane \& McWilliam
2002; Monaco et al.\ 2005) as well as in a recent, deep
color-magnitude diagram (CMD) from the Hubble Space Telescope ACS
centered on M54 (Siegel et al. 2007). These chemical abundance
studies thus present a Sgr MDF dominated by a metal-rich
population with median [Fe/H] $\sim -0.4$, but having a metal-weak
tail extending to [Fe/H] $\sim -2.0$ (Smecker-Hane \& McWilliam
2002; Zaggia et al.\ 2004; Monaco et al.\ 2005). Monaco et al.\
(2003) and Cole et al.\ (2005) have found Sgr to have a similar
MDF to the LMC (which has a dominant population of median
[Fe/H]$=-0.4$) with a similar fraction of metal-poor stars, which
suggests that Sgr may have had a progenitor resembling the LMC
(Monaco et al.\ 2005). In a recent reanalysis of the
age-metallicity relationship in Sgr, Bellazzini et al. (2006) find
that the dSph may have enriched to near-solar metallicity as early
as 6 Gyr ago, though a more recent analysis by Siegel et al.
(2007) suggests a somewhat slower evolution to this enrichment
level.
Thus far, abundance studies of the Sgr tails have been less
detailed. Dohm-Palmer et al.\ (2001) obtained spectra of some K
giants apparently in the northern leading arm (near its
apogalacticon) and inferred the stream there was about a half dex
more metal poor than the Sgr core; these authors suggested that
the Sgr dSph may originally have had a strong metallicity
gradient. Alard (2001) noted differences in the Sgr giant branch
position in the $(J-K_s,K_s)_o$ CMD between the Sgr center and a
field $7\fdg5$ down its major axis implying a $-0.2$ dex
metallicity variation between these two points (see \S7). Paper I
also suggested the possibility of a metallicity variation along
the Sgr tidal arms because giant stars in the arms with different
$(J-K_s)_o$ colors seemed to yield different photometric parallax
distances for the stream when the color-magnitude relation of the
Sgr core was used for all colors; the differences could be
explained by varying mean RGB slopes along the stream (see Figure
14 and Footnote 14 of Paper I). Adding information derived from
isochrone-fitting to main sequence turnoff stars,
Mart{\'i}nez-Delgado et al. (2004) argued that there is a
substantial metallicity gradient along the Sgr stream. Vivas et
al.\ (2005) obtained a mean [Fe/H] $=-1.77$ from low/medium
resolution spectra of sixteen RR Lyrae stars in the Sgr leading
arm; but since only the oldest and hence metal-poor populations in
Sgr would produce RR Lyrae, this age-biased sample cannot be used
to infer information on the full extent of the stream MDF. On the
other hand, Bellazzini et al. (2006) found significant differences
in the relative numbers of blue horizontal branch to red clump
stars between the Sgr core and a position about 75$^{\circ}$
forward along the Sgr leading arm, an imbalance that suggests a
significant metallicity variation along the Sgr stream. Thus,
while compelling evidence has been gathering for metallicity
variations along the Sgr stream, no {\it direct} measurement of
this variation has been made by sampling with high resolution
spectroscopy the actual [Fe/H] distributions of constituent stars.
\section{Observations}
\subsection{Sample Selection}
We have begun a systematic survey of the chemical abundance patterns of
stars in the Sgr stream.
The goal of the present contribution is a first systematic exploration of the MDF along the
Sgr stream; future work will focus on chemical abundance {\it patterns} in Sgr stream stars.
The design of our study, and in particular the rationale for the
specific stars targeted for observation, has been driven by
several practical considerations. First, because information on
potential {\it variations} in metallicity along the stream is
sought, multiple portions of the Sgr stream representing different
dynamical ages (i.e. the times when the debris was stripped) are
needed. Second, because the Sgr core itself exhibits a
metallicity {\it spread}, insufficient information is gained by
only sampling a few stars at any particular part of the tail;
rather, exploration of {\it distributions} in metallicity is
needed. This requires reasonable numbers of stars at each sampled
section of the stream. With a limited amount of telescope time it
is easier to build large samples with brighter targets, but, even
focusing purely on the intrinsically brightest stars identified in
the stream --- the M giants explored, e.g., in Paper I and
Majewski et al. (2004, ``Paper II'' hereafter) --- this is still a
challenging project if spectra at echelle resolution are needed.
The difficulty of securing large samples of stars partly motivated
our strategy in this first study of Sgr debris stars to explore
the Sgr leading arm --- which passes quite near the solar
neighborhood (Paper I). In contrast, the Sgr trailing arm, in its
most clearly discernible parts in the southern Galactic
Hemisphere, never gets closer than $\sim$15 kpc to the Sun. By
observing the leading arm both just above and just below the
Galactic plane we access two different points along this tidal
stream with fairly local stars bright enough to take maximal
advantage of our particular instrument access (two echelle
spectrographs on 4-m class telescopes in the Northern Hemisphere
and only about one night per year on an echelle spectrograph in
the Southern Hemisphere).
This strategy for exploring the leading arm, however, has some
drawbacks in that (1) the trailing arm is dynamically much better
understood than the leading arm (Helmi 2004; Law, Johnston \&
Majewski 2005, hereafter ``Paper IV''), (2) the sorting of stars
by dynamical age is much cleaner in the trailing arm than the
leading arm (Paper IV; see also \S 7), (3) major sections of the
leading arm are very much farther away ($\sim 50$ kpc) --- out of
range of our accessible instrumentation and requiring 10-m
telescopes should we ever desire to ``fill the gap" of our
coverage of the leading arm in the same way, and (4) by focusing
on rather nearby Sgr stars there is some potential for sample
contamination by Milky Way disk M giants. We revisit the latter
possibility in \S 5.
To facilitate our discrimination of Sgr stream targets from other
Milky Way stars we take advantage of the ongoing studies of M
giants in the stream that are the focus of this series of papers.
Apart from their intrinsic luminosity, M giants confer a
particular advantage in the study of the Sgr stream in that, as
Paper I demonstrated, the Sgr stream has contributed the majority
of the M giants found in the Milky Way halo. Thus, M giants
selected far enough away from the disk already have a high
likelihood of being from Sgr.\footnote{In addition, as was shown
in Paper I, using combinations of 2MASS colors it is possible to
cleanly separate M giants from any potential nearby, contaminating
M dwarfs --- though these should be fairly rare.} Figure 1
(adapted from Fig.\ 9 of Paper I) shows the distribution of M
giants with $(J-K_s)_o > 1.00$ lying within 10$^{\circ}$ of the
nearly polar Sgr orbital plane, as derived in Paper I. Stellar
distances from the Sun (at the center) in this representation are
given by the corresponding dereddened $K_{s,o}$ magnitudes. This
kind of map has the benefit of creating an approximate relative
spatial distribution free of biases imposed by presuming
particular metallicities and color-magnitude relations needed to
convert apparent magnitudes to photometric parallax distances, and
works best when stars of a limited color range are
used.\footnote{See similar representations using stars with colors
filtered to be at the main sequence turn-off in Newberg et al.
(2002), for example.} Since most of the M giants in the figure lie
in the range $1.0 < (J-K_s)_o < 1.1$, this magnitude-based
distribution reveals the basic structure of the Milky Way and Sgr
stream (modulo metallicity-based variations in the absolute
magnitudes of these stars), albeit with an approximately
logarithmic distance scale. This log scale has the benefit not
only of compressing the apparent width of the distant parts of
both the Sgr leading and trailing arm, making them more visible,
but of expanding the relatively small volume of space occupied by
stars we have targeted in the northern Galactic hemisphere, to
make their relative positions more clear. However, as pointed out
in Paper I, the substantial stretching of the nearby Sgr leading
arm in such a rendition makes it appear more diffuse than it
really is. The reader is directed to Figures 10 and 11 in Paper I
for a linear distance version of this distribution where the
nearby leading arm is less ``fuzzed out", and to Figure 9 of that
paper for a ``clean version" (without colored dots) of the Figure
1 distribution, for comparison. The reader is also referred to
Figure 1 of Paper IV for an N-body model representation of the
observed debris that provides a useful guide to the expected
positions of leading (and trailing) arm debris in the Sgr orbital
plane.
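The logarithmic character of this map is just the distance modulus at
work: for stars sharing a common absolute magnitude $M_{K_s}$
(approximately true within the narrow color range used here),
\begin{equation}
K_{s,o} = M_{K_s} + 5\log_{10}\left(\frac{d}{10~{\rm pc}}\right) \; ,
\end{equation}
so equal steps in $K_{s,o}$ correspond to equal multiplicative factors
in heliocentric distance $d$.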
Figure 1 (and its modeled counterpart in Paper IV) provides one
basis on which stars were selected for study here. But, in
addition to specifically targeting M giant stars apparently {\it
positioned} in particular portions of the Sgr leading arm, we also
pre-select stars that have radial velocities appropriate to these
positions based on Sgr debris models (Figure 10 of Paper IV)
constrained to fit all available positional and radial velocity
data for Sgr (e.g., Fig.\ 2). The velocities used for this
project --- both those of the stars we targeted here and those
that provide the constraints for the fitted models --- have been
collected through an ongoing medium resolution spectroscopic
survey of 2MASS M giants (Paper II, Majewski et al. in
preparation; see also the data presented in Paper
IV).\footnote{However, the echelle spectra obtained here allow us
to derive improved velocities, and these new velocities are also
presented in Table 1 (see \S3.2).} Figure 2 shows, as a function
of the Sgr orbital plane longitude ($\Lambda_{\sun}$), the
observed radial velocities, converted to the Galactic Standard of
Rest (GSR), of M giants lying near the Sgr orbital plane. The
rather velocity-coherent trend of the Sgr trailing arm (not
explored here) is obvious on the right. The RV distribution of
leading arm stars is less coherent, especially where it comes
close to the Sun, because of the considerable angular spread of
the stream on the sky at this point (and therefore a wider
variation in the projection of the stellar space motions on the
line of sight). Additional RV spreading in the leading arm occurs
because of the greater overlap of stars with different orbital
energies at the same orbital phase compared to the trailing arm
(See Fig.\ 1 of Paper IV). The trend of Sgr leading arm stars in
Figure 2 is sinusoidal (see also Fig.\ 10 of Paper IV). From left
to right in Figure 2: (1) Leading arm stars are first moving away
from the Sgr core (at $\Lambda_{\sun} = 0^{\circ} = 360^{\circ}$)
and have positive $v_{GSR}$ at high $\Lambda_{\sun}$; (2) after
apo-Galacticon the leading arm bends towards the general direction
of the Sun, and leading arm stars develop negative $v_{GSR}$ which
continue to decrease as the leading arm curves towards the solar
neighborhood and approaches from the general direction of the
North Galactic Cap (NGC, centered near
$\Lambda_{\sun}=256^{\circ}$); (3) as the leading arm traverses
the Galactic plane near the Sun, the $v_{GSR}$ changes sign again
with the leading arm stars now speeding away from the solar
neighborhood and arcing under the Galactic Center ($\Lambda <
100^{\circ}$). It is worth noting that after passing below the
Galactic plane, the leading arm crosses the trailing arm; the
velocity trends of the two arms also cross in this region
($\Lambda < 100^{\circ}$) as shown in Fig.\ 10 of Paper IV.
Because the leading arm has yet another apogalacticon at $\Lambda
< 100^{\circ}$, the debris and the associated velocities are
expected to become less coherent. This can be seen from the green
points to the lower right of Figure 1 in Paper IV, but is not
obvious in Figure 10 of that same paper, which did not show this
dynamically older debris. That the overall spatial and velocity
distribution of the leading arm at this point becomes more diffuse
can also be seen in the models of Ibata et al. (2001; see their
Fig.\ 3).
\subsection{Spectroscopic Observations}
Figures 1 and 2, and the associated figures from our models in
Paper IV, guided the selection of four samples of stars for
analysis here:
(1) A large sample of stars (red symbols in Figs.\
1 and 2) was selected to have both positions and velocities
consistent with being in the leading arm north of the Galactic
plane, and in the general direction of the NGC (with Sgr
longitudes $\Lambda_{\sun}$ = 220-290$\arcdeg$). Of these, 21 were
observed with the $R$=$35,000$ resolution Mayall 4-m Echelle on
the nights of UT 05-09 May 2004. On UT 10-13 Mar 2004,
$R$=$46,000$ SARG spectra for nine additional M giants in the same
part of the stream were obtained with the TNG telescope in the
Canary Islands.
This ``leading arm" sample is the largest in our survey, because of
our mostly northern hemisphere telescope access. A large range of
$K_{s,o}$ has been explored, partly because when weather conditions
were non-ideal we resorted to brighter, generally closer stars. Indeed, some
of the stars explored have initially projected (i.e. Paper I) distances as low
as 1 kpc. Stars this close do lie among the Galactic thick disk stars, but
when selecting such stars we deliberately chose stars that lie along the
leading arm trend in Figure 2, and which, for the most part, have strongly
negative $v_{GSR}$'s (e.g., $<-65$ km s$^{-1}$) that are unlike the typical
thick disk star.
Nevertheless, as a means to explore and limit the extent to which
our analysis of this leading arm sample may have been affected by
thick disk contaminants that just happen to have the ``right"
velocity, we further divide this group even into a ``best"
subsample (the fainter, generally farther seventeen stars that are
very highly likely to be in the Sgr leading arm) and a ``less
certain" subsample of thirteen stars, including those stars marked
with red symbols within the boundary drawn in Figure 1. The
latter subsample includes the ten leading arm north stars with
$K_{s,o}<7.5$ as well as three stars at the highest
$\Lambda_{\odot}$ that are closer to the Galactic bulge. If there
is contamination of the leading arm north group by thick disk
stars, it will most likely be among the latter sample, which has
initially estimated distances from 1-5 kpc (based on the
color-magnitude relation for an [Fe/H]$\sim$-0.4 population
assumed in Paper I).\footnote{We will show below
that these distances are, in the mean, underestimated because most
of the stars are more metal-poor than [Fe/H]=-0.4.} We further
discuss the issue of contamination, and the fact it is not
expected to be affecting the overall conclusions of this study, in
\S 5.
(2) Ten M giant stars with positions and velocities of leading arm
stars south of the Galactic plane (green symbols) were observed
with the $R$=$19,000$ MIKE spectrograph on the 6.5-m Clay
telescope at Las Campanas Observatory on the night of UT 15 Aug
2005. These stars, with $\Lambda_{\sun}$ = 20-45$\arcdeg$, include
stars with projected distances both inside and outside of the
trailing arm and with $v_{GSR}$ well away from the trailing arm
trend (Fig.\ 2). According to the models of Paper IV, the leading
arm stars south of the Sun were predominantly stripped from Sgr
roughly 2-3 Gyr ago whereas those now north of the Sun were
stripped roughly 1.5-2 Gyr ago.
(3) Six stars in the very center of the Sgr core (magenta symbols)
were also observed with MIKE on the same observing run as the
other southern Sgr stars. Unlike the other groups of stars we
looked at in this survey, these Sgr core stars were not pre-vetted
based on radial velocity data, but rather selected on the basis of
the infrared color-magnitude diagram. Based on the high density of
Sgr giants in the core, this was a relatively safe strategy. We
subsequently derived radial velocities for these stars from the
MIKE spectra (values shown in Table 1), and these show them all to
have radial velocities consistent with the Sgr core. These
velocities were obtained via cross-correlation against four radial
velocity standards using the echelle order we used for the stellar
atmospheres analysis described in \S4.
We combine this small sample of Sgr core stars with the other extant
echelle resolution metallicities for Sgr core stars in the literature in our
analysis of the MDF below.
(4) Finally, we targeted thirteen additional M giants (blue
symbols) lying among the stars of the Sgr leading arm in the NGC
that were found to have velocities quite unlike that expected for
the Sgr leading arm at this position. We refer to this sample
as the ``North Galactic Cap (NGC) group". Most of these stars are too
far away and have velocities far too large to be contamination
from the Galactic disk. On the other hand, while dynamically old
Sgr stars from the wrapped {\it trailing arm} --- if they exist in
the M giant sample --- are expected to lie in the direction of the
NGC (Fig.\ 1 of Paper IV) and with more positive radial velocities,
initial estimates of the distances of the NGC group stars from
the Paper I photometric parallax analysis (which, again, assumes an
[Fe/H]$\sim$-0.4 giant branch color-magnitude relation) puts these stars {\it
too close} to the Sun to be consistent with wrapped trailing arm
debris. Thus, obtaining echelle resolution spectra of some of
these peculiar stars is of interest in order to test whether they
can be ``chemically fingerprinted" as Sgr debris (\S 6 and 7).
To lessen potential metallicity biases, M giant stars in all four
groups were selected with a wide range of $J-K_s$ color ---
typically $\sim$1.0-1.2. Otherwise, the specific selection of
targets was dictated by the desire to sample the four groups of
stars outlined above and by the limitations of assigned observing
schedules. Table 1 summarizes the targets, their equatorial and
Galactic coordinate positions, dereddened 2MASS $K_s$ and $J-K_s$
photometry from Paper I, the Sgr orbital plane longitude
($\Lambda_{\sun}$), the velocity in the Galactic Standard of
Rest ($v_{GSR}$), and the spectrograph with which each target
was observed and on what date. For most stars in Table 1 we give
two velocities: The first is from the medium resolution
spectroscopic campaign described above (\S3.1), which has typical
velocity uncertainties of about 5-15 km s$^{-1}$; these are the
velocities that were used in the selection of the present
spectroscopic samples and that are shown in Figure 2. The second
$v_{GSR}$ values were derived from the new echelle resolution
spectra by cross-correlating the echelle order that we use for the
chemical analyses (presented below) against that same order for
several radial velocity standard stars taken from the Astronomical
Almanac. The estimated velocity errors for the echelle data are
1.6 km s$^{-1}$ for the MIKE spectra, 0.6 km s$^{-1}$ for the KPNO
spectra, and 0.2 km s$^{-1}$ for the SARG spectra. As may be seen,
the echelle and medium resolution velocities track each other
well, with a dispersion in their difference of 7.3 km s$^{-1}$,
which is consistent with the uncertainties in the medium
resolution spectra. In the case of the Sgr core stars we only have
velocities derived from the new, echelle spectra. Table 1 also
gives the $S/N$ of each spectrum; these ranged from $\sim$40-190
for the Mayall, $\sim$110-390 for the TNG and $\sim$35-120 for the
MIKE data. The $S/N$ was determined using the total photoelectron
count level at 7490\AA.
\section{Iron Abundance Analysis}
\subsection{Data Reduction and Equivalent Width Measurements}
To convert our 2-D echelle images into fully calibrated 1-D
spectra we used the basic echelle spectra reduction routines in
the Image Reduction and Analysis Facility (IRAF).\footnote{IRAF is
distributed by the National Optical Astronomy Observatories.} This
process included overscan and bias correction, scattered light
subtraction, flattening of the spectra by division of normalized
quartz lamp exposures, extraction of the echelle orders,
wavelength calibration using paired exposures of either a thorium
(SARG spectra) or a thorium-argon discharge tube (KPNO and MIKE
spectra) taken at the same telescope position as each target
observation, and spectrum continuum fitting.
For the present analysis we focused on eleven unblended
\ion{Fe}{1} lines (listed in Table 2) found in a particular part
of the spectrum previously explored by Smith \& Lambert (1985,
1986, 1990 --- hereafter ``S\&L") in their spectroscopic exploration of M
giants (see Section 4.3). We used the IRAF task splot to measure
interactively the equivalent widths (EWs) of these lines, which
typically spanned one echelle order.
Because three different instruments (with three different
resolutions --- see examples of spectra from each instrument in
Fig.\ 3) were used to collect the spectra, the possibility that
the equivalent widths might suffer from significant systematic
differences was investigated.
In Figure 4 we compare the measured EWs of \ion{Fe}{1}
lines in very high $S/N$ spectra of Arcturus (the one star we
have observed with all three systems) taken with each of the SARG, KPNO
and MIKE spectrographs. The equivalent widths
for the three different spectrographs agree reasonably well.
Only slight offsets of EW(Mayall)$-$EW(SARG)=$11.0\pm10.7$ m\AA\ and
EW(MIKE)$-$EW(SARG)= $4.9\pm3.8$ m\AA\ were found; because of the
sizes of the uncertainties on these offsets compared to their
measured values, we elected not to apply any corrections
between spectrographs. However, if real, the level of these offsets in terms
of an [Fe/H] value is +0.09 dex and +0.04 dex, respectively, offsets about
the size of the estimated random [Fe/H] errors (see below).
The final measured EWs of the \ion{Fe}{1} lines for each of the
Sgr spectra are given in Table 3. We also include there
the EW's measured for Arcturus from spectra taken on the three
different instruments used to make Figure 4, as well
as for several standard stars we analyze next.
\subsection{Determining the Effective Temperatures, Surface Gravities, and Iron Abundances}
A detailed abundance analysis from spectra requires as input
parameters the stellar effective temperature, $T_{\rm eff}$,
surface gravity (usually parameterized as $\log g$), and
metallicity. The first parameter, $T_{\rm eff}$, has been
determined using the dereddened 2MASS ($J-K_{\rm s}$) colors and
the Houdashelt et al. (2000) color-temperature
calibration.\footnote{Houdashelt et al. (2000) work in the CIT
near-infrared filter system, whereas our Sgr star photometry is in
the 2MASS system. We adopted the Carpenter (2001) transformation
equations to convert the 2MASS colors to the CIT system.} In the
following analysis, the effective temperature is used in
combination with stellar isochrones (Girardi et al. 2000; Demarque
et al. 2004, hereafter Y$^2$) to constrain the stellar surface
gravity.
For a given population age and metallicity, a single
isochrone defines a nearly unique curve in a
$T_{\rm eff}$-$\log g$ plane, so that a given effective
temperature defines a $\log g$ value. Red giants can
either be first ascent red giant branch
(RGB) stars or asymptotic giant branch (AGB) stars and these
two separate phases of stellar evolution define
slightly different $T_{\rm eff}$-$\log g$ tracks. However, the
$\log g$ differences for a given $T_{\rm eff}$ are quite
small in older stellar populations. This
is particularly true for red giants with M-star
temperatures ($T_{\rm eff}$$\le$ 4000K), where the
RGB and AGB almost coincide in the $T_{\rm eff}$-$\log g$
diagram (and where differences between the RGB and AGB are
measured in hundredths of a dex in $\log g$).
In principle then, the effective temperature in an old red giant
defines its $\log g$. The two other primary
variables that define the $T_{\rm eff}$--$\log g$ curve are age and
metallicity. All of the potential Sgr populations are ``old'',
which here translates to ages greater than about 3 Gyr. For a
specific metallicity, the difference between a 3 Gyr and a 10 Gyr
isochrone in a $T_{\rm eff}$-$\log g$ plane is not large (about 0.1
dex in $\log g$ at $T_{\rm eff}$=3800 K). This is due to the small
difference in mass between a 3 Gyr red giant
($M\sim1.4$ M$_{\odot}$) and a 10 Gyr one ($M\sim1.0$ M$_{\odot}$).
Once a population is older than a few Gyr, the exact age becomes
relatively unimportant in defining $\log g$. Metallicity, on the other
hand, does have a significant effect on the derived $\log g$ for a
given effective temperature in an old red giant. This effect is
incorporated into the abundance analysis here via an iterative
scheme matching the isochrone used to define $\log g$ to the iron
abundance then derived with that particular isochrone. Sample
Fe I lines are used along with the photometric $T_{\rm eff}$ and an
initial estimate of $\log g$
from an isochrone of a given metallicity to derive [Fe/H]. If
this value of [Fe/H] does not match the adopted isochrone metallicity,
a new isochrone is selected and
the process is repeated until there is agreement between isochrone
and derived spectroscopic stellar metallicity.
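Schematically, the iteration has the structure of a simple fixed-point
loop. The sketch below is ours; the two helper functions are
illustrative stand-ins with invented linear forms, not the actual
MOOG/Kurucz/isochrone machinery, and serve only to make the control
flow runnable:
\begin{verbatim}
def logg_from_isochrone(teff, feh):
    # Stand-in for reading log g off an old-population isochrone
    # (Girardi et al. 2000 or Y^2 in the real analysis) at the
    # photometric Teff; the linear form is purely illustrative.
    return 1.0 + 1.5e-3*(teff - 3800.0) + 0.4*feh

def feh_from_lines(teff, logg):
    # Stand-in for the MOOG LTE analysis of the Fe I equivalent
    # widths with a Kurucz model atmosphere of this Teff / log g.
    return -0.6 + 0.1*(logg - 1.0)

def iterate_feh(teff, feh=-0.4, tol=1e-3, itmax=50):
    for _ in range(itmax):
        logg = logg_from_isochrone(teff, feh)  # isochrone gravity
        feh_new = feh_from_lines(teff, logg)   # spectroscopic [Fe/H]
        if abs(feh_new - feh) < tol:           # model and derived
            return feh_new, logg               #   metallicities agree
        feh = feh_new
    raise RuntimeError("no convergence")

print(iterate_feh(3800.0))
\end{verbatim}
In this toy version the loop converges in a few steps; the real
iteration is likewise run until the isochrone and spectroscopic
metallicities agree.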
The Fe I lines used to determine the iron abundance and final
isochrone metallicity (and thus the final $\log g$) are listed in
Table 2, along with the excitation potentials and $gf$-values. The
Fe I $gf$-values in Table 2 were determined by measuring these Fe
I equivalent widths in the solar flux atlas of Kurucz et al.
(1984) and varying the $gf$-values for each line in order to match
the solar iron abundance of $A$(Fe)=7.45 (Asplund, Grevesse, \&
Sauval 2005). The analysis here used the LTE code MOOG (Sneden
1973) combined with a Kurucz ATLAS9 (1994) solar model, with
$T_{\rm eff}$=5777 K, $\log g$= 4.438, and a microturbulent
velocity, $\xi$=1.0 km s$^{-1}$.
A comparison of the Fe I $gf$-values derived in this way
with those given for these same lines in
the Kurucz (1995) line list yields a difference of $\Delta \log gf$=
+0.14$\pm$0.15. This is a small offset between these two
$gf$-value scales, with a small dispersion comparable
to the measured line-to-line variations found when the program
stars were analyzed.
The model atmospheres adopted in the analysis were generated by
interpolation from the Kurucz (1994) grids.\footnote{From
http://kurucz.harvard.edu/grids.html.}
In our iterative scheme, we also must
assume an initial metallicity for the model atmosphere.
Both this and the isochrone used to estimate $\log{g}$ are
iterated until the derived iron abundance of the stars
agrees with the metallicity of the model atmosphere, and
the metallicity of the adopted isochrone.
\subsection{An Analysis of Nearby ``Standard'' M Giants}
The abundance analysis method described in the previous section
can be tested on nearby, well-studied M giants that have physical
properties that bracket approximately those of the program Sgr
stream red giants. Included in the observed dataset for this
program are three nearby M giants ($\beta$ And, $\rho$ Per, and
$\beta$ Peg) that were analyzed in a series of papers by S\&L.
S\&L focussed their studies on a narrow spectral window, near
$\lambda$7440-7590\AA\, for abundance determinations in M, MS, and
S stars. This region is quite free from significant TiO
blanketing down to temperatures of about $T_{\rm eff}$=3200-3300K
in giant stars, which allows for a straightforward abundance
analysis. Smith \& Lambert exploited this fact to explore
nucleosynthesis in cool red giants on both the RGB and AGB. The
same spectral region is used in this study for the Sgr stream M
giants and the three bright M giants that were analyzed by S\&L
are analyzed here using the techniques described in Section 4.2.
Along with the $\beta$ And, $\rho$ Per, and $\beta$ Peg standard stars
we include $\alpha$ Tau, the K5III giant used by S\&L as their
standard star.
As a first comparison of the spectra collected here with those
from S\&L, eleven Fe I lines, common to both studies, were
measured in the three M giants and the mean difference in
equivalent widths is found to be EW(this study)$-$EW(S\&L)
$=-6\pm$7 m\AA. This small offset is not significant and the
scatter is about what is expected given the overall
signal-to-noise levels and spectral dispersions. Spectra from
this study and those from S\&L are of comparable $S/N$ and
resolution and have expected equivalent-width uncertainties of
about $\pm$5m\AA. Differences between the two sets of
measurements would then be expected to scatter around
5$\times$(2)$^{1/2}$ or $\pm$7m\AA --- i.e., close to what is
found.
Stellar parameters were derived for $\alpha$ Tau, $\beta$ And,
$\rho$ Per, and $\beta$ Peg using first a method similar to that
used by S\&L, followed by the method used here for the Sgr stream
stars (\S 4.2) to see how these different methods compare in
deriving $T_{\rm eff}$, $\log g$, and [Fe/H]. S\&L used ($V-K$)
colors to define $T_{\rm eff}$, while they set the luminosity
based on the Wilson (1976) calibration of the strength of the Ca
II K-line with absolute visual magnitude ($M_{\rm V}$). Given
luminosity and effective temperature, S\&L then compared these
observed values to stellar-model mass tracks to set $\log g$ via
the relation $g \propto M\,T_{\rm eff}^{4}/L$.
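In logarithmic form this is the familiar relation (with $\log g_\odot
= 4.438$ and $T_{{\rm eff},\odot} = 5777$ K as in \S4.2)
\begin{equation}
\log g = \log g_\odot + \log\frac{M}{M_\odot} +
4\log\frac{T_{\rm eff}}{T_{{\rm eff},\odot}} -
\log\frac{L}{L_\odot} \; ,
\end{equation}
so that at fixed $T_{\rm eff}$ and $L$ the derived gravity tracks the
adopted stellar mass directly.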
One significant difference between this particular S\&L procedure
and our modified use of it here concerns the estimate of the
luminosities. The S\&L studies predate the availability of
Hipparcos parallaxes, which are now well-measured for the four red
giants under consideration. Table 4 lists the Hipparcos parallaxes
for $\alpha$ Tau, $\beta$ And, $\rho$ Per, and $\beta$ Peg, as
well as the resulting distances (and their respective
uncertainties). These distances then provide the absolute $V$-
and $K$-magnitudes also listed in the table (with the distance
uncertainties considered). Both $V$ and $K$ bolometric
corrections were applied to determine $M_{\rm bol}$ in Table 4,
with the respective corrections differing by less than 0.05 in
magnitude. Finally, effective temperatures from both a ($V-K$)
calibration (Bessell et al. 1998) and the ($J-K$) calibration from
Houdashelt et al. (2000) are listed in Table 4.\footnote{In this
case, the near infrared colors for the bright stars are in the
Johnson system, and we converted to the Houdashelt et al. (2000)
CIT system using the transformation equations in Bessell \& Brett
(1988).}
Stellar luminosities for the four standard red giants are
calculated by adopting $M_{\rm bol}$=4.74 for the Sun and the values
of $\log$($L$/L$_{\odot}$) versus the mean $T_{\rm eff}$ (i.e.
the average of the two determinations in the previous paragraph)
are plotted in the
two panels of Figure 5. Also plotted in this figure are stellar
model tracks from the Padua grid
\footnote{http://pleiadi.pd.astro.it} for masses of $M=$1.0, 1.5,
and 2.0M$_{\odot}$. The top panel shows models with near-solar
metallicity ($Z=$0.019), while the bottom panel has models with
[M/H]$\simeq-0.4$ ($Z$=0.008). This figure illustrates the effect
that metallicity has on estimates of the gravity. At lower
metallicities the model tracks indicate a lower mass for a given
measured $T_{\rm eff}$ and $\log L$. This effect is quantified in
Table 5 where $T_{\rm eff}$ and $\log L$/L$_{\odot}$ are listed,
along with the estimated mass and resultant $\log g$ for the two
model metallicities plotted in Figure 5.
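For reference, the tabulated luminosities follow from the standard
conversion between bolometric magnitude and luminosity,
\begin{equation}
\log (L/L_\odot) = 0.4\,(M_{{\rm bol},\odot} - M_{\rm bol}) =
0.4\,(4.74 - M_{\rm bol}) \; .
\end{equation}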
Given the effective temperatures and model mass (and thus $\log
g$) as a function of metallicity, the \ion{Fe}{1}
equivalent-widths are used in an abundance analysis to achieve
final agreement between derived [Fe/H] and model metallicity. In
the line analysis the microturbulent velocity is set by the
requirement that the derived Fe abundance be independent of the Fe
I equivalent width for the different lines. The derived values of
$\log g$, microturbulence ($\xi$) and [Fe/H] are listed in Table
5. These values of $\log g$ can be referred to as ``Hipparcos
gravities'' because they are set by the mass, which is derived
from the luminosity, which is derived from the distance, which is
derived from the Hipparcos parallaxes. This analysis is very
similar to that used by S\&L, differing only in that S\&L used
\ion{Ca}{2} K-line absolute magnitudes to establish a luminosity
while here the Hipparcos parallaxes are used to get a distance and
therefore a luminosity.
With the basic red giant parameters now defined for the bright
giant stars via the standard Fe-abundance analysis, the new
analysis technique (\S4.2) used in this paper for the candidate
Sgr stream red giants can be checked for differences when also
applied to these same bright giant stars. Recall that with Sgr
stream stars there is no reliable distance estimate available to
establish luminosity; rather, the effective temperature is used in
combination with the Fe abundance to establish surface gravity via
isochrone tracks. Moreover, for the new analysis the $T_{\rm
eff}$ are derived only from ($J-K$) colors (rather than from both
$J-K$ and $V-K$ colors) due to the larger effects of uncertain
reddening on optical colors and also the fact that we don't have
$V-K$ colors for the Sgr stream giants. Finally, we apply several
different isochrone ages as well as two separate families of
isochrones (Girardi et al. 2000 versus $Y^2$) in the
characterization of the standard red giants to test the
sensitivity of the new technique to these variables. The results
of the ``new" analysis applied to the bright giants are tabulated
in Table 6.
The Table 6 results show, first, that the derived surface gravities
are rather insensitive both to the adopted set of isochrones and to
the variation from 1.0 Gyr to 2.5 Gyr
isochrones. We have already mentioned (\S4.2) that there is only a
$\log g$ difference of 0.1 between a 3 and a 10 Gyr isochrone of
the same metallicity; the 1 and 2.5 Gyr isochrones here are
intended to explore ages more appropriate to disk-like giants like
our standard stars, but we note that there is only a $\Delta
\log{g}$ difference of 0.05 between a 2.5 and a 5 Gyr isochrone of
the same metallicity. Moreover, a comparison between the Table 6
gravities and abundances and those derived from the more standard
analysis reveals no large differences. Figure 6 provides a
graphical comparison of the surface gravities (top panel) and
[Fe/H] (bottom panel) derived from the two techniques, and shows
their close correlation.
This comparative analysis of the four red
giants with well-established, fundamental stellar parameters
demonstrates that the analysis technique used for the candidate
Sgr Stream red giants is sound, and yields reliable stellar
parameters and Fe abundances.
\subsection{Final Results}
Table 7 gives the results of the \S4.2 abundance analysis applied
to our Sgr stars. For each star, the columns give the derived effective temperature
using the Houdashelt et al. (2000) color-temperature relation applied to
the 2MASS $(J-K_s)_o$ color, and the derived values of
the surface gravity ($\log{g}$), microturbulence and [Fe/H]. In the case
of the surface gravities, any entry given as ``0.0(-)" means that our
iterative procedure was converging on a model atmosphere with
$\log{g} < 0$,
whereas the Kurucz (1994) model atmosphere grids do not go below
$\log{g} = 0$. In these cases, we have adopted the $\log{g} = 0$
atmosphere.
The final column in Table 7 represents the standard deviation in
the line abundance determinations. In principle, from the adopted
model atmosphere and each EW we get a measure of the abundance.
With multiple EWs from different Fe I lines, MOOG calculates the
standard deviation of the resulting abundances. The typical
standard deviations are about 0.1 dex. Combined with the
instrument-to-instrument offsets discussed in \S4.1 and shown in
Figure 4 as well as other potential offsets, such as those shown
in Figure 6, we estimate the full [Fe/H] errors, systematic and
random combined, to be no more than $\sim0.2$ dex.
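As a rough consistency check on this bound (an illustrative quadrature
estimate, not a formal error budget): adding the $\sim$0.1 dex
line-to-line scatter in quadrature with the possible +0.09 and +0.04
dex instrumental offsets of \S4.1 gives
$\sqrt{0.10^2+0.09^2+0.04^2}\approx0.14$ dex, comfortably below the
adopted $\sim$0.2 dex ceiling.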
\section{Metallicity Distribution Functions}
\subsection{The Sagittarius Core}
Figures 7 and 8 summarize the MDFs determined for the three groups
of Sgr core/leading arm samples studied here (Figure 7 shows the
distributions with the same absolute vertical scale, Figure 8
shows the distributions with the same normalized, fractional MDF
scale in each panel).
For the Sgr core, data for our six stars (Figs.\ 7a and 8a; shown
by the open histogram) have been combined with previous echelle
data for 14 K giants by Smecker-Hane \& McWilliam (2002) and for
15 M giants by Monaco et al.\ (2005). The precisions in the
metallicities quoted for each of these studies is 0.07 and 0.20
dex, respectively, similar to our results here. The combined MDF
from these data shows the very broad distribution previously
reported for Sgr (see \S2), with a peak near [Fe/H]=-0.3 but a
very long, metal-weak tail. The new MIKE spectra we collected
contribute two stars near [Fe/H] = -1 but the other four lie in
the metal-rich end of the distribution, and include one star we
determine to have solar [Fe/H]. We consider this star to be a bona
fide member of Sgr because of its chemical peculiarities (in
particular, its Ti, Y, and La abundances, which are like other Sgr
stars of similar metallicity, as we shall show elsewhere --- Chou
et al., in preparation).
\subsection{Leading Arm North}
Panels (b) in Figures 7 and 8 present the MDF for all stars we
selected to be members of the Sgr leading arm in the Northern
Hemisphere. As may be seen, while broad like the MDF of the Sgr
core, the distribution ``of leading arm north" stars is, on the
whole, more metal poor than the Sgr core, with a median near -0.7
dex.
As discussed in \S3, this particular sample is
the most vulnerable to potential contamination by Milky Way disk M giants.
However, several arguments can be made that this contamination is probably
small, and, even if there is some contamination, it has little effect on the overall
conclusions of the present study:
(1) First, we can compare the MDFs of subsamples of ``leading arm
north" stars, divided into the ``best" (generally farther) and
``less certain" (generally closer) Sgr stream groups discussed in
\S3. Figure 9 makes this comparison, and shows that there is
little difference in the overall character of the two MDFs. The
two subsamples have the same median [Fe/H] and similar tails to the
metal-rich end. The difference in the mean metallicities of the
two samples, -0.72 and -0.64 dex, respectively, is much smaller
than the MDF dispersions (0.31 and 0.33 dex, respectively).
(2) The majority of the stars in the Leading Arm North sample are more metal-poor
than the mean metallicity of the Sgr core, so that their true distances, and
hence their separations from the Milky Way disk, are even larger than initially projected based on
the Paper I photometric parallaxes that assumed a Sgr core RGB color-magnitude relation.
For the ``best" subsample, the implied minimum distances are generally 10 kpc or more,
well above the Galactic disk.
(3) The median metallicity of the Galactic thick disk, the Milky Way component most
likely to contribute contaminants, is well known to be about -0.7 dex (whereas
the thin disk would contribute more metal rich stars in general, if at all). Thus, we might expect
the probability distribution of Milky Way contaminants to look very similar to the
distribution we actually see, and therefore have little impact on the true MDF.
(4) As we shall show elsewhere (Chou et al., in preparation), the
abundance patterns (e.g., the combinations of [Fe/H], [Ti/Fe],
[Y/Fe], [La/Y]) of all but a few of the stars in the leading arm
north sample (and indeed in our entire survey) are quite unlike
those of Milky Way stars, but very much resemble the patterns seen
in dSph stars, including Sgr (Bonifacio et al.\ 2000; Fulbright
2002; Smecker-Hane \& McWilliam 2002; Shetrone et al.\ 2003; Venn
et al.\ 2004; Geisler et al.\ 2005; Monaco et al.\ 2005).
(5) The leading arm stars were
pre-selected to be in the Sgr stream and to follow the expected
velocity trends for Sgr debris. No M giant tidal
debris from any other satellite is found to
intersect the Sgr stream.\footnote{While the Monoceros stream {\it
does} also contain M giant stars (Rocha-Pinto et al. 2003, Crane
et al. 2003), these lie outside of the Galactic disk along the
Galactic plane and not near the samples we have selected here. We
shall show in \S 6 that the NGC moving group M giants are also
likely to be from Sgr.} Because the bulk of the
halo M giants are found to be contributed from the Sgr system and we are
probing the general orbital plane of the Sgr system and well away from the Galactic
disk for the most part, it is logical to
conclude that our leading arm samples (both north and south)
are indeed dominated by members from the Sgr dSph.
Thus, we expect the relative contamination of our leading arm north sample by Milky Way
stars to be small. While at this point it is true that we cannot be assured
that {\it every star} in any of our samples, or any one particular star within them,
is definitely a member of the Sgr stream, a few contaminants will have little effect on the
general conclusions of this paper, which are based on {\it mean trends} in the
Sgr MDF. In this regard, it is sufficient that {\it most} of the stars are Sgr stream members
and to recognize that the Leading Arm North MDF differs significantly from that
of the Sgr core.
\subsection{Leading Arm South}
The Leading Arm South sample (Figures 7c and 8c) shows an even
more metal-poor MDF than either the Sgr core or the Leading Arm
North samples. With regard to contamination by the Milky Way
disk, things are even more secure for this sample than for the
Leading Arm North: Not only are these stars even farther away from
the disk according to the original projected distances from Paper
I (and even more so if their projected distances are corrected
for their newly discovered low metallicity), but they have an MDF
even more unlike the Milky Way disk. The median metallicity of
around -1.1 dex, the lack of stars more metal rich than
[Fe/H]=-0.7, the relatively small [Fe/H] dispersion in this
sample, and unusual chemical abundance patterns found in these
stars (Chou et al., in preparation) all argue against the notion
of significant contamination of this group of stars by the thick
disk.
\subsection{Evolution in the Sagittarius MDF}
Comparison of the Sgr core MDF with those at the two points in its
leading arm we explored here (Figs.\ 7b/8b and 7c/8c) reveals
substantial evolution in the Sgr MDF with position. While all
three points of the Sgr system sampled contain stars from a
metal-poor population with [Fe/H] $<-1$, the relative proportion
of these stars increases with separation from the Sgr core. The
latter shows a dominant metal-rich population peaked at [Fe/H]
$\sim-0.3$, whereas the median metallicity declines from
$\sim-0.4$ dex in the core to $\sim-0.7$ dex in the leading arm
north of the Sun and $\sim-1.1$ dex south of the Sun, which
represents debris lost from the Sgr core some 3.5 orbits
($\sim2.5-3$ Gyr) ago (Paper IV).
While the Figure 7c/8c MDF has only one star with [Fe/H]$>-0.95$,
because we are color-selecting {\it M giants} our samples tend to
be biased against finding {\it metal-poor giants} (which are bluer
and earlier in spectral type). Thus the significant, $-0.7$ dex
median metallicity gradient shown in Figures 7 and 8 may actually
{\it underestimate} the true gradient of what already appears to
be a substantial MDF variation along the Sgr stream.\footnote{A
possible selection effect that would bias the survey in the
opposite direction might arise from the fact that metal-poor
giants tend to be brighter at a given color, and therefore
possibly more likely to be observed. We believe that this is less
likely to be affecting our results based on the fact that there
are no significant differences between the MDFs of the two
subsamples of Leading Arm North stars divided primarily into two,
large apparent magnitude bins ($4.8\lesssim K_{s,o} \lesssim 7.5$
and $7.5 \lesssim K_{s,o} \lesssim 9.7$) shown in Figure 9.} We
address the implications of this gradient in \S7.
\section{Evidence for Sgr Trailing Arm in the North}
In the course of our ongoing, medium resolution radial velocity
survey of Sgr M giants (e.g., Majewski et al.\ 2004) we identified
a subsample of M giants lying among leading arm stars at the NGC,
but having the {\it opposite} of the velocity expected for {\it falling}
leading arm debris there (see $v_{\rm gsr}>0$ black points near
$\Lambda_{\sun}$ = 260$\arcdeg$ in Fig.\ 12 of Paper IV). Because
of their {\it apparent} proximity to the Sun (solid blue points,
Figs.\ 1 and 2), the origin of these stars has been puzzling.
Thirteen of these peculiar velocity M giants with median
$\Lambda_{\sun}$=265$\arcdeg$ were targeted with the Mayall 4-m
and TNG SARG echelle spectrographs on the same observing runs and
to the same approximate $S/N$ as the NGC leading arm stars (\S3).
The relatively low metallicities of these $v_{\rm gsr}>0$ stars
(Figs.\ 7d/8d) indicate that the initial Paper I photometric
distances for these stars (based on an assumed [Fe/H]$\sim-0.4$;
Fig.\ 1) were underestimated by a mean factor of $\sim1.5$, based
on the color-magnitude sequences presented in Ivanov \& Borissova
(2002). Adjusting the distances for correct metallicities ---
minding the $v_{\rm gsr}$ of these stars and recognizing that the
models were not well constrained for {\it old} debris --- we find
reasonable consistency of these stars with the Sgr {\it trailing
arm} towards the NGC (see Fig.\ 12 of Paper IV).
Detailed abundance analysis supports this conclusion. The MDF of
these positive $v_{\rm gsr}$ stars (Figs.\ 7d/8d) fits the general
trend with Sgr mass loss epoch established by the leading arm data
(Figs.\ 7a-c/8a-c); as may be seen by comparing the mass loss
epoch sequences of the leading and trailing arms in the Paper IV
model (colored points in its Figure 1), stars in our leading arm south
sample and in the NGC sample, if it is indeed old trailing arm
debris, were torn from Sgr at approximately the same time. Thus it
is compelling that the MDFs in Figures 7c/8c and 7d/8d look very
similar to one another. In addition, this NGC moving group is
found to have similarly peculiar Ti, Y, and
La abundance trends as stars in the Sgr leading arm (Chou
et al., in preparation), further supporting the idea of a common
origin with these latter stars.
If trailing arm stars are found toward the NGC, this establishes with
certainty that the Sgr debris tracks at least 3 orbits (2.5-2.75
Gyr) of mass loss (Paper IV); because of much stronger phase
mixing of debris in the leading arm, this fact is not well
established by the apparent {\it length} of the Sgr leading arm
(although previous evidence that it may exist has been offered by
Mart{\'i}nez-Delgado et al. 2004). Moreover, including the MDF in
Figures 7d/8d in the overall sequence shown in Figures 7a-c/8a-c,
lends further support to the overall notion that there is a
significant MDF variation along the Sgr stream.
\section{Discussion}
Because Sgr is reputed to have enriched to near solar metallicity
by at least a few Gyr ago (Lanfranchi \& Matteucci 2004;
Bellazzini et al. 2006; Siegel et al. 2007), the observed MDF
variation over the past 3.5 orbits (2.5-3 Gyr) of mass loss cannot
be due to an intrinsic variation of the instantaneous mean
metallicity of the Sgr system with time. Rather, it must point to
the shedding of successive layers within the satellite over which
there must have been an intrinsic MDF {\it gradient} (see also
Mart{\'i}nez-Delgado et al. 2004). However, the $> 0.7$ dex median
metallicity variation in the debris lost over a 2.5-3 Gyr
timescale is quite large and suggests the loss of stars over a
significant radius in the system. For comparison, the strongest
[Fe/H] gradient observed in the Sculptor dSph is about 0.5 dex
over about $0\fdg2$ ($\sim 275$ pc), which is about 15\% of the
apparent Sculptor tidal radius; however, this same 0.5 dex change
also represents the {\it entire} variation seen across the
$\sim75\%$ of the Sculptor tidal radius studied in detail so far
(Tolstoy et al.\ 2004). Sculptor seems to have among the strongest
net internal metallicity gradients among Milky Way dSphs (though
some M31 dSphs may have larger gradients; Harbeck et al.\ 2001);
for comparison, the now well-studied Carina dSph exhibits only a
$-0.2$ dex gradient from its core to its tidal radius (Koch et
al.\ 2006). Moreover, no large metallicity gradient seems to
exist within the main body of Sgr now: Alard (2001) identified
only a $-0.2$ dex variation in mean metallicity from the Sgr core
to $7\fdg5$ down the major axis. While the position of the current
tidal radius in Sgr is still uncertain, Paper I argues that it is
likely to be only $\sim$ 3-4$\arcdeg$ (or Sgr would be too massive
to produce its observed dynamically cold tails); thus the Alard
observation likely pertains to the beginning of the metallicity
gradient {\it within the debris tail}. Therefore, we must
conclude either (1) the destruction of Sgr over the past several
Gyr has been fine-tuned to mass shedding from a narrow progenitor
radial range over which there was an extraordinarily strong [Fe/H]
gradient for a dSph, or, (2) more likely, Sgr experienced a quite
rapid change in its binding energy over the past several Gyr,
which has decreased the tidal boundary of the satellite across a
broader radial range over which there would have still been a
large net metallicity variation, but a shallower and more typical
{\it gradient}.\footnote{Support for significant Sgr mass loss
over its past $\sim3$ orbits is that about half of the Sgr M
giants in the corresponding tails lie $30\arcdeg$ beyond the Sgr
center (Paper I).} Such a catastrophic change of state happening
so relatively recently (1/5 the Hubble time) points to a dramatic
event affecting Sgr's life several Gyr ago, perhaps a transition
to its current, destructive orbit.
Figures 7 and 8 not only provide the first direct evidence that
the satellites of today may {\it not} well represent the stars
they lost to the halo, but also that this effect can be considerable.
If tidal mass loss is typical among other dSph systems, as seems
to be the case (e.g., Mu\~{n}oz et al.\ 2006a, 2007; Sohn et al.
2007), it might explain such puzzles as why: (1) the detailed
chemical abundances (e.g., [$\alpha$/Fe] vs. [Fe/H]) of satellites
today appear to differ from those observed in the halo field to
which they should contribute (e.g., Font et al.\ 2006), (2) a
system like the Carina dSph, which exhibits clear signs of tidal
disruption, presently holds a much larger fraction of
intermediate-age than old stars (Majewski et al.\ 2000,
2002), and (3) there remains a G dwarf problem in dSph systems
(e.g., Koch et al.\ 2006; Helmi et al.\ 2006). Such mass loss
shaping of the MDF prompts caution in attempting to interpret the
chemical evolution and star formation history of a dSph based on
stars left in its core (e.g., Tolstoy et al.\ 2003; Lanfranchi \&
Matteucci 2004).
To demonstrate this point, we approximate the total MDF of the Sgr
core several Gyr ($\sim$3.5 orbits) ago using two methods to
account for stars now in the tidal streams produced over that
time. In the first method (Fig.\ 10, blue lines), the normalized
MDFs in Figs.\ 8a-c represent their respective median {\it
Galactocentric} orbital longitudes and each leading arm star (as
identified in Fig.\ 11 of Paper I) is assigned a
longitude-interpolated version of these different MDFs. Regions
obscured by the Galactic plane or overlapping trailing arm are
``filled in" by reflecting the numbers of stars in the
corresponding part of the trailing arm as seen from the Galactic
Center (in the case of the first $50\arcdeg$ of leading arm) or by
extrapolating the observed stream density (for the farthest
175-300$\arcdeg$ of leading arm -- i.e. that part starting in the
solar neighborhood). In the second method (Fig.\ 10, red lines) we
use the Sgr disruption model for an oblate Milky Way halo from
Fig.\ 1 of Paper IV and assign the normalized MDFs in Figs.\ 8a, b
and c to leading arm model stars lost on the last 0.5 orbit (i.e.,
since last apogalacticon; yellow-colored debris in Fig.\ 1 of
Paper IV), 1.5-2.5 orbits ago (cyan-colored debris) and 2.5-3.5
orbits ago (green-colored debris) respectively, while for debris
lost 0.5-1.5 orbits ago (magenta-colored debris) we use the
average of Figures 8a and b. The model provides the relative
numbers of stars in each Sgr population (bound and unbound). Both
``Sgr-progenitor" MDFs generated are relatively flat, exhibiting a
much higher representation of metal-poor stars than presently in
the Sgr core. These regenerated MDFs are, of course, necessarily
schematic, because (1) the [Fe/H] spread of the net MDFs is
limited by the input MDFs, (2) an M giant-based survey is
biased {\it against} finding metal-poor stars, and (3) Sgr stars
with [Fe/H]$\sim -2$ have already been reported (see \S1;
ironically, the most metal poor stars shown in Fig.\ 7 are
contributed by the input MDF of the Sgr {\it core}, which includes
bluer giants as well as a larger overall sample of stars that
allows a higher chance of drawing stars from a low probability,
metal-poor wing in the distribution). But Figure 10 illustrates
how critically the observed MDFs of satellite galaxies may depend
on their mass loss/tidal stripping history.
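To make the bookkeeping of the first method concrete, a schematic
version of the longitude-interpolation step might look as follows
(illustrative code with placeholder MDFs and hypothetical median
longitudes, not the actual calculation behind Fig.\ 10):
\begin{verbatim}
import numpy as np

# Normalized stand-in MDFs on a common [Fe/H] grid for the three
# leading arm samples (schematic versions of Figs. 8a-c) and
# hypothetical median Galactocentric orbital longitudes (deg).
feh = np.linspace(-2.0, 0.5, 26)
mdf_ref = np.array([np.exp(-0.5 * ((feh - m) / 0.35) ** 2)
                    for m in (-1.1, -0.7, -0.4)])
mdf_ref /= mdf_ref.sum(axis=1, keepdims=True)
lon_ref = np.array([30.0, 90.0, 150.0])   # placeholder medians

def progenitor_mdf(star_lons):
    """Co-add a longitude-interpolated MDF for every leading arm star."""
    total = np.zeros_like(feh)
    for lon in star_lons:
        w = np.array([np.interp(lon, lon_ref, mdf_ref[:, j])
                      for j in range(feh.size)])
        total += w / w.sum()        # each star contributes unit weight
    return total / total.sum()

# e.g. 500 stars spread along the leading arm:
mdf0 = progenitor_mdf(np.random.default_rng(0).uniform(20.0, 160.0, 500))
\end{verbatim}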
We have discussed {\it integrated} MDFs as a function of position
in the Sgr system, but it is likely that, like other dwarf
galaxies, Sgr has had a variable star formation history including
possible ``bursts" (Layden \& Sarajedini 2000; Siegel et al.
2007), and that these produced populations with different, but
overlapping radial density profiles in the progenitor satellite.
The MDF gradients described here may relate more to differences in
the relative proportion of distinct populations than a smooth
variation in mean metallicity from a more continuous star
formation history. ``Distinct" Sgr populations are suggested by
the multiple peaks and general character of the Figure 7 MDFs (and
even more strongly by stream position variations of the abundances
of other elements, like lanthanum; Chou et al., in prep.). Earlier
suggestions of multiple Sgr populations include Alard (2001),
Dohm-Palmer et al.\ (2001), Smecker-Hane \& McWilliam (2002),
Bonifacio et al.\ (2004), and Monaco et al.\ (2005). Greater
resolution of the initial Sgr stellar populations, their former
radial distributions, and the Sgr enrichment history will come
from further scrutiny of its tidal debris, particularly along the
{\it trailing arm}. As shown in Figure 1 of Paper IV, leading arm
stars lost on different orbits (i.e., shed from different radial
``layers") significantly overlap in orbital phase position; this
``fuzzes out" the time (i.e. initial satellite radius) resolution.
In contrast, the dynamics of the longer trailing arm yields much
better energy sorting of the debris, and stars stripped at
specific epochs can be more cleanly isolated. In addition, study
of the trailing arm will allow much better separation of the Sgr
debris from potential Milky Way disk M giant contamination.
The abundance gradients found here imply that the estimated
photometric distances for {\it many} M giant stars along the Sgr
tidal arms have been systematically underestimated in Paper I,
where photometric parallaxes were derived using the
color-magnitude relation of the Sgr core. The best-fitting Sgr
destruction models of Paper IV should now be refined to account
for this variation (as well as an updated distance for the Sgr
core itself --- e.g., Siegel et al. 2007). Proper spectroscopic
parallax distances will necessarily require assessment of both
[Fe/H] and [$\alpha$/Fe] to determine absolute magnitudes. We
undertake this task elsewhere.
\acknowledgements We gratefully acknowledge support by NSF grant
AST-0307851, NASA/JPL contract 1228235 and the David and Lucile
Packard Foundation as well as Frank Levinson through the
Peninsular Community Foundation. VVS and KC also thank support
from the NSF via grant AST-0307534 and AURA, Inc. through
GF-1006-00. D.G. gratefully acknowledges support from the Chilean
{\sl Centro de Astrof\'\i sica} FONDAP No. 15010003. We thank
Jon Fulbright for kindly providing the MIKE spectrum of Arcturus.
Parts of this paper were written while SRM, KC and VVS
participated in the ``Deconstructing the Local Group" workshop
held at the Aspen Center for Physics in June 2006. Finally,
we appreciate helpful comments from the anonymous referee.
\section*{Abstract}
The spectrum of mutations in a collection of cancer genomes can be described by a mixture of a few mutational signatures. The mutational signatures can be found using non-negative matrix factorization (NMF). To extract the mutational signatures we have to assume a distribution for the observed mutational counts and a number of mutational signatures. In most applications, the mutational counts are assumed to be Poisson distributed, but they are often overdispersed, and thus the Negative Binomial distribution is more appropriate. We demonstrate using a simulation study that Negative Binomial NMF requires fewer signatures than Poisson NMF to fit the data and we propose a Negative Binomial NMF with a patient specific overdispersion parameter to capture the variation across patients. We also introduce a robust model selection procedure inspired by cross-validation to determine the number of signatures. Furthermore, we study the influence of the distributional assumption in relation to two classical model selection procedures: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the presence of overdispersion we show that our model selection procedure is more robust at determining the correct number of signatures than state-of-the-art methods, which overestimate the number of signatures. We apply our proposed analysis to a wide range of simulated data and to a data set from breast cancer patients. The code for our algorithms and analysis is available in the R package SigMoS and can be found at \url{https://github.com/MartaPelizzola/SigMoS}.
\textbf{Keywords:} cancer genomics, cross-validation, model checking, model selection, mutational signatures, Negative Binomial, non-negative matrix factorization, Poisson.
\textbf{AMS classification: 92-08}, 92-10, 62-08
\section{Introduction}
Somatic mutations occur relatively often in the human genome and are mostly neutral. However, the accumulation of some mutation types in a genome can have harmful consequences. More specifically, the accumulation of somatic mutations observed in a tumor is called a mutational profile and can often be associated with factors such as aging \citep{Risques2018}, UV light \citep{Shibai2017} or tobacco smoking \citep{Alexandrov2016}. A mutational profile is thus a mixture of mutational processes that are represented through mutational signatures. Several signatures have been identified from the mutational profiles and associated with different cancer types \citep{Alexandrov2020,Tate2019}.
A common strategy to derive the mutational signatures is non-negative matrix factorization \citep{alexandrov2013breastcancer, Nik-Zainal2012, Lal2021}. Non-negative matrix factorization (NMF) is a factorization of a given data matrix $V \in \mathbb{N}_0^{N \times M}$ into the product of two non-negative matrices $W \in \mathbb{R}_+^{N \times K}$ and $H \in \mathbb{R}_+^{K \times M}$ such that
\[V \approx WH.\]
The rank $K$ of the lower-dimensional matrices $W$ and $H$ is much smaller than $N$ and $M$.
In cancer genomics, the data matrix $V$ contains the mutational counts for different patients, also referred to as mutational profiles. The number of rows $N$ is the number of patients and the number of columns $M$ is the number of different mutation types. In this paper, $M = 96$, corresponding to the 6 base mutations (when assuming strand symmetry) times the 4 flanking nucleotides on each side (i.e.\ $4 \cdot 6 \cdot 4 = 96$). The matrix $H$ consists of $K$ mutational signatures defined by probability vectors over the different mutation types. In the matrix $W$, each row contains the weights of the signatures for the corresponding patient. In this context, the weights are usually referred to as the exposures of the different signatures. To estimate $W$ and $H$ we need to choose a model for the data $V$. For mutational counts the usual assumption is the Poisson distribution \citep{alexandrov2013breastcancer}
\begin{align} \label{eq:poisson}
V_{nm} \sim \text{Po}((WH)_{nm}),
\end{align}
where $W$ and $H$ are estimated using the algorithm from \cite{Lee1999} that minimizes the generalized Kullback-Leibler divergence. The algorithm is equivalent to maximum likelihood estimation, as the negative log-likelihood function for the Poisson model is equal to the generalized Kullback-Leibler divergence up to an additive constant.
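For concreteness, a minimal sketch of these multiplicative Kullback-Leibler updates (our own illustration of the \cite{Lee1999} algorithm) is:
\begin{verbatim}
import numpy as np

def poisson_nmf(V, K, n_iter=2000, eps=1e-10, seed=0):
    """Multiplicative updates minimizing the generalized KL divergence."""
    rng = np.random.default_rng(seed)
    N, M = V.shape
    W = rng.uniform(size=(N, K))
    H = rng.uniform(size=(K, M))
    for _ in range(n_iter):
        W *= ((V / (W @ H + eps)) @ H.T) / H.sum(axis=1)
        H *= (W.T @ (V / (W @ H + eps))) / W.sum(axis=0)[:, None]
    return W, H
\end{verbatim}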
We suggest using a model where the mutational counts follow a Negative Binomial distribution that has an additional parameter that allows for overdispersion in the data. The Negative Binomial NMF (NB-NMF) is discussed in \cite{Gouvert2020}, where it is applied to recommender systems, and it has recently been used in the context of cancer mutations in \cite{Lyu2020}. They apply a supervised NB-NMF model, which uses cancer types as metadata, to mutational counts from different cancers. Their aim is to obtain signatures with a clear etiology, which could be used to classify different cancer types.
We investigate when and why NB-NMF is more suitable for mutational counts than the usual Poisson NMF (Po-NMF). In particular, we suggest using a residual-based approach to investigate goodness of fit for a given data set. We consider mutational count data and we extend the NB-NMF model by including patient specific overdispersion. The extended model is referred to as NB$_\text{N}$-NMF, where $N$ is the number of overdispersion parameters. As also noted in \cite{Fevotte2009}, we believe that a great amount of research has focused on improving the performance of NMF algorithms given an underlying model, while less attention has been directed to the choice of the underlying model given the data and application.
Additionally, we propose a novel model selection framework to choose the number of signatures. We use both simulated and real data to validate our proposed model selection procedure against other methods. We show that our model selection procedure is more robust against inappropriate model assumptions and that the Negative Binomial model is more suited for mutational count data than the usual Po-NMF.
We have implemented our methods in the R package SigMoS that includes NB-NMF, NB$_N$-NMF and the model selection procedure. The R package is available at \url{https://github.com/MartaPelizzola/SigMoS}. The package also contains the simulated and real data used in this paper.
\section{Negative Binomial non-negative matrix factorization}\label{sec:NegBinNMF}
In this section we first argue why the Negative Binomial model in \cite{Gouvert2020} is a natural model for the number of somatic mutations in a cancer patient. Then we describe our patient specific Negative Binomial non-negative matrix factorization NB$_\text{N}$-NMF and the corresponding estimation procedure.
\subsection{Negative Binomial model for mutational counts}\label{subsec:negbin}
We start by illustrating the equivalence of the Negative Binomial to the more natural Beta-Binomial model as a motivation for this model choice. Assume a certain mutation type can occur in $\tau$ triplets along the genome with a probability $p$. Then it is natural to model the mutational counts with a binomial distribution \citep{Weinhold2014, Lochovsky2015}
\begin{equation} \label{eq:betadist}
V_{nm} \sim \text{Bin}(\tau,p).
\end{equation}
However, \cite{Lawrence2013} observed that the probability of a mutation varies along the genome and is correlated with both expression levels and DNA replication timing. We therefore introduce the Beta-Binomial model
\begin{align} \label{eq:betabinom}
\begin{split}
V_{nm}|&p \sim \text{Bin}( \tau, p) \\
&p \sim \text{Beta}(\alpha, \beta),
\end{split}
\end{align}
where the beta prior on $p$ models the heterogeneity of the probability of a mutation for the different mutation types due to the high variance along the genome. As $p$ follows a Beta distribution, its expected value is $\mathbb{E}[p] = \nicefrac{\alpha}{(\alpha + \beta)}$.
For mutational counts, the number of triplets $\tau$ is extremely large and the probability of mutation $p$ is very small. In the data described in \cite{Lawrence2013} there are typically between 1 and 10 mutations per megabase with an average of 4 mutations per megabase ($\tau \approx 10^6$). This means $\mathbb{E}[p] = \nicefrac{\alpha}{(\alpha + \beta)} \approx 4 \cdot 10^{-6}$ and thus, for mutational counts in cancer genomes we have that $\beta \gg \alpha$.
As $\tau$ is large and $p$ is small, the Binomial model is very well approximated by the Poisson model
$\text{Bin}(\tau, p) \backsimeq \text{Pois}(\tau p)$.
This distributional equivalence of Poisson and Binomial when $\tau$ is large and $p$ is small is well known. This also means that the models \eqref{eq:poisson} and \eqref{eq:betadist} are approximately equivalent, with $\tau p = (WH)_{nm}$.
The Beta and Gamma distributions are also approximately equivalent in our setting. Indeed, as $\beta \gg \alpha$, the Beta density can be approximated by the Gamma density in the following way
\begin{align*}
\frac{p^{\alpha - 1}(1-p)^{\beta - 1}}{B(\alpha, \beta)} = \frac{p^{\alpha - 1}}{\Gamma(\alpha)} (\beta+\alpha-1)(\beta+\alpha-2) \cdots \beta \, (1-p)^{\beta - 1} \approx \frac{p^{\alpha - 1}}{\Gamma(\alpha)} \beta^\alpha (e^{-p})^{\beta}.
\end{align*}
Therefore, for mutational counts the model in \eqref{eq:betabinom} is equivalent to
\begin{align} \label{eq:gammapoisson}
\begin{split}
V_{nm}|&p \sim \text{Pois}( \tau p) \\
&p \sim \text{Gamma}(\alpha, \beta).
\end{split}
\end{align}
Since the Negative Binomial model is a Gamma-Poisson model we can also write the model as
\[ V_{nm} \sim \text{NB} \left(\alpha, \frac{\tau }{\beta + \tau}\right) \backsimeq \text{NB} \left(\alpha, \frac{ \tau \mathbb{E}[ p]}{\alpha + \tau \mathbb{E}[p]}\right) \backsimeq \text{NB}\left(\alpha, \frac{(WH)_{nm}}{\alpha + (WH)_{nm}} \right),\]
where the last parametrization is equivalent to the one in \cite{Gouvert2020}. In the first distributional equivalence we use $\mathbb{E}[p] \approx \frac{\alpha}{\beta}$ and in the second we use $\tau \mathbb{E}[ p] = (WH)_{nm}$. Compared to the Beta-Binomial model, the Negative Binomial model has one fewer parameter and is analytically more tractable. The mean and variance of this model are given by
\begin{align} \label{eq:mean_var_nb}
\mathbb{E}[V_{nm}] = (WH)_{nm} \quad \text{and} \quad \text{Var}(V_{nm}) = (WH)_{nm} \left(1 + \frac{(WH)_{nm}}{\alpha}\right).
\end{align}
In recent years, this model has become more popular for modeling the dispersion in mutational counts \citep{Martincorena2017,Zhang2020}. When $\alpha \rightarrow \infty$ above, the Negative Binomial model converges to the more commonly used Poisson model as $\text{Var}(V_{nm}) \downarrow (WH)_{nm}$. As shown in this section, the Negative Binomial model can be seen both as an extension of the Poisson model and as equivalent to the Beta-Binomial model. Thus, we opted to implement an NB-NMF model for mutational count data. More details on the approximation of the Negative Binomial to the Beta-Binomial distribution can also be found in \cite{Teerapabolarn2015}.
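A quick numerical check of the moments in \eqref{eq:mean_var_nb}, sampling from the Gamma-Poisson construction above (illustrative code with arbitrary parameter values), is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
mu, alpha = 50.0, 10.0                  # arbitrary mean and dispersion
a = rng.gamma(shape=alpha, scale=1.0 / alpha, size=1_000_000)
v = rng.poisson(a * mu)                 # Gamma-Poisson = NB sampling
print(v.mean())   # close to mu = 50
print(v.var())    # close to mu * (1 + mu / alpha) = 300
\end{verbatim}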
\subsection{Patient specific Negative Binomial NMF: NB$_\text{N}$-NMF} \label{subsec:patientNBNMF}
\cite{Gouvert2020} and \cite{Lyu2020} present a Negative Binomial model where $\alpha$ is shared across all observations. We extend the model by allowing patient specific dispersion. We assume
\[V_{nm} \sim \text{NB}\left( \alpha_n, \frac{(WH)_{nm}}{\alpha_n + (WH)_{nm}}\right), \] where $n \in \{1, \dots, N\}$ indexes the patients. As for the estimation of $W$ and $H$ in \cite{Gouvert2020}, we define the following divergence measure:
\begin{align}
d_N(V||WH) & = \sum_{n=1}^N \sum_{m=1}^M \left \{ V_{nm} \log \left(\frac{V_{nm}}{ (WH)_{nm}}\right) - (\alpha_n + V_{nm}) \log \left(\frac{\alpha_n + V_{nm}}{\alpha_n + (WH)_{nm}} \right) \right \}. \label{eq:div}
\end{align}
In Section \ref{sec:methods} we show that the negative of the log-likelihood function is equal to this divergence up to an additive constant.
If $V = WH$, it is straightforward to see that $d_N(V||WH) = 0$, and when $V \neq WH$ we can show $d_N(V||WH)>0$ by defining $g(t) = (V_{nm}+t)\log \left( \nicefrac{ (V_{nm}+t)}{((WH)_{nm}+t) }\right)$ and showing that $g'(t) = \log u + 1 - u \leq 0$ with $u = \nicefrac{(V_{nm}+t)}{((WH)_{nm}+t)}$, so that each term $g(0) - g(\alpha_n)$ in the sum is non-negative.
In our application, we find maximum likelihood estimates (MLEs) of $\alpha_1, \dots , \alpha_N$ based on the Negative Binomial likelihood using Newton-Raphson together with the estimate of $WH$ from Po-NMF. We opted for this more precise estimation procedure for $\alpha_1, \dots , \alpha_N$ instead of the grid search approach used in \cite{Gouvert2020}. Final estimates of $W$ and $H$ are then found by minimizing the divergence in \eqref{eq:div} by the iterative majorize-minimization procedure (see the derivation in Section \ref{sec:methods}).
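Schematically, this per-patient estimation step can be written as follows (a sketch using a bounded scalar optimizer in place of our Newton-Raphson iteration; function names are illustrative and not from the SigMoS package):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import nbinom

def estimate_alphas(V, WH):
    """MLE of patient specific dispersions given the Po-NMF estimate WH."""
    def neg_loglik(alpha, v, mu):
        # scipy's nbinom(n, p) has mean n(1 - p)/p, so p = alpha/(alpha + mu)
        return -nbinom.logpmf(v, alpha, alpha / (alpha + mu)).sum()
    return np.array([minimize_scalar(neg_loglik, bounds=(1e-3, 1e6),
                                     args=(V[n], WH[n]),
                                     method="bounded").x
                     for n in range(V.shape[0])])
\end{verbatim}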
The NB$_\text{N}$-NMF procedure is described in Algorithm \ref{alg:nbnmf_alphan} and further details can be found in Section \ref{sec:methods_alphai}. The NB-NMF method is similar except $\alpha_1 = \cdots = \alpha_N = \alpha$.
\begin{algorithm}[h!]
\caption{NB$_\text{N}$-NMF: Estimation of $W$, $H$ and $\{ \alpha_1, \dots, \alpha_N \}$}\label{alg:nbnmf_alphan}
\begin{algorithmic}[1]
\Require{$V, K, \epsilon$}
\Ensure{$W$, $H$, $\{ \alpha_1, \dots, \alpha_N \}$ }
\State $\hat{W}, \hat{H} \gets$ apply Po-NMF to $V$ with $K$ signatures
\State $\alpha_1, \dots , \alpha_N \gets$ Negative Binomial MLE using $\hat{W}, \hat{H}$ and $V$
\State Initialize $W^1,H^1$ from a random uniform distribution
\For{$i = 1, 2, \dots$}
\State $W^{i+1}_{nk} \gets W^i_{nk} \frac{\sum_{m=1}^M \frac{V_{nm}}{(W^iH^i)_{nm}} H^i_{km}}{\sum_{m=1}^M \frac{V_{nm} + \alpha_n}{(W^iH^i)_{nm} + \alpha_n} H^i_{km}}$
\linespread{2.7}\selectfont
\State $H^{i+1}_{km} \gets H_{km}^i \frac{\sum_{n=1}^N \frac{V_{nm}}{(W^{i+1}H^i)_{nm}} W^{i+1}_{nk}}{\sum_{n=1}^N \frac{V_{nm} + \alpha_n}{(W^{i+1}H^i)_{nm} + \alpha_n} W^{i+1}_{nk}}$
\linespread{2.7}\selectfont
\If{$|d_N(V||W^{i+1}H^{i+1}) - d_N(V||W^{i}H^{i}) | < \epsilon$}
\State \Return $ W, H \gets W^{i+1}, H^{i+1}$
\linespread{1}\selectfont
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
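In vectorized form, one sweep of the update steps in Algorithm \ref{alg:nbnmf_alphan} amounts to the following (our illustrative NumPy sketch):
\begin{verbatim}
import numpy as np

def nbn_update(V, W, H, alphas, eps=1e-10):
    """One multiplicative update of W and H with patient specific alphas."""
    a = alphas[:, None]                  # shape (N, 1), broadcasts over m
    WH = W @ H + eps
    W = W * (((V / WH) @ H.T) / (((V + a) / (WH + a)) @ H.T))
    WH = W @ H + eps                     # the H update uses the new W
    H = H * ((W.T @ (V / WH)) / (W.T @ ((V + a) / (WH + a))))
    return W, H
\end{verbatim}
Iterating this sweep until the divergence \eqref{eq:div} stabilizes reproduces the stopping rule of Algorithm \ref{alg:nbnmf_alphan}.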
\section{Estimating the number of signatures} \label{sec:methods_CV}
Estimating the number of signatures is a difficult problem when using NMF. More generally, estimating the number of components for mixture models or the number of clusters is a well known challenge in applied statistics.
Examples of the complexity of this problem can be found in the $K$-means clustering algorithm and in Gaussian mixture models, where the number of clusters $K$ has to be provided to the methods. The silhouette and the elbow method are among the most common techniques to estimate $K$ for $K$-means clustering; however, it is often unclear how to find an exact estimate of $K$. A detailed description of these challenges can be found in \cite{Gupta2018}. Here, the authors also propose a new way of estimating the number of clusters that follows the same rationale as the elbow method, but it combines the detection of optimal well-separated clusters and clusters with equal number of elements. The discrepancy between these two solutions is then used to determine $K$.
Estimating the number of components is also a critical issue for mixed membership models. One example can be found in the estimation of the number of subpopulations in population genetics. Population structure is indeed modeled as a mixture model of $K$ subpopulations and the inference of $K$ is challenging. In \cite{Pritchard2000} an ad hoc solution is proposed under the assumption that the posterior distribution follows a normal distribution, which is often violated in practice. \cite{Verity2016} take a different approach and derive a new estimator using thermodynamic integration based on the ``power posterior'' distribution. This is nothing more than the ordinary posterior distribution, but with the likelihood raised to a power to ensure that the distribution integrates to 1. This procedure seems to be very accurate; however, it is computationally intensive and thus can only be used on small data sets.
Classical procedures to perform model selection are the Akaike Information Criterion (AIC)
\begin{align}\label{eq:AIC}
\text{AIC} = -2\ln L + 2n_{prm}
\end{align}
and the Bayesian Information Criterion (BIC)
\begin{align}\label{eq:BIC}
\text{BIC} = -2\ln L + \ln(n_{obs}) n_{prm}
\end{align}
where $\ln L$ is the estimated log-likelihood value, $n_{obs}$ is the number of observations and $n_{prm}$ the number of parameters to be estimated. The two criteria attempt to balance the fit to the data (measured by $-2\ln L$) and the complexity of the model (measured by the scaled number of free parameters). We have $n_{obs} = N$ where $N$ is the number of patients, so $\ln(n_{obs}) > 2$ if $N \geq 8$, which means that in our context the number of parameters has a higher influence for BIC compared to AIC because real data sets always have at least tens of patients. Additionally, the structure of the data matrix $V$ can lead to two different strategies for choosing $n_{obs}$ when BIC is used. Indeed, the number of observations in this context can be set as the total number of counts (i.e.\, $N \cdot M$) or as the number of patients $N$, leading to an ambiguity in the definition of this criterion. \cite{Verity2016} also presents results on the performance of AIC and BIC, where the power is especially low for BIC. AIC provides higher stability in the scenario from \cite{Verity2016}, however it does not seem suitable in our situation due to a small penalty term.
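In code, the two criteria, and the two possible choices of $n_{obs}$ for BIC, amount to no more than the following (our illustration; for a rank-$K$ NMF one may count $n_{prm} = K(N+M)$ free parameters):
\begin{verbatim}
import numpy as np

def aic(loglik, n_prm):
    return -2.0 * loglik + 2.0 * n_prm

def bic(loglik, n_prm, n_obs):
    # n_obs can be taken as the number of patients N or as N * M,
    # which is exactly the ambiguity discussed above
    return -2.0 * loglik + np.log(n_obs) * n_prm
\end{verbatim}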
A very popular model selection procedure is cross-validation. In \cite{Gelman2013} the authors compare various model selection methods, including AIC and cross-validation, and recommend using cross-validation, as they demonstrate that the other methods fail in some circumstances. \cite{Luo2017} also show that cross-validation performs better than the other considered methods, including AIC and BIC.
\subsection{Model selection for NMF}
For NMF we propose an approach for estimating the rank which is highly inspired by cross-validation. As for classical cross-validation, we split the patients in $V$ into a training and a test set multiple times. However, unlike classical cross-validation, we use information from the full data to validate the fit of the test set. As all the parameters in the model are free parameters, we have chosen to fix the exposures to the ones estimated from the full data. This means our evaluation on the test set is a combination of estimated signatures from the training set and exposures from the full data. The idea is to exploit the fact that the signature matrix should be robust to changes in the patients included in the training set. If the estimated signatures truly explain the main patterns in the data, then we expect the signatures obtained from the training set to be equivalent to the ones from the full data. Therefore the product of the exposures from the full data and the signatures from the training set should give a good approximation of the test set, if $K$ is appropriate.
Inputs for the method are the data $V$, an NMF procedure, the number of signatures to be considered $K$, the number of splits into training and test $J$ and the $cost$ function. We evaluate the model for a range of values of $K$ and then select the model with the lowest cost. The NMF procedures we are using here are either Po-NMF from \cite{Lee1999}, NB-NMF or NB$_N$-NMF in Algorithm \ref{alg:nbnmf_alphan}, but any NMF procedure could be applied.
A visualization of our model selection algorithm can be found in Figure \ref{fig:cv_algorithm}. First, we consider the full data matrix $V$ and we apply the chosen NMF algorithm to obtain an estimate for both $W$ and $H$. Afterwards, for each iteration we sample $90\%$ of the patients randomly to create the training set and use the remaining $10\%$ as our test set. We then apply the chosen NMF to the mutational counts of the training set, obtaining estimates $W_{train}$ and $H_{train}$.
\begin{figure}[h]
\centering
\includegraphics[width =0.99\textwidth]{Figures/CrossValidationTikzFigure_Train_Test.pdf}
\vspace{-1em}
\caption{Model selection procedure for a given number of signatures $K$ and a count matrix $V$. Pseudocode can be found in Algorithm \ref{alg:crossval}.}
\label{fig:cv_algorithm}
\end{figure}
Now, as for classical cross-validation, we want to evaluate our model on the test set. To do so, we also use the full data: we multiply the exposures of the test set patients, estimated from the full data, by the corresponding signatures estimated from the training set. This prediction of the test data is used to evaluate the model by computing the distance between the observed counts and the prediction with a suitable $cost$ function. This procedure is iterated $J$ times, leading to $J$ cost values $c_j$, $j=1, \dots, J$. The median of these values is calculated for each number of signatures $K$. We call this procedure SigMoS and summarize it in Algorithm~\ref{alg:crossval}. The optimal $K$ is the one with the lowest cost. We use the generalized Kullback-Leibler divergence as a cost function and discuss the choice of cost function in Section~\ref{sec:discussion}. We compare the influence of the model choice on our procedure with that on AIC and BIC. We also compare to other recently introduced methods in the literature and present the results from this comparison in Section \ref{sec:results_simulations}.
\mbox{}
\begin{algorithm}[h]
\caption{SigMoS: Cost for a given number of signatures $K$ for the count matrix $V$}\label{alg:crossval}
\begin{algorithmic}[1]
\Require{$V, K, J, cost,$ NMF-distribution}
\Ensure{$c_{median}$}
\State $\hat{W}, \hat{H} \gets$ apply NMF with the chosen distribution to $V$ with $K$ signatures
\For{$j = 1$ to $J$}
\State $V_{train} \gets $ mutational counts for the patients in the $j^{th}$ training set
\State $V_{test} \gets V \setminus V_{train}$
\State $\hat{W}_{test} \gets $ exposures from $\hat{W}$ for the patients in the test set
\State $\hat{W}_{train}, \hat{H}_{train} \gets$ apply NMF with the chosen distribution to $V_{train}$ with $K$ signatures
\State $c_j \gets cost(V_{test},\hat{W}_{test}\hat{H}_{train})$
\EndFor
\State \Return $c_{median} \gets median(c_1, \dots, c_J)$
\end{algorithmic}
\end{algorithm}
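A compact sketch of Algorithm \ref{alg:crossval}, with the generalized Kullback-Leibler divergence as $cost$, could read as follows (illustrative code, not the SigMoS package itself; in particular it ignores the matching of signatures between the full-data and training-set factorizations that a careful implementation would address):
\begin{verbatim}
import numpy as np

def gkl(V, WH, eps=1e-10):
    """Generalized Kullback-Leibler divergence d(V || WH)."""
    return np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH)

def sigmos_cost(V, K, nmf_fit, J=100, train_frac=0.9, seed=0):
    """Median out-of-sample cost for K signatures; nmf_fit(V, K) -> (W, H)."""
    rng = np.random.default_rng(seed)
    W_full, _ = nmf_fit(V, K)              # exposures from the full data
    costs = []
    for _ in range(J):
        idx = rng.permutation(V.shape[0])
        n_tr = int(train_frac * V.shape[0])
        train, test = idx[:n_tr], idx[n_tr:]
        _, H_train = nmf_fit(V[train], K)  # signatures from the training set
        costs.append(gkl(V[test], W_full[test] @ H_train))
    return np.median(costs)
\end{verbatim}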
\section{Results} \label{sec:results_intro}
In this section we describe our results on both simulated and real data. For simulated data we present a study on Negative Binomial data simulated with different levels of dispersion, where results from AIC, BIC, \texttt{SparseSignatures}\xspace \citep{Lal2021}, \texttt{SigneR}\xspace \citep{Rosales2017}, and \texttt{SignatureAnalyzer}\xspace \citep{Tan2013} are compared with our proposed model selection procedure. These results are discussed in Section \ref{sec:results_simulations}. We show that our method performs well and is robust to model misspecification. We use a residual analysis to evaluate the goodness of fit and a likelihood ratio test to choose between NB-NMF and NB$_\text{N}$-NMF. Furthermore, we apply this analysis to a data set with 21 breast cancer patients from \cite{alexandrov2013breastcancer} in Section \ref{sec:results_BRCA}.
\subsection{Simulation results} \label{sec:results_simulations}
We simulate our data following the procedure of \cite{Lal2021}. We consider the set of signatures from \cite{Tate2019} and, for each simulation run, we use signatures 1 and 5 from \cite{Tate2019}, as they have been shown to be shared across all cancer types, and we sample at random three additional signatures from this set. We simulate the exposures from a Negative Binomial model with mean $6000$ and dispersion parameter $1.5$ \citep{Lal2021}. The mutational count data is then generated as the product of the exposures and signature matrix. Lastly, Poisson noise, Negative Binomial noise with dispersion parameter $\alpha \in \{10, 200\}$ or patient specific dispersion parameters $\{\alpha_1, \dots , \alpha_N\}$ are added to the mutational counts. The values of the patient specific dispersion are inspired by the data set in Section \ref{sec:results_BRCA}. A lower $\alpha$ is associated with higher dispersion; however, the actual level of dispersion associated with a given $\alpha$ value depends on the absolute mutational counts, as can be seen from the variance in Equation \eqref{eq:mean_var_nb}. Therefore it is not possible to directly compare these values with the ones estimated for the real data. Using this setup, we simulated 100 data sets with five signatures and 100 patients each.
\begin{figure}[h]
\centering
\includegraphics[width = \textwidth]{Figures/Fig1_Simulations_WithTruePoisson_Transposed.png} \vspace{-2em}
\caption{Results from AIC, BIC, and SigMoS based on Po-NMF (top), NB-NMF (middle) and NB$_N$-NMF (bottom) using simulated data. Each method is applied to 100 simulated data sets for each value of $\alpha$ and four different scenarios are simulated: Poisson, and Negative Binomial dispersion with $\alpha = 10$, $\alpha = 200$ and $\alpha \sim U(10,150)$. The true number of signatures in the simulated data is five and marked in bold. For each data set, the number of signatures is estimated with AIC, BIC, and SigMoS using Po-NMF, NB-NMF and NB$_N$-NMF. The $\alpha$ values are estimated by maximum likelihood estimation.}
\label{fig:simulationresults}
\end{figure}
Figure \ref{fig:simulationresults} shows the effect of the model assumption on the estimated number of signatures using AIC, BIC (recall Equations \eqref{eq:AIC} and \eqref{eq:BIC}) and SigMoS as model selection procedures. Using Po-NMF when the data are overdispersed leads to an overestimation of the number of signatures, as the model explains the overdispersion with additional signatures. The Negative Binomial model allows for more variability and thus a lower number of signatures can explain the data.
Correctly specifying the model proves to be essential for determining the optimal number of signatures for AIC and BIC, especially when the overdispersion is high ($\alpha = 10$). Nonetheless, SigMoS is robust even when the wrong model is chosen (i.e.\, when using Po-NMF). These results also illustrate that our model selection method is accurate for detecting the correct number of signatures in situations with some overdispersion (i.e.\, $\alpha = 200$) even when assuming the Poisson model. When the true distributional model is unknown, another possibility would be to use the NB$_\text{N}$-NMF on the data. Indeed, the patient specific NB$_\text{N}$-NMF is a generalization of both Po-NMF and NB-NMF and thus it is robust under any of these distributional assumptions (see bottom row in Figure \ref{fig:simulationresults}).
In this simulation study we also consider the accuracy of the MLE for the $\alpha$ value in the three scenarios. Our approach estimates the true $\alpha$ with high accuracy when the overdispersion is high (i.e.\, $\hat{\alpha} \in [9.21, 11.78]$ for $\alpha = 10$), whereas $\alpha$ is slightly overestimated when the overdispersion is low: for $\alpha = 200$ we find $\hat{\alpha} \in [225.8, 292.7]$.
However, according to Figure \ref{fig:simulationresults}, this does not affect the performance of our model selection procedure.
We also considered an identical set of simulations with 10 signatures: here, we observed that our proposed approach again estimates the number of signatures accurately and is still robust to model misspecification when comparing the results under the Poisson and the Negative Binomial model.
Several methods have been proposed in the literature for estimating the number of signatures in cancer data. In the following we present the results of a comparison between our method and three commonly used methods in the literature: \texttt{SparseSignatures}\xspace, \texttt{SignatureAnalyzer}\xspace, and \texttt{SigneR}\xspace. \texttt{SparseSignatures}\xspace \citep{Lal2021} provides an alternative cross-validation approach where the test set is defined by setting $1\%$ of the entries in the count matrix to $0$. Then NMF is iteratively applied to the modified count matrix and the entries are updated at each iteration. The resulting signature and exposure matrices are used to predict the entries of the matrix corresponding to the test set. \texttt{SignatureAnalyzer}\xspace \citep{Tan2013}, on the other hand, proposes a procedure where a Bayesian model is used and maximum a posteriori estimates are found with a majorize-minimization algorithm. Lastly, with \texttt{SigneR}\xspace \citep{Rosales2017} an empirical Bayesian approach based on BIC is used to estimate the mutational signatures.
For our method comparison, we run all methods on the simulated data from Figure \ref{fig:simulationresults}. For each method and simulation setup we only allow the number of signatures to vary from two to eight due to the long running time of some of these methods.
\begin{figure}[h]
\centering
\includegraphics[width = \textwidth]{Figures/methodComparison_v2.pdf} \vspace{-2em}
\caption{Method comparison using simulated data. Each method is applied to 100 simulated data sets and, for each data set, the value of the estimated number of signatures is kept. We test values for the number of signatures from two to eight and we simulate under four different scenarios: Poisson and Negative Binomial with $\alpha = 10, 200$, and a patient specific dispersion parameter.}
\label{fig:methodcomparison_simulations}
\end{figure}
Figure \ref{fig:methodcomparison_simulations} shows that, when Poisson data are simulated, all methods have a very good performance and can recover the true number of signatures in most of the simulations. The \texttt{SparseSignatures}\xspace method also has a good performance in this case, as it includes a background signature. We would therefore expect it to always estimate one signature more than the true number of signatures present in the data. When Negative Binomial noise is added to the simulated data with a moderate dispersion ($\alpha = 200$), however, both \texttt{SignatureAnalyzer}\xspace and \texttt{SigneR}\xspace have low power, emphasizing the importance of correctly specifying the distribution for these methods, whereas our proposed approach (regardless of the distributional assumption) and \texttt{SparseSignatures}\xspace maintain good power. For patient specific overdispersion, the power of \texttt{SparseSignatures}\xspace also decreases. Good performance is also achieved with our proposed approach under high overdispersion ($\alpha = 10$) if the correct distribution is assumed. These results demonstrate that SigMoS is accurate for detecting the correct number of signatures and that it also performs well in situations with overdispersion.
\subsection{Results on Breast Cancer Data} \label{sec:results_BRCA}
Here, we apply the workflow for mutational count analysis to a data set with 21 breast cancer patients \citep{alexandrov2013breastcancer}. In Figure \ref{fig:BRCA21} we present the results of this analysis.
\begin{figure}[h]
\centering
\includegraphics[width = \textwidth]{Figures/PlotEverything_v5.png}
\caption{SigMoS results for Po-NMF, NB-NMF, and NB$_\text{N}$-NMF applied to a data set with 21 breast cancer patients. The upper panel of the figure shows the median of 100 iterations of our proposed model selection approach and the confidence interval between the $25\%$ and $75\%$ quantiles. The two lower panels of the figure show the residual plots for the optimal number of signatures from SigMoS. The lines in the first plot correspond to two times the expected standard deviation under the chosen distributional assumption. As the NB$_\text{N}$-NMF involves 21 different expected variances, we have chosen to plot the median, minimum and maximum among the 21. The second row of plots shows the normalised residuals. The vertical grey lines depict the theoretical quantiles.}
\label{fig:BRCA21}
\end{figure}
We have applied SigMoS to choose the number of signatures for the NMF-methods: Po-NMF, NB-NMF and NB$_\text{N}$-NMF. SigMoS indicates three signatures for all methods. This is in line with the results in our simulation study, where we show that our model selection is robust to model misspecification. In Figure \ref{fig:BRCA21} we also present results from BIC. According to BIC, six signatures are needed for Po-NMF, whereas only three signatures should be used with NB$_\text{N}$-NMF, which emphasizes the importance of a correct model choice when using BIC. The confidence intervals for SigMoS also demonstrate that the results are more robust around the optimal number of signatures. When the number of signatures is too high, we see high variability in the model selection, which explains the fluctuations in the median.
After finding the number of signatures we report the corresponding residuals $R_{nm} = V_{nm} - (WH)_{nm}$. The residuals are plotted against the expected mean $(WH)_{nm}$, as the variance in both the Poisson and Negative Binomial model depends on this value. The colored lines in the residual plots correspond to $\pm 2\sigma$ for the Poisson and the Negative Binomial distribution, respectively. The variance $\sigma^2$ can be derived from Equation \eqref{eq:mean_var_nb} for the Negative Binomial model and is equal to the mean for the Poisson model.
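These diagnostics can be reproduced along the following lines (our illustration):
\begin{verbatim}
import numpy as np

def residual_bands(V, WH, alphas=None):
    """Raw and normalised residuals with +/- 2 sigma bands."""
    R = V - WH
    if alphas is None:                 # Poisson: variance equals the mean
        sigma = np.sqrt(WH)
    else:                              # NB: variance WH * (1 + WH / alpha)
        sigma = np.sqrt(WH * (1.0 + WH / alphas[:, None]))
    return R, 2.0 * sigma, R / sigma
\end{verbatim}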
Starting from the results of Po-NMF, we observe a clear overdispersion in the residuals, which suggests using a Negative Binomial model. In the first residual plot for the Negative Binomial model we see that the residuals have a much better fit to the variance structure, which is indicated by the colored lines. The quantile lines in the lower panel with normalised residuals show that the quantiles from the NB-NMF and the NB$_\text{N}$-NMF are much closer to the theoretical ones, suggesting again that the Negative Binomial model is better suited in this case.
We apply a likelihood ratio test to choose between NB-NMF ($\alpha = 65$) and NB$_\text{N}$-NMF ($\alpha_n \in [16, 26083]$) as follows:
\begin{align}\label{eq:LRT}
\text{LRT} = -2(\log L_{\text{NB}\text{-NMF}} - \log L_{\text{NB}_\text{N}\text{-NMF}}) = 166.68 \sim \chi^2(20)
\end{align}
and we obtain that NB$_\text{N}$-NMF has a better fit in this case ($p \approx 0$). We would therefore recommend using NB$_\text{N}$-NMF when analysing the 21 breast cancer patients. Indeed, for this data set there is a large difference in the dispersion parameters of the different patients, with one patient having a much lower dispersion ($\alpha = 26083$) than the others ($\alpha_n \in [16, 550]$), and thus NB$_\text{N}$-NMF is preferable.
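Numerically, the test reduces to a single survival-function evaluation, e.g.:
\begin{verbatim}
from scipy.stats import chi2

lrt = 166.68   # -2 * (logL_NB - logL_NBN) for the 21 patients
dof = 20       # 21 patient specific alphas versus 1 shared alpha
print(chi2.sf(lrt, dof))   # p-value, effectively zero
\end{verbatim}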
We applied the same workflow after removing the patient with $\alpha = 26083$ from the data; indeed, this patient can also be viewed as an outlier in this context \citep{Fischer2013}. In this case, we find that NB-NMF gives a better fit to the data ($p \approx 1$), suggesting that the patient specific component would be mainly driven by this one outlier. The choice between NB$_\text{N}$-NMF and NB-NMF would thus depend on the treatment of outliers in downstream analyses.
\section{Discussion} \label{sec:discussion}
Mutational profiles from cancer patients are a widely used source of information and NMF is often applied to these data in order to identify signatures associated with cancer types. We propose a new approach to perform the analysis and signature extraction from mutational count data where we emphasize the importance of validating the model using residual analysis, and we propose a robust model selection procedure.
We use the Negative Binomial model as an alternative to the commonly used Poisson model as the Negative Binomial can account for the high dispersion in the data. As a further extension of this model, we allow the Negative Binomial to have a patient specific variability component to account for heterogeneous variance across patients.
We propose a model selection approach for choosing the number of signatures. As we show in Section \ref{sec:results_simulations} this method works well with both Negative Binomial and Poisson data and it is a robust procedure for choosing the number of signatures. We note that the choice of the divergence measure for the $cost$ function in Algorithm \ref{alg:crossval} is not trivial and may favor one or the other model and thus a comparison of the costs between different NMF methods is not possible. For example, in our framework, we use the Kullback-Leibler divergence which would favor the Poisson model. This means that a direct comparison between the cost values for Po-NMF and NB-NMF or NB$_\text{N}$-NMF is not feasible.
To check the goodness of fit and choose between the Poisson model and the Negative Binomial model we propose to use the residuals and to choose between the classical Negative Binomial (NB-NMF) and the Negative Binomial with patient specific variability (NB$_\text{N}$-NMF) we use a likelihood ratio test.
We investigated the role of the cost function in our model selection by including the Frobenius norm and the Beta and Itakura-Saito (IS) \citep{Fevotte2011} divergence measures from \cite{Li2012} where the authors propose a fast implementation of the NMF algorithm with general Bregman divergence. In this investigation the cost function did not influence the optimal number of signatures. The only difference was how the cost values differed among the NMF methods, as each cost function favored the models differently. Therefore we chose to use the Kullback-Leibler divergence and compared the methods with the residual analysis.
Fewer signatures are found when accounting for overdispersion with the Negative Binomial model. Indeed, there is no need to have additional signatures explaining noise, which we assume is the case for the Poisson model. We show the Negative Binomial model is more suitable and therefore believe the corresponding signatures are more accurate. This can be helpful when working with mutational profiles, both for better associating signatures with cancer types and for a clearer interpretation of the signatures when analysing mutational count data. The workflow for analysing the data, and the procedures in Algorithms \ref{alg:nbnmf_alphan} and \ref{alg:crossval} are available in the R package SigMoS at \url{https://github.com/MartaPelizzola/SigMoS}.
\section{Methods}\label{sec:methods}
In Section \ref{sec:NegBinNMF} we describe the NB-NMF model applied to mutational count data and we propose an extension where a patient specific dispersion coefficient is used. The majorization-minimization (MM) procedure for patient specific overdispersion $\{ \alpha_1, \dots, \alpha_N \}$ can be found in Section \ref{subsec:patientNBNMF}. In our application, we propose to use negative binomial maximum likelihood estimation for $\alpha$ (see Section \ref{subsec:negbin}) and $\{ \alpha_n: 1 \leq n \leq N \}$ (see Section \ref{subsec:patientNBNMF}) instead of the grid search approach adopted in \cite{Gouvert2020}. The pseudocode shown in the initial steps of Algorithm \ref{alg:nbnmf_alphan} describes this approach for patient specific overdispersion. For shared overdispersion among all patients and mutation types we simply set $\alpha = \alpha_1 = \cdots = \alpha_N$ in Algorithm \ref{alg:nbnmf_alphan}.
\subsection{Patient specific NB$_\text{N}$-NMF}\label{sec:methods_alphai}
As we discuss in Section \ref{sec:results_BRCA}, the variability in mutational counts among different patients can be very large. Thus we extend the NB-NMF model from \cite{Gouvert2020} (see Section \ref{subsec:negbin}) by including a patient specific component (see Section \ref{subsec:patientNBNMF}). We noticed that the variability among different patients is usually much higher than that among different mutation types; thus we decided to focus on a patient specific NB$_\text{N}$-NMF here.
The entries in $V$ are modeled as
\[V_{nm} \sim \text{NB}\left( \alpha_n, \frac{(WH)_{nm}}{\alpha_n + (WH)_{nm}}\right), \]
where $\alpha_n$ is the dispersion coefficient of each patient, and the corresponding Gamma-Poisson hierarchical model can be rewritten as:
\begin{align} \label{eq:gamma_patient}
V_{nm}|a_{nm} \sim \text{Po}(a_{nm}(WH)_{nm}) \\ \nonumber
a_{nm} \sim \text{Gamma}(\alpha_n, \alpha_n).
\end{align}
Here $a_{nm}$ is the parameter responsible for the variability in the Negative Binomial model.
Now we can write the Negative Binomial log-likelihood function with patient specific $\alpha_n$
\begin{align} \label{eq:fulllik}
\ell(W,H;V) & = \sum_{n=1}^N \sum_{m=1}^M \log { \binom{\alpha_n + V_{nm} - 1}{V_{nm}}} + V_{nm} \log \left( { \frac{(WH)_{nm}}{\alpha_n + (WH)_{nm}}}\right) + \alpha_n \log \left( { 1 - \frac{(WH)_{nm}}{\alpha_n + (WH)_{nm}} } \right)
\end{align}
and recognize the negative of the log-likelihood function as proportional to the following divergence:
\begin{align}
d_N(V||WH) & = \sum_{n=1}^N \left \{ \sum_{m=1}^M V_{nm} \log \left(\frac{V_{nm}}{ (WH)_{nm}}\right) - (\alpha_n + V_{nm}) \log \left(\frac{\alpha_n + V_{nm}}{\alpha_n + (WH)_{nm}} \right) \right \}. \label{eq:div_methods}
\end{align}
The term $\log { \binom{\alpha_n + V_{nm} - 1}{V_{nm}}}$ in the likelihood does not depend on $(W,H)$ and can be removed, and we have then added the constants $V_{nm} \log (V_{nm})$, $\alpha_{n} \log (\alpha_{n})$ and $(V_{nm} + \alpha_n) \log (V_{nm} + \alpha_n)$.
Following the steps in \cite{Gouvert2020}, we will update $W$ and $H$ one at a time, while the other is assumed fixed. We will show the procedure for updating $H$ using a fixed $W$ and its previous value $H^t$. First we construct a majorizing function $G(H, H^t)$ for $d_N(V||WH)$ with the constraint that $G(H, H) = d_N(V||WH)$. The first term in Equation \eqref{eq:div_methods} can be majorized using Jensen's inequality leading to
\begin{align}
d_N(V||WH) & = \sum_{n=1}^N \sum_{m=1}^M V_{nm} \log \left(\frac{V_{nm}}{\sum_{k=1}^K W_{nk}H_{km}} \right) - (\alpha_n + V_{nm}) \log \left(\frac{\alpha_n + V_{nm}}{\alpha_n + \sum_{k=1}^K W_{nk}H_{km}} \right) \\ \nonumber
& \leq \sum_{n=1}^N \sum_{m=1}^M V_{nm} \log V_{nm} - V_{nm} \sum_{k=1}^K \beta_{k} \log \frac{W_{nk}H_{km}}{\beta_{k}} \\ \nonumber
& + (\alpha_n + V_{nm}) \log \left(\frac{\alpha_n + \sum_{k=1}^K W_{nk}H_{km}}{\alpha_n + V_{nm}} \right) \label{eq:jensen}
\end{align}
where $\beta_{k} = \nicefrac{W_{nk}H_{km}^t}{\sum_{k=1}^K W_{nk}H^t_{km}}$. The second term can be majorized with the tangent line using the concavity property of the logarithm:
\begin{align}
d_N(V||WH) & = \sum_{n=1}^N \sum_{m=1}^M V_{nm} \log V_{nm} - V_{nm} \sum_{k=1}^K \beta_{k} \log \frac{W_{nk}H_{km}}{\beta_{k}} \\ \nonumber
& + (\alpha_n + V_{nm}) \log \left(\frac{\alpha_n + \sum_{k=1}^K W_{nk}H_{km}}{\alpha_n + V_{nm}} \right) \\ \nonumber
& \leq \sum_{n=1}^N \sum_{m=1}^M V_{nm} \log V_{nm} - V_{nm} \sum_{k=1}^K \beta_{k} \log \frac{W_{nk}H_{km}}{\beta_{k}} \\ \nonumber
& + (\alpha_n + V_{nm}) \log \left(\frac{\alpha_n + (WH^t)_{nm}}{\alpha_n + V_{nm}} \right) + (\alpha_n + V_{nm}) \frac{\sum_{k=1}^K W_{nk}(H_{km} - H^t_{km})}{\alpha_n + (WH^t)_{nm}} = G(H, H^t).
\label{eq:tangent}
\end{align}
Lastly, we need to show that $G(H, H) = d_N(V||WH)$. This follows from
\begin{align}
G(H, H) = & \sum_{n=1}^N \sum_{m=1}^M V_{nm} \log V_{nm} - V_{nm} \sum_{k=1}^K \beta_{k} \log \frac{W_{nk}H_{km}}{\beta_{k}} \\ \nonumber
& + (\alpha_n + V_{nm}) \log \left(\frac{\alpha_n + (WH)_{nm}}{\alpha_n + V_{nm}} \right) + (\alpha_n + V_{nm}) \frac{\sum_{k=1}^K W_{nk}(H_{km} - H_{km})}{\alpha_n + (WH)_{nm}} \\ \nonumber
& = \sum_{n=1}^N \sum_{m=1}^M V_{nm} \log V_{nm} - V_{nm}\sum_{k=1}^K \frac{W_{nk}H_{km}}{\sum_{k=1}^K W_{nk}H_{km}} \log \frac{W_{nk}H_{km}}{\frac{W_{nk}H_{km}}{\sum_{k=1}^K W_{nk}H_{km}}} \\ \nonumber
& - (\alpha_n + V_{nm}) \log \left(\frac{\alpha_n + V_{nm}}{\alpha_n + \sum_{k=1}^K W_{nk}H_{km}} \right) \\ \nonumber
& = \sum_{n=1}^N \sum_{m=1}^M V_{nm} \log V_{nm} - V_{nm} \cdot 1 \cdot \log \left(\sum_{k=1}^K W_{nk}H_{km} \right) \\ \nonumber
&- (\alpha_n + V_{nm}) \log \left(\frac{\alpha_n + V_{nm}}{\alpha_n + \sum_{k=1}^K W_{nk}H_{km}} \right) \\ \nonumber
& = \sum_{n=1}^N \sum_{m=1}^M V_{nm} \log \left(\frac{V_{nm}}{\sum_{k=1}^K W_{nk}H_{km}}\right) - (\alpha_n + V_{nm}) \log \left(\frac{\alpha_n + V_{nm}}{\alpha_n + \sum_{k=1}^K W_{nk}H_{km}} \right) \\ \nonumber
& = d_N(V||WH)
\end{align}
Having defined the majorizing function $G(H, H^t)$ in \eqref{eq:tangent}, we can derive the following multiplicative update for $H$:
\begin{align}
H_{km}^{t+1} = H_{km}^t \frac{\sum_{n=1}^N \frac{V_{nm}}{(WH^t)_{nm}} W_{nk}}{\sum_{n=1}^N \frac{V_{nm} + \alpha_n}{(WH^t)_{nm} + \alpha_n} W_{nk}}. \label{eq:updateH_methods}
\end{align}
Similar calculations can be carried out for $W$ to obtain the following update:
\begin{align}
W^{t+1}_{nk} = W^t_{nk} \frac{\sum_{m=1}^M \frac{V_{nm}}{(W^tH)_{nm}} H_{km}}{\sum_{m=1}^M \frac{V_{nm} + \alpha_n}{(W^tH)_{nm} + \alpha_n} H_{km}}. \label{eq:updateW_methods}
\end{align}
It is straightforward to see that when $\alpha_n = \alpha$ for all $ n = 1, \dots, N$ then the updates for $W$ and $H$ equal those in \cite{Gouvert2020}. Additionally, as shown in \cite{Gouvert2020} when $\alpha \to \infty$ the updates of the Po-NMF \citep{Lee1999} are recovered.
The pseudo code in Algorithm \ref{alg:nbnmf_alphan} summarizes the NB$_\text{N}$-NMF model discussed in this section.
\subsection{Code for method comparison}
For \texttt{SparseSignatures}\xspace we use the function \texttt{nmfLassoCV} with \texttt{normalize\_counts} being set to FALSE and \texttt{lambda\_values\_alpha} and \texttt{lambda\_values\_beta} to zero. All the other parameters are set to their default values. When applying \texttt{SignatureAnalyzer}\xspace we used the following command \texttt{python SignatureAnalyzer-GPU.py --data f --prior\_on\_W L1 --prior\_on\_H L2 --output\_dir d --max\_iter 1000000 --tolerance 1e-7 --K0 8}. For \texttt{SigneR}\xspace we used the default options.
\section*{Acknowledgement}
We would like to thank Simon Opstrup Drue for helpful comments and suggestions on an earlier version of this manuscript.
MP acknowledges funding of the Austrian Science Fund (FWF Doctoral Program ``Vienna Graduate School of Population Genetics'', DK W1225-B20).
\bibliographystyle{apalike}
\section{Introduction}
Consider the free Hamiltonian as a self-adjoint operator acting on $L^2({\bf R}^n)$:
\begin{align*}
H_0 = p^2 - \sigma |x|^{\alpha},
\end{align*}
where $x= (x_1, x_2,...,x_n) \in {\bf R}^n$, $p = -i \nabla $, $p^2 = p \cdot p = - \Delta $, $\sigma > 0$ and $0 < \alpha < 2$.
The external potential $V$ is defined as follows.
\begin{Ass}\label{A1}
Let $V$ be a multiplication operator of the function $V\in C^{\infty} ({\bf R}^n)$ that satisfies the following decaying conditions: for $0 < \theta \leq \rho := 1- \alpha /2$, $|x| \gg 1$ and any multi-index $\beta$, there exist constants $C_{V,\beta} >0$ such that
\begin{align*}
\left| \partial ^{\beta} V (x) \right| \leq C_{V,\beta} \J{x}^{- \theta - |\beta|} ,
\end{align*}
where $\J{x} := (1 + |x|^2) ^{1/2}$. Moreover, there exist $0 < c_0 < C_0$ such that
\begin{align*}
c_0 \J{x}^{-\theta} \leq V(x) \leq C_0 \J{x}^{- \theta }
\end{align*}
or
\begin{align*}
c_0 \J{x}^{-\theta } \leq - V(x) \leq C_0 \J{x}^{- \theta }
\end{align*}
holds for $|x| \gg 1$.
\end{Ass}
Under this assumption, we define the perturbed Hamiltonian $H = H_0 +V$, which is also self-adjoint because $V$ is bounded. Then, a family of unitary operators can be defined as follows:
\begin{align*}
W(t) := e^{itH} e^{-itH_0}, \quad t \in {\bf R},
\end{align*}
owing to the self-adjointness of $H_0$ and $H$. If the external potential $V\in L^{\infty} ({\bf R}^n)$ satisfies
\begin{align}\label{0}
\left|
V(x)
\right| \leq C \J{x}^{- \rho - \varepsilon}
\end{align}
with $\varepsilon>0$, then the existence and completeness of wave operators
\begin{align*}
{W}^{\pm} := \mathrm{s-} \lim_{t \to \pm \infty} W(t)
\end{align*}
can be proven (see Bony-Carles-H\"{a}fner-Michel \cite{BCHM} and Itakura \cite{It2}). Hence, the external potential satisfying \eqref{0} can be considered short-range. Physically, the repulsive potentials $- \sigma |x|^{\alpha} $ accelerate the quantum particle, and the position $x(t)$ and the velocity $v(t)$ of the particle behave like $\CAL{O} (t^{1/ \rho})$ and $\CAL{O}(t^{1/\rho -1})$, respectively (see Section 1.3 of Itakura \cite{It3}). This acceleration phenomenon changes the threshold of the decay power of the external potential for which the wave operators exist. In the case where $\sigma = 0$, Dollard \cite{Do}, Jensen-Ozawa \cite{JO}, and others have determined that the threshold for the existence of the wave operators is $\rho =1$, and that the wave operators do not exist if $\rho \leq 1$. Subsequently, Ozawa \cite{O} considered the case where $\sigma \neq 0$ and $\alpha =1$, that is, a Stark Hamiltonian, and showed that its threshold is $\rho =1/2$. Ishida \cite{Is} considered the case where $\alpha =2$, showed that its threshold cannot be characterized by the polynomial decay of the external potential, and determined that $(\log (1+|x|))^{-1}$ is the threshold of the decay rate. Based on such studies, the threshold in our case is reasonably expected to be $\rho = 1- \alpha /2$. We prove that this expectation is correct with the following theorem.
\begin{thm}\label{T1}
Under Assumption \ref{A1}, the wave operators ${W}^{\pm}$ do not exist.
\end{thm}
The key estimate to demonstrate this theorem is the {\em strong propagation estimate} for $e^{-it H}$, which plays an important role in scattering theory. A well-known approach to obtain this estimate employs the conjugate operator $\SCR{A}$ such that the commutator on $\D{H} \cap \D{\SCR{A}}$ satisfies the Mourre inequality $ \varphi(H) i [H , \SCR{A}] \varphi(H) \geq c_0 \varphi (H) ^2$ with a positive constant $c_0 >0$. In this study, we employ $\SCR{A}$ as follows:
\begin{align} \label{ad10}
\SCR{A} := \J{x}^{- \alpha} x \cdot p + p \cdot x \J{x}^{- \alpha},
\end{align}
which differs from the conjugate operators used in \cite{BCHM} and \cite{It}.
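For later reference, a direct computation on $\SCR{S}({\bf R}^n)$ gives the equivalent expression
\begin{align*}
\SCR{A} = 2 \J{x}^{- \alpha} x \cdot p - i \left( n \J{x}^{- \alpha} - \alpha |x|^2 \J{x}^{- \alpha -2} \right),
\end{align*}
which is convenient when computing the iterated commutators in \S{4} and \S{5}. Using the conjugate operator $\SCR{A}$, we obtain the following theorem.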
\begin{thm}\label{T2}
Let $\alpha _0 = \min \{ \alpha \sigma , (2- \alpha) \sigma \} $, $0< \delta \ll \alpha _0$, and $g \in C^{\infty} ({\bf R})$ be a cut-off function such that $g(x) = 1$ if $x < \delta $ and $g(x) = 0$ if $x > 2 \delta$. Then, for any $\kappa \geq 0$, $\varphi \in C_0^{\infty}({\bf R})$ and $\psi \in L^2( {\bf R}^n) $, there exists $C_{\kappa} >0$ such that
\begin{align}\label{adad30}
\left\| g(\SCR{A} /t) e^{-itH} \varphi (H) \J{\SCR{A}}^{- \kappa} \psi \right\| \leq C_{\kappa} |t|^{- \kappa} \| \psi \|
\end{align}
holds for $|t| \geq 1$.
\end{thm}
\begin{rem}
The nonexistence of the embedded eigenvalues for $H$ has been proven by \cite{It} under weaker assumptions than Assumption \ref{A1}; hence, we have $\sigma (H) = \sigma _{\mathrm{ac}} (H) = {\bf R}$.
\end{rem}
\begin{rem}\label{R1}
For the strong propagation estimates, Skibsted \cite{Sk} and Adachi \cite{A2} showed \eqref{adad30} in generalized frameworks with a suitable Hilbert space and a pair of self-adjoint operators ${H}$ and ${\SCR{A}}$. However, these studies had to assume that ${H}$ is bounded from below or that $i[H ,\SCR{A}]$ can be extended to a bounded operator. By contrast, our $H$ and $\SCR{A}$ satisfy neither of these two conditions, and we do not rely on the results of \cite{Sk} and \cite{A2}. Hence, our estimate \eqref{adad30} is new and not a consequence of those results.
\end{rem}
\begin{rem}
In \cite{BCHM}, the authors considered $\tilde{H} = p^2 - \sigma \J{x}^{\alpha} +V $ instead of $H = p^2 - \sigma |x|^{\alpha} +V$, because ${H}$ can be written as $H = \tilde{H} + \sigma (\J{x}^{\alpha} - |x|^{\alpha} ) $ and $\J{x}^{\alpha} - |x|^{\alpha}$ is of short range. Hence, if the asymptotic completeness (or nonexistence of eigenvalues) is proven for $e^{it \tilde{H}} e^{-it(\tilde{H} -V)}$ (or $\tilde{H}$), the same conclusion is true for $H$. However, for Theorem \ref{T2}, this reduction does not work because $\J{x}^{\alpha} - |x|^{\alpha}$ is not differentiable more than twice.
\end{rem}
By Theorem \ref{T1}, we see that the repulsive potential $- \sigma |x|^{\alpha}$ relaxes the decay rate of the external potential $V$ that guarantees the existence of the wave operators. Conversely, a deceleration phenomenon was recently found to occur when a harmonic potential $\sigma (t) |x|^2 $ with a time-decaying coefficient $\sigma (t)$ is present (see Ishida-Kawamoto \cite{IK,IK2}); in this case, we note that there exists $\rho >0$ in \eqref{0} such that the wave operators do not exist. In view of these studies, it seems interesting to consider sub-quadratic repulsive potentials with time-decaying coefficients $- \sigma (t) |x| ^{\alpha}$. Our results are fundamental for such investigations. \par
In the usual method to prove the nonexistence of wave operators (e.g., \cite{Do, JO, O}), the estimate
\begin{align}\label{ad30}
\int_{a}^b \left( V e^{-itH_0} \phi , e^{-itH_0} \phi \right) dt \geq C \int_{a}^b \frac{dt}{t}, \quad b>a \gg 1
\end{align}
is necessary for $\phi \in C_0^{\infty} ({\bf R}^n)$. If $\sigma =0$, the well-known MDFM-type decomposition $e^{-itH_0} = \CAL{M}(t) \CAL{D}(t) \SCR{F} \CAL{M}(t)$ holds, where $\CAL{M}(t)\phi(x)=e^{ix^2/(4t)}\phi(x)$, $\CAL{D}(t)\phi(x)=(2it)^{-n/2}\phi(x/(2t))$ and $\SCR{F}$ is the standard Fourier transform on $L^2({\bf R}^n)$. Then, the problem of the nonexistence of the wave operators reduces to whether the limits
$$ \mathrm{s-} \lim_{t \to \pm \infty} e^{itH} \CAL{M}(t) \CAL{D}(t) \SCR{F} \phi $$
exist. With this reduction, \eqref{ad30} can be reduced to
\begin{align*}
\int_{a}^b \left( V \CAL{M}(t) \CAL{D}(t) \SCR{F} \phi , \CAL{M}(t) \CAL{D}(t) \SCR{F} \phi \right) dt \geq C \int_{a}^b \frac{dt}{t}
\end{align*}
(with some error terms). Let $\phi $ be chosen such that $[\SCR{F}\phi ](x) = \chi_{(a' \leq 2|x| \leq b')}(x) [\SCR{F}\phi ](x)$, where $\chi$ denotes a characteristic function. We can then easily obtain
\begin{align*}
\int_{a}^b \left( V \CAL{M}(t) \CAL{D}(t) \SCR{F} \phi , \CAL{M}(t) \CAL{D}(t) \SCR{F} \phi \right) dt &=
\int_{a}^b \left( V \chi _{(a't \leq |x| \leq b't)} \CAL{M}(t) \CAL{D}(t) \SCR{F} \phi , \CAL{M}(t) \CAL{D}(t) \SCR{F} \phi \right) dt
\\ & \geq C \int_{a}^b \frac{dt}{t}.
\end{align*}
These arguments are based on the fact that the integral kernel of $e^{it \Delta}$ has an explicit expression. In the case of $\sigma \neq 0$, imitating such an argument is difficult. Indeed, an MDFM-type decomposition for $e^{-itH_0}$ with $0< \alpha <2$ has not yet been obtained (if $\alpha=2$, the corresponding formula is known as the Mehler formula; see, e.g., \cite{Is, N}). Therefore, an alternative approach must be established. Our plan to obtain \eqref{ad30} is as follows. We first present the large-velocity estimate in Section 3. A similar estimate was proved in \cite{BCHM}. However, to show the nonexistence of wave operators, the estimate in \cite{BCHM} is insufficient, and we must extend it. In particular, we show
\begin{align}\label{ad31}
\int_1^{\infty} \left\| F \left( \frac{|x|^{\rho} }{t} \right) e^{-itH} \varphi(H) \J{x}^{- \rho} \phi \right\| ^2 \frac{dt}{t} \leq C \| \phi \|^2
\end{align}
in Proposition \ref{P2} for a large-velocity cut-off $F$ that is {\em not} compactly supported. To employ this cut-off, we must prove auxiliary lemmas (Lemmas \ref{L1} and \ref{L2}). Next, we provide the proof of Theorem \ref{T2} in Section 5 using Proposition \ref{P2} and the Mourre inequality (Proposition \ref{P3}). As mentioned in Remark \ref{R1}, we cannot rely on the results of \cite{Sk} and \cite{A2} because the operator $H$ is not bounded from below, and the commutator of $H$ and the conjugate operator $\SCR{A}$ cannot be extended to a bounded operator. For these reasons, we make minor changes to the approach used in \cite{Sk}; there, the lower bound of $H$ is used only to show the domain-invariant property,
\begin{align}\label{adadad1}
e^{-itH} \varphi(H) \D{\SCR{A}^N} \subset \D{\SCR{A} ^N},
\end{align}
for $N \in {\bf N}$. Hence, we must only provide a different proof of \eqref{adadad1} that does not use the lower bound of $H$. The approach is relatively simple; we simply divide $e^{-itH}$ into $ \cos (t H) $ and $\sin (tH)$ parts, for which we can justify the Helffer-Sj\"{o}strand formula and employ a direct calculation (see \S{5}).
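Concretely, we use the decomposition
\begin{align*}
e^{-itH} \varphi(H) = \CAL{C}_t (H) - i \CAL{S}_t (H), \quad \CAL{C}_t (x) := \cos (tx) \varphi(x), \quad \CAL{S}_t (x) := \sin (tx) \varphi(x),
\end{align*}
and note that $\CAL{C}_t , \CAL{S}_t \in C_0^{\infty} ({\bf R})$ for each fixed $t$.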
To complete the proof of Theorem \ref{T1}, we want to follow the approaches in \cite{Is, IK, IK2, IW, JO, O} using the strong small-velocity estimate for the free time evolution $e^{-itH_0}$ of the following form:
\begin{align}\label{ad32}
\left\|
g\left( \frac{|x|^{\rho}}{t} \right) e^{-itH_0} \phi
\right\| \leq C t^{-N}
\end{align}
for some $\phi \in \SCR{S}({\bf R}^n)$, where $g$ is a small-velocity cut-off and $N\in{\bf N}$. However, in our case, showing \eqref{ad32} by direct calculation is difficult even for $H_0$ because $e^{-itH_0}$ does not have an MDFM-type decomposition. Hence, we alternatively employ Theorem \ref{T2} for $H_0$ (that is, $V\equiv0$), and then the estimates $g(\SCR{A}/t) e^{-itH_0} \phi = o (t^{-1})$ and $V(x) (1-g(\SCR{A}/t) ) \varphi(H_0) \sim \CAL{O}(t^{-1})$ as $t \to \infty$ enable us to show Theorem \ref{T1} for $\theta=\rho$ (see \S{6}). We finally prove Theorem \ref{T1} for $0<\theta<\rho$ using the result for $\theta=\rho$ and Theorem \ref{T2} for $H$. Here, we emphasize that the case $\theta < \rho$ is treated by a scheme different from that for the case $\theta = \rho$. \par
In previous studies, proofs of the nonexistence of wave operators rely heavily on tools available for the free propagator, such as MDFM-type decompositions and Fourier multipliers. By contrast, our approach employs only the large-velocity propagation estimate and the strong propagation estimate for the conjugate operator. We believe that this strategy is new and applicable to further studies.
\section{Preliminaries}
In this section, we introduce important lemmas. Throughout this study, $\| \cdot \| $ indicates the norm on $L^2({\bf R}^n)$ or operator norm on $L^2({\bf R}^n)$, and $(\cdot , \cdot )$ indicates the inner product of $L^2({\bf R}^n)$. If an operator $A$ satisfies $\| A \| \leq C$ with a constant $C$, which is independent of any parameters under consideration, then we may denote $A$ by $B_0$, and compact operators are denoted by $C_0$.
One difficulty in this study is the domain-invariance issue; it is difficult to show that $ \varphi(H) \SCR{S}({\bf R}^n) \subset \D{ p^2 + |x|^{\alpha} } $ holds even if $\varphi \in C_0^{\infty} ({\bf R})$. This complicates many arguments; hence, in this section, we show some domain-invariant properties that are necessary for the proof of Theorem \ref{T1}.
\begin{lem}\label{La1}
For any $\varphi \in C_0^{\infty} ({\bf R})$, $z \in {\bf C} \backslash {\bf R}$ and $j \in \{ 1,2,...,n \} $, the domain-invariant properties
\begin{align} \label{ad1}
\J{x}^{- \alpha} (z-H)^{-1} L^2({\bf R}^n) \subset \D{p^2}, \quad \J{x}^{- \alpha /2} (z-H)^{-1} L^2({\bf R}^n) \subset \D{p_j}
\end{align}
hold. In particular,
\begin{align} \label{ad2}
\J{x}^{- \alpha} \varphi (H) L^2({\bf R}^n) \subset \D{p^2}, \quad \J{x}^{- \alpha /2} \varphi (H) L^2({\bf R}^n) \subset \D{p_j}
\end{align}
hold.
\end{lem}
\Proof{
We show only \eqref{ad1}, because \eqref{ad2} can be shown using the Helffer-Sj\"{o}strand formula and \eqref{ad1}. By an argument similar to that in Lemma 2.3 of \cite{BCHM}, we have that
\begin{align*}
\J{x}^{-\alpha} (z-H)^{-1} &= (p^2+1)^{-1}(p^2 +1) \J{x}^{-\alpha} (z-H)^{-1} \\ &=
(p^2 + 1) ^{-1} \J{x}^{-\alpha} (H + \sigma |x|^{ \alpha}-V) (z-H)^{-1} + (p^2 + 1) ^{-1} [p^2, \J{x}^{- \alpha} ] (z-H)^{-1}
\end{align*}
and that
\begin{align*}
& (p^2 + 1) ^{-1} [p^2, \J{x}^{- \alpha} ] (z-H)^{-1} \\ & = (p^2 + 1) ^{-1} [p^2, \J{x}^{- \alpha} ] (p^2 +1)^{-1} (H + \sigma |x|^{ \alpha}-V) (z-H)^{-1}
\\ &= (p^2 + 1) ^{-1} [p^2, \J{x}^{- \alpha} ] (p^2 +1)^{-1} \J{x}^{\alpha} \cdot \J{x}^{- \alpha} (H + \sigma |x|^{ \alpha}-V) (z-H)^{-1}.
\end{align*}
Clearly, the operator
\begin{align*}
[p^2, \J{x}^{- \alpha} ] (p^2 +1)^{-1} \J{x}^{\alpha} = \sum_{j=1}^n B_0 \J{x}^{- \alpha -2} x_j p_j (p^2 + 1)^{-1} \J{x}^{\alpha} + B_0 (p^2 + 1)^{-1} \J{x}^{\alpha}
\end{align*}
on $\D{\J{x}^{\alpha}}$ can be extended to a bounded operator, implying that $\J{x}^{- \alpha} (z-H)^{-1} L^2({\bf R}^n) \subset \D{p^2}$. Next, we prove the second inclusion of \eqref{ad1} using the first one.
We fix $z$ and take $u_l \in \SCR{S} ({\bf R}^n) $ such that $u_l \to (z-H)^{-1} u $ and $p_j^2 \J{x}^{-\alpha} u_l \to p_j^2 \J{x}^{-\alpha} (z-H)^{-1} u $ as $l \to \infty$. On $\SCR{S}({\bf R}^n)$, we have
\begin{align*}
p_j ^2 \J{x}^{-\alpha } &= [p_j^2, \J{x}^{- \alpha /2}] \J{x}^{- \alpha /2} + \J{x}^{-\alpha /2} p_j ^2 \J{x}^{- \alpha /2}
\\ &= \left( B_0 + 2 i \alpha x_j \J{x}^{\alpha /2 -2} p_j \right) \J{x}^{- \alpha} + \J{x}^{-\alpha /2} p_j ^2 \J{x}^{- \alpha /2}
\end{align*}
and hence,
\begin{align*}
\left\| p_j \J{x}^{- \alpha /2} (u_l - u_k) \right\| ^2 \to 0 , \quad \mbox{as} \quad l, k \to \infty.
\end{align*}
Because $p_j \J{x}^{- \alpha /2} $ is a closed operator, we have $p_j \J{x}^{- \alpha /2} u_l \to p_j \J{x}^{- \alpha /2} (z-H)^{-1} u \in L^2({\bf R}^n) $.
}
\begin{lem}\label{La2}
For all $N \in {\bf N}$ and $z \in {\bf C} \backslash {\bf R}$, we have
\begin{align} \label{ad3}
(z-H)^{-1} \D{\J{x}^{\rho N} } \subset \D{\J{x}^{\rho N}} , \quad \varphi(H) \D{\J{x}^{\rho N} } \subset \D{\J{x}^{\rho N}}.
\end{align}
In particular, for any fixed $t \in {\bf R}$,
\begin{align} \label{ad4}
e^{-itH} \varphi(H) \D{\J{x}^{\rho N}} \subset \D{ \J{x}^{\rho N}}.
\end{align}
\end{lem}
\Proof{
First, we show the first inclusion of \eqref{ad3} with $N=1$ and use induction. Using the Helffer-Sj\"{o}strand formula, the second inclusion of \eqref{ad3} can be shown similarly. Let $l \in {\bf N}$ and set $\gamma \in C_0^{\infty} ({\bf R})$ such that $\gamma(t) =1$ if $|t| \leq 1$ and $\gamma(t) = 0$ if $|t| >2$, and $J_l (x) = \J{x}^{\rho} \gamma (\J{x}/l) $. For $u \in L^2({\bf R}^n)$, we define $u_l := J_l({x}) (z-H)^{-1} \J{x}^{- \rho} u $. Then, by Lemma \ref{La1}, we have $ u_l \in \D{p^2 + |x|^{\alpha}} $, on which $\overline{H_0} = \overline{p^2} - \sigma \overline{|x|^{\alpha}}$ holds; hence, the commutator $[J_l (x) , (z- H)^{-1}]$ can be calculated as $(z-H) ^{-1}[H, J_l (x)] (z- H)^{-1}$. Then,
\begin{align*}
u_l &= - (z-H)^{-1} [ H, J_l (x) ] (z-H)^{-1} \J{x}^{- \rho} u + (z-H)^{-1} \gamma (\J{x}/l ) u \\
& = - (z-H)^{-1} B_0 \times \left(\sum_{m=1}^n p_m x_m \J{x}^{-\alpha /2 -1} +O(1)+ O(l^{-1- \alpha /2}) \right)
(z-H)^{-1} \J{x}^{- \rho} u
\\ & \qquad \quad + (z-H)^{-1}\gamma (\J{x}/l ) u
\end{align*}
converges as $l \to \infty$, implying that $ (z-H)^{-1} \D{\J{x}^{\rho } } \subset \D{\J{x}^{\rho }}$.
Subsequently, suppose that $ (z-H)^{-1} \D{\J{x}^{\rho k} } \subset \D{\J{x}^{\rho k}}$ for some $k \in {\bf N}$. Then, by defining $u_l = J_l (x)\J{x}^{\rho k} (z-H)^{-1} \J{x}^{-\rho(k+1)}u $, we obtain
\begin{align*}
u_l &= (z-H)^{-1}[ H, J_l (x) \J{x}^{\rho k} ] (z-H)^{-1} \J{x}^{-\rho(k+1)}u + (z-H)^{-1}\gamma (\J{x}/l) u
\\ & = - (z-H)^{-1} \left( \sum_{m=1}^n x_m \J{x}^{-\alpha /2 -1} p_m + O(1) + O(l^{-1- \alpha /2})\right) \times B_0
\J{x}^{\rho k} (z-H)^{-1} \J{x}^{- \rho(k+1)} u \\ & \qquad \quad + (z-H)^{-1} \gamma (\J{x}/l ) u.
\end{align*}
By Lemma \ref{La1} and the assumption of $ (z-H)^{-1} \D{\J{x}^{\rho k} } \subset \D{\J{x}^{\rho k}}$, we have that
$u_l$ converges as $l \to \infty$, meaning that $ (z-H)^{-1} \D{\J{x}^{\rho (k+1)} } \subset \D{\J{x}^{\rho (k+1)}}$ holds. Then, the domain-invariant property \eqref{ad4} follows from \eqref{ad3} with $e^{-it \cdot } \varphi (\cdot ) \in C_0^{\infty} ({\bf R})$ for any fixed $t$.
}
Finally, we obtain the domain-invariant property of $\D{p^2 + |x|^{\alpha} }$, which plays a fundamental role in the analysis of many terms.
\begin{prop}\label{Pa3}
Let $\CAL{N}_{\alpha} := p^2 + |x|^{\alpha} $. Then, the domain-invariant property
\begin{align}\label{2}
(z-H)^{-1} \D{\CAL{N}_{\alpha}} \subset \D{\CAL{N}_{\alpha}}, \quad \varphi (H) \D{\CAL{N}_{\alpha}} \subset \D{\CAL{N}_{\alpha}}
\end{align}
holds.
\end{prop}
\Proof{
First, we demonstrate that $(z-H)^{-1} \D{\CAL{N}_{\alpha}} \subset \D{|x|^{\alpha} }$. Take $N$ in Lemma \ref{La2} so large that $ \rho N = 2 + \delta $ with $0\leq \delta <1$, i.e., $\rho N \geq 2>\alpha $. Then, it follows that
\begin{align*}
\left\| (z-H)^{-1} \right\| = \left\| \J{x}^0 (z-H)^{-1} \J{x}^{-0} \right\| \leq C_0, \quad \left\| \J{x}^{2+\delta} (z-H)^{-1} \J{x}^{-2- \delta} \right\| \leq C_{2+ \delta}.
\end{align*}
The interpolation theorem (see Kato \cite{Ka}) states that for any $0 < \beta < 1$,
\begin{align*}
\left\| \J{x}^{(2+ \delta)\beta} (z-H)^{-1} \J{x}^{-(2 + \delta)\beta} \right\| \leq C_0^{1-\beta}C_{2 + \delta}^{\beta}.
\end{align*}
Taking $\beta = \alpha/(2 + \delta) \in (0,1) $, we obtain the bound of the operator $\J{x}^{\alpha} (z-H)^{-1} \J{x}^{-\alpha} $. Hence, $(z-H)^{-1} \D{\CAL{N}_{\alpha}} \subset \D{|x|^{\alpha} }$. Moreover, from Lemma \ref{La1}, we note $(z-H)^{-1}\D{\CAL{N}_{\alpha}}\subset \D{p^2}$ because $ (z-H)^{-1}\D{\CAL{N}_{\alpha}}\subset \D{\J{x}^{\alpha} }$.
}
\begin{rem}
In addition, we can show $(z-H)^{-1}\D{p^2 + |x|^{\theta}} \subset \D{p^2 + |x|^{\theta}} $ for any $\theta \geq \alpha$.
\end{rem}
\section{Large velocity estimate}
In this section, we present the large-velocity propagation estimate for $H$. This type of estimate has already been shown in Proposition 5.7 of \cite{BCHM} with a compactly supported cut-off. This section aims to extend this result to cut-offs that are not compactly supported. This extended result enables us to demonstrate the key estimate \eqref{ad30}.
In the following, we set $\CAL{N}_{\alpha}= p^2 + |x|^{\alpha}$, and $\varphi \in C_0^{\infty} ({\bf R})$ satisfies $0 \leq \varphi \leq 1$, $\varphi (s) = 1$ for $|s| \leq R -1$ and $\varphi (s) = 0$ for $ |s| \geq R$, where $R$ is a positive constant specified later. Before considering the large-velocity estimate, we note that $H$ has no embedded eigenvalues in ${\bf R}$. Hence, considering the cut-off $\varphi \in C_0^{\infty} ({\bf R}) $ instead of $\varphi \in C_0^{\infty} ({\bf R} \backslash \sigma_{\mathrm{pp}}(H) )$ is sufficient. In the following, we can omit the discussion of issues arising from embedded eigenvalues.
The following lemma provides the momentum bound under the energy cut-off.
\begin{lem} \label{L1}
We define $A_{0,R} := \left(\alpha \sqrt{n} (2n+1) + \sqrt{\alpha ^2 n (2n+1)^2 + 4 a_{0,R}} \right)/2$, where $a_{0,R} = n(n+1)(\alpha ^2 + 3 \alpha) +n(R+ C_{V,0} + \sigma) $. Then, for all $\phi \in L^2({\bf R}^n)$,
\begin{align*}
\sum_{j=1}^n \left\|
\J{x}^{-\alpha /2} p_j \varphi (H) \phi
\right\| ^2 \leq A_{0,R}^2 \left\| \varphi (H) \phi \right\|^2
\end{align*}
holds. In particular, for all $j \in \{1,2,...,n \}$,
\begin{align*}
\left\|
\J{x}^{-\alpha /2} p_j \varphi (H) \phi
\right\| \leq A_{0,R} \left\| \varphi (H) \phi \right\|
\end{align*}
and
\begin{align*}
\left\|
\J{x}^{-\alpha /2} p_j (H + i)^{-1} \phi
\right\| \leq A_{0,1} \left\| \phi \right\|
\end{align*}
hold.
\end{lem}
\Proof{
For $\phi \in \D{\CAL{N}_{\alpha}}$, we define
\begin{align*}
I_j := \left\|
\J{x}^{-\alpha /2} p_j \varphi (H) \phi
\right\|.
\end{align*}
Then,
\begin{align*}
I_j ^2 \leq \left\| \varphi (H) \phi \right\| \left\| p_j \J{x}^{-\alpha} p_j \varphi(H) \phi \right\|.
\end{align*}
Let $v = \varphi(H) \phi$ and recall that $a_{0,R} = n(n+1)(\alpha ^2 + 3 \alpha) + n(R + C_{V,0}+ \sigma) $. Using $\| H v\| \leq R \| v \|$ and
\begin{align*}
& \sum_{j=1}^n \left\| p_j \J{x}^{-\alpha} p_j v \right\| \\ &= \sum_{j=1}^n \left\| \left( -i \alpha p_j x_j \J{x}^{- \alpha -2 } + p_j^2 \J{x}^{-\alpha} \right) v \right\| \\
& \leq \sum_{j=1}^n \left\| \left( \alpha \J{x}^{- \alpha -2} - \alpha (\alpha +2) x_j^2 \J{x}^{- \alpha -4} \right) v \right\| + n \left\| \sum_{j=1}^n p_j ^2 \J{x}^{- \alpha} v \right\| + \alpha \sum_{j=1}^n \left\| x_j \J{x}^{- \alpha -2} p_j v \right\| \\
& \leq n (n+1)(\alpha ^2 + 3 \alpha) \|v \|+ \alpha(2n+1) \sum_{j=1}^n \left\| x_j \J{x}^{- \alpha -2} p_j v \right\| \\ & \qquad
+ n \left\| \J{x}^{- \alpha} \left( p^2 - \sigma |x|^{\alpha} +V + \sigma |x|^{\alpha} -V \right) v \right\| \\
& \leq a_{0,R} \| v \| + \alpha(2n+1) \sum_{j=1}^n \left\| \J{x}^{- \alpha /2} p_j v \right\|,
\end{align*}
we have
\begin{align*}
\sum _{j=1}^n I_j^2 &\leq \alpha (2n+1) \| \varphi (H) \phi \| \sum_{j=1}^n I_j + a_{0,R} \| \varphi (H) \phi \|^2
\\ & \leq \alpha \sqrt{n} (2n+1) \| \varphi (H) \phi \| \left( \sum_{j=1}^n I_j ^2 \right)^{1/2}+ a_{0,R}\| \varphi (H) \phi \|^2.
\end{align*}
By solving this inequality with $ \sum I_j^2 \geq 0$, we obtain the desired result.
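Indeed, setting $X := ( \sum_{j=1}^n I_j^2 )^{1/2}$ and $b := \alpha \sqrt{n} (2n+1) \| \varphi(H) \phi \|$, the above reads $X^2 \leq b X + a_{0,R} \| \varphi(H) \phi \|^2$, and hence
\begin{align*}
X \leq \frac{b + \sqrt{b^2 + 4 a_{0,R} \| \varphi (H) \phi \|^2 }}{2} = A_{0,R} \left\| \varphi (H) \phi \right\|,
\end{align*}
in accordance with the definition of $A_{0,R}$. The case of general $\phi \in L^2({\bf R}^n)$ follows by density.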
}
In addition, we set $\chi$ as a smooth cut-off such that $0 \leq \chi \leq 1$, and for some positive constant $a>0$, $\chi(t) = 1$ for $|t| \geq 2a$ and $\chi(t) = 0$ for $|t| \leq a$. Then, the following estimate holds.
\begin{lem}\label{L2}
Let $t \geq 0$. Then, for some constant $C >0$, the estimate
\begin{align} \label{1}
\left\|
|x|^{\rho} \chi (|x|^{\rho}/(t+1)) e^{-itH} \varphi(H) \J{x}^{- \rho}
\right\| \leq C \J{t}
\end{align}
holds.
\end{lem}
\Proof{
For $\phi \in \D{\CAL{N}_{\alpha}}$, we first calculate
\begin{align*}
\frac{d}{dt}
\left(
e^{itH} |x|^{\rho} \chi (|x|^{\rho} /(t+1)) e^{-itH}
\right) \varphi (H) \phi &=
e^{itH} i [ p^2 , |x|^{\rho} \chi (|x|^{\rho} /(t+1)) ] e^{-itH} \varphi (H) \phi \\ & \quad - e^{itH} \frac{|x|^{2 \rho} }{(t+1)^2} \chi' (|x|^{\rho} /(t+1))e^{-itH} \varphi (H) \phi
\\ &= \sum_{j=1}^n e^{itH} i [ p^2_j , |x|^{\rho} \chi (|x|^{\rho} /(t+1)) ] e^{-itH} \varphi (H) \phi
\\ & \quad - e^{itH} \frac{|x|^{2 \rho} }{(t+1)^2} \chi' (|x|^{\rho} /(t+1))e^{-itH} \varphi (H) \phi.
\end{align*}
From the commutator calculation, we have
\begin{align*}
& i[p_j ^2 , |x|^{\rho} \chi(|x|^{\rho} /(t+1) )] \\ &= -i \left( \rho |x|^{\rho -2} + \rho (\rho -2) x_j^2 |x|^{\rho -4} \right) \chi(|x|^{\rho} /(t+1) ) \\ & \quad \quad -i \left(
\frac{2\rho ^2 x_j^2 |x|^{2 \rho -4}}{t+1}
+ \frac{2 \rho (\rho -1) x_j^2 |x|^{2 \rho -4} }{t+1} + \frac{\rho |x|^{2 \rho -2} }{(t+1)^2} \right) \chi'(|x|^{\rho}/(t+1))
\\ & \qquad \quad -i \frac{\rho ^2x_j ^2 |x|^{3\rho -2} }{(t+ 1)^2} \chi ''(|x|^{\rho} /(t+1) )
\\ & \qquad \qquad + 2\left(
\rho x_j |x|^{\rho -2} \chi (|x|^{\rho}/(t+1)) + \frac{ \rho x_j |x|^{2\rho -2}}{t+1} \chi '(|x|^{\rho}/(t+ 1))
\right) p_j \\
&=: J_1 + J_2 + J_3 + J_4.
\end{align*}
Clearly, $J_1$, $J_2$, and $J_3$ are bounded operators. Moreover, from Lemma \ref{L1}, we have that
\begin{align*}
\left\| J_4 \varphi(H) \right\| \leq C \left\| \J{x}^{- \alpha /2} p_j \varphi (H) \right\|
\end{align*}
is bounded, using $\rho -1 = - \alpha /2$ and the fact that $|x|^{\rho} \leq 2a (t+1) $ holds on the support of $\chi '$. Consequently, we obtain
\begin{align*}
& \left\|
e^{itH} |x|^{\rho} \chi (|x|^{\rho} /(t+1)) e^{-itH} \varphi (H) \phi
\right\| \\ & \leq \left\|
e^{i 0 H} |x|^{\rho} \chi (|x|^{\rho}) e^{-i 0 H} \varphi (H) \phi
\right\| + \int_0^t \sum_{j=1}^n \left\|
e^{isH} i [ p^2_j , |x|^{\rho} \chi (|x|^{\rho} /(s+1)) ] e^{-isH} \varphi (H) \phi
\right\| ds \\
& \leq C \| \J{x}^{\rho} \varphi(H) \phi \| + C t \| \phi \|
\\ & \leq C \J{t} \| \J{x}^{\rho} \phi \|,
\end{align*}
using Lemma \ref{La1} and the Helffer-Sj\"{o}strand formula.
}
\subsection{Large-velocity estimate}
Let $A_{1,R} = 4 n \rho A_{0,2R}$. Here, we set $F (\cdot) $ as a smooth cut-off such that $0 \leq F \leq 1$, $F(s) = 0$ for $ s \leq A_{1,R} $ and $F(s) = 1$ for $s \geq 2A_{1,R}$. Moreover, we set $G(s) = \int_{-\infty}^s F(\tau) ^2 d \tau$. The purpose of this subsection is to show the following large-velocity estimate.
\begin{prop}\label{P2}
The inequality
\begin{align*}
\int_1^{\infty} \left\|
F\left( \frac{|x|^{\rho}}{t} \right) e^{-itH} \varphi (H) \J{x}^{- \rho} \phi
\right\|^2 \frac{dt}{t} \leq C \| \phi \|^2
\end{align*}
holds for all $\phi \in L^2({\bf R}^n)$.
\end{prop}
\Proof{
First, we set an observable as
\begin{align*}
\Phi (t) := \J{x}^{- \rho}\varphi(H) e^{itH} G(|x|^{\rho}/t) e^{-it H} \varphi(H) \J{x}^{- \rho}.
\end{align*}
Here, we note that by the definition of $G$, we have
\begin{align*}
G(s) =
\begin{cases}
0, & s \leq A_{1,R}, \\
\mathrm{(bdd)}, & A_{1,R} < s < 2 A_{1,R}, \\
(|s| -2A_{1,R}) + \mathrm{(bdd)}, & s \geq 2 A_{1,R},
\end{cases}
\end{align*}
where $\mathrm{(bdd)}$ indicates a bounded function whose norm is bounded by a constant independent of $s$.
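Indeed, since $F \equiv 1$ on $[2A_{1,R}, \infty)$, for $s \geq 2A_{1,R}$ we have
\begin{align*}
G(s) = \int_{A_{1,R}}^{2A_{1,R}} F(\tau)^2 d \tau + \int_{2A_{1,R}}^{s} d \tau = (s - 2A_{1,R}) + \mathrm{(bdd)}.
\end{align*}
Hence, we can estimate $\Phi (t)$ as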
\begin{align*}
\Phi (t) \leq (\mathrm{bdd}) + \J{x}^{- \rho}\varphi(H) e^{itH} \frac{|x|^{\rho}}{t} {\chi} (|x|^{\rho}/(t+1)) e^{-it H} \varphi(H) \J{x}^{- \rho}
\end{align*}
with a suitable cut-off ${\chi}$ and $a$ as in Lemma \ref{L2}; hence, by Lemma \ref{L2}, $\Phi (t)$ is bounded uniformly in $t$.
We now prove this proposition. Straightforward calculations show that:
\begin{align*}
{\bf D}_{H} (G(|x|^{\rho}/t)) &= \frac{d}{dt} G(|x|^{\rho}/t) + i [H , G(|x|^{\rho}/t)]
\\ &=
- \frac{|x|^{\rho} }{t^2} F\left( \frac{|x|^{\rho}}{t} \right)^2 + \sum_{j=1}^n \left( \rho \frac{x_j }{t |x|^{2-\rho} } p_j F\left( \frac{|x|^{\rho}}{t} \right)^2 + \rho F\left( \frac{|x|^{\rho}}{t} \right)^2 p_j \frac{x_j }{t |x|^{2-\rho} } \right)
\\ & \leq - \frac{1}{t} F\left( \frac{|x|^{\rho}}{t} \right) \left( A_{1,R} - 2 \rho \sum_{j=1}^n \frac{x_j}{|x|^{2- \rho}} p_j \right)F\left( \frac{|x|^{\rho}}{t} \right) + \CAL{O}(t^{-2}).
\end{align*}
Moreover, by setting ${\varphi}_0 \in C_0^{\infty} ({\bf R})$ such that $\varphi = {\varphi}_0 \varphi$ and $|t| \leq 2R$ on the support of ${\varphi}_0(t)$, we determine that, for $v(t) = e^{-itH} \varphi(H) \J{x}^{- \rho} \phi$,
\begin{align*}
\left\|
\J{x}^{- \alpha /2} p_j F\left( \frac{|x|^{\rho}}{t} \right) v(t)
\right\| &\leq \left\|
\J{x}^{- \alpha /2} p_j F\left( \frac{|x|^{\rho}}{t} \right) {\varphi}_0 (H) v(t)
\right\|
\\ & \leq C|t|^{-1} \left\| v(t)
\right\| + \left\|
\J{x}^{- \alpha /2} p_j {\varphi}_0 (H) \right\| \left\| F\left( \frac{|x|^{\rho}}{t} \right) v(t)
\right\|
\\ & \leq C|t|^{-1} \left\| v(t)
\right\| + A_{0,2R} \left\| F\left( \frac{|x|^{\rho}}{t} \right) v(t)
\right\|,
\end{align*}
using
\begin{align*}
\left\| \J{x}^{- \alpha /2} p_j [F(|x|^{\rho}/t) , {\varphi}_0 (H) ] \right\| = \CAL{O}(t^{-1}).
\end{align*}
Indeed, using the Helffer-Sj\"{o}strand formula, for $\phi \in \D{\CAL{N}_{\alpha}}$, we have
\begin{align*}
& \J{x}^{- \alpha /2} p_j [F(|x|^{\rho}/t) , {\varphi}_0 (H) ] \phi \\ & = - \frac{1}{2 \pi i} \int_{\bf C} ( \overline{\partial _z }\tilde{\varphi}_0 (z)) \J{x}^{- \alpha /2} p_j (z- H)^{-1}[H,F(|x|^{\rho}/t) ] (z- H)^{-1} \phi dz d \bar{z} \\ &=
- \frac{1}{2 \pi i} \int_{\bf C} ( \overline{\partial _z }\tilde{\varphi}_0 (z)) \J{x}^{- \alpha /2} p_j (z- H)^{-1}
\\ & \qquad \qquad \times \left( \left( \frac{B_0}{t^{2/ \rho}} + \frac{2 \rho x_j |x|^{\rho -2} }{t} \right) F'(|x|^{\rho}/t) p_j \right) (z- H)^{-1} \phi dz d \bar{z}
\\ & = \CAL{O}(t ^{-1}),
\end{align*}
where $\tilde{\varphi}_0(z)$ denotes the almost analytic extension of $\varphi_0$ (see Helffer-Sj\"{o}strand \cite{HS}).
Consequently, we obtain
\begin{align*}
& \frac{d}{dt} \left( \Phi(t) \phi, \phi \right) \\ & \leq - \frac{A_{1,R}}{t} \left\| F\left( \frac{|x|^{\rho}}{t} \right) v(t) \right\|^2 + \frac{2 \rho}{t} \sum_{j=1}^n \left\| F\left( \frac{|x|^{\rho}}{t} \right) v(t)\right\|\left\| \J{x}^{- \alpha /2}p_j F\left( \frac{|x|^{\rho}}{t} \right) v(t) \right\| + \CAL{O}(t^{-2})\| \phi \|^2
\\ & \leq
- \frac{ A_{1,R} - 2 n \rho A_{0,2R} }{t} \left\| F\left( \frac{|x|^{\rho}}{t} \right) v(t)\right\| ^2 + \CAL{O}(t^{-2})\| \phi \|^2,
\end{align*}
which yields
\begin{align*}
2n \rho A_{0,2R} \int_1^{\infty} \left\| F\left( \frac{|x|^{\rho}}{s} \right) v(s) \right\|^2 \frac{ds}{s} &\leq \| \phi \|^2 \int_1^{\infty} \CAL{O}(s^{-2}) ds + \| \Phi(1) \phi \| \| \phi \| + \sup_{t \geq 1 }\| \Phi(t) \phi \| \| \phi \|
\\ & \leq C \| \phi \|^2.
\end{align*}
}
\section{Mourre theory}
In this section, we introduce the Mourre theory for $H$. We set $\CAL{N}_{\alpha} = p^2 + |x|^{\alpha} $. On $\SCR{D}(\CAL{N}_{\alpha})$, we have ${\displaystyle \overline{p^2 - \sigma |x |^{\alpha} } = \overline{p^2} - \overline{\sigma |x|^{\alpha}}}$ because $\SCR{D} (\CAL{N}_{\alpha})= \D{p^2} \cap \D{|x|^{\alpha}}$ holds, and $\D{H} \cap \D{\CAL{N}_{\alpha}} = \D{\CAL{N}_{\alpha}}$ is dense in $\D{H}$. Moreover, from Proposition \ref{Pa3}, $\varphi (H_0) \SCR{D}(\CAL{N}_{\alpha}) \subset \D{\CAL{N}_{\alpha}}$ holds. Based on this, we define the commutator in the form sense with domain $\D{\CAL{N}_{\alpha}}$.
From the compactness argument, we have that for any compact operator $C_0$, there exists $\varphi \in C_0^{\infty} ({\bf R})$ such that
\begin{align*}
\left\| C_0 \varphi (H) \right\| \ll \alpha _0 , \quad \alpha _0 := \mathrm{min} \{ \alpha \sigma , (2- \alpha ) \sigma \}
\end{align*}
holds. In the following, we always consider $\| C_0 \varphi (H)\|$ to be sufficiently small compared with other constants.
We first provide a short sketch of the Mourre estimate for the case where $\alpha \geq 1$. The Mourre estimate for $H_0$ was first obtained by \cite{BCHM}, and \cite{It2} then considered it using a different conjugate operator in order to obtain Besov bounds for resolvents. In this study, we handle a conjugate operator different from those addressed in \cite{BCHM} and \cite{It2}. In \cite{BCHM}, the authors defined the pseudo-differential operator $\SCR{A}_0$ with symbol
\begin{align*}
{a}_{0} (x, \xi) := x \cdot \xi \J{x}^{- \alpha} \psi \left( \frac{\xi^2 - \J{x}^{\alpha}}{\xi ^2 + \J{x}^{\alpha}} \right),
\end{align*}
and showed that
\begin{align*}
\varphi(\tilde{H}_0) i [\tilde{H}_0, \SCR{A}_0 ] \varphi(\tilde{H}_0 ) \geq \tilde{\delta } \varphi(\tilde{H}_0 )^2+ C_0, \quad \tilde{H}_0 := p^2 - \sigma \J{x}^{\alpha},
\end{align*}
where $\psi \in C_0^{\infty}({\bf R})$ is narrowly supported around $0$ and $\tilde{\delta} >0$ is a constant; see Lemma 3.16 in \cite{BCHM}. In this study, we set $\SCR{A} := \J{x}^{- \alpha} x \cdot p + p \cdot x \J{x}^{- \alpha} $ and show that
\begin{align} \label{4}
\varphi({H}_0) i [{H}_0, \SCR{A} ] \varphi({H}_0 ) \geq (2- \alpha ) \sigma \varphi({H}_0 )^2+ C_0,
\end{align}
where we note that, for $\psi \in \D{\CAL{N}_{\alpha}}$, the commutator $(i[H_0, \SCR{A}] \varphi(H_0) \psi, \varphi(H_0) \psi )$ can be defined in the form sense because $\D{\CAL{N}_{\alpha}} \subset \D{\SCR{A}}$ holds. The calculation in the proof of Proposition \ref{P3} shows that
\begin{align} \label{3}
\varphi({H}_0) i [{H}_0, \SCR{A}] \varphi({H}_0 ) \geq \varphi(H_0) \J{x}^{- \alpha /2} \left( 4(1- \alpha) p^2 + 2 \alpha \sigma |x|^{\alpha} \right) \J{x}^{- \alpha /2}\varphi({H}_0 )+ C_0.
\end{align}
Let $\eta >0$ be a small constant such that $2 - \alpha -2 \eta > \eta$. Then, the inequality above can be rewritten as
\begin{align*}
& \varphi({H}_0) i [{H}_0, \SCR{A}] \varphi({H}_0 ) \\ & \geq \varphi(H_0) \J{x}^{- \alpha /2} \left( 4\eta p^2 + (4 -2 \alpha -4 \eta) \sigma |x|^{\alpha} +4(1- \alpha - \eta) H_0 \right) \J{x}^{- \alpha /2}\varphi({H}_0 )+ C_0.
\end{align*}
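This rewriting follows from the elementary identity
\begin{align*}
4(1- \alpha) p^2 + 2 \alpha \sigma |x|^{\alpha} = 4 \eta p^2 + (4 -2 \alpha -4 \eta) \sigma |x|^{\alpha} + 4(1- \alpha - \eta) \left( p^2 - \sigma |x|^{\alpha} \right),
\end{align*}
in which the last bracket is exactly $H_0$.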
If $(4|1- \alpha - \eta|)H_0 \leq \eta (p^2 + |x|^{\alpha}) $ holds in the form sense, we can roughly deduce the positivity of the commutator. Hence, \cite{BCHM} employed the cut-off $\psi ((\xi ^2 - \J{x}^{\alpha}) /(\xi ^2 + \J{x}^{\alpha}))$. The merit of using such a cut-off is that we can deduce the positivity of the commutator and that the commutator can be extended to a bounded operator. Boundedness enables many easy calculations when deducing the Mourre theory and propagation estimates. However, such a cut-off makes the commutator calculations difficult. In our approach, we must calculate the commutator of $H_0$ and $\SCR{A}$ at least five times. To simplify this discussion, we introduce another type of conjugate operator.
Let us suppose that
$$
\varphi(H_0) \J{x}^{- \alpha /2} H_0 \J{x}^{- \alpha /2}\varphi({H}_0 ) \leq 2R\varphi(H_0) \J{x}^{- \alpha }\varphi({H}_0 ) +C_0
$$
holds on the support of $\varphi(H_0)$; we can then deduce from \eqref{3} that:
\begin{align*}
\varphi({H}_0) i [{H}_0, \SCR{A} ] \varphi({H}_0 ) & \geq \varphi(H_0) \J{x}^{- \alpha /2} \left( 4(1- \alpha )H_0 + (4- 2 \alpha) \sigma |x|^{\alpha} \right) \J{x}^{- \alpha /2}\varphi({H}_0 )+ C_0
\\ & \geq \varphi(H_0) \J{x}^{- \alpha /2} \left( (4- 2 \alpha) \sigma |x|^{\alpha} - 8R \right) \J{x}^{- \alpha /2}\varphi({H}_0 )+ C_0.
\end{align*}
By setting $\chi \in C^{\infty} ({\bf R})$ such that $0 \leq \chi \leq 1$, $\chi (s) = 0$ for $s \leq R_0$ and $\chi(s) = 1$ for $s \geq 2R_0$, with $R_0 >0$, and noting that $(1- \chi (|x|)) \varphi(H_0)$ is a compact operator (see Lemma 2.3 in \cite{BCHM}), we obtain the following for large $R_0 \gg R$ such that $ (4 - 2\alpha ) \sigma R_0^{\alpha} \J{R_0}^{- \alpha} \geq 32 R \J{R_0}^{- \alpha} $:
\begin{align}
\nonumber \varphi({H}_0) i [{H}_0, \SCR{A} ] \varphi({H}_0 ) &\geq \varphi(H_0) \J{x}^{- \alpha /2} \chi (|x|) \left( (4-2 \alpha) \sigma |x|^{\alpha} - 8R \right)
\chi(|x|)\J{x}^{- \alpha /2} \varphi({H}_0 )+ C_0
\\ & \geq \label{adf1} \varphi(H_0) \chi (|x|) \left( (4-2 \alpha) \sigma \left( \frac{R_0^2}{1 + R_0 ^2} \right)^{\alpha/2} - 8R \J{R_0}^{- \alpha} \right) \chi (|x|) \varphi(H_0) + C_0
\\ & \geq \nonumber (2-\alpha) \sigma \varphi(H_0) \chi (|x|)^2 \varphi(H_0) + C_0.
\end{align}
Again, using the compactness of $(1- \chi (|x|)) \varphi(H_0)$, we obtain \eqref{4} without pseudo-differential cut-offs. This is our scheme for deducing the Mourre theory.
\begin{prop}\label{P3}
Let the conjugate operator $\SCR{A}$ be
\begin{align*}
\SCR{A} := \J{x}^{- \alpha} x \cdot p + p \cdot x \J{x}^{-\alpha}.
\end{align*}
Then,
\begin{align} \label{5}
\varphi(H) i[H, \SCR{A}] \varphi(H) \geq \alpha _0 \varphi(H) ^2 + C_0,
\end{align}
where $\alpha _0 := \mathrm{min} \{ \alpha \sigma , (2- \alpha ) \sigma \}$.
\end{prop}
\Proof{
We divide $\SCR{A} = \SCR{A}_1 + \SCR{A}_1^{\ast} $ with $\SCR{A}_1 := \J{x}^{- \alpha} x \cdot p$. Then,
\begin{align*}
i[H, \SCR{A}_1] = i[p^2 , \SCR{A}_1] - i[ \sigma |x|^{\alpha}, \SCR{A}_1 ] .
\end{align*}
Straightforward calculations show that:
\begin{align*}
i[p^2, \SCR{A}_1] = - \alpha \left( p \cdot x \J{x}^{- \alpha -2} + \J{x}^{- \alpha -2} x \cdot p \right) x \cdot p + 2\J{x}^{- \alpha} p^2
\end{align*}
and
\begin{align*}
- i[ |x|^{\alpha}, \SCR{A}_1] = \alpha |x|^{\alpha} \J{x}^{- \alpha}.
\end{align*}
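The latter identity follows from $x \cdot \nabla |x|^{\alpha} = \alpha |x|^{\alpha}$; indeed,
\begin{align*}
- i[ |x|^{\alpha}, \SCR{A}_1] = -i \J{x}^{- \alpha} [ |x|^{\alpha}, x \cdot p ] = -i \J{x}^{- \alpha} \left( i \, x \cdot \nabla |x|^{\alpha} \right) = \alpha |x|^{\alpha} \J{x}^{- \alpha}.
\end{align*}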
Since
\begin{align*}
\varphi(H) \J{x}^{- \alpha -2} x_j p_j \varphi(H) &= \varphi(H) \J{x}^{- \alpha /2-2} x_j \cdot \J{x}^{-\alpha /2} p_j \varphi(H)
\end{align*}
is a compact operator, we have
\begin{align*}
\varphi(H) \J{x}^{- \alpha} p^2 \varphi(H) = \varphi(H) \J{x}^{- \alpha /2} p^2 \J{x}^{- \alpha /2} \varphi(H) + C_0,
\end{align*}
using $[p_j , \J{x}^{- \alpha/2}] = i \alpha x_j \J{x}^{- \alpha /2 -2} /2$. Using a similar calculation, we have
\begin{align*}
& - \alpha \varphi(H) \left( p \cdot x \J{x}^{- \alpha -2} + \J{x}^{- \alpha -2} x \cdot p \right) x \cdot p \varphi(H)
\\ &=
- 2\alpha \varphi(H) \J{x}^{- \alpha /2 } p \cdot x \J{x}^{-1} \cdot \J{x}^{-1} x \cdot p \J{x}^{- \alpha /2 } \varphi(H) + C_0.
\end{align*}
For all $\phi \in L^2({\bf R}^n)$,
\begin{align}\label{adf2}
\left\|
\J{x}^{-1} x \cdot p \J{x}^{- \alpha /2 } \varphi(H) \phi
\right\|^2 &\leq \left\| |p| \J{x}^{- \alpha /2 } \varphi(H) \phi \right\|^2
\end{align}
holds. Indeed, let $\rho_j = \J{x}^{-1} x_j $, $\psi \in \D{\CAL{N}_{\alpha}}$, and $\psi _j = p_j \psi $. Then,
\begin{align*}
\left|
\J{x} ^{-1} x \cdot p \psi
\right|^2 = \left| \sum_{j=1}^n \rho_j \psi _j \right|^2 \leq \left| \sum_{j=1}^n | \rho_j || \psi _j | \right|^2 \leq
\sum_{l=1}^n | \rho_l | ^2 \sum_{j=1}^n | \psi_j | ^2
\end{align*}
yields
\begin{align*}
\left\|
\J{x} ^{-1} x \cdot p \psi
\right\|^2 &\leq \int_{{\bf R} ^n} \left( \sum_{l=1}^n | \rho_l | ^2 \sum_{j=1}^n | \psi_j | ^2 \right) dx
\\ & \leq \int_{{\bf R} ^n} \left( \sum_{j=1}^n | \psi_j | ^2 \right) dx
\\ &= \sum_{j=1}^n ( \psi_j, \psi _j ) = \left\| |p| \psi \right\|^2,
\end{align*}
where we used $\sum_{l=1}^n |\rho_l|^2 = |x|^2 \J{x}^{-2} \leq 1$ in the second inequality.
By taking $\psi ^{(k)} \in \D{\CAL{N}_{\alpha}}$ with $\psi^{(k)} \to \psi^{(\infty)} := \J{x}^{- \alpha/2} \varphi(H) \phi \in \D{|p|}$ as $k \to \infty$, we have \eqref{adf2}. Using \eqref{adf2}, we have
\begin{align*}
& - 2\alpha \varphi(H) \J{x}^{- \alpha /2 } p \cdot x \J{x}^{-1} \cdot \J{x}^{-1} x \cdot p \J{x}^{- \alpha /2 } \varphi(H)
\\ & \geq -2 \alpha \varphi(H) \J{x}^{- \alpha /2 }p^2 \J{x}^{- \alpha /2 } \varphi(H) +C_0.
\end{align*}
The term associated with $\SCR{A}_1^{\ast}$ can be estimated in a similar manner. Consequently, we obtain
\begin{align*}
\varphi(H) i[H , \SCR{A}] \varphi(H) \geq \varphi(H) \J{x}^{- \alpha /2} \left( 4(1- \alpha) p^2 + 2 \alpha \sigma |x|^{\alpha} \right)\J{x}^{-\alpha /2} \varphi(H) +C_0.
\end{align*}
For the case in which $\alpha \leq 1$, the following clearly holds:
\begin{align*}
\varphi(H) i[H , \SCR{A}] \varphi(H) &\geq 2 \alpha \varphi(H) \J{x}^{- \alpha /2} \sigma |x|^{\alpha} \J{x}^{-\alpha /2} \varphi(H) +C_0
\\ & \geq 2 \sigma \alpha |R_0|^{\alpha} \J{R_0}^{- \alpha} \varphi(H) ^2 +C_0
\\ & \geq \alpha \sigma \varphi(H)^2 + C_0,
\end{align*}
using the compactness of $(1- \chi (|x|)) \varphi (H)$. Next, we consider the case in which $\alpha >1$. By $p^2 = H + \sigma |x|^{\alpha} - V$ and the compactness of $\varphi(H) \J{x}^{- \alpha /2} V \J{x}^{- \alpha /2} \varphi(H)$, we have that
\begin{align} \label{6}
\varphi(H) i[H , \SCR{A}] \varphi(H) \geq \varphi(H) \J{x}^{- \alpha /2} \left( 4(1- \alpha) H + ( 4-2 \alpha) \sigma |x|^{\alpha} \right)\J{x}^{-\alpha /2} \varphi(H) +C_0.
\end{align}
Set $\tilde{\varphi} \in C_0^{\infty} ({\bf R})$ such that $\varphi \tilde{\varphi} = \varphi$ and $\mathrm{supp}\{ \tilde{\varphi} \} \subset \{ s\, | \, |s| \leq 2R \}$. Then, the Helffer-Sj\"{o}strand formula yields
\begin{align*}
\varphi(H) \J{x}^{- \alpha /2} H \J{x}^{-\alpha /2} \varphi(H) &= \varphi(H) \tilde{\varphi} (H) \J{x}^{- \alpha /2} H \J{x}^{-\alpha /2} \varphi(H)
\\ &= \varphi(H) \J{x}^{- \alpha /2} \tilde{\varphi} (H) H \J{x}^{-\alpha /2} \varphi(H) + C_0
\\ & \leq
2R \varphi(H) \J{x}^{- \alpha } \varphi(H) + C_0.
\end{align*}
Combining \eqref{6}, this inequality, $4 -2 \alpha >0$, and \eqref{adf1}, we have
\begin{align*}
\varphi(H) i[H , \SCR{A}] \varphi(H) \geq (2 -\alpha) \sigma \varphi(H) ^2 + C_0
\end{align*}
for $\alpha > 1$, which completes the proof.
}
\section{Strong propagation estimate for $\SCR{A}$}
This section demonstrates Theorem \ref{T2} using the results of Skibsted \cite{Sk}. Let $\varepsilon >0$ and set $\chi_0 \in C^{\infty}({\bf R})$ with the following properties:
\begin{align*}
\chi_0 (x) =
\begin{cases}
1 & x < -2 \varepsilon , \\
0 & x > - \varepsilon ,
\end{cases}
\quad \frac{d}{dx} \chi_0 (x ) \leq 0 , \quad \chi_0 (x) + x \frac{d}{dx} \chi_0 (x) = \tilde{\chi} _0 (x)^2 ,
\end{align*}
where $\tilde{\chi}_0 \geq 0 $ and $\tilde{\chi}_0 \in C^{\infty}({\bf R})$. Moreover, we define
\begin{align*}
g(x, \tau) = - \chi_0 (x /\tau)
\end{align*}
for $\tau >0$; see Definition 2.1 in \cite{Sk} with $(\alpha, \beta) = (0,0)$, where $(\alpha, \beta)$ denote the parameters of \cite{Sk}. The key operator in \cite{Sk} is $\SCR{A}(\tau)$, which must satisfy Assumption 2.2 in \cite{Sk}. We set $\SCR{A}(\tau) = \SCR{A} - 3 \varepsilon \tau $ and verify that $\SCR{A}(\tau)$ and $H$ satisfy Assumption 2.2 in \cite{Sk}. Here, the important point is the lower boundedness of $H$, which was employed in \cite{Sk} (see Lemma 2.11 in \cite{Sk}) to show the domain-invariant property \eqref{adadad1}. However, our Hamiltonian $H$ is not bounded from below. Hence, instead of relying on the lower boundedness of $H$, we provide a different proof (see Lemma \ref{LL1}). Throughout this section, for two operators $A$ and $B$, we define ${\rm ad}^k_A(B)$ by ${\rm ad}^0_A(B)=B$ and ${\rm ad}^k_A(B)=[{\rm ad}^{k-1}_A(B),A]$ for $k\in{\bf N}$.
First, we demonstrate that, for any $n_0 \in {\bf N}\cup \{0 \}$, $\mathrm{ad}^{n_0}_{\SCR{A}(\tau)} (H) = \mathrm{ad}^{n_0}_{\SCR{A}} (H) $ can be extended to a symmetric operator on $\D{H}$. Evidently, $\mathrm{ad}^0_{\SCR{A}(\tau)} (H) = H$ satisfies this assumption. From the previous calculation and Lemma \ref{L1}, we have that
\begin{align*}
\mathrm{ad}_{\SCR{A}(\tau)}^1(H) &= - \alpha \left( p \cdot x \J{x}^{- \alpha -2} + \J{x}^{- \alpha -2} x \cdot p \right) x \cdot p + 2\J{x}^{- \alpha} p^2
+ \alpha \sigma|x|^{\alpha} \J{x}^{- \alpha} \\ & \quad - \J{x}^{- \alpha} (x \cdot \nabla V(x)) + (\mathrm{h.c.})
\end{align*}
can be extended to a symmetric operator on $\D{H}$, where we use the notation $A + (\mathrm{h.c.}) = A + A^{\ast}$ for an operator $A$. Continuing similar calculations, we obtain that $\mathrm{ad}^{n_0}_{\SCR{A}(\tau)} (H)$ can be extended to a symmetric operator on $\D{H}$.
Next, we demonstrate that $\| H e^{-is \SCR{A}} (H +i)^{-1} \| \leq C$ for any $s \in [0,1]$. Let $\psi \in \SCR{S}({\bf R}^n)$ with $\SCR{F}[\psi] \in C_0^{\infty} ({\bf R}^n)$. Then, there exists $\Psi \in C_0^{\infty} ({\bf R})$ such that $\psi = \Psi (p^2) \psi $. The pseudo-differential operator can then be defined as follows:
\begin{align*}
\CAL{A}(s) \psi := \int_{{\bf R}^n} e^{ix \cdot \xi} a(s; x,\xi) \hat{\psi} (\xi) d \xi , \quad a(s; x, \xi) = e^{-i s (\J{x}^{- \alpha} x \cdot \xi + \xi \cdot x \J{x}^{- \alpha})} \Psi (\xi).
\end{align*}
Then, noting that $ \| |x|^{\alpha} \J{x}^{-2} \| \leq C $, the bound
$$ \| H e^{-i s \SCR{A} } \psi \| \leq C \| (1+p^2+x^2) \psi \| < \infty $$
can be obtained.
Let $u,v \in \SCR{S} ({\bf R}^n)$ with $\SCR{F}[u],\SCR{F}[v] \in C_0^{\infty} ({\bf R}^n)$, and consider the form
\begin{align}\label{adadad2}
\left( H e^{- is \SCR{A}}u, e^{- is \SCR{A}}v \right) = \left( e^{ is \SCR{A}}H e^{- is \SCR{A}}u, v \right).
\end{align}
By the above argument, we note that $(\CAL{N}_2 -i)^{-1}e^{ is \SCR{A}}H e^{- is \SCR{A}} (\CAL{N}_2 +i)^{-1} $ is strongly differentiable in $s$ and that its derivative is integrable over $[0,s]$. We determine that
\begin{align*}
&(\CAL{N}_2 -i)^{-1}e^{ is \SCR{A}}H e^{- is \SCR{A}} (\CAL{N}_2 +i)^{-1}
\\ & = (\CAL{N}_2 -i)^{-1}H (\CAL{N}_2 +i)^{-1} - \int_0^s (\CAL{N}_2 -i)^{-1}e^{ i\tau \SCR{A}} i[H, \SCR{A}] e^{- i\tau \SCR{A}} (\CAL{N}_2 +i)^{-1} d \tau.
\end{align*}
Then, the form \eqref{adadad2} satisfies
\begin{align*}
\left( \left( e^{ is \SCR{A}}H e^{- is \SCR{A}}u -Hu + \int_0^s e^{ i\tau \SCR{A}} i[H, \SCR{A}] e^{- i\tau \SCR{A}} u d \tau \right), v \right) =0.
\end{align*}
Because $v$ can be taken from a dense subset of $L^2({\bf R}^n)$, for all $\psi \in \SCR{S}({\bf R}^n)$ with $\SCR{F}[\psi] \in C_0^{\infty} ({\bf R}^n)$, we have that
\begin{align*}
\left\| H e^{-i s \SCR{A}} \psi \right\| &= \left\| e^{i s \SCR{A}} H e^{-i s \SCR{A}} \psi \right\|
\\ & \leq \left\| H \psi \right\| + \int_0^{s} \left\| e^{i \tau \SCR{A}} i[\SCR{A} , H ] e^{-i \tau \SCR{A}} \psi \right\| d \tau
\\ & \leq C \left\| H \psi \right\| + \int_0^{s} \left\| i[\SCR{A} , H ] (H+i)^{-1} \right\| \left\| (H+i) e^{-i \tau \SCR{A}} \psi \right\| d \tau
\\ & \leq C \left\| (1 +H) \psi \right\| + C\int_0^{s} \left\|H e^{-i \tau \SCR{A}} \psi \right\| d \tau.
\end{align*}
The Gronwall inequality shows that
\begin{align*}
\left\| H e^{-i s \SCR{A}} \psi \right\| \leq C \| (1 +H) \psi \|.
\end{align*}
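Indeed, setting $f(s) := \left\| H e^{-is \SCR{A}} \psi \right\|$, the previous estimate reads $f(s) \leq C \| (1+H) \psi \| + C \int_0^s f(\tau) d \tau$ for $s \in [0,1]$, and hence $f(s) \leq C e^{Cs} \| (1+H) \psi \| \leq C' \| (1+H) \psi \|$.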
Because $e^{i s \SCR{A}} H e^{-i s \SCR{A}}$ is a closed operator and $\SCR{F}^{-1} C_0^{\infty} ({\bf R}^n)$ is dense in $\D{H}$, we have that, for all $\phi \in \D{H}$, $\left\| H e^{-i s \SCR{A}} \phi \right\| \leq C \| (1 +H) \phi \|$.
Finally, we show the domain-invariant property.
\begin{lem}\label{LL1}
Let $\varphi \in C_0^{\infty} ({\bf R})$. Then, for any $t \in {\bf R}$ and $N \in {\bf N}$,
\begin{align} \label{adadad3}
e^{-itH} \varphi(H) \D{\SCR{A} ^N} \subset \D{\SCR{A}^N}
\end{align}
holds. Moreover, for all $\psi \in \D{\SCR{A}^N }$, there exists $C_N>0$ such that
\begin{align}\label{adadad4}
\left\| \SCR{A}^N e^{-itH} \varphi(H) \psi \right\| \leq C_N t^{N+1} \| \SCR{A}^N \psi \|.
\end{align}
\end{lem}
\Proof{
Straightforward calculations show that:
\begin{align*}
& e^{-itH}\varphi(H) (\SCR{A}+ i )^{-N}
\\ & = (\SCR{A} +i)^{-1} \left[ \SCR{A}, e^{-itH}\varphi(H) \right] (\SCR{A} +i)^{-N} + (\SCR{A} +i)^{-1} e^{-itH}\varphi(H)(\SCR{A} +i)^{1-N}
\\ & \quad \vdots
\\ & = (\SCR{A} + i)^{- N } \sum_{l_1 =0}^{N} C_{l_1} \mathrm{ad}_{\SCR{A}}^{N-l_1} (e^{-itH}\varphi(H)) (\SCR{A} + i)^{- N + l_1 }.
\end{align*}
We know that the commutator $ \mathrm{ad}_{\SCR{A}}^{N-l_1} (e^{-itH}\varphi(H)) $ can be defined inductively using the Helffer-Sj\"{o}strand formula; because $ \CAL{C}_{t} (x) := \cos (t x) \varphi( x ) \in C_0^{\infty} ({\bf R})$ (as well as $\CAL{S}_t (x) := \sin (t x) \varphi( x )$), for any fixed $t$, we can apply the Helffer-Sj\"{o}strand formula to $\CAL{C}_t(H)$ and obtain
\begin{align*}
\CAL{C}_t (H) = c' \int_{{\bf C}} \overline{\partial _z} \widetilde{c_t} (z) (z-H)^{-1} dz d \bar{z},
\end{align*}
where $c' = (2 \pi i )^{-1}$, $\widetilde{c_t} $ is the almost analytic extension of $\CAL{C}_t$, and $\widetilde{c_t } (z)$ is written as
\begin{align*}
\widetilde{c_t} (z) = \sum_{k=0}^{N_0-1} c'_k \left( \frac{d^k}{dx^k} \left( \CAL{C}_t^{} (x) \right) \right) y^{k} \zeta ( y/\J{x} ), \quad z = x +iy, \quad c_k '= \frac{i^k}{k!}.
\end{align*}
Here, $\zeta \in C_0^{\infty} ({\bf R})$ denotes a cut-off that equals $1$ in a neighborhood of the origin. Because $\CAL{C}_t \in C_0^{\infty} ({\bf R})$, for any $s >0$ and $N_0 \in {\bf N}$, the following well-known estimate holds:
\begin{align*}
\left| \overline{\partial _z} \widetilde{{c}_t} (z) \right| \leq C t^k |\mathrm{Im}z|^k \J{z}^{-k-s-1}, \quad 1 \leq k \leq N_0-1,
\end{align*}
where $C$ is independent of $t$. Then, it immediately follows that
\begin{align*}
\left[ \SCR{A}, \cos (tH)\varphi(H) \right]
&= c' \int_{{\bf C}} \overline{\partial _z} \widetilde{c_t} (z) [\SCR{A} , (z-H)^{-1} ] dz d \bar{z}
\\ & = c' \int_{{\bf C}} \overline{\partial _z} \widetilde{c_t} (z) (z-H)^{-1} [\SCR{A} , H ]
(z- H)^{-1} dz d \bar{z}
\end{align*}
is a bounded operator (and $\left[ \SCR{A}, \sin (tH)\varphi(H) \right]$ is bounded), and it also follows that
\begin{align*}
\left\|
\left[ \SCR{A}, \cos (tH)\varphi(H) \right]
\right\| \leq C \int_{\bf C} \J{z}^{-s-3} t^2 |\mathrm{Im} z|^2 \| (z-H)^{-1} \|^2 dz d \bar{z} \leq C t^{2}.
\end{align*}
Inductively, we determine that $\mathrm{ad}_{\SCR{A}}^N (\cos (tH) \varphi(H)) $ is defined, bounded, and satisfies\\ $\| \mathrm{ad}_{\SCR{A}}^N (\cos (tH) \varphi(H)) \| \leq Ct^{N+1}$ (as well as $\mathrm{ad}^N_{\SCR{A}} (\sin (tH) \varphi(H)) $). This proves \eqref{adadad3} and \eqref{adadad4}.
}
\begin{rem}
The growth order in $t$ in \eqref{adadad4} is stronger than the corresponding result in \cite{Sk} (in \cite{Sk}, the growth order is $t^N$); however, the precise growth order is not essential to the proof of Theorem 2.4 of \cite{Sk}.
\end{rem}
Owing to Corollaries 2.5 and 2.6 in \cite{Sk} and the Mourre inequality, by taking $n_0$ to be sufficiently large, we obtain the following for large $\alpha _0 ' $ and $\psi \in \D{\SCR{A}^{\alpha _0 ' /2} }$:
\begin{align*}
\left\|
\sqrt{ \chi_0 ( \SCR{A}(\tau) / \tau ) } e^{-itH} \varphi (H ) \psi
\right\| \leq C \tau ^{ - \alpha _0' /2} \left\| \J{\SCR{A}}^{ \alpha _0' /2} \psi \right\|.
\end{align*}
By taking $\varepsilon$ as $\delta$, $\tau$ as $t$, $\alpha_0'$ as $2 \kappa$, and $\sqrt{\chi_0}$ as $g$, Theorem \ref{T2} can be shown.
\section{Nonexistence of wave operators}
We now prove the main theorem. The proof is divided into two parts: \\ ~~ \\
{\bf Case 1:} We prove the case where $\theta = \rho$. \\
{\bf Case 2:} We prove the case where $\theta < \rho$. \\ ~~ \\
The key argument involves deducing the decay estimate,
\begin{align*}
\left\| V e^{-itH_0} \phi \right\| \leq C |t|^{-1},
\end{align*}
for $ \phi \in \SCR{S} ({\bf R}^n)$, and to show this, we employ Theorem \ref{T2}, which yields
\begin{align*}
\left\| V e^{-itH_0} \phi \right\| & \leq \left\| V g(\SCR{A} /t) e^{-itH_0} \phi \right\| + \left\| V\left( 1- g(\SCR{A} /t) \right) e^{-itH_0} \phi \right\| \\
& \leq C |t|^{-1} + \left\| V\left( 1- g(\SCR{A} /t) \right) e^{-itH_0} \phi \right\|.
\end{align*}
In addition, it is necessary to show the decay estimate:
\begin{align*}
\left\| V\left( 1- g(\SCR{A} /t) \right) e^{-itH_0} \phi \right\| \leq C |t|^{-1} .
\end{align*}
To justify this estimate, $V$ must decay as $|V(x)| \leq C \J{x}^{- \rho} $ because $\SCR{A} \J{x}^{- \theta} \varphi (H_0) $ is an unbounded operator if $\theta < \rho$. Hence, this argument cannot be applied directly to the case $\theta < \rho$. We therefore first show Theorem \ref{T1} with $\theta = \rho$; then, by employing a different approach, we show Theorem \ref{T1} with $\theta < \rho$.
\\ ~~ \\
{\bf Proof of Case 1:} \\ We assume that
\begin{align*}
W^+ := \mathrm{s-} \lim_{t \to \infty} e^{itH} e^{-itH_0}
\end{align*}
exists and that it leads to a contradiction. Let $t_2 >t_1 \gg 1$ and
\begin{align*}
Y(t_1, t_2) := \left(
\left(
e^{it_2H} e^{-it_2 H_0} - e^{it_1H} e^{-it_1 H_0}
\right) \phi, W^+ \phi
\right),
\end{align*}
where we set $\phi \in \SCR{S}({\bf R}^n)$ such that $\phi = \varphi (H_0) \phi$ with $\varphi$ defined as in \S{3}. Then, $Y(t_1,t_2)$ can be estimated as follows:
\begin{align*}
|Y(t_1, t_2) | &= \left|
\int_{t_1}^{t_2} \frac{d}{dt} \left(
e^{itH} e^{-itH_0} \phi, W^+ \phi
\right) dt
\right|
\\ &=
\left|
\int_{t_1}^{t_2} \left(
e^{itH} V e^{-itH_0} \phi, W^+ \phi
\right) dt
\right|
\\ & \geq |J_1| -|J_2|
\end{align*}
with
\begin{align*}
J_1 =
\int_{t_1}^{t_2} \left( V e^{-itH_0} \phi , e^{-itH_0} \phi \right) dt
\end{align*}
and
\begin{align*}
J_2 = \int_{t_1}^{t_2} \left(
V e^{-itH_0} \phi , e^{-itH} \left( W^+ - e^{itH} e^{-itH_0} \right) \phi
\right) dt.
\end{align*}
By the definition of $V$, for all $\psi \in L^2({\bf R}^n)$, we have
\begin{align*}
| (V \psi, \psi ) | \geq c_0 \left| \left( \J{x}^{- \rho} \psi, \psi \right) \right|,
\end{align*}
which yields the following for $\tilde{F}(s) = {1 - F (s) }$ with the $F$ in Proposition \ref{P2}:
\begin{align*}
c_0 ^{-1}|J_1| &\geq
\int_{t_1}^{t_2} \left( \J{x}^{- \rho} \tilde{F}(|x|^{\rho}/t) e^{-itH_0} \phi ,
\tilde{F}(|x|^{\rho}/t) e^{-itH_0} \phi
\right) dt.
\end{align*}
On the support of $\tilde{F}(|x|^{\rho}/t)$, $|x|^{\rho} \leq 2A_{1,R} t$ holds. Therefore,
\begin{align*}
|J_1| &\geq c_0 (2A_{1,R} + 1)^{-\rho}
\int_{t_1}^{t_2} \left\| \tilde{F}(|x|^{\rho}/t) e^{-itH_0} \phi \right\| ^2 \frac{dt}{t}.
\end{align*}
From Proposition \ref{P2} and $\tilde{F} = 1- F$, we have
\begin{align*}
|J_1| \geq \frac{3c_0 (2A_{1,R} + 1)^{- \rho}}{4} \| \phi \|^2 \int_{t_1}^{t_2} \frac{dt}{t} - C \| \J{x}^{\rho} \phi \|^2,
\end{align*}
using $|a+b|^2 \geq 3 a^2 /4 -3b^2$.
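Here, the elementary inequality $|a+b|^2 \geq 3a^2/4 - 3b^2$ follows from
\begin{align*}
a^2 = |(a+b) - b|^2 \leq \left( 1 + \frac{1}{3} \right) |a+b|^2 + (1+3) b^2 = \frac{4}{3} |a+b|^2 + 4 b^2 .
\end{align*}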
Next, we estimate $J_2$. Assuming that $W^{+}$ exists, for any $\varepsilon_0 >0$, sufficiently small compared with $A_{1,R}$ and $c_0$, there exists $t_1 >0$ such that for all $t > t_1$,
\begin{align*}
\left\| e^{-itH} \left( W^{+} -e^{itH}e^{-itH_0} \right) \phi \right\| \leq \varepsilon_0 \| \phi \|.
\end{align*}
Hence, we have
\begin{align*}
|J_2| \leq \varepsilon_0 \| \phi \| \int_{t_1}^{t_2} \left\| V( g(\SCR{A}/t ) + (1- g(\SCR{A}/t))) e^{-itH_0} \phi \right\| dt,
\end{align*}
with $g$ as in Theorem \ref{T2}. By Theorem \ref{T2} with $\kappa =2$, the term associated with $g(\SCR{A}/t )$ can be estimated as
\begin{align} \label{8}
C \varepsilon_0 \| \phi \| \| \J{\SCR{A}}^2 \phi \| \int_{t_1}^{t_2} \frac{dt}{t^2} \leq C \varepsilon_0 \| \phi \| \| \J{\SCR{A}}^{2} \phi \|.
\end{align}
Next, we estimate the term associated with $ 1- g(\SCR{A}/t) $. We first show the following inequality:
\begin{align} \label{7}
\| V (1- g(\SCR{A}/t)) \varphi (H_0) \| \leq C t^{-1} .
\end{align}
Using the Helffer-Sj\"{o}strand formula, the boundedness of $[\J{x}^{- \rho} , \SCR{A} ]$, and the commutator expansion (see \S{C.3} in Derezi\'{n}ski-G\'{e}rard \cite{DG}), we have
\begin{align*}
\J{x}^{- \rho} (1 - g(\SCR{A}/t )) \varphi (H_0) = t^{-1} B_0 + (1-g(\SCR{A}/t)) \J{x}^{- \rho} \varphi (H_0).
\end{align*}
Since $|\SCR{A}| \geq \delta t$ holds on the support of $1-g(\SCR{A}/t)$, we obtain
\begin{align*}
\left\|
(1-g(\SCR{A}/t)) V \varphi (H_0)
\right\| \leq \frac{1}{\delta t} \left\| \SCR{A} \J{x}^{- \rho} \varphi (H_0) \right\| \leq C t^{-1} + Ct^{-1}\sum_{j=1}^n \left\| \J{x}^{- \alpha /2} p_j \varphi(H_0) \right\|.
\end{align*}
Hence, \eqref{7} is obtained. Then, we have
\begin{align*}
\int_{t_1}^{t_2} \left\| V (1- g(\SCR{A}/t)) e^{-itH_0} \phi \right\| dt \leq C \| \phi \| \int_{t_1}^{t_2} \frac{dt}{t},
\end{align*}
which with \eqref{8} yields
\begin{align} \label{9}
\left| J_2 \right| \leq C \varepsilon_0 \| \phi \| \left( \| \phi \| + \left\| \J{\SCR{A}}^2 \phi \right\| \right) + C \varepsilon_0 \| \phi \|^2 \int_{t_1}^{t_2} \frac{dt}{t}.
\end{align}
~~ \\
{\bf Conclusion} \\
Suppose that $W^{+}$ exists and let $\phi \in \SCR{S}({\bf R}^n)$ satisfy $\phi = \varphi (H_0) \phi$. Let $t_1$ be sufficiently large, such that $\varepsilon_0$ in \eqref{9} becomes sufficiently small compared with $3c_0(2A_{1,R} + 1)^{- \rho}$. In this situation, we have, on the one hand:
\begin{align} \label{10}
\left|
Y(t_1, t_2)
\right| \leq 2 \| \phi \|^2 \leq 2 \| \phi \| \left( \| \J{x}^{\rho} \phi \| + \| \J{\SCR{A}}^2 \phi \| \right),
\end{align}
and on the other hand:
\begin{align}
\nonumber \left|
Y(t_1, t_2)
\right| &\geq |J_1| - | J_2|
\\ &
\label{11}
\geq \left( \frac{3c_0 (2A_{1,R} +1)^{- \rho}}{4} - C\varepsilon_0 \right) \| \phi \|^2 \int_{t_1}^{t_2} \frac{dt}{t} - C \| \phi \| \left( \| \J{x}^{\rho} \phi \| + \| \J{\SCR{A} } ^2 \phi \| \right).
\end{align}
Take $\phi$ with $\| \phi \| = 1$ and set $\tilde{C} := \| \J{x}^{\rho} \phi \| + \| \J{\SCR{A} }^2 \phi \|$. Then, \eqref{10} and \eqref{11} imply that:
\begin{align*}
\int_{t_1}^{t_2} \frac{dt}{t} \leq C \tilde{C},
\end{align*}
which fails as $t_2 \to \infty$. This contradiction indicates that $W^{+}$ does not exist.
\\ ~~ \\
{\bf Proof of Case 2:} \\ Let $H = H_0 +V$ and $H_{\rho } = H_0 + \J{x}^{- \rho } + V $, where $V$ satisfies Assumption \ref{A1} with $0< \theta < \rho$, and we define
\begin{align} \label{ad20}
W_{\theta}^+ := \mathrm{s-} \lim_{t \to \infty} e^{itH} e^{-itH_0}
\end{align}
and
\begin{align} \label{ad21}
W_{\theta, \rho}^+ := \mathrm{s-} \lim_{t \to \infty} e^{itH_\rho} e^{-itH_0}.
\end{align}
Here, we note that all arguments in the proof of Case 1 are true for the pair $e^{itH} e^{-itH_{\rho}}$ because
$$
\frac{d}{dt} e^{itH} e^{-itH_{\rho}} = -i e^{itH} \J{x}^{- \rho} e^{-itH_{\rho}} ,
$$
which implies that $\mathrm{s-} \lim_{t \to \infty} e^{itH} e^{-itH_{\rho}} $ does not exist. Here, we note that by a density argument, the unitarity of $ e^{itH} e^{-itH_{\rho}}$, and the arbitrariness of the choice of $\varphi \in C_0^{\infty} ({\bf R})$, we can demonstrate the nonexistence of wave operators in the following sense:
\begin{align} \label{M-ad1}
\mbox{``} \forall u \in L^2({\bf R}^n) \backslash \{0\},\, \nexists v \in L^2({\bf R}^n) \mbox{ s.t. }{\displaystyle \mathrm{s-} \lim_{t \to \infty} e^{itH} e^{-itH_{\rho}} u = v} \mbox{''}.
\end{align}
Then, the identity
\begin{align*}
e^{itH} e^{-itH_0} = e^{itH} e^{-itH_{\rho}} \cdot e^{itH_{\rho}} e^{-itH_0}
\end{align*}
shows that $W^+_{\theta}$ and $W^{+}_{\theta, \rho}$ cannot both exist. Indeed, if both $W^+_{\theta}$ and $W^{+}_{\theta, \rho} $ exist, then for all $u \in L^2({\bf R}^n)$, there exist $w_{+, \theta} , w_{+, \theta, \rho} \in L^2({\bf R}^n)$ such that
\begin{align*}
e^{itH}e^{-itH_0}u - w_{+, \theta} \to 0 \quad \mbox{and} \quad e^{itH_{\rho}}e^{-itH_0} u - w_{+, \theta, \rho} \to 0
\end{align*}
hold. Then, the following also follows:
\begin{align*}
0 &= \lim_{t \to \infty} \left\| e^{itH} e^{-itH_0} u - w_{+, \theta} \right\|
\\ &= \lim_{t \to \infty} \left\| e^{itH}e^{-itH_{\rho}} \left( e^{itH_{\rho}}e^{-itH_0} u -w_{+,\theta, \rho} \right) + \left( e^{itH}e^{-itH_{\rho}} w_{+,\theta, \rho}- w_{+, \theta} \right) \right\|,
\end{align*}
which yields
\begin{align*}
\lim_{t \to \infty} \left\| e^{itH}e^{-itH_{\rho}} w_{+,\theta, \rho}- w_{+, \theta} \right\| \leq \lim_{t \to \infty} \left\| e^{itH_{\rho}}e^{-itH_0} u -w_{+,\theta, \rho} \right\| = 0.
\end{align*}
Hence, $e^{itH}e^{-itH_{\rho}} w_{+,\theta, \rho} \to w_{+, \theta}$, which contradicts \eqref{M-ad1}. Because $V + \J{x}^{- \rho} $ satisfies Assumption \ref{A1} with the same $\theta < \rho$, the nonexistence of $W^+_{\theta}$ or of $W^{+}_{\theta, \rho} $ implies the nonexistence of both $W^+_{\theta}$ and $W^{+}_{\theta, \rho} $, which is the desired result.
\section{Introduction}\label{sc:setting of the problem}
In this paper we consider the following non-linear parabolic equation
\begin{equation}\label{eq: PDE non lin iniziale}
\left\{\begin{array}{lcr}
\frac{\partial u(t,x)}{\partial t}= \Delta u (t,x) + F( \nabla u(t,x)) b (t, x),&\quad& x\in \mathbb R^d ,t \in (0,T]\\
u(0,x)=u_0(x), && x\in \mathbb R^d
\end{array}\right.
\end{equation}
where $u:[0,T]\times \mathbb R^d \to \mathbb R$ is the unknown, $b:[0,T]\times \mathbb R^d \to \mathbb R$ is a given (generalised) function and $u_0: \mathbb R^d \to \mathbb R$ is a suitable initial condition.
Here the gradient operator $\nabla$ and the Laplacian $\Delta$ refer to the space component. The term $F: \R^d \to \R$ is a non-linear map of quadratic type whose regularity will be specified below (see Assumption A1).
In this paper we are interested in the case when the coefficient $b$ is highly singular in the space component, in particular we will consider bounded functions of time taking values in a suitable class of Schwartz distributions, $b\in L^\infty([0,T]; \mathcal C^{\beta}(\mathbb R^d))$ for some $\beta\in (-1/2 ,0)$. Here $\mathcal C^\beta$ is a Besov space whose exact definition will be recalled later.
The main motivation for looking at this kind of \emph{rough} equation with singular coefficients comes from Physics. In recent years there has been great interest in the study of stochastic partial differential equations (SPDEs), fuelled by the success of the theories of regularity structures by Hairer \cite{hairer14} and of paracontrolled distributions by Gubinelli and coauthors \cite{gubinelli04, gubinelli-imkeller-perkowski, gubinelli-perkowski15}. These two theories allowed for the first time to study stochastic PDEs with very singular coefficients (such as the Kardar-Parisi-Zhang equation, see \cite{hairer13}), which had posed long-standing problems. Amongst the many papers in the area of stochastic PDEs that build on these ideas, we mention a series of recent ones on quasilinear stochastic PDEs \cite{ballieul_et.al, furlan_gubinelli, gerencser_hairer, otto_et.al, otto_weber} that may be of interest to the reader.
Also in the present paper we consider a quasilinear PDE, but a deterministic one, where one of the coefficients is \emph{singular} because it is a distribution. This coefficient, however, is \emph{regular} enough to allow for \emph{Young-type} products to be used.
This approach is the same in spirit as \cite{hinz_et.al, hinz_zahle, issoglio13, issoglio_zahle}, where the authors look for solutions to linear and non-linear parabolic PDEs for which some of the coefficients are distributions that may arise as realisations of stochastic noises. The aim of these papers, as well as of the present work, is to solve such PDEs with \emph{classical} techniques and in particular without using any special properties of the coefficients that derive from the fact that they are the realisation of a stochastic noise -- hence avoiding the machinery mentioned above for SPDEs. This of course results in restrictions on the (ir)regularity of the distributional coefficient $b$ (which would play the role of the space-time noise in the SPDE context). In the present paper, the non-linearity $F$ is assumed to be continuously differentiable with Lipschitz partial derivatives, hence allowing for quadratic growth.
To the best of our knowledge this is the first time that existence and uniqueness of mild solutions for \eqref{eq: PDE non lin iniziale} is studied in the literature. It may be worth emphasising that the key technical difficulty is that the non-linearity involves the gradient of the unknown (as, for example, in Burgers' equation). This is different to \cite{issoglio_zahle}, where the non-linearity involves the solution itself. In both cases, the non-linear term is `multiplied' by the distributional coefficient.
Our main result is {\em local existence and uniqueness of a mild solution} in $C([0,T]; \mathcal C^{\alpha+1})$, where $\alpha>0$ depends on $\beta$ (see Assumption A2 below). Here a local solution means either a solution with an arbitrary initial condition and a sufficiently small time $T$ (see Theorem \ref{thm: local fixed point for J}) or with an arbitrary time $T$ but a sufficiently small (in norm) initial condition (see Theorem \ref{thm: fixed point for J for small u0}). Both theorems are proven with a fixed point argument and careful a priori bounds on the quadratic non-linearity $F$. We also show continuity of the solution with respect to the initial condition (Proposition \ref{pr: continuity wrt u0}) and we start to investigate blow-up times for the solution (see Proposition \ref{pr: blow up}).
The quadratic growth of the non-linearity $F$ is the main issue that prevents us from finding a global in time solution. Indeed, if we assume that $F$ is Lipschitz with sub-linear growth (see Assumption A4) then we can show {\em existence and uniqueness of a global mild solution} in $ C^\eps([0,T]; \mathcal C^{\alpha+1})$ for some $\eps>0$ and for all times $T<\infty$ (see Theorem \ref{thm:global}).
To conclude the paper we illustrate an application of PDE \eqref{eq: PDE non lin iniziale} to stochastic analysis, in particular to a class of non-linear backward stochastic differential equations (BSDEs) with singular coefficients. This example falls in the class of quadratic BSDEs and the novelty is the presence of a distributional coefficient in the so-called \emph{driver} of the BSDE. The study of quadratic BSDEs was initiated in 2000 by Kobylanski \cite{kobylanski}, while BSDEs with singular terms (mostly linear) have started gaining attention only recently, see e.g.\ \cite{diehl_friz12, diehl_zhang17, IssoglioJing16, issoglio_russo}. To the best of our knowledge, the only paper that deals with {\em singular} quadratic BSDEs is \cite{eddahbi}, but there the singular term is a linear stochastic integral with respect to a rough function, unlike in the present paper where the singularity appears in the quadratic term.
\vspace{5pt}
The paper is organised as follows: In Section \ref{sc:preliminaries} we recall known results that will be needed later, including the definition of product between distributions and the definition of the function spaces used. In Section \ref{sc: solving the PDE} we show useful properties of the integral operator appearing in the mild solution and show all necessary a priori bounds and contraction properties. Using those we prove the main result of local existence and uniqueness of a mild solution (Theorems \ref{thm: local fixed point for J} and \ref{thm: fixed point for J for small u0}). We also investigate continuity with respect to initial condition and blow-up of the solution. In Section \ref{sc:global} we study global existence and uniqueness of a mild solution (see Theorem \ref{thm:global}) under more restrictive assumptions on the non-linearity.
Finally in Section \ref{sc: applications} we apply these results to stochastic analysis, and give a meaning and solve a class of non-linear BSDEs with distributional coefficients.
For ease of reading we collect here some of the function spaces used most often in this paper (and point the reader to the precise definition in the section below when needed). We have
\begin{itemize}
\item $C_TX:= C([0,T];X)$, that is the space of $X$-valued continuous functions defined on $[0,T]$ for any Banach space $X$, see Section \ref{sc:preliminaries}
\item $L^\infty_TX:= L^\infty(0,T;X)$, that is the space of $X$-valued $L^\infty$-functions defined on $[0,T]$ for any Banach space $X$
\item $\mathcal C^\gamma : = B^\gamma_{\infty, \infty}$, where the Besov spaces $B^\gamma_{p,q}$ are defined in \eqref{eq: Besov spaces alpha p q}
\item $C_T\mathcal C^{\alpha+1}$ is then a particular case (often used below) and this is the space of continuous functions of time defined on $[0,T]$ taking values in the Besov space $\mathcal C^{\alpha+1}$
\item $ C^\eps_T\mathcal C^{\alpha+1}:= C^\eps([0,T];\mathcal C^{\alpha+1} )$ is the space of $\eps$-H\"older continuous functions on $[0,T]$ taking values in the Besov space $\mathcal C^{\alpha+1}$, see Section \ref{sc:global}
\end{itemize}
\section{Preliminaries}\label{sc:preliminaries}
\subsection{Fractional Sobolev spaces, semigroups and products}\label{ssc: sobolev sp and semigroups}
We start by recalling the definition of Besov spaces $B^\gamma_{p,q}$ on $\mathbb R^d$ for $\gamma\in \mathbb R$ and $1 <p,q\leq \infty$. For more details see for example Triebel \cite[Section 1.1]{triebel10} or Gubinelli \cite[Appendix A.1]{gubinelli-imkeller-perkowski}. Let $\mathcal S'$ be the space of real valued Schwartz distributions on $\mathbb R^d$.
We denote by $|\cdot|_d$ the Euclidean norm in $\mathbb R^d$. For $x, y \in \mathbb R^d$ we write $x\cdot y$ to denote the scalar product in $\mathbb R^d$.
Let us consider a dyadic partition of unity $\{\phi_j, j\geq 0\}$ with the following properties: the zero-th element is such that
\[
\phi_0(x) = 1 \text{ if } |x|_d\leq 1 \quad \text{ and } \quad \phi_0(x) = 0 \text{ if } |x|_d\geq \frac32
\]
and the rest satisfies
\[
\phi_j (x) = \phi_0 (2^{-j}x)- \phi_0(2^{-j+1}x) \text{ for } x\in \mathbb R^d \text{ and } j\in \mathbb N.
\]
We define
\begin{equation}\label{eq: Besov spaces alpha p q}
B^\gamma_{p,q}:= \left\{ f\in \mathcal S' \, : \, \|f\|_{B^\gamma_{p,q}}:= \left( \sum_{j=0}^\infty 2^{\gamma j q} \|(\phi_j \hat f )^\vee\|_{L^p}^q \right)^{1/q}<\infty \right\},
\end{equation}
where $\hat \cdot $ and $()^\vee$ denote the Fourier transform and its inverse, respectively. If $q=\infty$ in \eqref{eq: Besov spaces alpha p q} we consider the usual modification of the norm as follows
\begin{equation*}
\|f\|_{B^\gamma_{p,\infty}}:= \sup_{j} 2^{\gamma j } \|(\phi_j \hat f )^\vee\|_{L^p}
\end{equation*}
In the special case where both $p=q=\infty$ in \eqref{eq: Besov spaces alpha p q}, we use a different notation for the Besov space, namely $\mathcal C^\gamma := B^\gamma_{\infty, \infty}$. The norm in this space will be denoted by $\|\cdot\|_\gamma$. Note that the norm depends on the choice of the dyadic partition of unity $\{\phi_j\}$ but the space $B^\gamma_{p,q}$ does not, and all norms defined with a different $\{\phi_j\}$ are equivalent. In the case when $0<\gamma<1$ we will sometimes use yet another equivalent norm in $\mathcal C^\gamma$ which is given by
\begin{equation}\label{eq: equivalent norm C alpha}
\sup_{x\in\mathbb R^d } \left( |f(x)| + \sup_{0<|h|_d\leq 1} \frac{|f(x+ h)-f(x) |}{|h|_d^{\gamma}}\right),
\end{equation}
see \cite[equation (1.22) with $m=1$]{triebel10}.
Note moreover that for a non-integer $\gamma>0$, the space $\mathcal C^\gamma$ is the usual space of functions differentiable $m$ times (with $m$ being the highest integer smaller than $\gamma$), with bounded partial derivatives up to order $m$ and whose partial derivatives of order $m$ are ($\gamma-m$)-H\"older continuous (see \cite[page 99]{bahouri}).
On the other hand, if $\gamma<0$ then the space $\mathcal C^\gamma$ contains distributions.
Besov spaces are well suited to give a meaning to multiplication between distributions. Indeed, using Bony's estimates (see \cite{bony}) one can show that if $f\in \mathcal C^\gamma$ and $g\in \mathcal C^\delta$ with $\gamma+\delta>0$ and $\delta<0$, then the product $fg$ is well-defined as an element of $\mathcal C^\delta$ and
\begin{equation}\label{eq: Bony's estimates}
\| fg \|_\delta \leq c \| f\|_\gamma \| g\|_\delta,
\end{equation}
for some constant $c>0$, see \cite[Lemma 2.1]{gubinelli-imkeller-perkowski} for more details and a proof.
For a Banach space $X$, let $C_T X:= C([0,T]; X)$ denote the space of $X$-valued continuous functions of time. This is a Banach space endowed with the usual supremum norm
$$\| u\|_{C_T X}:= \sup_{t\in[0,T]} \|u(t)\|_{X}$$
for $u\in C_T X$.
On the same space $C_T X$ we consider a family of equivalent norms $\|\cdot\|^{(\rho)}_{C_T X}, \rho\geq 1$ given by
\begin{equation}\label{eq: equivalent norm}
\|u\|^{(\rho)}_{C_T X} := \sup_{t\in [0,T]} e^{-\rho t}\|u(t)\|_X
\end{equation}
for $u\in C_T X$. On the space $L_T^\infty X:= L^{\infty}([0,T]; X)$, where $X$ is a Banach space, we consider the norm ${\mathrm{ess sup}}_{t\in[0,T]} \|f(t)\|_{X}$ for a function $f:[0,T]\to X $ and we denote it by $\| f \|_{L_T^\infty X}$.
It is useful to rewrite equation \eqref{eq: PDE non lin iniziale} as the following abstract Cauchy problem
\begin{equation}\label{eq: PDE non lin Cauchy prb}
\left\{\begin{array}{lcl}
\frac{d u(t)}{d t}= \Delta u(t) +F (\nabla u(t)) b(t) &\quad& \text{on } \mathbb R^d \times(0,T]\\
u(0)=u_0, &&
\end{array}\right.
\end{equation}
where now $u$ denotes a function of time with values in an infinite dimensional space that will be specified later. The same notation is applied to the field $b$. We are now ready to introduce explicitly the notion of solution of \eqref{eq: PDE non lin iniziale} considered in this paper.
\begin{defin}
We say that $u\in C_T \mathcal C^{\alpha +1}$ is a mild solution of \eqref{eq: PDE non lin iniziale} or equivalently \eqref{eq: PDE non lin Cauchy prb} if it satisfies the following integral equation
\begin{equation}\label{eq: mild solution}
u(t)= P_t u_0+\int_{0}^t P_{t-s}\left( F(\nabla u(s)) b(s) \right) \mathrm ds,
\end{equation}
where $\{P_t\}_{t\geq 0} $ is the heat semigroup acting on the product $ F(\nabla u(s)) b(s) $.
\end{defin}
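We stress that, under Assumptions A1--A3 introduced below, the product appearing in the integrand of \eqref{eq: mild solution} is well-defined: for $u(s)\in \mathcal C^{\alpha+1}$ we have $\mathrm F(\nabla u(s))\in \mathcal C^{\alpha}$ (as shown in Proposition \ref{pr: mapping prop non linear F} below) and $b(s)\in\mathcal C^\beta$ with $\alpha+\beta>0$, so that by \eqref{eq: Bony's estimates}
\[
\left\| \mathrm F(\nabla u(s))\, b(s) \right\|_\beta \leq c\, \left\| \mathrm F(\nabla u(s)) \right\|_\alpha \, \|b(s)\|_\beta .
\]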
The generator of $\{P_t\}_{t\geq 0} $ is the Laplacian $\Delta$ and the semigroup acts on $\mathcal S'$ but as an operator it can be restricted to $\mathcal C^\gamma$ for any $\gamma$. It is known that the heat semigroup $P_t$ enjoys useful properties as a mapping on the $\mathcal C^\gamma$-spaces,
for example the well-known \emph{Schauder's estimates} (see e.g.~\cite[Lemma A.8]{gubinelli-imkeller-perkowski} or \cite[Prop.\ 2.4]{cannizzaro}) recalled in the following.
Let $\theta\geq0 $ and $\gamma\in \mathbb R$. For any $g\in \mathcal C^\gamma$ and $t>0$ we have $P_t g\in \mathcal C^{\gamma+2\theta}$ and
\begin{equation}\label{eq: mapping Pt Besov spaces}
\|P_tg\|_{\gamma+2\theta} \leq c t^{-\theta} \|g\|_{\gamma}
\end{equation}
and
\begin{equation}\label{eq: mapping Pt-I Besov spaces}
\|(P_t-1)g\|_{\gamma-2\theta} \leq c |t|^{\theta} \|g\|_{\gamma}.
\end{equation}
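For later use we note that \eqref{eq: mapping Pt Besov spaces} will typically be applied with $\gamma=\beta$ and $2\theta=\alpha+1-\beta$: for $g\in\mathcal C^\beta$ this gives
\[
\|P_{t-s} g\|_{\alpha+1} \leq c\, (t-s)^{-\frac{\alpha+1-\beta}{2}}\, \|g\|_{\beta},
\]
and under Assumption A2 below the exponent satisfies $\frac{\alpha+1-\beta}{2}<1$, so that the singularity in time is integrable.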
\subsection{Assumptions} We list here the main assumptions that we will use throughout the paper on the non-linear term $F$, on the parameters $\alpha, \beta$ and on the distributional term $b$.
\begin{description}
\item[A1] \textbf{Assumption on non-linear term $F$.}
\emph{Let $F:\mathbb R^d\to \mathbb R$ be a $\C^1$-function whose partial derivatives $\frac{\partial}{\partial x_i} F$ are Lipschitz with the same constant $L$ for all $i=1, \ldots, d$.}
\end{description}
Note that from Assumption A1 it follows that there exists a positive constant $l$ such that
\[
\left | \frac{\partial F}{\partial x_i}(x)\right | \leq l (1+|x|_d)
\]
for all $i=1,\ldots, d$.
The key example we have in mind is the {\em quadratic non-linearity} $F(x)=x^2$ (in dimension $d=1$).
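One can check directly that this example satisfies Assumption A1 with $L=l=2$: indeed $F'(x)=2x$, so that
\[
|F'(x)-F'(y)| = 2\,|x-y| \qquad \text{and} \qquad |F'(x)| = 2\,|x| \leq 2\,(1+|x|).
\]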
Using $F$ we define an operator $\mathrm F$ as follows: for any element $f\in \mathcal C^\alpha$ for some $\alpha>0$ we define the function $\mathrm F(f)$ on $\mathbb R^d$ by
\begin{equation}\label{eq: operator F}
\mathrm F(f)(\cdot):=F(f(\cdot)).
\end{equation}
\begin{description}
\item[A2] \textbf{Assumption on parameters.}
\emph{We choose $0<\alpha<1$ and $\beta<0$ such that $\max\{-\alpha, \alpha-1\}<\beta$. In particular this implies $-\frac12<\beta<0$.}
\item[A3] \textbf{Assumption on $b$}.
\emph{We take $b\in L_T^\infty \mathcal C^\beta$.}
\end{description}
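To fix ideas, a concrete admissible choice in Assumption A2 is $\beta=-\frac{3}{10}$ and $\alpha=\frac12$, for which indeed $\max\{-\alpha, \alpha-1\}=-\frac12<\beta<0$ and
\[
\alpha+\beta = \frac15 >0 \qquad \text{and} \qquad \frac{\alpha+1-\beta}{2} = \frac{9}{10} <1,
\]
so that the product $\mathrm F(\nabla u)\, b$ is well-defined by \eqref{eq: Bony's estimates} and the time singularity arising from \eqref{eq: mapping Pt Besov spaces} is integrable.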
\section{Solving the PDE}\label{sc: solving the PDE}
\subsection{On the non-linear term}\label{ssc: non-linear term}
In this section we prove a technical result that will be key to control the non-linear term in equation \eqref{eq: PDE non lin Cauchy prb} when applying a fixed point argument later on. We state and prove the result for the operator $\mathrm F$ applied to functions $f$ and $g$ with the same regularity as $\nabla u(s)$ will have.
\begin{prop}\label{pr: mapping prop non linear F}
Let $F: \mathbb R^d \to \mathbb R$ be a non-linear function that satisfies Assumption A1. Then the operator $\mathrm F$ defined in \eqref{eq: operator F} is a map
\[
\mathrm F: \mathcal C^\alpha \to \mathcal C^\alpha
\]
for any $\alpha\in(0,1)$. In particular, if $\mathbf 0$ denotes the zero-function, then $\|\mathrm F(\mathbf 0)\|_\alpha = |F(0)|$. Moreover, for $f, g: \mathbb R^d \to \mathbb R^d$ which belong to $\mathcal C^\alpha$ component by component, we have
\begin{align}\label{eq: mapping prop non linear F}
\|\mathrm F(f)-\mathrm F( g)\|_{\alpha} & \leq c( 1+ \| f \|_\alpha^2 +\| g \|_\alpha^2 )^{1/2} \|f-g\|_{\alpha}
\end{align}
where the constant $c$ depends on $ L, l$ and $d$.
\end{prop}
\begin{proof}
For simplicity of notation we will omit the brackets and sometimes write $\mathrm Ff-\mathrm Fg$ instead of $\mathrm F(f)-\mathrm F(g)$ for $f, g \in \mathcal C^\alpha$. We recall that a function is an element of $\mathcal C^\alpha$ if its norm is bounded. Moreover for $0<\alpha<1$ we can use the equivalent norm \eqref{eq: equivalent norm C alpha}.
We want to bound
\begin{align}\label{eq: eq for Ff-Fg}
\| \mathrm Ff-\mathrm Fg \|_\alpha :=& \sup_{x\in \mathbb R^d} |Ff(x) - Fg(x)| \nonumber \\
+& \sup_{0<|y|_d\leq 1} \sup_{x\in \mathbb R^d} \frac{|Ff(x+y) - Fg (x+y) - Ff(x) + Fg(x)|}{|y|_d^\alpha}.
\end{align}
Using the $ \C^1$ assumption on $F$, we have for $ a,b \in \R^d$ and $\theta\in[0,1]$ that
\begin{align*}
\frac{\mathrm d}{\mathrm d \theta} F( \theta a +(1-\theta)b ) &= \sum_{i=1}^d \frac \partial {\partial x_i} F ( \theta a +(1-\theta)b ) ( a_i - b_i ),
\end{align*}
and so integrating from 0 to 1 in $\mathrm d \theta$ one has
\[
F(a)- F(b) = \int_0^1 \nabla F(\theta a+(1-\theta)b) \,\ud \theta \cdot (a-b).
\]
Furthermore, using the Cauchy--Schwarz inequality, the linear growth of each component $\frac{\partial}{\partial x_i}F$ of $\nabla F$ and Jensen's inequality, we get
\begin{align} \label{eq: F lipschitz bound}
\vert F(a)-F(b)\vert & \leq \nonumber
\vert a-b \vert_d \int_0^1 \left( \sum_{i=1}^d \left\vert \frac\partial{\partial x_i} F(\theta a+(1-\theta)b)\right\vert^2 \right)^{1/2} \mathrm d \theta\\
& \leq c \vert a-b \vert_d \int_0^1 \left( \sum_{i=1}^d l^2 (1+ |\theta a+(1-\theta)b |_d)^2 \right)^{1/2} \mathrm d \theta \\ \nonumber
& \leq c \vert a-b \vert_d \int_0^1 \left( \sum_{i=1}^d l^2 (1+ \theta^2 |a|^2_d+(1-\theta)^2 |b |^2_d) \right)^{1/2} \mathrm d \theta \\ \nonumber
& \leq c\sqrt d l \vert a-b \vert_d ( 1+ |a|^2_d+ |b|^2_d)^{1/2} .
\end{align}
Hence for the first term in \eqref{eq: eq for Ff-Fg} we get
\begin{align*}
\sup_{x\in \mathbb R^d} |Ff(x) - Fg(x)| &\leq c \sup_{x\in \mathbb R^d} |f(x) - g(x)| (1+ |f(x)|^2_d+ |g(x)|^2_d)^{1/2}\\
&\leq c \|f-g\|_\alpha (1+\|f\|_\alpha^2 + \|g\|_\alpha^2 )^{1/2}.
\end{align*}
Let us now focus on the numerator appearing in the second term of \eqref{eq: eq for Ff-Fg}. Inside the absolute value we use twice a computation similar to the one used above and add and subtract the same quantity to get
\begin{align*}
&|Ff(x+y) -Ff(x)-Fg(x+y)+Fg(x)| \\
= & \Big | \int_0^1 \nabla F ( \theta f(x+y) + (1-\theta)f(x)) \mathrm d \theta \cdot(f(x+y)-f(x))\\
& - \int_0^1 \nabla F ( \theta g(x+y) + (1-\theta)g(x)) \mathrm d \theta \cdot (g(x+y)-g(x)) \Big| \\
\leq & \int_0^1 \left | \nabla F ( \theta f(x+y) + (1-\theta)f(x)) \right|_d \mathrm d \theta \\
& \left|f(x+y)-f(x) - g(x+y)+g(x) \right|_d\\
& + \Big| \int_0^1\left [ \nabla F ( \theta f(x+y) + (1-\theta)f(x)) - \nabla F ( \theta g(x+y) + (1-\theta)g(x))\right] \mathrm d \theta \\
& \cdot(g(x+y)-g(x) ) \Big|
\end{align*}
The first term can be bounded similarly as in \eqref{eq: F lipschitz bound} by
$$c ( 1+ \|f\|^2_\alpha )^{1/2} |f(x+y)-f(x) - g(x+y)+g(x)|_d .$$
For the second term above, we first observe that, since $\frac{\partial}{\partial x_i} F:\mathbb R^d \to \mathbb R$ is Lipschitz by assumption for all $i$, the gradient $\nabla F: \mathbb R^d \to \mathbb R^d$ is Lipschitz with constant $L\sqrt d $. Thus we get the upper bound
\begin{align}\label{eq: second summand non-linear F}
\nonumber
&|g(x+y)-g(x) |_d \sqrt d L \\ \nonumber
&\int_0^1 \left|
\theta f(x+y) + (1-\theta)f(x) -\theta g(x+y) - (1-\theta)g(x)\right|_d \mathrm d\theta \\
\leq& c |g(x+y)-g(x)|_d \| f-g\|_\alpha.
\end{align}
Putting everything together for both terms in \eqref{eq: eq for Ff-Fg} we get the bound
\begin{align*}
&\| \mathrm Ff-\mathrm Fg \|_\alpha \\
\leq & c \sup_{0<|y|_d\leq 1} \sup_{x\in\mathbb R^d} \Big [ ( 1+ \|f\|_\alpha^2 )^{1/2} \frac{ |f(x+y)-f(x) - g(x+y)+g(x)|_d}{|y|^\alpha_d}\\
&+ \|f-g\|_\alpha\frac{ |g(x+y)-g(x)|_d}{|y|^\alpha_d } \Big] \\
\leq & c ( 1+ \|f\|_\alpha^2 )^{1/2} \|f-g\|_\alpha + \|f-g\|_\alpha \|g\|_\alpha\\
\leq & c \|f-g\|_\alpha ( 1+ \|f\|_\alpha^2 + \|g\|^2_\alpha )^{1/2}
\end{align*}
having used again the equivalent norm \eqref{eq: equivalent norm C alpha}. This shows \eqref{eq: mapping prop non linear F} and in particular that $\mathrm Ff-\mathrm Fg \in \mathcal C^\alpha$. \\
Let us set $k:= F(0)$. Then clearly $\mathrm F\mathbf 0\equiv k$ and
\begin{align*}
\|\mathrm F\mathbf 0 \|_\alpha
&= \sup_{x\in \mathbb R^d} |(\mathrm F \mathbf 0) (x)| + \sup_{0<|y|_d\leq 1} \sup_{x\in \mathbb R^d} \frac{|(\mathrm F\mathbf 0)(x+y) - (\mathrm F\mathbf 0)(x)|}{|y|_d^\alpha}\\
& = \sup_{x\in \mathbb R^d} |k| + 0\\
&= |k|.
\end{align*}
Finally to show that $\mathrm F$ maps $\mathcal C^\alpha$ into itself it is enough to observe that
\[
\|\mathrm Ff\|_\alpha \leq \|\mathrm F f - \mathrm F \mathbf 0\|_\alpha + |k|
\]
and then the RHS of the above equation is finite by \eqref{eq: mapping prop non linear F} hence $\mathrm Ff\in \mathcal C^\alpha$ for all $f\in \mathcal C^\alpha$.
\end{proof}
\subsection{Existence and Uniqueness}
Let us denote by $J_t(u)$ the right-hand side of \eqref{eq: mild solution}, more precisely
\begin{equation}\label{eq: operator J}
J_t(u):=P_t u_0+ I_t(u),
\end{equation}
where the integral operator $I$ is given by
\begin{equation}\label{eq: operator I}
I_t(u):= \int_{0}^t P_{t-s} \left( \mathrm F(\nabla u(s)) b(s)\right) \mathrm ds
\end{equation}
and the semigroup $P_{t-s}$ acts on the whole product $\mathrm F(\nabla u(s)) b(s)$.
Using Schauder's estimates it is easy to show that $t\mapsto I_t(u)$ is continuous from $[0,T]$ to $\mathcal C^{\alpha+1}$. We show the result below for a general $f$ in place of $ F(\nabla u(s)) b(s)$. Note that the result may appear not to be sharp, because one normally gains 2 derivatives in parabolic PDEs when using semigroup theory (and possibly some time regularity too). Here we gain slightly less than 2 derivatives (we go from $\beta$ to $\alpha+1$, and $\alpha+1-\beta<2$) because we need the time singularities $t^{-\theta}$ and $t^{-\frac{\alpha+1-\beta}{2}}$ to be integrable. We will investigate the time regularity, that is, H\"older continuity in time of small order, later in Section \ref{sc:global}.
\begin{lemma}\label{lm: continuity of I}
Let $\alpha, \beta$ satisfy Assumption A2. Let $f\in L_T^\infty \mathcal C^{\beta}$. Then $\mathcal I_\cdot (f)\in C_T\mathcal C^{\alpha+1}$, where $\mathcal I_t(f):= \int_0^t P_{t-s}f(s)\mathrm ds$.
\end{lemma}
\begin{proof}
We first observe that for fixed $0\leq s\leq t\leq T$ then $P_{t-s}f(s)\in \mathcal C^{\alpha+1}$ by \eqref{eq: mapping Pt Besov spaces}. The singularity in time is still integrable if $\alpha $ and $\beta$ satisfy Assumption A2. To show continuity of $\mathcal I$ we take some $\varepsilon>0$ and we bound $\mathcal I_{t+\varepsilon}(f) - \mathcal I_t(f)$ in the space $\mathcal C^{\alpha+1}$ by
\begin{align*}
&\|\int_0^t P_{t-s}(P_\varepsilon f(s) ) \mathrm ds + \int_t^{t+\varepsilon} P_{t+\varepsilon-s} f(s) \mathrm ds - \int_0^t P_{t-s}f(s) \mathrm ds \|_{\alpha+1}\\
\leq & \| \int_0^t P_{t-s}(P_\varepsilon f(s) -f(s)) \mathrm ds \|_{\alpha+1} + \| \int_t^{t+\varepsilon} P_{t+\varepsilon-s} f(s) \mathrm ds \|_{\alpha+1}.
\end{align*}
Now we use Schauder's estimates \eqref{eq: mapping Pt Besov spaces} and \eqref{eq: mapping Pt-I Besov spaces} with some $\nu>0$ such that $\theta:=\alpha+1-\beta+2\nu<2$ (which always exists by Assumption A2) and we get
\begin{align*}
\|\mathcal I_{t+\varepsilon}(f)&-\mathcal I_{t}(f)\|_{\alpha +1} \\\leq & c \int_0^t (t-s)^{-\frac \theta 2} \| P_\varepsilon f(s) -f(s)\|_{\beta-2\nu} \mathrm ds \\
& + c \int_t^{t+\varepsilon} (t+\varepsilon-s)^{-\frac{\alpha+1-\beta}{2}} \|f(s)\|_{\beta} \mathrm ds\\
\leq&c \int_0^t (t-s)^{-\frac\theta 2} |\varepsilon|^\nu \| f(s)\|_{\beta} \mathrm ds \\
& + c \int_t^{t+\varepsilon} (t+\varepsilon-s)^{-\frac{\alpha+1-\beta}{2}} \|f(s)\|_{\beta} \mathrm ds\\
\leq &c\|f\|_{L_T^\infty\mathcal C^\beta} \left( |\varepsilon|^\nu \int_0^t (t-s)^{-\frac\theta 2} \mathrm ds + \int_t^{t+\varepsilon} (t+\varepsilon-s)^{-\frac{\alpha+1-\beta}{2}} \mathrm ds \right)\\
\leq &c \|f\|_{L_T^\infty\mathcal C^\beta} \left( |\varepsilon|^\nu t^{-\frac\theta 2+1} + \varepsilon^{\frac{-\alpha+1+\beta}{2}} \right),
\end{align*}
and the latter tends to 0 as $\varepsilon\to0$ for all $t\in[0,T]$ because $\nu>0$ and $-\frac \theta 2+1>0$ by construction and $-\alpha+1+\beta>0$ by Assumption A2.
\end{proof}
Next we show an auxiliary result useful later on.
\begin{prop}\label{pr: bound for Iu - Iv}
Let Assumptions A1, A2 and A3 hold. Let $u, v \in C_T\mathcal C^{\alpha+1}$. Then for all $\rho\geq1$
\begin{align}\label{eq: bound for Iu - Iv}
\nonumber
\|I(u) - I(v)\|_{C_T \mathcal C^{\alpha+1} }^{(\rho)} \leq & c \|b\|_{L_T^\infty \mathcal C^\beta} \rho^{\frac{\alpha-1-\beta}2 } (1+ \|u\|^2_{C_T \mathcal C^{\alpha+1} } + \|v\|^2_{C_T \mathcal C^{\alpha+1} })^{1/2}\\
& \|u-v\|_{C_T \mathcal C^{\alpha+1} }^{(\rho)}
\end{align}
where the constant $c$ depends only on $ L, l$ and $d$.
\end{prop}
\begin{proof}
Using the definition of $I$ we have
\begin{align*}
\|I(u) - &I(v)\|_{C_T \mathcal C^{\alpha+1} }^{(\rho)} \\
& = \sup_{0\leq t\leq T} e^{-\rho t} \|I_t(u) - I_t(v)\|_{\alpha+1} \\
& = \sup_{0\leq t\leq T} e^{-\rho t} \left\| \int_0^t P_{t-s}\left( [\mathrm F(\nabla u(s) ) - \mathrm F(\nabla v(s))] b(s)\right)\mathrm d s \right\|_{\alpha+1}.
\end{align*}
Now using \eqref{eq: mapping Pt Besov spaces} with $\theta = \frac{\alpha+1-\beta}{2} $ (which is positive by Assumption A2) and \eqref{eq: Bony's estimates} (again by A2 $\alpha+\beta>0$) we bound the integrand by
\[
(t-s)^{-\frac{\alpha+1-\beta}{2}} \|b\|_{L_T^\infty \mathcal C^\beta} \|\mathrm F(\nabla u(s) )- \mathrm F(\nabla v (s)) \|_\alpha
\]
and using the result of Proposition \ref{pr: mapping prop non linear F} we further bound it by
\begin{align*}
c (t-s)^{-\frac{\alpha+1-\beta}{2}} \|b\|_{L_T^\infty \mathcal C^\beta} \| \nabla u(s)- \nabla v (s) \|_\alpha (1+\| \nabla u (s)\|_\alpha^2 + \|\nabla v(s)\|_\alpha^2)^{1/2},
\end{align*}
where the constant $c$ depends on $ L, l$ and $d$.
Substituting the last bound into the equation above we get
\begin{align*}
\|I(u) - &I(v)\|_{C_T \mathcal C^{\alpha+1} }^{(\rho)} \\
\leq & c \|b\|_{L_T^\infty \mathcal C^\beta} \sup_{0\leq t\leq T} \int_0^t (t-s)^{-\frac{\alpha+1-\beta}{2}} e^{-\rho (t-s)} \\
& e^{-\rho s} \| \nabla u(s)- \nabla v (s) \|_\alpha (1+\| \nabla u (s)\|_\alpha^2 + \|\nabla v(s)\|_\alpha^2)^{1/2} \mathrm d s \\
\leq & c \|b\|_{L_T^\infty \mathcal C^\beta} \sup_{0\leq t\leq T} \int_0^t (t-s)^{-\frac{\alpha+1-\beta}{2}} e^{-\rho (t-s)} \mathrm d s \\
& \| \nabla u- \nabla v \|^{(\rho)}_{C_T \mathcal C^\alpha} (1+\| \nabla u \|_{C_T\mathcal C^\alpha}^2 + \|\nabla v\|_{C_T\mathcal C^\alpha}^2 )^{1/2} .
\end{align*}
Finally we use the bound $\|\nabla f\|_\alpha \leq c\|f\|_{\alpha+1}$ for $f\in \mathcal C^{\alpha+1}$ (which follows from Bernstein inequalities, see e.g. \cite[Lemma 2.1]{bahouri}) and we integrate the singularity since $-\frac{\alpha+1-\beta}{2}>-1 $ to get
\[
c \|b\|_{L_T^\infty \mathcal C^\beta} \rho^{\frac{\alpha-1-\beta}{2}} (1+\| u \|_{C_T\mathcal C^{\alpha+1}}^2 + \| v\|_{C_T\mathcal C^{\alpha+1}}^2 )^{1/2} \|u-v\|_{C_T \mathcal C^{\alpha+1} }^{(\rho)},
\]
as wanted.
\end{proof}
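For future reference, we record the elementary estimate used in the last step: setting $\gamma:=\frac{\alpha+1-\beta}{2}\in(0,1)$, a change of variables gives, for every $\rho\geq1$,
\[
\sup_{0\leq t\leq T}\int_0^t (t-s)^{-\gamma}\, e^{-\rho (t-s)}\, \mathrm d s \;\leq\; \int_0^\infty r^{-\gamma}\, e^{-\rho r}\, \mathrm d r \;=\; \Gamma(1-\gamma)\, \rho^{\gamma-1},
\]
which is exactly the origin of the factor $\rho^{\frac{\alpha-1-\beta}{2}}$ in \eqref{eq: bound for Iu - Iv}.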
We remark that the power of $\rho$ in \eqref{eq: bound for Iu - Iv} is negative due to Assumption A2 and the idea is to pick $\rho$ large enough so that $I$ is a contraction. However this cannot be done using \eqref{eq: bound for Iu - Iv} directly because of the term
$ (1+\| u \|_{C_T\mathcal C^{\alpha+1}}^2 + \| v\|_{C_T\mathcal C^{\alpha+1}}^2 )^{1/2} $.
Indeed we are only able to show existence and uniqueness of a solution for a small time-interval or alternatively for a small initial condition, as we will see later.
\begin{prop} \label{pr: mapping of J in C}
Let Assumptions A1, A2 and A3 hold. Let $u_0\in \mathcal C^{\alpha+1}$ be given. Then the operator $J$ maps $ C_T\mathcal C^{\alpha+1}$ into itself. In particular, for arbitrary $T, \rho$ and $u \in C_T\mathcal C^{\alpha+1}$ we have
\begin{align}\label{eq: bound for Ju}
\|J(u)\|_{C_T\mathcal C^{\alpha+1}}^{(\rho)}
&\leq \|u_0\|_{\alpha+1} \\
& + C \rho^{\frac{\alpha-1-\beta}2} \left(1 + \|u\|^{(\rho)}_{ C_T \mathcal C^{\alpha + 1}} (1 + \|u\|^2_{ C_T \mathcal C^{\alpha + 1}})^{1/2} \right), \nonumber
\end{align}
where $C= c \|b\|_{L_T^\infty \mathcal C^\beta}$ is the constant appearing in \eqref{eq: bound for Iu - Iv} in front of $\rho$ and $c$ depends only on $ L, l$ and $d$.
\end{prop}
\begin{proof}
It is clear that \eqref{eq: bound for Ju} implies that $J$ maps $ C_T \mathcal C^{\alpha + 1} $ into itself. To prove \eqref{eq: bound for Ju} we use the definition of $J$ to get
\begin{align*}
\|J(u)\|_{ C_T \mathcal C^{\alpha + 1}}^{(\rho)} &= \|P_\cdot u_0 + I(u)\|_{ C_T \mathcal C^{\alpha + 1}}^{(\rho)}\\
&\leq \| P_\cdot u_0\|_{ C_T \mathcal C^{\alpha + 1}}^{(\rho)} + \|I(u)\|_{ C_T \mathcal C^{\alpha + 1}}^{(\rho)}\\
& =:(A) + (B).
\end{align*}
The term (A) is bounded using the contraction property of $P_t$ in $\mathcal C^{\alpha+1}$ and by the definition of the equivalent norm
\[
(A)\leq \|u_0\|_{ C_T \mathcal C^{\alpha + 1}}^{(\rho)} = \sup_{0\leq t\leq T} e^{-\rho t} \|u_0\|_{\alpha+1} = \|u_0\|_{\alpha+1}.
\]
The term (B) can be bounded similarly as in the proof of Proposition \ref{pr: bound for Iu - Iv} and one gets
\begin{align*}
(B)& \leq c \sup_{0\leq t \leq T} e^{-\rho t } \int_0^t (t-s)^{-\frac{\alpha+1-\beta}2} \|\mathrm F(\nabla u(s))\|_\alpha \|b(s)\|_\beta \mathrm ds.
\end{align*}
Now we apply Proposition \ref{pr: mapping prop non linear F} with $f=\nabla u(s)$ and $g=0$ to get
\begin{align*}
\|\mathrm F(\nabla u(s))- \mathrm F(\mathbf 0) + \mathrm F(\mathbf 0)\|_{\alpha}
& \leq \|\mathrm F(\nabla u(s))- \mathrm F(\mathbf 0)\|_\alpha + \|\mathrm F(\mathbf 0)\|_{\alpha}\\
& \leq c + (1+\|\nabla u(s)\|_\alpha^2)^{1/2} \|\nabla u(s)\|_\alpha\\
&\leq c (1+ \|u(s)\|_{\alpha+1}(1+\|u(s)\|_{\alpha+1}^2)^{1/2}).
\end{align*}
Plugging this into (B) we get
\begin{align*}
(B)\leq& c \|b\|_{L_T^\infty \mathcal C^\beta } \sup_{0\leq t \leq T} \int_0^t e^{-\rho (t-s) } (t-s)^{-\frac{\alpha+1-\beta}2} \mathrm ds\\
&\sup_{0\leq s\leq T } e^{-\rho s } \left(1+ \|u(s)\|_{\alpha+1}(1+\|u(s)\|_{\alpha+1}^2)^{1/2}\right)\\
\leq & c \|b\|_{L_T^\infty \mathcal C^\beta } \rho^{\frac{\alpha-1-\beta}2} \left(1+ \|u\|_{C_T\mathcal C^{\alpha+1}}^{(\rho)} (1+\|u\|_{C_T\mathcal C^{\alpha+1}}^2)^{1/2}\right)
\end{align*}
as wanted.
\end{proof}
Carrying out the same proof in the special case when $F(0)=0$ we easily obtain the result below.
\begin{coroll}\label{cor: mapping of J in C}
Under the assumptions of Proposition \ref{pr: mapping of J in C} and if moreover $F(0)=0$ then we have
\begin{equation}\label{eq: bound for Ju for F(0)=0}
\|J(u)\|_{C_T\mathcal C^{\alpha+1}}^{(\rho)}
\leq \|u_0\|_{\alpha+1} + C \rho^{\frac{\alpha-1-\beta}2} \|u\|^{(\rho)}_{ C_T \mathcal C^{\alpha + 1}} (1 + \|u\|^2_{ C_T \mathcal C^{\alpha + 1}})^{1/2}.
\end{equation}
\end{coroll}
To show that $J$ is a contraction in a suitable (sub)space we introduce a subset of $C_T\mathcal C^{\alpha+1}$ which depends on three parameters, $\rho$, $R$ and $T$. We define
\begin{equation}\label{eq: ball}
B^{(\rho)}_{R, T} := \left\{ f\in C_T\mathcal C^{\alpha+1} \, : \, \|f\|_{C_T\mathcal C^{\alpha+1}}^{(\rho)}\leq 2 R e^{-\rho T} \right\}.
\end{equation}
Now choosing $\rho$, $R$ and $T$ appropriately (depending on the initial condition $u_0$) one can show that $J$ is a contraction by applying Proposition \ref{pr: mapping of J in C} as illustrated below.
\begin{prop}\label{pr: J contraction}
Let Assumptions A1, A2 and A3 hold. Let $R_0$ be a given arbitrary constant. Then there exists $\rho_0$ large enough depending on $R_0$, and $T_0$ small enough depending on $\rho_0$ such that
\[
J: B_{R_0,T_0}^{(\rho_0)} \to B_{R_0,T_0}^{(\rho_0)},
\]
for any initial condition $u_0\in \mathcal C^{\alpha+1}$ such that $ \|u_0\|_{\alpha+1}\leq R_0$. Moreover, for each $u,v \in B_{R_0,T_0}^{(\rho_0)}$, we have
\[
\|J(u)-J(v)\|_{C_{T_0}\mathcal C^{\alpha+1}}^{(\rho_0)} < \|u-v\|_{C_{T_0}\mathcal C^{\alpha+1}}^{(\rho_0)} .
\]
\end{prop}
\begin{proof}
We begin by taking $u\in B_{R_0,T}^{(\rho)}$ for some arbitrary parameters $ T$ and $\rho$. For this $u$ we have the following bounds
\[
\|u\|_{C_T\mathcal C^{\alpha+1}}^{(\rho)} \leq 2 R_0 e^{-\rho T}
\]
and
\begin{equation}\label{eq: norm for u in B_R}
\|u\|_{C_T\mathcal C^{\alpha+1}} \leq 2 R_0 e^{-\rho T} e^{\rho T} = 2R_0.
\end{equation}
Let $u_0\in \mathcal C^{\alpha+1}$ be such that $\|u_0\|_{\alpha+1}\leq R_0$. Then by Proposition \ref{pr: mapping of J in C} we obtain
\begin{align*}
\|J(u)\|_{C_T\mathcal C^{\alpha+1}}^{(\rho)}
&\leq R_0 + C \rho^{\frac{\alpha-1-\beta}2} \left(1 + 2R_0 e^{-\rho T}(1 + 4R_0^2 )^{1/2} \right)\\
& = R_0 e^{-\rho T} \left( e^{\rho T} + \frac C {R_0} \rho^{\frac{\alpha-1-\beta}2} e^{\rho T} + 2C\rho^{\frac{\alpha-1-\beta}2}(1 + 4R_0^2 )^{1/2} \right).
\end{align*}
To show that $J(u)\in B_{R_0,T}^{(\rho)}$ we need to pick $\rho_0$ and $T_0 $ such that
\begin{equation}\label{eq: bound to get a contraction}
e^{\rho T} + \frac C {R_0} \rho^{\frac{\alpha-1-\beta}2} e^{\rho T} + 2C \rho^{\frac{\alpha-1-\beta}2}(1 + 4R_0^2 )^{1/2} \leq 2.
\end{equation}
This is done as follows. First we pick $\rho_0\geq 1 $ depending on $R_0$ and large enough such that the following three conditions hold
\begin{eqnarray}
2C \rho_0^{\frac{\alpha-1-\beta}2}(1 + 4R_0^2 )^{1/2} \leq \frac14 \label{eq: bound 1}\\
\frac C {R_0} \rho_0^{\frac{\alpha-1-\beta}2} \leq \frac14 \label{eq: bound 2}\\
C\rho_0^{\frac{\alpha-1-\beta}2} (1 + 8R_0^2 )^{1/2} <1.\label{eq: bound 3}
\end{eqnarray}
This is always possible since $\rho\mapsto \rho^{\frac{\alpha-1-\beta}2}$ is decreasing. Moreover this can be done independently of $T$. We also remark that the third bound is not needed to show that $J(u)\in B_{R_0,T}^{(\rho)}$ but will be needed below to show that $J$ is a contraction for the chosen set of parameters $R_0, \rho_0, T_0$.\\
Next we pick $T_0>0$ depending on $\rho_0, R_0$ and small enough such that
\begin{equation}\label{eq: bound 4}
e^{\rho_0 T_0}\leq 1+\frac25.
\end{equation}
This is always possible since $T\mapsto e^{\rho_0 T}$ is increasing, continuous and has minimum 1 at 0. \\
With these parameters, \eqref{eq: bound to get a contraction} is satisfied under the assumptions \eqref{eq: bound 1}, \eqref{eq: bound 2} and \eqref{eq: bound 4}. Indeed
\begin{equation*}
e^{\rho_0 T_0} + \frac C {R_0} \rho_0^{\frac{\alpha-1-\beta}2} e^{\rho_0 T_0} + 2C \rho_0^{\frac{\alpha-1-\beta}2}(1 + 4R_0^2 )^{1/2} \leq 1+\frac25 +\frac 14(1+\frac25)+ \frac14 = 2.
\end{equation*}
It is left to prove that $J$ is a contraction on $B^{(\rho_0)}_{R_0, T_0}$. For this, it is enough to use Proposition \ref{pr: bound for Iu - Iv} for $u,v\in B^{(\rho_0)}_{R_0, T_0}$
\begin{align*}
\|I(u)-I(v)\|_{C_{T_0}\mathcal C^{\alpha+1}}^{(\rho_0)} &\leq
C \rho_0^{\frac{\alpha-1-\beta}2} (1+ 2(2R_0)^2 )^{1/2}\|u-v\|_{C_{T_0}\mathcal C^{\alpha+1}}^{(\rho_0)}\\
& < \|u-v\|_{C_{T_0}\mathcal C^{\alpha+1}}^{(\rho_0)},
\end{align*}
where the last bound is ensured by \eqref{eq: bound 3}.
\end{proof}
Using the last result we can show that a unique solution exists locally (for small time $T_0$) in the whole space $C_{T_0}\mathcal C^{\alpha+1}$.
\begin{theorem}\label{thm: local fixed point for J}
Let Assumptions A1, A2 and A3 hold.
Let $u_0\in \mathcal C^{\alpha +1}$ be given. Then there exists a unique local mild solution $u$ to \eqref{eq: mild solution} in $C_{T_0}\mathcal C^{\alpha+1}$, where $T_0 $ is small enough and it is chosen as in Proposition \ref{pr: J contraction} (depending on the norm of $u_0$).
\end{theorem}
\begin{proof}
Let $R_0=\|u_0\|_{\alpha+1}$ and $\rho_0$ and $T_0$ such that \eqref{eq: bound 1}--\eqref{eq: bound 4} are satisfied. \\
\emph{Existence.}
By Proposition \ref{pr: J contraction} we know that the mapping $J$ is a contraction on $ B^{(\rho_0)}_{R_0, T_0}$ and so there exists a solution $u \in B^{(\rho_0)}_{R_0, T_0}$ which is unique in the latter subspace. \\
\emph{Uniqueness.}
Suppose that there are two solutions $u_1$ and $u_2$ in $C_{T_0}\mathcal C^{\alpha+1}$. Then obviously $u_i= J(u_i)$ and $ \| u_i\|_{C_{T_0}\mathcal C^{\alpha+1}}< \infty$ for $i=1,2$. We set $r:= \max\{\| u_i\|_{C_{T_0}\mathcal C^{\alpha+1}}, i=1,2 \}$ (which only depends on $u_i$ and not on any $\rho$).
By Proposition \ref{pr: bound for Iu - Iv} for any $\rho\geq 1$ we have that the $\rho$-norm of the difference $u_1-u_2$ is bounded by
\begin{align*}
\|u_1-&u_2\|_{C_{T_0}\mathcal C^{\alpha+1}}^{(\rho)} =
\|I(u_1)-I(u_2)\|_{C_{T_0}\mathcal C^{\alpha+1}}^{(\rho)} \\
& \leq C \rho^{\frac{\alpha-1-\beta}2 } (1+ \|u_1\|^2_{C_{T_0} \mathcal C^{\alpha+1} } + \|u_2\|^2_{C_{T_0} \mathcal C^{\alpha+1} })^{1/2}\|u_1-u_2\|_{C_{T_0} \mathcal C^{\alpha+1} }^{(\rho)}\\
& \leq C \rho^{\frac{\alpha-1-\beta}2 } (1+ 2r^2)^{1/2}\|u_1-u_2\|_{C_{T_0} \mathcal C^{\alpha+1} }^{(\rho)}.
\end{align*}
Choosing $\rho_0$ large enough such that $1- C \rho_0^{\frac{\alpha-1-\beta}2 } (1+ 2r^2)^{1/2} >0$ implies that $\|u_1-u_2\|_{C_{T_0} \mathcal C^{\alpha+1} }^{(\rho_0)} \leq 0$ and hence the difference must be 0 in the space $C_{T_0}\mathcal C^{\alpha+1}$, thus $u_1=u_2$.
\end{proof}
\begin{rem}\label{rm:uniqueness1}
Note that in the proof of uniqueness of Theorem \ref{thm: local fixed point for J} we do not assume anything about the size of time $T_0$. Hence, if a solution to \eqref{eq: mild solution} exists up to time $T$ in the space $C_T\mathcal C^{\alpha+1}$, then it is unique.
\end{rem}
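Although the present paper is purely analytical, the fixed point construction behind Theorem \ref{thm: local fixed point for J} is constructive, and it may help the reader to see it at work numerically. The following minimal sketch (in Python) performs the Picard iteration $u^{(n+1)}=J(u^{(n)})$ for the mild formulation \eqref{eq: mild solution}, with the quadratic non-linearity $F(x)=x^2$ and with the heat semigroup applied spectrally. We stress that this is only an illustration under simplifying assumptions which are not part of our analysis: the equation is posed on the one-dimensional torus rather than on $\mathbb R^d$, a smooth function stands in for the singular coefficient $b$ (in practice one would mollify $b$), and all discretisation choices (grid size, time step, left-point quadrature) are arbitrary.
\begin{verbatim}
# Minimal numerical sketch of the Picard iteration u^{n+1} = J(u^n)
# for the mild formulation, on the torus [0, 2*pi), with F(x) = x^2.
# A smooth function stands in for the singular coefficient b.
import numpy as np

N = 256                            # number of spatial grid points
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers
T, M = 0.1, 200                    # final time and number of time steps
dt = T / M

F = lambda p: p ** 2               # quadratic non-linearity (d = 1)
b = lambda t: np.cos(3 * x)        # smooth stand-in for b(t)
u0 = np.sin(x)
u0_hat = np.fft.fft(u0)

def heat(v_hat, s):
    # heat semigroup P_s applied in Fourier space
    return np.exp(-(k ** 2) * s) * v_hat

def J(u):
    # one evaluation of J_t(u) = P_t u0 + int_0^t P_{t-s} F(u_x(s)) b(s) ds,
    # with a left-point quadrature in s; u has shape (M + 1, N)
    out = np.empty_like(u)
    for m in range(M + 1):
        t = m * dt
        acc = heat(u0_hat, t)
        for j in range(m):
            ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u[j])))
            acc = acc + dt * heat(np.fft.fft(F(ux) * b(j * dt)), t - j * dt)
        out[m] = np.real(np.fft.ifft(acc))
    return out

u = np.tile(u0, (M + 1, 1))        # initial guess, constant in time
for it in range(6):                # Picard iterations
    u_new = J(u)
    print("iteration", it, "update size", np.abs(u_new - u).max())
    u = u_new
\end{verbatim}
For small $T$ the successive updates contract, in agreement with Proposition \ref{pr: J contraction}; for larger $T$ the iteration may diverge, consistently with the purely local nature of Theorem \ref{thm: local fixed point for J}.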
An alternative existence and uniqueness result is shown below: a solution is found up to any given, arbitrarily large, time $T$, but in this case we have to restrict the choice of initial conditions $u_0$ to a set with small norm (depending on $T$). Moreover, we are able to show this result only under the extra condition that $F(0)=0$.
\begin{prop}\label{pr: fixed point for J for any T}
Let Assumptions A1, A2 and A3 hold. Assume $F(0)=0$. Let $T>0$ be given and arbitrary. Then there exists $\rho_0$ large enough such that for all $u_0\in B_{\frac12, T}^{(\rho_0)}$ we have
\begin{equation}\label{eq: fixed point for J}
J: B_{1, T}^{(\rho_0)} \to B_{1, T}^{(\rho_0)}
\end{equation}
and $J$ is a contraction on $B_{1, T}^{(\rho_0)}$, namely for $u,v \in B_{1, T}^{(\rho_0)} $ we have
\begin{equation}\label{eq: fixed point for J bound}
\|J(u)-J(v)\|^{(\rho_0)}_{C_T\mathcal C^{\alpha+1}}< \|u-v\|^{(\rho_0)}_{C_T\mathcal C^{\alpha+1}}.
\end{equation}
\end{prop}
\begin{proof}
We recall that for some given $R, \rho$ and $T$, the assumption $u_0\in B_{R, T}^{(\rho)} $ means that $ \|u_0\|^{(\rho)}_{C_T\mathcal C^{\alpha+1}}\leq 2Re^{-\rho T}$, see \eqref{eq: ball}.
Moreover $u_0$ does not depend on time hence $ \|u_0\|^{(\rho)}_{C_T\mathcal C^{\alpha+1}} = \|u_0\|_{\alpha+1}$
so $u_0\in B_{\frac12, T}^{(\rho)}$ implies
\begin{equation*}
\|u_0\|_{\alpha+1}\leq e^{-\rho T}.
\end{equation*}
Using this and Corollary \ref{cor: mapping of J in C} we have
\begin{align*}
\|J(u)\|^{(\rho)}_{C_T\mathcal C^{\alpha+1}}
&\leq \|u_0\|_{\alpha+1} + C \rho^{\frac{\alpha-1-\beta}2} \|u\|^{(\rho)}_{ C_T \mathcal C^{\alpha + 1}} (1 + \|u\|^2_{ C_T \mathcal C^{\alpha + 1}})^{1/2}\\
&\leq e^{-\rho T} + C \rho^{\frac{\alpha-1-\beta}2} \|u\|^{(\rho)}_{ C_T \mathcal C^{\alpha + 1}} (1 + \|u\|^2_{ C_T \mathcal C^{\alpha + 1}})^{1/2}.
\end{align*}
Let $u\in B_{1,T}^{(\rho)}$. Then $\|u\|^{(\rho)}_{C_T\mathcal C^{\alpha+1}}\leq 2e^{-\rho T}$ and
\begin{equation} \label{eq: norm of u in B}
\|u\|_{C_T\mathcal C^{\alpha+1}}\leq 2.
\end{equation}
Thus the bound above becomes
\begin{align*}
\|J(u)\|^{(\rho)}_{C_T\mathcal C^{\alpha+1}}
&\leq e^{-\rho T} + C \rho^{\frac{\alpha-1-\beta}2} 2e^{-\rho T} (1 + 4)^{1/2}\\
& = 2 e^{-\rho T} (\frac12 + C\sqrt 5 \rho^{\frac{\alpha-1-\beta}2} ).
\end{align*}
We choose $\bar \rho_0$ such that $\frac12 + C\sqrt 5 \bar \rho_0^{\frac{\alpha-1-\beta}2} =1 $, and since the function $\rho \mapsto \rho^{\frac{\alpha-1-\beta}2} $ is decreasing, for each $\rho_0\geq \bar\rho_0$ we have
\begin{equation}\label{eq: rho not bar}
\frac12 + C\sqrt 5 \rho_0^{\frac{\alpha-1-\beta}2} \leq 1.
\end{equation}
Then for $\rho=\rho_0$ we have $\|J(u)\|^{(\rho_0)}_{C_T\mathcal C^{\alpha+1}} \leq 2 e^{-\rho_0 T} $ which implies that $J(u)\in B_{1, T}^{(\rho_0)}$ and this shows \eqref{eq: fixed point for J}.
To show \eqref{eq: fixed point for J bound}, let $u,v\in B_{1,T}^{(\rho_0)}\subset C_T\mathcal C^{\alpha+1}$ with $\rho_0\geq \bar\rho_0$. Then by Proposition \ref{pr: bound for Iu - Iv} and by \eqref{eq: norm of u in B}
\begin{align*}
\|J(u)-&J(v)\|_{C_T\mathcal C^{\alpha+1}}^{(\rho_0)} \\
& \leq C \rho_0^{\frac{\alpha-1-\beta}{2}} \left( 1+ \|u\|_{C_T\mathcal C^{\alpha+1}}^2 + \|v\|_{C_T\mathcal C^{\alpha+1}}^2 \right)^{1/2} \|u-v\|_{C_T\mathcal C^{\alpha+1}}^{(\rho_0)}\\
& \leq C \rho_0^{\frac{\alpha-1-\beta}{2}} \left( 1+ 4+4 \right)^{1/2} \|u-v\|_{C_T\mathcal C^{\alpha+1}}^{(\rho_0)}\\
& \leq 3C \rho_0^{\frac{\alpha-1-\beta}{2}} \|u-v\|_{C_T\mathcal C^{\alpha+1}}^{(\rho_0)}.
\end{align*}
We now choose $\rho_0\geq \bar \rho_0$ large enough so that
\begin{equation}\label{eq: rho not}
3C \rho_0^{\frac{\alpha-1-\beta}{2}} <1
\end{equation}
and the proof is concluded.
\end{proof}
\begin{theorem}\label{thm: fixed point for J for small u0}
Let Assumptions A1, A2 and A3 hold. Let $T>0$ be given and let $F(0)=0$. Then there exists $\delta>0$ depending on $T$ such that for each $u_0 $ with $\|u_0\|_{\alpha+1}\leq \delta$ there exists a unique solution $u\in C_T\mathcal C^{\alpha +1} $ to \eqref{eq: mild solution}.
\end{theorem}
\begin{proof}
\emph{Existence.}
We choose $\rho_0$ according to \eqref{eq: rho not} and \eqref{eq: rho not bar}. Let $\delta = e^{-\rho_0 T}$. Then the assumption $\|u_0\|_{\alpha+1}\leq \delta$ means $u_0\in B_{\frac12, T}^{(\rho_0)}$ and by Proposition \ref{pr: fixed point for J for any T} we know that the mapping $J$ is a contraction on $ B^{(\rho_0)}_{1, T}$. Thus there exists a unique fixed point $u$ in $ B^{(\rho_0)}_{1, T}$ which is a solution. \\
\emph{Uniqueness.}
This is shown like in the uniqueness proof of Theorem \ref{thm: local fixed point for J}, with $T$ instead of $T_0$.
\end{proof}
\begin{rem}\label{rm:uniqueness2}
Note that in the proof of uniqueness of Theorem \ref{thm: fixed point for J for small u0} we do not actually use the assumption $\|u_0\|_{\alpha+1}\leq \delta$, so if $F(0)=0$ then uniqueness holds for any initial condition and any time $T$, whenever a solution exists.
\end{rem}
We now show continuity of the solution $u$ with respect to the initial condition $u_0$. This is done in the following proposition both for the case of existence and uniqueness of a solution $u$ for an arbitrary initial condition and a sufficiently small time $T_0$ (Theorem \ref{thm: local fixed point for J}) and for the case of existence and uniqueness of a solution $u$ for an arbitrary time $T$ and for a sufficiently small (in norm) initial condition $u_0$ (Theorem \ref{thm: fixed point for J for small u0}).
\begin{prop}\label{pr: continuity wrt u0}
\begin{itemize}
\item[(i)] Let the assumptions of Theorem \ref{thm: local fixed point for J} hold, assume moreover that $F(0)=0$, and let $R_0>0$ be arbitrary and fixed. Let $u$ be the unique solution found in Theorem \ref{thm: local fixed point for J} on $[0,T_0] $ with initial condition $u_0$ such that $\|u_0\|_{\alpha+1}\leq R_0$ and where $T_0$ depends on $R_0$. Then $u$ is continuous with respect to the initial condition $u_0$, namely
\[
\|u\|^{(\rho_0)}_{C_{T_0}\mathcal C^{\alpha+1}} \leq 2 \|u_0\|_{\alpha+1}
\]
for $\rho_0$ large enough.
\item[(ii)] Let the assumptions of Theorem \ref{thm: fixed point for J for small u0} hold and let $T>0$ be arbitrary and fixed. Let $u$ be the unique solution found in Theorem \ref{thm: fixed point for J for small u0} on $[0,T]$ with initial condition $u_0$ such that $\|u_0\|_{\alpha+1}\leq e^{-\rho_0 T} $ for $\rho_0$ large enough. Then the unique solution $u$ is continuous with respect to the initial condition $u_0$, namely
\[
\|u\|^{(\rho_0)}_{C_T\mathcal C^{\alpha+1}} \leq 2 \|u_0\|_{\alpha+1}.
\]
\end{itemize}
\end{prop}
\begin{proof}
\emph{(i)} Let $\rho_0$ be chosen according to \eqref{eq: bound 1}--\eqref{eq: bound 3} and $T_0$ according to \eqref{eq: bound 4}. Take $u_0$ such that $\|u_0\|_{\alpha+1}\leq R_0$. Then by Proposition \ref{pr: J contraction} we have $ J: B_{R_0, T_0}^{(\rho_0)}\to B_{R_0, T_0}^{(\rho_0)}$ and so by \eqref{eq: norm for u in B_R} the unique solution $u$ given in Theorem \ref{thm: local fixed point for J} satisfies $\|u\|_{C_{T_0}\mathcal C^{\alpha+1}} \leq 2R_0$ for any initial conditions $u_0$ with $\|u_0\|_{\alpha+1}\leq R_0$.
Using this and Corollary \ref{cor: mapping of J in C} we have
\begin{align*}
\|u\|^{(\rho_0)}_{C_{T_0}\mathcal C^{\alpha+1}}
&= \|J(u)\|^{(\rho_0)}_{C_{T_0}\mathcal C^{\alpha+1}} \\
&\leq \|u_0\|_{\alpha+1} + C \rho_0^{\frac{\alpha-1-\beta}{2}} \|u\|^{(\rho_0)}_{C_{T_0}\mathcal C^{\alpha+1}} (1+ \|u\|^2_{C_{T_0}\mathcal C^{\alpha+1}})^{1/2}\\
&\leq \|u_0\|_{\alpha+1} + \sqrt{1+4R_0^2}\, C \rho_0^{\frac{\alpha-1-\beta}{2}} \|u\|^{(\rho_0)}_{C_{T_0}\mathcal C^{\alpha+1}} .
\end{align*}
By the choice of $\rho_0$ according to \eqref{eq: bound 1} we have $2\sqrt{1+4R_0^2} C \rho_0^{\frac{\alpha-1-\beta}{2}} \leq \frac14$ hence
\[
\|u\|^{(\rho_0)}_{C_{T_0}\mathcal C^{\alpha+1}} \leq \|u_0\|_{\alpha+1} + \frac12 \|u\|^{(\rho_0)}_{C_{T_0}\mathcal C^{\alpha+1}} ,
\]
and rearranging terms we conclude.
\emph{(ii)} Let $\rho_0$ be chosen according to \eqref{eq: rho not bar}. Then for all $u_0\in B_{\frac12, T}^{(\rho_0)}$ (that is for $\|u_0\|_{\alpha+1}\leq e^{-\rho_0 T}$) we have $J: B_{1, T}^{(\rho_0)} \to B_{1, T}^{(\rho_0)}$ by Proposition \ref{pr: fixed point for J for any T}. In particular, the unique solution $u$ given in Theorem \ref{thm: fixed point for J for small u0} belongs to $B_{1, T}^{(\rho_0)}$, and \eqref{eq: norm of u in B} holds, that is $\|u\|_{C_T\mathcal C^{\alpha+1}}\leq 2$. Using this and Corollary \ref{cor: mapping of J in C} we have
\begin{align*}
\|u\|^{(\rho_0)}_{C_T\mathcal C^{\alpha+1}}
&= \|J(u)\|^{(\rho_0)}_{C_T\mathcal C^{\alpha+1}} \\
&\leq \|u_0\|_{\alpha+1} + C \rho_0^{\frac{\alpha-1-\beta}{2}} \|u\|^{(\rho_0)}_{C_T\mathcal C^{\alpha+1}} (1+ \|u\|^2_{C_T\mathcal C^{\alpha+1}})^{1/2}\\
&\leq \|u_0\|_{\alpha+1} + \sqrt 5 C \rho_0^{\frac{\alpha-1-\beta}{2}} \|u\|^{(\rho_0)}_{C_T\mathcal C^{\alpha+1}} .
\end{align*}
By the choice of $\rho_0$ according to \eqref{eq: rho not bar} we have $\sqrt 5 C \rho_0^{\frac{\alpha-1-\beta}{2}} \leq \frac12$ and we conclude as in part (i).
\end{proof}
Finally, we conclude this section by investigating blow-up of the solution $u$ to the PDE. It is still an open problem to determine whether the solution $u$ blows up or not, but we have the following dichotomy: if the solution does not exist on the whole interval $[0,T]$, then its norm must diverge (as a limit, not merely along a subsequence) at some time $t^*\in[0,T]$.
\begin{prop}\label{pr: blow up}
Let $u_0\in \mathcal C^{\alpha +1}$ and $T>0$ be given.
Then one of the following statements holds:
\begin{itemize}
\item[(a)] there exists a time $t^*\in [0,T]$ such that $\lim_{s\to t^*}\|u(s)\|_{\alpha+1}=\infty$; or
\item[(b)] there exists a solution $u$ for all $t\in[0,T]$.
\end{itemize}
\end{prop}
\begin{proof}
Assume that $\limsup_{s\to t^*} \|u(s)\|_{\alpha +1} = \infty$ for some $t^*\in [0,T]$. Suppose moreover by contradiction that $\liminf_{s\to t^*}\|u(s)\|_{\alpha+1}<\infty$. Then we can find $R_0>0$ and a sequence $t_k\to t^*$ such that $\|u(t_k)\|_{\alpha+1} <R_0$ for all $k$. Let us now restart the PDE from $u(t_k)$ and apply Theorem \ref{thm: local fixed point for J}: We know that there exists a solution for the interval $[t_k, t_k+T_0]$, where $T_0>0$ depends on $R_0$ but not on $k$. Thus we are able to extend the solution $u$ past $t^*$ because as $k\to\infty$ we have $t_k+T_0\to t^*+T_0$. Thus it cannot be that $\limsup_{s\to t^*} \|u(s)\|_{\alpha +1} = \infty$ and $\liminf_{s\to t^*}\|u(s)\|_{\alpha+1}<\infty$ for some $t^*\in [0,T]$. This means that if $\limsup_{s\to t^*} \|u(s)\|_{\alpha +1} = \infty $ for some $t^*\in[0,T]$ then actually also $\lim_{s\to t^*} \|u(s)\|_{\alpha +1} = \infty$, which is case (a).
Otherwise, if $\limsup_{s\to t^*}\|u(s)\|_{\alpha+1}<\infty$ for all $t^*\in [0,T]$
then a global solution on $[0,T]$ must exist, which is case (b).
\end{proof}
Further research is needed to establish either global in time existence of the solution or the existence of a finite blow-up time. The difficulty here is due to the quadratic non-linearity and the fact that this term is multiplied by the distributional coefficient. This prevents us from applying classical techniques such as the Cole--Hopf transformation, which in the special case $F(x)=x^2$ and $b\equiv 1$ would linearise the equation.
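For completeness we recall this classical computation: in dimension $d=1$, setting $w:=e^{u}$ one has $\partial_t w = e^{u}\,\partial_t u$ and $\partial_{xx} w = e^{u}\left( \partial_{xx} u + (\partial_x u)^2 \right)$, hence
\[
\partial_t u = \partial_{xx} u + (\partial_x u)^2 \qquad \Longleftrightarrow \qquad \partial_t w = \partial_{xx} w,
\]
so that the quadratic equation reduces to the heat equation. No analogous transformation seems available when $b$ is a genuine distribution.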
\section{A global existence result}\label{sc:global}
In this section we provide a global result on existence and uniqueness of a solution upon imposing further assumptions on the non-linearity $F$. In particular, we will exclude the quadratic case but still allow for a rich class of non-linear functions.
\begin{description}
\item[A4] \textbf{Further assumption on non-linear term $F$.}
\emph{Let $F: \mathbb R^d \to \mathbb R$ be globally Lipschitz, i.e., there exists a positive constant $\tilde L$ such that for all $x, y\in\mathbb R^d$ we have
\[
|F(x)-F(y)|\leq \tilde L |x-y|_d.
\]}
\end{description}
Assumption A4 implies that $F$ has sub-linear growth, that is, there exists a positive constant $\tilde l$ such that for all $x\in \mathbb R^d$
\[
|F(x)|\leq \tilde l (1+ |x|_d).
\]
Moreover, the operator $\mathrm F: \mathcal C^\alpha \to \mathcal C^\alpha$ also has sub-linear growth in $\mathcal C^\alpha$, namely there exists $c>0$ such that for all $f\in\mathcal C^\alpha$ we have
\begin{equation}\label{eq:Fsublinear}
\|\mathrm F(f)\|_\alpha \leq c (1+\|f\|_\alpha).
\end{equation}
Indeed
\begin{align*}
\|\mathrm F(f)\|_\alpha
& = \sup_{x\in\mathbb R^d} |Ff(x)| + \sup_{x\in\mathbb R^d} \sup_{|y|_d\leq 1} \frac{|Ff(x+y) - Ff(x)|}{|y|_d^\alpha}\\
&\leq \sup_{x\in\mathbb R^d} \tilde l (1+|f(x)|) +\sup_{x\in\mathbb R^d} \sup_{|y|_d\leq 1} \frac{\tilde L |f(x+y) - f(x)|}{|y|_d^\alpha}\\
& \leq c (1+\|f\|_\alpha).
\end{align*}
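An example of a non-linearity satisfying both Assumptions A1 and A4 is $F(x)=\sqrt{1+|x|_d^2}$: its partial derivatives $\frac{\partial F}{\partial x_i}(x) = x_i\,(1+|x|_d^2)^{-1/2}$ are bounded by $1$ and Lipschitz, and
\[
|F(x)| = \sqrt{1+|x|_d^2} \leq 1+|x|_d,
\]
while the key quadratic example $F(x)=x^2$ is of course excluded by Assumption A4.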
This extra assumption allows us to find a priori bounds on the solution, as follows.
\begin{prop}[A priori bounds]\label{pr:priori}
Let Assumptions A1, A2, A3 and A4 hold. Let $T<\infty$ be an arbitrary time and $u_0\in \mathcal C^{\alpha+1}$.
If there exists $u\in C_T\mathcal C^{\alpha+1}$ such that
\begin{equation}\label{eq:lu}
u(t) = \lambda P_t u_0 + \lambda \int_0^t P_{t-r} (F(\nabla u(r)) b(r)) \ud r
\end{equation}
where $\lambda \in [0,1]$ is fixed, then for all $t\in[0,T]$ it must hold
\[
\| u (t)\|_{\alpha+1} \leq K
\]
for some finite constant $K$ which depends only on $T, b$ and $u_0$. In particular, $\|u\|_{C_T \mathcal C^{\alpha+1}}\leq K$.
\end{prop}
Note that for $\lambda=1$, \eqref{eq:lu} reduces to \eqref{eq: mild solution}. By slight abuse of notation, in this result we use $u$ for the solution of \eqref{eq:lu} for $\lambda\in[0,1]$.
\begin{proof}
Let $u\in C_T\mathcal C^{\alpha+1}$ be a solution of \eqref{eq:lu}, that is
\begin{equation}\label{eq:e1}
u(t) = \lambda P_t u_0 + \lambda I_t(u).
\end{equation}
Note that $\mathrm F(\nabla u)\in C_T \mathcal C^{\alpha}\subset L^\infty_T \mathcal C^{\alpha}$, hence $\mathrm F(\nabla u)\, b\in L^\infty_T \mathcal C^{\beta}$ by \eqref{eq: Bony's estimates} and Assumption A3, and so $I(u) \in C_T \mathcal C^{\alpha+1}$ by Lemma \ref{lm: continuity of I}. Now we apply \eqref{eq: mapping Pt Besov spaces} and Assumption A4 to get
\begin{align*}
\|I_t(u)\|_{\alpha+1}
&\leq \int_0^t \| P_{t-s} (F(\nabla u (s)) b(s)) \|_{\alpha+1} \ud s \\
& \leq c \| b \|_{L^\infty_T\mathcal C^\beta} \int_0^t ({t-s})^{-\frac{\alpha+1-\beta}2}(1+ \|\nabla u (s) \|_{\alpha}) \ud s \\
& \leq c \| b \|_{L^\infty_T\mathcal C^\beta} \int_0^t ({t-s})^{-\frac{\alpha+1-\beta}2}(1+ \| u (s) \|_{\alpha+1}) \ud s .
\end{align*}
Taking the $\mathcal C^{\alpha+1}$ norm of \eqref{eq:e1} and plugging the above estimate in, we obtain
\begin{align*}
\| u(t)\|_{\alpha+1}
\leq & \lambda \|P_t u_0\|_{\alpha+1} + \lambda \|I_t( u)\|_{\alpha+1}\\
\leq & c \| u_0\|_{\alpha+1} + c \| b \|_{L^\infty_T\mathcal C^\beta} T^{\frac{-\alpha+1+\beta}2} \\
& + c \| b \|_{L^\infty_T\mathcal C^\beta}
\int_0^t ({t-s})^{-\frac{\alpha+1-\beta}2}\| u (s) \|_{\alpha+1} \ud s .
\end{align*}
Now an application of Gronwall's lemma, in its generalised form for weakly singular kernels, and the evaluation of the supremum over $t\in[0,T]$ allow us to conclude.
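For the reader's convenience we spell out the version of Gronwall's lemma being used (a generalisation to weakly singular kernels, often attributed to Henry): if $v\geq0$ is bounded and satisfies $v(t)\leq A + B\int_0^t (t-s)^{-\gamma}\, v(s)\, \ud s$ with $\gamma\in(0,1)$ and $A,B\geq0$, then
\[
v(t) \;\leq\; A\, E_{1-\gamma}\!\left( B\, \Gamma(1-\gamma)\, t^{1-\gamma} \right), \qquad E_{\mu}(z):=\sum_{n\geq0}\frac{z^n}{\Gamma(n\mu+1)},
\]
where $E_\mu$ denotes the Mittag--Leffler function; here $\gamma=\frac{\alpha+1-\beta}{2}$, which yields a finite constant $K$ depending only on $T$, $b$ and $u_0$.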
\end{proof}
Our strategy to show global existence of a solution of \eqref{eq: mild solution} is to apply Schaefer's fixed point theorem. To this aim, for $\eps>0$ let us define the space $ C_T^\eps\mathcal C^{\alpha+1} $ as the collection of all functions $f:[0,T]\times \mathbb R^d \to \mathbb R$ with finite $\| \cdot \|_{\eps, \alpha+1}$ norm, where the latter is given by
\[
\|f\|_{\eps,{\alpha+1}}: = \sup_{0\leq t \leq T} \|f(t)\|_{\alpha+1} + \sup_{0\leq s< t \leq T} \frac{\|f(t)-f(s)\|_{\alpha+1} }{(t-s)^\eps}.
\]
In order to apply Schaefer's fixed point theorem, it is convenient to work in $ C_T^\eps\mathcal C^{\alpha+1}$ rather than $C_T\mathcal C^{\alpha+1}$, the reason being that balls in $ C_T^{\eps'}\mathcal C^{\alpha'+1}$ are pre-compact sets in $ C_T^\eps\mathcal C^{\alpha+1}$ for $\eps'>\eps$ and $\alpha'>\alpha$.
For ease of reading we set
\[
G_r(u) := \mathrm F(\nabla u(r)) b(r).
\]
Using Assumption A4 and \eqref{eq: Bony's estimates} we have, for $u(r)\in \mathcal C^{\alpha+1}$,
\begin{equation}\label{eq:G}
\|G_r(u)\|_\beta \leq c (1+ \| u(r)\|_{\alpha+1} ),
\end{equation}
where $c$ depends on $b$ and $\tilde l$.
Moreover, by Proposition \ref{pr: mapping prop non linear F} we have, for $u(r), v(r) \in \mathcal C^{\alpha+1}$,
\begin{equation}\label{eq:Gdiff}
\|G_r(u)-G_r(v)\|_\beta \leq c (1+ \| u(r)\|^2_{\alpha+1}+\| v(r)\|^2_{\alpha+1} )^{1/2} \|u(r)-v(r) \|_{\alpha+1},
\end{equation}
where $c$ depends on $b, l, L$ and $d$.
We now state and prove three preparatory results that are the key steps needed to apply Schaefer's fixed point theorem.
\begin{lemma}\label{lm:J1}
Let Assumptions A1, A2 and A3 hold and fix $\eps> 0$ such that $\alpha -1 -\beta +2\eps <0$. Let $u_0\in \mathcal C^{\alpha+1+ 2\eps +\nu}$ for some small $\nu>0 $.
If $u\in C_T \mathcal C^{\alpha+1} $ then $J(u)\in C_T^{\eps'}\mathcal C^{\alpha'+1} $ for some $\eps'>\eps$ and $\alpha'>\alpha$, and
\begin{equation}\label{eq:J1}
\|J(u)\|_{\eps', \alpha'+1} \leq c\|u_0\|_{\alpha+1+2\eps +\nu} +c T^{\frac{-\alpha'+1+\beta-2\eps'}2}(1+ \|u\|_{C_T \mathcal C^{\alpha+1}}).
\end{equation}
\end{lemma}
\begin{rem}
Note that the parameter $\eps$ in Lemma \ref{lm:J1} could in principle be taken equal to zero, in which case we would only need $u_0\in \mathcal C^{\alpha+1+\nu}$ and $\eps'>0$. Later on, however, $\eps$ will be chosen strictly greater than zero, hence we state and prove the result for $\eps>0$.
\end{rem}
\begin{proof}[Proof of Lemma \ref{lm:J1}]
First we note that it is always possible to pick $\eps >0$ such that $\alpha -1 -\beta +2\eps <0$, because $\alpha -1 -\beta <0$ by Assumption A2. Let $u \in C_T \mathcal C^{\alpha+1}$. Moreover, let us pick $\alpha'>\alpha$ small enough such that $\alpha' -1 -\beta <0$ and $\alpha'+1 < \alpha+1+\nu$. Then we can easily see that for all $t\in[0,T]$ we have $J_t(u)\in \mathcal C^{\alpha'+1}$, as follows.
\begin{align*}
\|J_t(u)\|_{\alpha'+1}
= & \|P_t u_0 + \int_0^t P_{t-r} G_r(u) \ud r \|_{\alpha'+1}\\\nonumber
\leq & \|P_t u_0\|_{\alpha'+1} + \int_0^t \| P_{t-r} G_r(u) \|_{\alpha'+1} \ud r\\\nonumber
\leq & c\| u_0\|_{\alpha'+1} + \int_0^t (t-r)^{-\frac{\alpha'+1-\beta}2} \|G_r(u) \|_{\beta} \ud r\\\nonumber
\leq & c\| u_0\|_{\alpha'+1} + c T^{\frac{-\alpha'+1+\beta}2} (1+\|u\|_{C_T \mathcal C^{\alpha+1}}),
\end{align*}
where we have used \eqref{eq: mapping Pt Besov spaces} and \eqref{eq: mapping Pt-I Besov spaces} in the second inequality, and \eqref{eq:G} in the last inequality. Note that $-\alpha'+1+\beta>0$ by construction, and $u_0\in \mathcal C^{\alpha+1+2\eps+\nu}\subset \mathcal C^{\alpha'+1}$.
In order to show that $J(u)\in C_T^{\eps'}\mathcal C^{\alpha'+1}$ we need to control the $\eps'$-H\"older semi-norm. We now choose $\eps'>\eps$ small enough such that $\alpha' -1 -\beta +2\eps'<0$ and $\alpha' +1 +2\eps' < \alpha + 1 + 2\eps + \nu$, which is always possible. Then $u_0\in \mathcal C^{\alpha'+1+2\eps'}$ and we express the difference $J_t(u)- J_s(u)$ for all $0\leq s <t\leq T$ as
\begin{align}\label{eq:Jint}
\|J_t(u)- J_s(u)\|_{\alpha'+1}
\leq & \|(P_{t-s}- I) (P_s u_0)\|_{\alpha'+1} \\ \nonumber
&+ \|\int_0^s (P_{t-s}- I) (P_{s-r} G_r(u))\ud r\|_{\alpha'+1}\\ \nonumber
&+ \|\int_s^t P_{t-r} G_r(u) \ud r\|_{\alpha'+1}\\ \nonumber
=&: M_1 + M_2 + M_3.
\end{align}
Using \eqref{eq: mapping Pt Besov spaces} we get for the first term
\[
M_1 \leq (t-s)^{\eps'} \|P_s u_0\|_{\alpha'+1+2\eps'} \leq c (t-s)^{\eps'} \|u_0\|_{\alpha'+1+2\eps'},
\]
and $u_0\in \mathcal C^{\alpha+1+2\eps+\nu}\subset \mathcal C^{\alpha'+1+2\eps'}$ by choice of $\alpha'$ and $\eps'$.\\
The second term can be bounded using \eqref{eq: mapping Pt Besov spaces}, \eqref{eq: mapping Pt-I Besov spaces} and \eqref{eq:G}, and produces a singularity integrable in time by choice of the parameters. We get
\begin{align*}
M_2
& \leq \int_0^s (t-s)^{\eps'}\| P_{s-r} G_r(u)\|_{\alpha'+1+2\eps'} \ud r\\
& \leq (t-s)^{\eps'} s^{\frac{-\alpha'+1+\beta-2\eps'}2} c (1 + \|u\|_{C_T \mathcal C^{\alpha+1}})\\
& \leq (t-s)^{\eps'} T^{\frac{-\alpha'+1+\beta-2\eps'}2} c (1 + \|u\|_{C_T \mathcal C^{\alpha+1}}).
\end{align*}
The third term is similar, and using \eqref{eq: mapping Pt Besov spaces} and \eqref{eq:G} we obtain
\begin{align*}
M_3
&\leq\int_s^t (t-r)^{-\frac{\alpha'+1-\beta}2} \| G_r(u)\|_{\beta} \ud r\\
&\leq (t-s)^{\eps'} (t-s)^{\frac{-\alpha'+1+\beta-2\eps'}2} c (1 + \|u\|_{C_T \mathcal C^{\alpha+1}})\\
&\leq (t-s)^{\eps'} T^{\frac{-\alpha'+1+\beta-2\eps'}2} c (1 + \|u\|_{C_T \mathcal C^{\alpha+1}}).
\end{align*}
Putting everything together we get
\begin{align*}
\|J(u)\|_{\eps', \alpha'+1}
=& \sup_{0\leq t\leq T} \|J_t(u)\|_{\alpha'+1} + \sup_{0\leq s< t\leq T} \frac{\|J_t(u)- J_s(u)\|_{\alpha'+1}}{ (t-s)^{\eps'}}\\
\leq& c\| u_0\|_{\alpha'+1} + c T^{\frac{-\alpha'+1+\beta}2} (1+\|u\|_{C_T \mathcal C^{\alpha+1}})\\
& + c \|u_0\|_{\alpha'+1+2\eps'} + 2 T^{\frac{-\alpha'+1+\beta-2\eps'}2} c (1 + \|u\|_{C_T \mathcal C^{\alpha+1}})\\
\leq &c\|u_0\|_{\alpha+1+2\eps+\nu} +c T^{\frac{-\alpha'+1+\beta-2\eps'}2}(1+ \|u\|_{C_T \mathcal C^{\alpha+1}}),
\end{align*}
and the proof is complete.
\end{proof}
\begin{rem}\label{rm:holder}
Applying Lemma \ref{lm:J1} to the unique local solution $u\in C_T\mathcal C^{\alpha+1}$ found in Theorem \ref{thm: local fixed point for J} and in Theorem \ref{thm: fixed point for J for small u0} we obtain that the unique mild solution is not only continuous in time but actually smoother; more precisely, $u\in C_T^{\eps'}\mathcal C^{\alpha'+1}$, provided that $u_0\in\mathcal C^{\alpha+1+2\eps+\nu}$ for some small $\nu>0$ and $\eps>0$ chosen as in Lemma \ref{lm:J1}.
\end{rem}
\begin{lemma}\label{lm:J2}
Let Assumptions A1, A2 and A3 hold and let us choose $\eps >0 $ according to Lemma \ref{lm:J1}. Then the operator $J: C_T^{\eps}\mathcal C^{\alpha+1} \to C_T^{\eps}\mathcal C^{\alpha+1} $ is continuous.
\end{lemma}
\begin{proof}
From Lemma \ref{lm:J1}, the fact that $\eps'>\eps$ and $\alpha'>\alpha$ and the embeddings $ C_T^{\eps'}\mathcal C^{\alpha'+1}\subset C_T^{\eps}\mathcal C^{\alpha+1}\subset C_T\mathcal C^{\alpha+1}$ we have that $J: C_T^{\eps}\mathcal C^{\alpha+1} \to C_T^{\eps}\mathcal C^{\alpha+1} $. To show continuity we take $u,v \in C_T^{\eps}\mathcal C^{\alpha+1}$ and bound the sup norm and the H\"older semi-norm of the difference $J(u) - J(v)$.
The sup norm of $J(u) - J(v)$ is bounded by Proposition \ref{pr: bound for Iu - Iv} (with $\rho=1$) together with the fact that the embedding $ C_T^{\eps}\mathcal C^{\alpha+1}\subset C_T\mathcal C^{\alpha+1}$ is continuous. Then one has
\[
\sup_{0\leq t\leq T} \|J_t(u)-J_t(v)\|_{\alpha+1} \leq c(1+\|u\|_{\eps, \alpha+1}^2+\|v\|_{\eps, \alpha+1}^2)^{1/2} \|u-v\|_{\eps, \alpha+1}.
\]
The H\"older semi-norm of $J(u) - J(v)$ is bounded by splitting the integral similarly to what was done in \eqref{eq:Jint}. One obtains
\begin{align*}
\|J_t(u)-&J_t(v)- J_s(u)+J_s(v)\|_{\alpha+1} \\
\leq & \|\int_0^s (P_{t-s}- I) (P_{s-r} \left( G_r(u)-G_r(v) \right) ) \ud r\|_{\alpha+1}\\
&+ \|\int_s^t P_{t-r} \left( G_r(u)-G_r(v) \right) \ud r\|_{\alpha+1}.
\end{align*}
Then we proceed similarly to the bounds of $M_2$ and $M_3$
in the proof of Lemma \ref{lm:J1}, but using \eqref{eq:Gdiff} instead of \eqref{eq:G}, and with $\eps, \alpha$ in place of $\eps', \alpha'$, to obtain
\begin{align*}
\|J_t(u)-&J_t(v)- J_s(u)+J_s(v)\|_{\alpha+1} \\
\leq & (t-s)^\eps \left( s^{\frac{-\alpha+1+\beta}2} + (t-s)^{\frac{-\alpha+1+\beta-2\eps}2} \right) \times \\
&\times c(1+\|u\|_{\eps, \alpha+1}+\|v\|_{\eps, \alpha+1})^{1/2} \|u-v\|_{\eps, \alpha+1}.
\end{align*}
Thus
\begin{align*}
\sup_{0\leq s< t\leq T} &\frac{\| J_t(u)-J_t(v)- J_s(u)+J_s(v)\|_{\alpha+1} }{(t-s)^\eps}\\
&\leq c T^{\frac{-\alpha+1+\beta-2\eps}2}(1+\|u\|_{\eps, \alpha+1}+\|v\|_{\eps, \alpha+1})^{1/2} \|u-v\|_{\eps, \alpha+1}
\end{align*}
and the proof is complete.
\end{proof}
\begin{lemma}\label{lm:J3}
Let Assumptions A1, A2, A3 and A4 hold and let $\eps$ be chosen as in Lemma \ref{lm:J1}. Let $u_0\in \mathcal C^{\alpha+1+ 2\eps +\nu}$ for some small $\nu>0 $. Then the set
\[
\Lambda:=\{u\in C_T^\eps\mathcal C^{\alpha+1} \text{ such that } u=\lambda J(u) \text{ for some } \lambda \in[0,1]\}
\]
is bounded in $ C_T^\eps\mathcal C^{\alpha+1} $.
\end{lemma}
\begin{proof}
Let $u^*\in \Lambda$, that is $ u^*=\lambda J(u^*) $ for some $\lambda \in[0,1]$. Applying Lemma \ref{lm:J1} and Proposition \ref{pr:priori} we get
\begin{align*}
\|u^*\|_{\eps, \alpha+1} \leq & \|J(u^*)\|_{\eps, \alpha+1}\\
\leq & c\|u_0\|_{\alpha +1+2\eps+\nu} + cT^{\frac{-\alpha'+1+\beta-2\eps}2} (1+\|u^*\|_{C_T \mathcal C^{\alpha+1}} )\\
\leq & c\|u_0\|_{\alpha +1+2\eps+\nu} + cT^{\frac{-\alpha'+1+\beta-2\eps}2} (1+ K),
\end{align*}
where the constant on the right hand side is finite and independent of $u^*$.
\end{proof}
\begin{theorem}\label{thm:global}
Let Assumptions A1, A2, A3 and A4 hold and let $\eps>0$ be chosen according to Lemma \ref{lm:J1}. If $u_0 \in \mathcal C^{\alpha+1+2\eps+\nu}$ for some small $\nu>0$, then there exists a global mild solution $u$ of \eqref{eq: PDE non lin Cauchy prb} in $ C_T^\eps\mathcal C^{\alpha+1} $ which is unique in $ C_T \mathcal C^{\alpha+1} $.
\end{theorem}
\begin{proof}
\emph{Existence.} By Lemma \ref{lm:J1} we have that
\[
J: C_T^\eps\mathcal C^{\alpha+1} \to C_T^\eps\mathcal C^{\alpha+1}
\]
and by Lemma \ref{lm:J2} we know that $J$ is also continuous. Moreover using Lemma \ref{lm:J1} again we have that the operator $J$ maps balls of $ C_T^\eps\mathcal C^{\alpha+1}$ into balls of $C_T^{\eps'}\mathcal C^{\alpha'+1}$ for some $\eps'>\eps$ and $\alpha'>\alpha$, which are pre-compact sets in $ C_T^{\eps}\mathcal C^{\alpha+1}$. Thus $J$ is compact. We conclude that $J$ has a fixed point $u^*$ in $ C_T^{\eps}\mathcal C^{\alpha+1}$ by Schauder's fixed point theorem and by Lemma \ref{lm:J3}. The fixed point $u^*$ is a mild solution of \eqref{eq: PDE non lin Cauchy prb} in $ C_T^\eps\mathcal C^{\alpha+1} $.\\
\emph{Uniqueness.} Clearly $u^*\in C_T \mathcal C^{\alpha+1} $. This solution is unique in the latter space by Remark \ref{rm:uniqueness1}.
\end{proof}
\section{Applications to stochastic analysis}\label{sc: applications}
In this section we illustrate an application of non-linear singular PDEs to stochastic analysis, in particular to a class of non-linear backward stochastic differential equations (BSDEs) with distributional coefficients. The class of BSDEs that we consider here has not been studied previously in the BSDE literature.
The concept of a BSDE was introduced in the early 90s by Pardoux and Peng \cite{pardoux-peng}. Since then, BSDEs have become a popular research field and the literature on this topic is now vast, see for example two recent books \cite{pardoux-rascanu14, zhang} and references therein. BSDEs owe their success to the many applications they have in other areas of research. The main ones are their use in financial mathematics for pricing and hedging derivatives; their application to stochastic control theory to find the optimal control and the optimal value function; and their use in showing existence and uniqueness of solutions to certain classes of non-linear PDEs by means of a probabilistic representation of their solution (known as non-linear Feynman-Kac formula).
The application that we are going to illustrate below fits in the latter two of these three topics. Indeed, the singular PDE studied above will allow us to define and solve a singular BSDE which is linked to the PDE by an extended Feynman-Kac formula. Moreover this class of BSDEs arises also in stochastic control when looking at problems in Economics where an agent wants to maximise her exponential utility, see for example \cite[Chapter 20]{bjork} and \cite[Chapter 7]{zhang}.
This latter class of BSDEs is known as quadratic BSDEs and is linked to the special non-linearity $F(x)=x^2$. Note that in this section we restrict to one space dimension. This restriction and the choice of quadratic $F$ are done to avoid technicalities, but it should be a simple exercise to extend the argument below to a general non-linear $F$ satisfying Assumption A1 and such that $F(0)=0$. The multidimensional case $(d>1)$ should also be possible to treat, much in the spirit of \cite{IssoglioJing16}. Details of this are left to the interested reader and to future work.
\vspace{10pt}
Let us start by writing the PDE \eqref{eq: PDE non lin Cauchy prb} in one dimension and backward in time, which is the classical form (Kolmogorov backward equation) when dealing with BSDEs:
\begin{equation}\label{eq: PDE for BSDE}
\left\{\begin{array}{ll}
\partial_t u(t,x) + \partial _{xx} u(t,x) + (\partial_x u(t,x))^2 b(t,x)=0, & \text{ for } (t,x)\in [0,T] \times\mathbb R \\
u(T,x)=\Phi (x), & \text{ for } x\in \mathbb R .
\end{array}\right.
\end{equation}
We observe that (by abuse of notation) we used the same symbol $u$ as in the forward PDE and we denoted by $\Phi$ rather than $u_0$ the final condition. This is done to be in line with classical BSDE notation. The results of Section \ref{sc: solving the PDE} and in particular Theorem \ref{thm: fixed point for J for small u0} apply to this PDE because the only difference from \eqref{eq: PDE non lin Cauchy prb} is the time reversal $t \mapsto T-t$. Indeed it is easy to check that $F(x)= x^2$ satisfies Assumption A1 and moreover $F(0)=0$.
\begin{rem}
Since here we want to work in a given time-interval $[0,T]$, we must ensure that the terminal condition $\Phi$ is small enough according to Theorem \ref{thm: fixed point for J for small u0}.
\end{rem}
Given a probability space $(\Omega, \mathcal F, \mathbb P)$ we consider a BSDE of the form
\begin{equation}\label{eq: BSDE}
Y_r^{t,x} = \Phi(B_T^{t,x}) +\int_r^T b(s, B_s^{t,x}) (Z^{t,x}_s)^2 \mathrm ds - \int_r^T Z^{t,x}_s \mathrm dB^{t,x}_s,
\end{equation}
where $B:=(B^{t,x}_r)_{t\leq r\leq T}$ is a Brownian motion starting at $x$ at time $t$ and with quadratic variation $\mathrm d\langle B\rangle_r = 2\, \mathrm dr$. This latter non-standard quadratic variation is introduced to account for the fact that the generator of Brownian motion is $\frac12 \partial_{xx}$ but the operator in the PDE \eqref{eq: PDE for BSDE} is $\partial_{xx}$. The Brownian motion $B$ generates a filtration $\mathbb F:=(\mathcal F_r)_{t\leq r\leq T }$.
It is known that if $b$ and $\Phi$ are smooth enough functions and satisfy some bounds (see e.g. \cite[Theorem 7.3.3]{zhang}) then the solution to the BSDE exists and is unique. Note that a solution to \eqref{eq: BSDE} is a \emph{couple} of adapted processes $(Y^{t,x},Z^{t,x})$ that satisfies \eqref{eq: BSDE} and some other integrability conditions (like the ones in the second bullet point of Definition \ref{def: virtual solution} below).
Moreover it is known that, in the classical case, the BSDE and the PDE above are linked via the Feynman-Kac formula, namely $ Y^{t,x}_r = u(r, B_r^{t,x}), \text{ and } Z^{t,x}_r = \partial_x u(r, B_r^{t,x})$.\footnote{One side of the Feynman-Kac formula can be easily checked, namely that the couple $( u(r, B_r^{t,x}), \partial_x u(r, B_r^{t,x})) $ is a solution of the BSDE. This is done by applying It\^o's formula to $ u(r, B_r^{t,x})$.} In particular for the initial time $t$ one gets the stochastic representation for the solution of the PDE \eqref{eq: PDE for BSDE} in terms of the solution of the BSDE \eqref{eq: BSDE}, namely
\[
u(t,x) = Y_t^{t,x}.
\]
In the remainder of this section we are going to use the results on the singular parabolic PDE to solve the singular BSDE \eqref{eq: BSDE} when $b\in L_T^\infty \mathcal C^{\beta}$.
One of the delicate points here is to give a meaning to the term $\int_r^T b(s, B_s) Z_s^2 \mathrm ds $, which we do by using the \emph{It\^o trick}. The {It\^o trick} has been used in the past to treat other SDEs and BSDEs with distributional coefficients, see e.g.\ \cite{flandoli_et.al, IssoglioJing16}. This trick makes use of the following auxiliary PDE
\begin{equation}\label{eq: auxiliary PDE for BSDE}
\left\{\begin{array}{ll}
\partial_t w(t,x) + \partial_{xx} w(t,x) = (\partial_x u(t,x))^2 b(t,x), & \text{ for } (t,x)\in [0,T] \times\mathbb R \\
w(T,x)=0, & \text{ for } x\in \mathbb R ,
\end{array}\right.
\end{equation}
where the function $u$ appearing on the right-hand side is the solution to \eqref{eq: PDE for BSDE}. The mild form of this PDE is given by
\begin{equation*}\label{eq: mild solution w}
w(t) = - \int_t^T P_{s-t} \left( ( \partial_x u(s))^2 b(s) \right) \mathrm ds.
\end{equation*}
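The mild form above also lends itself to a direct numerical evaluation: for the operator $\partial_{xx}$, the semigroup $P_t$ acts as a convolution with a Gaussian kernel of variance $2t$, i.e., as a multiplication by $e^{-t k^2}$ in Fourier space. The following Python sketch illustrates the corresponding quadrature; the choices of $u$, $b$, the domain and the discretization are placeholders for illustration only and are not taken from the present setting.
\begin{verbatim}
import numpy as np

def heat_semigroup(f, t, dx):
    # Apply P_t (semigroup of d^2/dx^2) to the samples f:
    # multiplication by exp(-t k^2) in Fourier space.
    if t == 0:
        return f.copy()
    k = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.fft.ifft(np.fft.fft(f) * np.exp(-t * k**2)).real

# Placeholder (smooth) stand-ins for u and b -- illustration only.
L, N, T = 20.0, 1024, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
u = lambda s: np.exp(-x**2) * np.cos(s)   # hypothetical u(s, x)
b = lambda s: np.tanh(x) * np.exp(-s)     # hypothetical smooth b(s, x)

def w(t, n_quad=200):
    # Riemann sum for w(t) = -int_t^T P_{s-t}((u_x(s))^2 b(s)) ds.
    s_grid, ds = np.linspace(t, T, n_quad, retstep=True)
    acc = np.zeros(N)
    for s in s_grid:
        ux = np.gradient(u(s), dx)
        acc += heat_semigroup(ux**2 * b(s), s - t, dx)
    return -acc * ds

print(w(0.5)[:5])   # first few grid values of w(0.5, .)
\end{verbatim}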
Let us now do some \emph{heuristic} reasoning. If $b$ were smooth, then applying It\^o's formula to $w(r, B^{t,x}_r)$ would give
\begin{align*}
\int_r^T \mathrm d w(s, B_s^{t,x}) =& \int_r^T \partial_t w(s, B_s^{t,x}) \mathrm d s + \int_r^T \partial_x w(s, B_s^{t,x}) \mathrm dB^{t,x}_s \\
&+ \frac12 \int_r^T \partial _{xx} w(s, B^{t,x}_s) 2\mathrm ds\\
=& \int_r^T\partial_x w(s, B^{t,x}_s) \mathrm dB^{t,x}_s + \int_r^T(\partial_x u(s,B^{t,x}_s))^2 b(s,B^{t,x}_s) \mathrm ds.
\end{align*}
Moreover, if $b$ were smooth, then the classical theory on BSDEs ensures that $Z_r = \partial_x u(r, B^{t,x}_r)$, so integrating the above equation one has
\begin{align*}
w(T, B_T^{t,x})- w(r, B_r^{t,x}) &= \int_r^T \partial_x w(s, B^{t,x}_s) \mathrm dB^{t,x}_s + \int_r^T (Z^{t,x}_s)^2 b(s,B_s^{t,x}) \mathrm ds.
\end{align*}
Thus, recalling that $w(T,\cdot)=0$, we can express the singular term involving $b$ in terms of quantities that are well defined and do not depend on $b$ explicitly, namely
\begin{equation}\label{eq: virtual term}
\int_r^T (Z^{t,x}_s)^2 b(s,B_s^{t,x}) \mathrm ds = - w(r, B_r^{t,x}) - \int_r^T \partial_x w(s, B^{t,x}_s) \mathrm dB^{t,x}_s .
\end{equation}
We note that even in the singular case when $b\in L_T^\infty \mathcal C^\beta$ we have that all terms on the right hand side of \eqref{eq: virtual term} are well defined. Indeed using the regularity of $u$, $b$ and their product (see \eqref{eq: Bony's estimates}) together with Lemma \ref{lm: continuity of I} one has that $w\in C_T\mathcal C^{\alpha+1}$ and therefore $w$ is differentiable (in the classical sense) once in $x$, so $\partial_x w (s,x)$ is well defined.
The idea of the It\^o trick is to ``replace'' the singular integral term with the right-hand side of \eqref{eq: virtual term}, which is the motivation for the following definition. Note that we drop the superscript $\cdot^{t,x}$ for ease of notation.
\begin{defin}\label{def: virtual solution}
A couple $(Y,Z)$ is called \emph{virtual solution} of \eqref{eq: BSDE} if
\begin{itemize}
\item $Y$ is continuous and $\mathbb F$-adapted and $Z$ is $\mathbb F$-progressively measurable;
\item $ \mathbb E \left[ \sup_{r\in[t,T]}|Y_r|^2 \right ] < \infty$ and $ \mathbb E\left[ \int_t^T |Z_r|^2 \mathrm dr\right] < \infty$;
\item for all $r\in[t,T]$, the couple satisfies the following backward SDE
\begin{align}
\label{eq: BSDE virtual}
{Y}_r= & \ \Phi(B_T)- w(r, B_r) - \int^T_r ({Z}_s + \partial_x w(s, B_s) )\mathrm d B_s
\end{align}
$\mathbb P$-almost surely.
\end{itemize}
\end{defin}
We now observe that BSDE \eqref{eq: BSDE virtual} can be transformed into a classical BSDE by setting $\hat Y_r := Y_r+ w(r, B_r)$ and $\hat Z_r := Z_r + \partial_x w(r, B_r)$. One has that \eqref{eq: BSDE virtual} is equivalent to
\begin{equation}\label{eq: BSDE virtual transformed}
\hat Y_r= \Phi(B_T) - \int^T_r \hat Z_s \mathrm d B_s,
\end{equation}
thus the $\hat Y$ component in \eqref{eq: BSDE virtual transformed} is given explicitly by $\hat Y_r = \mathbb E\left [ \Phi (B_T) \vert \mathcal F_r \right ]$.
Moreover by the martingale representation theorem (see e.g.\ \cite[Theorem 2.5.2]{zhang}) there exists a unique predictable process $\hat Z$ such that $ \hat Y_r = \hat Y_t + \int^r_t \hat Z_s \mathrm d B_s$ and so $\hat Y_r= \hat Y_T - \int^T_r \hat Z_s \mathrm d B_s $.
Therefore given the transformation $w$, we can find explicitly the virtual solution of \eqref{eq: BSDE} by
\begin{equation}\label{eq: sol BSDE}
Y_r = \mathbb E\left [ \Phi (B_T) \vert \mathcal F_r \right ] - w(r, B_r),
\text{ and }
Z_r = \hat Z_r - \partial_x w(r, B_r).
\end{equation}
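For illustration, the explicit formula \eqref{eq: sol BSDE} can be evaluated by plain Monte-Carlo simulation, using that $B_T - B_r$ is Gaussian with variance $2(T-r)$ because of the non-standard quadratic variation of $B$. In the Python sketch below the terminal condition $\Phi$ and the function $w$ are arbitrary placeholder choices, introduced only to make the snippet self-contained.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T = 1.0
Phi = lambda x: np.tanh(x)                      # placeholder terminal condition
w = lambda r, x: 0.1 * np.exp(-x**2) * (T - r)  # placeholder for w(r, x)

def Y(r, B_r, n_samples=100_000):
    # Monte-Carlo estimate of Y_r = E[Phi(B_T) | F_r] - w(r, B_r);
    # B has quadratic variation 2 dr, so B_T - B_r ~ N(0, 2 (T - r)).
    increments = rng.normal(0.0, np.sqrt(2 * (T - r)), n_samples)
    return Phi(B_r + increments).mean() - w(r, B_r)

print(Y(0.3, 0.5))
\end{verbatim}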
What we explained above can be summarised in the following theorem.
\begin{theorem}
If $b\in L_T^\infty\mathcal C^\beta$, then there exists a unique virtual solution $(Y,Z)$ of \eqref{eq: BSDE} given by \eqref{eq: sol BSDE}.
\end{theorem}
\begin{rem}
It is easy to check that the notion of virtual solution coincides with the classical solution when $b$ is smooth, because the heuristic argument explained above to motivate \eqref{eq: virtual term} is actually rigorous. Indeed this is the case if $b\in L_T^\infty\mathcal C^{\beta}$ is also a function smooth enough so that $u\in C^{1,2}$ and so that the BSDE can be solved with classical theorems (see e.g.\ \cite[Chapter 7]{zhang}).
\end{rem}
The notion of virtual solution for BSDEs has been previously used in \cite{IssoglioJing16} for the linear case when $F(x) = x $. There the authors show existence and uniqueness of a virtual solution for the corresponding BSDE, similarly to what has been done here, but for a slightly different class of drifts that live in Triebel-Lizorkin spaces rather than Besov spaces. Moreover for the linear case $F(x)=x$ it has been shown in \cite{issoglio_russo} that the virtual solution introduced in \cite{IssoglioJing16} indeed coincides with a solution to the BSDE defined directly (hence by giving a meaning to the singular term instead of replacing it with known terms via the It\^o trick). This was achieved with the introduction of an integral operator $A$ to represent the singular integral.
It will be the objective of future research to investigate the existence of an integral operator $A$ related to the non-linear term $F(x)$, analogously to the integral operator introduced in \cite{issoglio_russo}, and to give a meaning to the BSDE directly rather than via the It\^o trick as done here.
\section*{Acknowledgment}
The author would like to thank the anonymous referee for providing useful comments and hints, in particular regarding Section \ref{sc:global}.
\section{Introduction}
Optical solitons are localized pulses that do not change shape as they propagate in nonlinear media \cite{Agr92,Tay92,Kiv03}. Dispersion and nonlinearity conspire to cancel the spatial dependence in the dynamics, which is usually described by nonlinear differential equations \cite{Lam80,Nov84}, so the solitons are not only shape invariant, they are also very stable when their area is a constant of motion. This last property defines the so-called bright solitons and makes them very useful in optics \cite{Tay92}. Being bound states of the cubic nonlinear Schr\"odinger equation, bright optical solitons exist because attractive nonlinearities originate in the medium through the Kerr effect \cite{Kiv98,Sul99}. Solutions for repulsive nonlinearities, known as dark optical solitons, are also available and useful \cite{Kiv98,Sul99}. An interesting application of the optical soliton properties concerns the recent practical validation of the parity-time symmetry in optics \cite{Rut10}. Such a symmetry means invariance under parity and time-reversal transformations in quantum mechanics \cite{Ben05}, and expresses that self-adjointness is not a necessary condition to have physical observables with real spectrum. The experimental proof provided in \cite{Rut10} is based on the formal equivalence between some dynamical equations in optics and the Schr\"odinger equation in quantum mechanics. A complex refractive index $n(x) = n_R(x) + i n_I(x)$ serves as an `optical potential' that can be realized in the laboratory. The gain and loss regions of the material are associated with the imaginary distribution $n_I(x)$, which may be chosen odd, $n_I(-x)=-n_I(x)$, to balance the gain--loss rates. An even distribution $n_R(-x) = n_R(x)$ would guide the signal along the propagation direction that is transversal to $x$. The above provides a balanced gain-loss optical potential that can be used to propagate self-focussing electromagnetic signals if, in addition, the medium is nonlinear \cite{Agr92,Tay92,Kiv03,Kiv98,Sul99}.
Quite recently, we have developed a formalism that permits the construction of analytically solvable complex-valued potentials with real spectrum \cite{Ros15,Zel16,Jai17,Ros18}. The model is based on the properties of the Riccati equation in the complex domain \cite{Hil97,Sch18}, and the Darboux method \cite{Dar82}. The transformation theory introduced by Darboux in 1882 is useful to intertwine the energies of two different spectral problems in contemporary physics \cite{Mie04}, and finds immediate applications in soliton theory \cite{Rog02} as well as in supersymmetric quantum mechanics \cite{Mie04,Coo01}. The present work is motivated by the usefulness of the complex-valued function $V= u^2 + i u_x$ as the seed of solutions $u$ for nonlinear equations like the modified Korteweg--de Vries, sine--Gordon and cubic nonlinear Schr\"odinger ones \cite{Lam80}. The judicious selection of $u$ may provide a parity-time symmetric potential $V$ designed to generate the nonlinearities that are necessary to control the propagation of light in optical media. Indeed, we find that combining the model introduced in \cite{Ros15} with the above expression of $V$ leads automatically to the Gross--Pitaevskii nonlinear equation \cite{Gro61,Pit61}, which offers a natural arena to study Bose-Einstein condensates \cite{Kev08,Rog13} and is reduced to the cubic nonlinear Schr\"odinger equation in the absence of external interactions \cite{Kiv98,Sul99}. The latter is exactly solvable by using the inverse scattering method \cite{Nov84,Zak71} but the former is very restrictive in the search for integrable models.
The organization of the paper is as follows. In Section~\ref{model} we revisit the main ideas and results introduced in \cite{Ros15}. Then, conditions are imposed to obtain a balanced gain--loss optical potential and the Gross--Pitaevskii equation is derived in Section~\ref{GPE}. We specialize the model to the free-particle potential and show that the nonlinear Schr\"odinger equation defines the profile of $u$ in $V= u^2 + i u_x$. Then, we find that $u^2$ coincides with the intensity of a bright optical soliton while $u_x$ is defined by the product of two optical solitons, one obeying attractive nonlinearities and the other responding to repulsive nonlinear interactions. That is, the potential $V$ is generated from the linear superposition of bright and dark optical solitons, both of them in either the stationary regime or in a flat configuration. Finally, in Section~\ref{concluye} we give some conclusions of our work.
\section{Model and results}
\label{model}
Using the Darboux approach \cite{Dar82}, stationary one-dimensional Schr\"odinger equations,
\begin{equation}
-\psi_{xx} +V \psi = k^2 \psi
\label{nl1}
\end{equation}
and
\begin{equation}
-\varphi_{xx} +V_0 \varphi = k^2 \varphi,
\label{nl2}
\end{equation}
can be intertwined through the relationship
\begin{equation}
V = V_0 + 2 \beta_x, \quad \psi= \varphi_x + \beta \varphi,
\label{darboux}
\end{equation}
with $\beta$ a solution of the nonlinear Riccati equation
\begin{equation}
-\beta_x +\beta^2 =V_0 -\epsilon.
\label{ricatti}
\end{equation}
Assuming that $V_0$ is a real-valued measurable function such that Eq.~(\ref{nl2}) is integrable in $\mbox{Dom} V_0 = (a_1,a_2) \subseteq \mathbb R$, with the real eigenvalues $E=k^2$, one can construct a complex-valued function $V$ such that Eq.~(\ref{nl1}) is integrable with the same energies $E=k^2$ plus an additional real eigenvalue $\epsilon$ \cite{Ros15}. Indeed, for any $\epsilon \in \mathbb R$, a complex-valued solution $\beta = \beta_R + i \beta_I$ of (\ref{ricatti}) must satisfy the coupled system
\begin{equation}
-\beta_{R x} + \beta_R^2 - \beta_I^2 + \epsilon -V_0 =0,
\label{rica1}
\end{equation}
%
\begin{equation}
-\beta_{I x} + 2 \beta_I \beta_R =0.
\label{rica2}
\end{equation}
Once the solutions of (\ref{rica1})-(\ref{rica2}) have been supplied, the real and imaginary parts of the complex-valued potential $V$ are
\begin{equation}
V_R = V_0 + 2 \beta_{Rx}, \quad V_I = 2\beta_{I x}.
\label{potcomp}
\end{equation}
Given a bound state $\psi_n$ of such potential, the conventional notions of probability density $\rho_n = \vert \psi_n \vert^2$ and probability current ${\cal J}_n= i (\psi_n \ \psi_{nx}^* - \psi_{nx} \psi_n^* )$ apply \cite{Zel16}, the asterisk stands for complex conjugation, and they are such that the {\em condition of zero total area} \cite{Jai17},
\begin{equation}
\int_{Dom V_0} \mbox{Im} V_{\lambda} (x) dx = \left. 2\beta_I(x) \right\vert_{a_1}^{a_2}=0,
\label{zero}
\end{equation}
ensures conservation of total probability.
The new potential $V$ may feature the parity-time (PT) symmetry, defined as the invariance under parity (P) and time-reversal (T) transformations. In quantum mechanics the former corresponds to spatial reflection $p \rightarrow -p$, $x \rightarrow -x$, and the latter to $p \rightarrow -p$, $x \rightarrow x$, together with complex conjugation $i \rightarrow -i$ \cite{Ben05}. Thus, a necessary condition for PT-symmetry is that the complex-valued potential $V(x)$ should satisfy $V(x)=V^*(-x)$. In our case this last requires initial potentials represented by even functions $V_0(x)= V_0(-x)$ in $\mbox{Dom} V_0$. Then, it is sufficient to take $\beta_R$ odd and $\beta_I$ even in $\mbox{Dom} V_0$ to get parity-time symmetric potentials $V$, since $V_R = V_0 + 2\beta_{Rx}$ is then even while $V_I = 2\beta_{Ix}$ is odd.
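This parity assignment is easily confirmed numerically. The Python sketch below uses an arbitrary illustrative choice of odd $\beta_R$ and even $\beta_I$, with $V_0=0$, and checks that the resulting potential satisfies $V(-x)=V^*(x)$.
\begin{verbatim}
import numpy as np

x = np.linspace(-10, 10, 2001)      # symmetric grid (illustrative)
beta_R = np.tanh(x)                 # odd real part
beta_I = 1 / np.cosh(x)             # even imaginary part
beta = beta_R + 1j * beta_I

V = 2 * np.gradient(beta, x)        # V = V_0 + 2 beta_x with V_0 = 0
print(np.max(np.abs(V[::-1] - np.conj(V))))   # ~ 0: V(-x) = V*(x)
\end{verbatim}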
The straightforward calculation shows that $\beta$ is parameterized by a real number $\lambda$ as follows
\begin{equation}
\beta = -\frac{\alpha_x}{\alpha} + i\frac{\lambda}{\alpha^2},
\label{beta}
\end{equation}
where the function
\begin{equation}
\alpha(x) = \left[ av^2(x) + b v(x) u(x) + c u^2(x) \right]^{1/2}
\label{alpha}
\end{equation}
is real-valued and free of zeros in $\mbox{Dom}V_0$ when the parameters $\{a,b,c\}$ are real and satisfy $4ac- b^2 = 4 (\lambda/w_0)^2$ \cite{Ros15}. Here $u$ and $v$ are two linearly independent solutions of Eq.~(\ref{nl2}) for $k^2=\epsilon$, and $w_0 = W(u,v)$ is their Wronskian.
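The parameterization (\ref{beta})--(\ref{alpha}) is easily verified numerically. The Python sketch below uses the plane waves $u=e^{ikx}$ and $v=e^{-ikx}$, which solve Eq.~(\ref{nl2}) for $V_0=0$ and $\epsilon = k^2$, with Wronskian $w_0=-2ik$; for the illustrative choice $a=c=1/2$, the constraint $4ac-b^2 = 4(\lambda/w_0)^2$ yields $b^2 = 1 + \lambda^2/k^2$. The sketch checks the Riccati equation (\ref{ricatti}) by finite differences.
\begin{verbatim}
import numpy as np

k, lam = 1.0, 0.7                 # illustrative parameter values
eps = k**2                        # epsilon for u = exp(ikx), v = exp(-ikx)
b = np.sqrt(1 + (lam / k)**2)     # 4ac - b^2 = 4 (lam/w0)^2 with a = c = 1/2

x = np.linspace(-10, 10, 4001)
alpha2 = np.cos(2 * k * x) + b    # alpha^2(x), free of zeros since b > 1
beta = -np.gradient(alpha2, x) / (2 * alpha2) + 1j * lam / alpha2

# Riccati residual: -beta_x + beta^2 - (V_0 - eps), with V_0 = 0
residual = -np.gradient(beta, x) + beta**2 + eps
print(np.max(np.abs(residual[100:-100])))   # finite-difference error only
\end{verbatim}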
Of particular interest, the complex-valued potentials (\ref{potcomp}) may be constructed with the profile
\begin{equation}
V= - (\vartheta^2 + i \vartheta_x),
\label{pot1}
\end{equation}
where the function $\vartheta$ is (at least) twice differentiable with respect to $x$ and should contain a parameter, say $z$, so that $\vartheta = \vartheta(x; z)$. Potentials satisfying (\ref{pot1}) are very important in soliton theory since $\vartheta$ can be used to solve the three nonlinear evolution equations known as the modified Korteweg--de~Vries equation, the sine--Gordon equation, and the cubic nonlinear Schr\"odinger equation, all of them defining the propagation of waves in dispersive media \cite{Lam80}.
In the following we show that complex-valued potentials (\ref{potcomp}) and (\ref{pot1}) are compatible for the appropriate solutions of the system (\ref{rica1})-(\ref{rica2}). Such relationship supplies a meaning for the real and imaginary parts of the $\beta$-function that generates potential (\ref{potcomp}) via the Darboux transformation (\ref{darboux}).
\subsection{The Gross--Pitaevskii equation}
\label{GPE}
From (\ref{potcomp}) and (\ref{pot1}) we obtain the system
\begin{equation}
V_0 + 2 \beta_{R x} =- \vartheta^2, \qquad 2\beta_{I x} =- \vartheta_x.
\label{system}
\end{equation}
After integrating, the last of the above equations leads to
\begin{equation}
\vartheta = -2 \beta_I + \vartheta_0,
\label{teta1}
\end{equation}
with $\vartheta_0$ an integration constant. The combination of (\ref{teta1}) with (\ref{rica2}) produces
\begin{equation}
\vartheta_x = 2 (\vartheta -\vartheta_0) \beta_R.
\label{teta2}
\end{equation}
Then, the real and imaginary parts of $\beta$ are respectively given by
\begin{equation}
\beta_R = \frac{\vartheta_x}{ 2 (\vartheta - \vartheta_0) }, \qquad \beta_I = - \left( \frac{\vartheta - \vartheta_0}{2} \right).
\label{betas}
\end{equation}
Using these results in (\ref{beta}) yields the expression
\begin{equation}
\vartheta = -2 \frac{\lambda}{\alpha^2} + \vartheta_0,
\label{sol}
\end{equation}
where the constant arising from the integration of $\beta_R$ has been fixed as $-2\lambda$, for consistency. Now, to find a mechanism to determine $\vartheta$, let us introduce (\ref{teta1}) into the equation for $\beta_R$ in (\ref{system}). After using Eq.~(\ref{rica1}) we obtain
\begin{equation}
2 (\beta^2_R + \beta^2_I + \epsilon) -V_0 = \vartheta_0 (4 \beta_I -\vartheta_0).
\end{equation}
Without loss of generality we make $\vartheta_0 =0$. Then, the above equation is reduced to the constraint
\begin{equation}
\vert \beta \vert^2 = \tfrac12 V_0 -\epsilon.
\label{modbeta}
\end{equation}
As $\vert \beta \vert \geq 0$ we immediately have $V_0 \geq 2 \epsilon$. Besides, from (\ref{betas}) we realize that (\ref{modbeta}) produces the nonlinear differential equation
\begin{equation}
\vartheta_x^2 + (4\epsilon - 2 V_0) \vartheta^2 + \vartheta^4 = 0,
\label{tetita}
\end{equation}
which defines the analytic form of $\vartheta$.
The next step is to determine whether or not the function $\vartheta$ features a soliton profile. With this aim notice that the derivative of (\ref{teta2}), after using (\ref{system}) and condition (\ref{modbeta}), gives
\begin{equation}
-\vartheta_{xx} + (V_0 - 2 \vartheta^2 ) \vartheta = 4 \epsilon \vartheta.
\label{teta4}
\end{equation}
Now, we introduce a real parameter $z$ via the equation
\begin{equation}
i \vartheta_z = 4 \epsilon \vartheta,
\label{time1}
\end{equation}
with solution
\begin{equation}
\vartheta(x;z) = \vartheta(x) \exp ( - i 4 \epsilon z + \xi_0 ),
\label{time2}
\end{equation}
where $\xi_0$ is an integration constant. Considering this new form of $\vartheta$, to avoid dependence on the phase factor $e^{- i 4 \epsilon z + \xi_0}$, let us replace $\vartheta^3$ by $\vert \vartheta \vert^2 \vartheta$ in (\ref{teta4}). We obtain the spectral problem
\begin{equation}
-\vartheta_{xx} + \left( V_0 - 2 \vert \vartheta \vert^2 \right) \vartheta = 4 \epsilon \vartheta,
\label{teta4b}
\end{equation}
which is named after Gross \cite{Gro61} and Pitaevskii \cite{Pit61}, and currently known as the time-independent Gross-Pitaevskii (GP) equation. Of course, (\ref{teta4}) and (\ref{teta4b}) coincide for real $\vartheta$. Combining (\ref{time1}) and (\ref{teta4b}) one has
\begin{equation}
-\vartheta_{xx} + \left( V_0 - 2 \vert \vartheta \vert^2 \right) \vartheta = i \vartheta_z.
\label{teta5}
\end{equation}
The latter is called time-dependent GP equation (or simply GPE), mainly when the propagation parameter $z$ is treated as the evolution variable. In analogy with the Schr\"odinger equation, $V_0$ is an external potential and the nonlinear term $-2\vert \vartheta \vert^2$ represents an attractive interaction that is proportional to the local density $\vert \vartheta \vert^2$. The GPE is a powerful tool to study Bose-Einstein condensates (BEC) in the mean-field approximation \cite{Rog13}, where the nonlinearity represents an effective potential to which each atom is subjected due to its interaction with all other particles, and $\vert \vartheta \vert^2$ stands for the atomic density. In such an approach the external potential $V_0$ produces the BEC confinement and may adopt different forms. The trapping in 3D models can be either magnetic or optical, the latter with the advantage that optical traps are extremely flexible and controllable in shape \cite{Kev08}. Lower dimensional BECs are possible at temperatures close to zero when phase fluctuations are negligible. For instance, magnetic traps include external harmonic potentials that can be produced with highly anisotropic profiles. If the longitudinal frequency $\omega_z$ is such that $\omega_z \ll \omega_{\perp} \equiv \omega_x=\omega_y$, then the fully 3D GPE can be reduced to an effectively 1D model described by the GPE (\ref{teta5}), where $V_0$ is an oscillator of frequency $\omega_z$ \cite{Kev08}.
In general, the GPE (\ref{teta5}) cannot be solved analytically for arbitrary $V_0$. Particular examples include periodic potentials $V_0(x+L)= V_0(x)$ with period $L$ for which the Bloch theory \cite{Koh59} gives rise to discrete solitons \cite{Tro01}. However, the simplest exactly solvable case is given by the free--particle potential $V_0=0$, which reduces (\ref{teta5}) to the cubic nonlinear Schr\"odinger equation (NLSE),
\begin{equation}
- \vartheta_{xx} - 2 \vert \vartheta \vert^2 \vartheta = i\vartheta_z.
\label{nlse}
\end{equation}
Eq.~(\ref{nlse}) is useful to describe the dynamics of complex field envelopes in nonlinear dispersive media \cite{Sul99}, as well as the paraxial approximation of light propagation in Kerr media \cite{Kiv03}. In the latter case, the propagation parameter $z$ refers to the distance along the beam and the variable $x$ stands for the direction transverse to the propagation. Therefore, $\vartheta$ is the normalized amplitude of the electric field envelope describing the pulse. The nonlinearity $-2\vert \vartheta \vert^2$ is due to the Kerr effect and represents the refractive index; its effect on the light rays increases with the light intensity $\vert \vartheta \vert^2$ and leads to the self-focussing of the beam \cite{Tay92}, Ch.1 (see also \cite{Kiv03} and \cite{Sul99}). In contrast to the GPE (\ref{teta5}), the NLSE (\ref{nlse}) is exactly integrable in the inverse scattering approach \cite{Zak71} for the boundary condition $\vert \vartheta \vert \rightarrow 0$ at $x \rightarrow \pm \infty$. It possesses localized solutions representing `bright' solitons while its counterpart, constructed with the repulsive nonlinearity $+2\vert \vartheta \vert^2$, includes localized `dark' pulses \cite{Kiv98}.
Some remarks are necessary. First, constraint (\ref{modbeta}) delimits the class of real-valued functions $V_0$ that are useful to construct complex-valued potentials $V$ featuring the special form (\ref{pot1}). Usually $\beta_R$ and $\beta_I$ are finite in $\mbox{Dom} V_0$ and go to zero as $x \rightarrow a_{1,2}$. Thus, the above approach applies especially to functions $V_0$ that are finite in their respective domains and vanish asymptotically. As we are going to see, the free--particle potential is an immediate example. The family of transparent potentials produced via supersymmetry \cite{Coo01,Dia99,Mie00} and shape invariance \cite{Coo01} can be useful as well. Second, the phase of the polar form (\ref{time2}) cannot be included in the identification (\ref{betas}) since it would produce a complex-valued function $\beta_I$. Although $\arg ({\vartheta})$ allows the propagation of $\vartheta$ along $z$, as it is determined by the linear derivative $i\vartheta_z$ in either (\ref{teta5}) or (\ref{nlse}), the relationship between $\beta = \beta_R + i \beta_I$ and $\vartheta$ is clearly valid in the stationary case ($z=0$). Third, potentials $V_0$ fulfilling (\ref{modbeta}) provide a Darboux profile (\ref{sol}) for $\beta_I = -\tfrac12 \vartheta$ that can be applied in the systematic search for analytically solvable GPEs (\ref{teta4b}).
On the other hand, for the sake of completeness, we may remove $\beta_I$ from (\ref{rica1}) by using (\ref{modbeta}). The result is the nonlinear Riccati equation
\begin{equation}
-\beta_{Rx} + 2 \beta_R^2 - \tfrac32 V_0 + 2\epsilon =0,
\label{modbetab}
\end{equation}
which is reduced to (\ref{tetita}) after using (\ref{betas}).
\subsection{Optical soliton engineering}
\label{NLSE}
Consider the free--particle potential $V_0=0$, then $\mbox{Dom} V_0 = \mathbb R$. To find an expression for $\vartheta$ let us divide the nonlinear equation (\ref{tetita}) by $\vartheta^4$. After introducing $y =- \vartheta^{-1}$ we have
\begin{equation}
y_x^2 + 4 \epsilon y^2 +1=0.
\label{tetita2}
\end{equation}
From (\ref{modbeta}) we know that this case requires $\epsilon \leq 0$. Making $k= i \tfrac{\kappa}{2}$ with $\kappa \geq 0$, the eigenvalue $\epsilon = -\kappa^2/4$ gives the negative coefficient $- \kappa^2$ for $y^2$ in (\ref{tetita2}). Then $y = \kappa^{-1} \cosh [ \kappa (x+x_0) ]$, with $x_0$ an integration constant, and
\begin{equation}
\vartheta (x;z) = - \frac{\kappa e^{ ( i\kappa^2 z + \xi_0) } }{\cosh \left[ \kappa (x+x_0) \right]},
\label{tetaz}
\end{equation}
where we have used (\ref{time2}). Without loss of generality we make $x_0 = \xi_0 =0$ to reduce (\ref{tetaz}) to the conventional form of the fundamental bright soliton
\begin{equation}
\vartheta (x;z) = - \frac{\kappa e^{ i\kappa^2 z } }{\cosh ( \kappa x) },
\label{tetaz2}
\end{equation}
which does not change shape as it propagates along the $z$-axis. This is because the two terms on the left-hand side of (\ref{nlse}) conspire to cancel the dependence on $x$, as expected from the balanced relationship between nonlinearity and dispersion in soliton profiles \cite{Lam80}. Indeed, the area $A_b= \vert \int_{\mathbb R} \vartheta (x) dx \vert = \pi$ does not depend on $\kappa$, so it is a constant of motion for the bright soliton \cite{Tay92}, Ch.2. In Fig.~\ref{Fig1}(b) we show the behavior of $\vartheta (x;z)$ at $z=0$.
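The shape invariance just described is easily checked by direct substitution. The Python sketch below (with illustrative grid and parameter values) evaluates the residual of the NLSE (\ref{nlse}) on the profile (\ref{tetaz2}), using finite differences in $x$ and the exact derivative $i\vartheta_z = -\kappa^2 \vartheta$.
\begin{verbatim}
import numpy as np

kappa, z = 1.0, 0.37              # illustrative values
x = np.linspace(-15, 15, 4001)
vartheta = -kappa * np.exp(1j * kappa**2 * z) / np.cosh(kappa * x)

# Residual of -vartheta_xx - 2 |vartheta|^2 vartheta = i vartheta_z,
# with the exact z-derivative i vartheta_z = -kappa^2 vartheta.
vartheta_xx = np.gradient(np.gradient(vartheta, x), x)
residual = (-vartheta_xx - 2 * np.abs(vartheta)**2 * vartheta
            + kappa**2 * vartheta)
print(np.max(np.abs(residual[100:-100])))   # finite-difference error only
\end{verbatim}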
\begin{figure}[htb]
\centering
\subfigure[$4 \vert \beta_R (x) \vert^2$]{\includegraphics[width=0.3\textwidth]{betare}}
\hspace{1ex}
\subfigure[$4 \vert \beta_I(x) \vert^2$]{\includegraphics[width=0.3\textwidth]{betaim}}
\hspace{1ex}
\subfigure[$V(x)$]{\includegraphics[width=0.3\textwidth]{pot}}
\caption{\footnotesize
(Color online) Excitations of the nonlinear Schr\"odinger equation (\ref{nlse}) used to construct a complex-valued potential (\ref{potcomp}) with balanced gain and loss. ({\bf a}) Stationary dark soliton (\ref{dsoliton}) associated with (\ref{nlse}) for the repulsive nonlinearity $+ 2\vert \vartheta \vert^2$. ({\bf b}) Stationary bright soliton (\ref{bsoliton}) associated to the NLSE (\ref{nlse}). In both cases $\kappa=1$. ({\bf c}) Potential (\ref{pt1}) with $V_R$ and $V_I$ even and odd, respectively. $V_R$ is defined by the bright soliton intensity profile and $V_I$ by the product of the bright and dark solitons described above. In all cases $\vert \vartheta \vert^2$, $\vartheta$ and $V_I$ are in black-solid, blue-dashed and red-dotted lines respectively.
}
\label{Fig1}
\end{figure}
The imaginary part of $\beta$ can be now expressed in terms of the stationary profile of the above bright soliton solution
\begin{equation}
2 \beta_I (x) = -\left. \vartheta (x;z) \right\vert_{z=0} = \frac{ \kappa }{\cosh (\kappa x ) }.
\label{bsoliton}
\end{equation}
In turn, the real part of $\beta$ can be obtained from either (\ref{betas}), no matter the phase $e^{i \kappa^2 z}$, or the constraint (\ref{modbeta}), by avoiding such phase. The latter produces
\begin{equation}
2 \beta_R(x)= \pm \kappa \left[ 1 - \frac{1}{\cosh^2 (\kappa x)} \right]^{1/2} = \pm \kappa \tanh (\kappa x).
\label{dsoliton}
\end{equation}
For consistency with (\ref{betas}) we shall preserve the minus sign. Thus, writing $2 \beta_R(x)= - \theta(x)$, we immediately recognize $\theta(x) = \kappa \tanh (\kappa x)$ as the fundamental dark soliton solution of (\ref{nlse}), where the attractive nonlinearity $-2 \vert \vartheta \vert^2$ is replaced by the repulsive one $+2 \vert \vartheta \vert^2$. Including the $z$-dependence we have
\begin{equation}
\theta(x; z) = \kappa e^{ i\kappa^2 z } \tanh (\kappa x).
\label{dark}
\end{equation}
In contrast with the bright soliton (\ref{tetaz2}), the area defined by $\theta (x)$ is not finite. Besides, although the area $A_d=2\kappa$ described by the `hole' $\kappa^2 - \theta^2$ is finite, this is not a constant of motion since it depends on $\kappa$. Fig.~\ref{Fig1}(a) illustrates the `hole' pulse described by the density profile of the dark soliton (\ref{dark}).
Using the stationary versions of the optical solitons (\ref{tetaz2}) and (\ref{dark}), potential (\ref{pot1}) becomes
\begin{equation}
V(x)= - \vartheta^2 (x) + i \vartheta(x) \theta(x).
\label{pt1}
\end{equation}
That is, $V_R$ is defined by the bright soliton intensity while $V_I$ results from the product of the bright and dark solitons, both cases in the stationary regime, see Fig.~\ref{Fig1}. However, to elucidate the meaning of expressions (\ref{tetaz2}) and (\ref{dark}) in our model, let us rewrite (\ref{pt1}) in a more convenient form
\begin{equation}
V(x) = -\vert \vartheta (x;z) \vert^2 + i \vartheta(x;z) \theta^*(x;z).
\label{lab}
\end{equation}
Notice that $V(x)$ does not depend on the propagation parameter $z$, even though it is explicitly included in the soliton solutions. The situation changes for the $\beta$-function since it becomes the following linear superposition of bright and dark solitons
\begin{equation}
\beta_0 (x;z) = -\tfrac12 \left[ \theta(x;z) +i \vartheta(x;z) \right] = \beta(x) e^{i \kappa^2 z}.
\label{betasol}
\end{equation}
The constraint (\ref{modbeta}) is not affected by the $z$-dependence since $\vert \beta_0(x;z) \vert^2 = \vert \beta(x) \vert^2 =\frac{\kappa^2}{4}$. Then, the superposition (\ref{betasol}) does not change shape as it propagates along the $z$-axis. Nevertheless, a striking expression for $V(x)$ and $\beta(x)$ is still available. Considering that only the stationary version of $\theta(x;z)$ is involved in the definition of $\beta$, while the phase of $\vartheta(x;z)$ is permitted, we would write
\begin{equation}
V(x) = -\vert \vartheta (x;z) \vert^2 + i \vartheta(x) \theta(x).
\label{lab2}
\end{equation}
Therefore
\begin{equation}
\beta_1 (x;z) = -\tfrac12 \left[ \theta(x) +i e^{i \kappa^2 z} \vartheta(x) \right] = -\tfrac12 \left[ \theta(x) - \sin (\kappa^2 z) \vartheta(x) +i \cos (\kappa^2 z) \vartheta(x)
\right],
\label{betasol2}
\end{equation}
and the pulse
\begin{equation}
\vert \beta_1(x;z) \vert^2 = \tfrac{\kappa^2}{4} - \tfrac12 \sin (\kappa^2 z) \theta(x) \vartheta(x)
\label{betasol3}
\end{equation}
oscillates with period $\frac{ 2\pi}{ \kappa^2}$ as it propagates along $z$. Clearly, constraint (\ref{modbeta}) is satisfied at $\pm z_n = \pm \left( \frac{\pi}{\kappa^2} \right) n$, with $n=0,1,\ldots$ In Fig.~\ref{Fig1A} we can appreciate that the excitation (\ref{betasol3}) is indeed a `hole--hill' pair that is born shyly at $z=0$, matures into a robust configuration at $z= \frac{\pi}{2 \kappa^2}$, and decays slowly until its annihilation at $z_1=\frac{\pi}{\kappa^2}$. Then the configuration twirls into a `hill--hole' pair, and the process starts again, finishing at $z_2 = \frac{2\pi}{\kappa^2}$. The entire cycle $z_0 \rightarrow z_2$ is repeated over and over as $z$ grows. The annihilation positions $z_n$ define a flat configuration of the excitation that serves to construct potential (\ref{lab2}).
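The cycle described above is reproduced by the following Python sketch (parameter and grid values are illustrative), which samples the pulse (\ref{betasol3}) and confirms that it is flat, $\vert \beta_1 \vert^2 = \kappa^2/4$, exactly at the points $z_n$.
\begin{verbatim}
import numpy as np

kappa = 1.0
x = np.linspace(-10, 10, 2001)
theta = kappa * np.tanh(kappa * x)        # stationary dark soliton
vartheta = -kappa / np.cosh(kappa * x)    # stationary bright soliton

def pulse(z):
    # |beta_1(x; z)|^2 = kappa^2/4 - (1/2) sin(kappa^2 z) theta vartheta
    return kappa**2 / 4 - 0.5 * np.sin(kappa**2 * z) * theta * vartheta

z1 = np.pi / kappa**2
for z in (0.0, z1 / 2, z1, 3 * z1 / 2, 2 * z1):
    dev = np.max(np.abs(pulse(z) - kappa**2 / 4))
    print(f"z = {z:5.3f}  max deviation from kappa^2/4 = {dev:.2e}")
# The deviation vanishes at z = 0, z1 and 2 z1: the flat configurations.
\end{verbatim}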
\begin{figure}[htb]
\centering
\subfigure[ ]{\includegraphics[width=0.3\textwidth]{betaA}}
\hspace{1cm}
\subfigure[ ]{\includegraphics[width=0.3\textwidth]{betaB}}
\caption{\footnotesize
Pulse (\ref{betasol3}) generated by the linear superposition (\ref{betasol2}) that includes the bright soliton $\vartheta(x;z)$ and the stationary dark soliton $\theta(x)$, Eqs.~(\ref{tetaz2}) and (\ref{dsoliton}) respectively. The excitation oscillates as $z$ increases and constraint $\vert \beta_1 (x;z) \vert^2= \sfrac{\kappa^2}{4}$ is satisfied at the points $z_n = (\sfrac{\pi}{\kappa^2}) n$, where the pulse becomes flat.
At $z= \sfrac{z_1}{2}$, the configuration involves a hole (dark soliton) in $x>0$, and a hill (bright soliton) in $x<0$, which acquires a new shape at $z=\sfrac{z_3}{2}$ since it includes a hill in $x>0$, and a hole in $x<0$.
({\bf a}) The pulse propagates from $z_0$ to $z_5$. ({\bf b}) Distribution of holes and hills along the $z$-axis.
}
\label{Fig1A}
\end{figure}
On the other hand, from (\ref{sol}) we have $\lambda= \frac{\kappa}{2}$ and $\alpha^2(x) = \cosh (\kappa x)$. To verify that these results are recoverable from the Darboux expressions of Section~\ref{model} let us take $v=e^{-ikx}$ and $u= e^{ikx}$, with $w_0 = -2ik$, in (\ref{alpha}). The simple choice $a=c=1/2$ gives
\begin{equation}
\alpha(x) = \left[ \cos(2kx) + b \right]^{1/2}, \qquad b^2 = 1 + \tfrac{\lambda^2}{k^2}.
\end{equation}
We have already taken $k= i \frac{\kappa}{2}$, so that $\alpha(x) = [ \cosh (\kappa x) + b ]^{1/2}$ reduces to the expression we are looking for when $b=0$, i.e., when $b^2 = 1- ( \tfrac{2 \lambda}{\kappa})^2$ vanishes. Thus $\lambda= \frac{\kappa}{2}$, as expected. Indeed, potential (\ref{pt1}) has already been reported in the context of the Darboux transformations \cite{Ros15}. There, it is shown that only the real energy $\epsilon =-\frac{\kappa^2}{4}$ permits a normalizable solution of the Schr\"odinger equation (\ref{nl1}). Such an eigenfunction is of the form
\begin{equation}
\psi_{\epsilon}(x)= \frac{\vartheta(x)}{ \sqrt{ \kappa \pi}} \left[ \cosh \left( \frac{\kappa x}{2} \right) + i \sinh \left( \frac{\kappa x}{2} \right) \right],
\label{ground}
\end{equation}
and satisfies $\vert \psi_{\epsilon} \vert^2 = \tfrac{1}{\pi} \vert \vartheta \vert$. That is, the density of the ground state (\ref{ground}) has the bright soliton profile, see Fig.~\ref{Fig2}.
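Both statements are easily checked numerically. The Python sketch below (grid parameters are illustrative) verifies the eigenvalue equation (\ref{nl1}) with the potential (\ref{pt1}) at $\epsilon=-\kappa^2/4$, together with the normalization of $\psi_\epsilon$ and the soliton profile of its density.
\begin{verbatim}
import numpy as np

kappa = 1.0
x = np.linspace(-20, 20, 8001)
sech, tanh = 1 / np.cosh(kappa * x), np.tanh(kappa * x)

vartheta = -kappa * sech                           # bright soliton profile
V = -vartheta**2 + 1j * vartheta * (kappa * tanh)  # potential (pt1)
psi = vartheta / np.sqrt(kappa * np.pi) * (
    np.cosh(kappa * x / 2) + 1j * np.sinh(kappa * x / 2))

# Eigenvalue equation -psi'' + V psi = eps psi with eps = -kappa^2/4
psi_xx = np.gradient(np.gradient(psi, x), x)
residual = -psi_xx + V * psi + (kappa**2 / 4) * psi
print(np.max(np.abs(residual[200:-200])))          # finite-difference error

print(np.trapz(np.abs(psi)**2, x))                 # ~ 1 (normalization)
print(np.max(np.abs(np.abs(psi)**2 - np.abs(vartheta) / np.pi)))  # ~ 0
\end{verbatim}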
\begin{figure}[htb]
\centering
\includegraphics[width=0.3\textwidth]{missing}
\caption{\footnotesize
(Color online) Real (blue-dashed) and imaginary (red-dotted) parts of the single bound state (\ref{ground}) associated with the potential shown in Fig.~\ref{Fig1}(c). Up to the factor $1/\pi$, the corresponding pulse (black-solid) has the bright soliton profile shown in Fig.~\ref{Fig1}(b).
}
\label{Fig2}
\end{figure}
Potential (\ref{lab2}) may be classified as the Scarf I-hyperbolic type (in the notation of \cite{Lev00}, Table~1, use $\alpha= \pm \frac32$, $\beta= \pm \frac12 $). This is a family member of the PT-invariant potentials studied in \cite{Ahm01} to show, fully analytically, that non-Hermitian Hamiltonians can have both real and complex discrete spectra. The model includes different global factors for $V_R$ and $V_I$ and investigates whether the eigenvalues are real or complex in terms of such parameters. It is conjectured that ``when the real part of the PT-invariant potential is stronger than its imaginary part, the eigenspectrum will be real, and they will be mixed (real and complex) otherwise'' \cite{Ahm01}. As our model considers the same global factor for $V_R$ and $V_I$, namely $\kappa^2$, the above conjecture is automatically verified (see Fig.~\ref{Fig4}), so that no complex eigenvalues are expected. Interestingly, potential (\ref{lab2}) has been implemented, with the global factors modified as in \cite{Ahm01}, as the external field in the GPE \cite{Mus08}. When $V_I$ is weighted by a factor $1/2$, an exact solution is found for $\epsilon =0.98$, which acquires the analytic form given in Eq.~(\ref{ground}). Besides, the existence and stability of solitons in these potentials, for self-focusing and self-defocusing nonlinearities, has been recently investigated in, e.g., \cite{Mid14,Tso14,Che14}. The above results open the possibility of scaling our model to the more general case where the global factors of $V_R$ and $V_I$ are different.
\begin{figure}[htb]
\centering
\includegraphics[width=0.3\textwidth]{potintensity}
\caption{\footnotesize
(Color online) Total intensity (black-solid) of potential (\ref{lab2}), see also Fig.~\ref{Fig1}(c). The real part contribution (blue-dashed) is stronger than the imaginary one (red-dotted). }
\label{Fig4}
\end{figure}
\section{Conclusion}
\label{concluye}
In conclusion, we have demonstrated how superpositions of nonlinear localized modes lead to complex-valued potentials with real energy spectrum and a balanced gain--loss profile. In particular, we found a potential whose real part is defined by the intensity of the fundamental bright optical soliton, and whose imaginary part is the product of this soliton with the fundamental dark mode. Although the analytic expression of this potential has already been studied in different approaches, as far as we know, prior to the present work there was no information about the origin of such an interaction. Indeed, we have shown that the superposition leading to the optical potential also defines a `breathing' pulse with striking properties. The pulse is composed of a hill (bright soliton intensity) and a hole (dark soliton intensity) that propagate while they interchange roles: the hole becomes a hill and vice versa. The entire process starts with a flat signal that grows shyly, matures into a robust configuration, and decays slowly until its annihilation. In a second part of the evolution, the hole and hill interchange roles and the signal grows and decays again, finishing in a flat configuration. The definition of the optical potential occurs when the superposition is in the flat configuration.
The model can be scaled in different directions. For instance, fundamental solitons may be replaced by excited modes in the definition of $\beta$, so that it becomes a superposition of excited localized modes of the cubic nonlinear Schr\"odinger equation. Remarkably, the difficulty of using excited physical energies in the Darboux transformation is not present in the construction of complex-valued potentials since the conventional oscillation theorems do not operate in such a case \cite{Jai17}. The same situation is then expected for the excited soliton modes. Another option points towards the Gross--Pitaevskii equation, where the external potential is not trivially zero. Namely, to satisfy the constraint (\ref{modbeta}) that delimits the class of external potentials $V_0$ that are useful in our model, periodic potentials might be investigated. The same holds for the family of transparent potentials that either vanish or become finite asymptotically. In any case, the complex-valued potential $V = V_0 + 2 \beta_x$ will be expressed as $V=u^2 +i u_x$, with $u$ a localized mode of either the Gross--Pitaevskii equation or the cubic nonlinear Schr\"odinger equation.
\section*{Acknowledgments}
We acknowledge the financial support from the Spanish MINECO (Project MTM2014-57129-C2-1-P), Junta de Castilla y Le\'on (VA057U16), and from the Instituto Polit\'ecnico Nacional, Mexico, under the project SIP20180377.
\section{Introduction}
\label{sec:introduction}
The first observation of \glspl{gw} by the LIGO and Virgo collaborations~\cite{Abbott:2017xlt} marked the beginning of \gls{gw} astronomy. It was quickly followed by many more detections~\cite{LIGOScientific:2018mvr}. However, inherent sources of noise in ground-based detectors limit the observed frequency band to above \SI{10}{\hertz}, excluding many interesting sources, such as super-massive black hole binaries, extreme mass-ratio inspirals, or hypothetical cosmic strings. Several space-borne detector projects have been put forward in the hope of detecting \glspl{gw} in the \si{\milli\hertz} band.
One such project is the ESA-led \gls{lisa} mission~\cite{Audley:2017drz}. \Gls{lisa} aims to fly three spacecraft in a \num{2.5}-million-kilometer triangular formation, each of which exchanges laser beams with the others. The phases are monitored using sub-\si{\pico\meter} precision heterodyne interferometry, such that phase shifts induced by passing \glspl{gw} can be detected.
Laser frequency fluctuations will be the dominant source of noise, many orders of magnitude above the expected level of \gls{gw} signals~\cite{Audley:2017drz}. \Gls{tdi} is an offline technique proposed to reduce, among others, laser noise to acceptable levels~\cite{Giampieri:1996aa,Armstrong:1999hp,Tinto:1999yr,Tinto:2021aa}. It is based on the idea that the same noise affects different measurements at different times; by time-shifting and recombining these measurements, it is possible to reconstruct laser noise-free virtual interferometric signals in the case of a static constellation. We call these laser noise-free combinations the first-generation \gls{tdi} variables~\cite{Tinto:2002de,shaddock:2003aa,*Cornish:2003aa}. The algorithm has been extended to account for a breathing constellation to first order, giving rise to the so-called second-generation \gls{tdi} variables~\cite{Tinto:2004aa}. Several laboratory optical bench experiments and numerical studies have confirmed that second-generation combinations can suppress laser noise down to a sufficient level to detect and exploit \glspl{gw}~\cite{Schwarze:2018lvl,Otto:2015erp,Laporte:2017bv,Cruz:2006js,Vallisneri:2005ca,Petiteau:2008ke,Bayle:2018hnm}.
In \gls{lisa}, the physical units used to represent, process, and deliver data remain to be chosen. Several studies are ongoing to determine the pros and cons of using either phase, frequency, or even chirpiness\footnote{Chirpiness is defined as the derivative of frequency.}. These include studies of the phasemeter design\footnote{Representing the variables in phase or frequency impacts most phasemeter internal processing steps, e.g., the bit depth required not to be limited by numerical quantization noise or whether or not filters must account for phase-wrapping. A detailed study of these trade-offs is beyond the scope of this paper.}, telemetry bandwidth, and potential impacts on offline noise reduction techniques, such as \gls{tdi}.
Most \gls{tdi} studies assume that the measurements are expressed in terms of either interferometric beatnote phases or frequencies, without distinction \cite{Tinto:2021aa}. However, these studies disregard the Doppler shifts that arise when using units of frequency~\cite{Tinto:2021aa,Vallisneri:2005ca,Petiteau:2008ke,Bayle:2018hnm}: the relative motion of the spacecraft induces time-varying frequency shifts in the beatnote frequencies, which reduce the performance of standard \gls{tdi} algorithms. In fact, as we show below, the standard formulation of \gls{tdi} applied to frequency data no longer suppresses laser noise to the required level. We however demonstrate that these \gls{tdi} algorithms can be easily modified to account for Doppler shifts when using frequency data. Ultimately, we recover the same laser noise-reduction performance as one obtains when using units of phase.
The paper is structured as follows: in \cref{sec:interferometric-measurements}, we derive the expression of the interferometric measurements in terms of frequency and show how Doppler shifts couple. Then, in \cref{sec:residual-in-tdi}, we evaluate the additional noise due to these Doppler shifts in the \gls{tdi} variables and show that it does not meet the requirements. A procedure to mitigate this effect is presented in \cref{sec:doppler-tdi}. We show that the Doppler couplings can be reduced to levels below the requirements, and confirm the analytical study by numerical simulations in \cref{sec:simulation}. Finally, we conclude in \cref{sec:conclusion}.
\section{Interferometric measurements}
\label{sec:interferometric-measurements}
In this paper, we follow the latest recommendations on conventions and notations established by the \gls{lisa} Consortium. Since these conventions are relatively new, we provide in \cref{sec:convention-mapping} a mapping between the various existing conventions.
We label the spacecraft as presented in \cref{fig:labelling}. The optical benches are labelled with two indices $ij$. The former matches the index $i$ of the spacecraft hosting the optical bench, while the second index is that of the spacecraft $j$ exchanging light with the optical bench. Any subsystem or measurement uniquely attached to an optical bench share the same indices.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/labelling}
\caption{Labelling conventions used for spacecraft, light \glspl{tt}, lasers, optical benches, and interferometric measurements.}
\label{fig:labelling}
\end{figure}
As an example, the light travel time (\gls{tt}) measured on optical bench~$ij$ represents the time of flight of a photon received by spacecraft $i$ and emitted from spacecraft $j$. Note the unusual ordering of the indices (\textit{receiver}, \textit{emitter}); while this choice may seem peculiar at first, it will turn out most useful when writing \gls{tdi} equations in \cref{sec:residual-in-tdi} and later.
We assume that the spacecraft follow perfectly the test masses they host; therefore, their orbits are described as geodesics around the Sun. Accounting for the sole influence of the Sun, the computation of their positions and velocities reduces to a two-body problem, which can be solved semi-analytically~\cite{Dhurandhar:2004rv,Nayak:2006zm}. A more realistic approach uses a set of orbits computed using numerical integration (which includes the influence of the more massive objects in the Solar System) optimized for a given set of constraints, such as minimizing the motion of the spacecraft relative to one another\cite{Nayak:2006zm,Joffre:2020,Martens:2021phh}.
From these orbits, one can compute the light \gls{tt}, denoted by $d_{ij}$, of a photon received by spacecraft $i$ and emitted from spacecraft $j$. Because no set of orbits ensures a static constellation, we say that the constellation breathes. A direct consequence of this is that the light \glspl{tt} change with time, and we write, e.g., $d_{ij}(t)$.
Each spacecraft contains, among others, two laser sources and two optical benches, labelled according to \cref{fig:labelling}. Three interferometric signals, namely the inter-spacecraft\footnote{Formerly known as the science or long-arm interferometer.} $\text{isc}_{ij}(t)$, test-mass $\text{tm}_{ij}(t)$, and reference $\text{ref}_{ij}(t)$ beatnotes, are measured on each optical bench $ij$~\cite{Audley:2017drz}. In addition, a pseudo-random code is used to modulate the laser beams exchanged by the spacecraft \cite{Heinzel:2011aa,jose-esteban:2011aa}. The signal is then correlated with a local version to provide an estimate of the light \glspl{tt}, called measured pseudo-ranges. Various errors entering the measured pseudo-ranges and their impact on data processing and analysis are the focus of ongoing studies \cite{Wang:2014aa,*Wang:2015aa}. We shall assume here that the measured pseudo-ranges furnish perfect measurements of the light \glspl{tt}; therefore, we shall use pseudo-ranges and light \glspl{tt} interchangeably, both denoted by $d_{ij}(t)$.
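As a toy illustration of this ranging principle, one can recover a delay by correlating a received pseudo-random code with a local copy, as in the Python sketch below. The chip rate, code length, and delay are arbitrary choices, and the actual \gls{lisa} ranging scheme is considerably more elaborate.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_chips, oversampling = 1024, 8
code = rng.choice([-1.0, 1.0], n_chips).repeat(oversampling)  # local PRN copy

true_delay = 137                        # delay in samples (arbitrary)
received = np.roll(code, true_delay)    # delayed incoming code

# Circular cross-correlation via FFT; the peak marks the delay
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
print(np.argmax(corr))                  # recovers true_delay = 137
\end{verbatim}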
Moreover, we will assume here that each spacecraft contains only one laser, which is used in both optical benches. This is without loss of generality, since this situation can be achieved in practice either by locking the two lasers on board each spacecraft\footnote{The precise locking configuration is still under study.} or by constructing the intermediary variables $\eta$~\cite{Tinto:2002de,Tinto:2021aa}.
On board spacecraft~$i$, the phase of the local laser beam in units of cycles is denoted $\Phi_i(t)$. It contains the phase ramp due to the average laser frequency (around $\SI{281}{\tera\hertz}$), as well as small in-band phase fluctuations, dominated by the instability of the reference cavity used for stabilization (around $\SI{30}{\hertz\per\sqrt\hertz}$ when expressed as a frequency noise~\cite{Audley:2017drz}).
The phase of the beam emitted by spacecraft $j$ and received on $i$ at time $t$ reads
\begin{equation}
\Phi_{i \leftarrow j}(t) = \Phi_j(t - d_{ij}(t) - H_{ij}(t))
\qc
\label{eq:distant-beam-phase-explicit-full}
\end{equation}
where $d_{ij}(t)$ is the light \gls{tt} between $j$ and $i$ without any \glspl{gw}. The effect of passing \glspl{gw} is modelled by an additional delay $H_{ij}(t)$. Because this quantity is very small with respect to $d_{ij}(t)$, we Taylor-expand the phase to write $H_{ij}(t)$ as an independent term, and get
\begin{equation}
\Phi_{i \leftarrow j}(t) = \Phi_j(t - d_{ij}(t)) - \nu_j(t - d_{ij}(t)) H_{ij}(t)
\,\text{.}
\label{eq:distant-beam-phase-explicit}
\end{equation}
For the sake of clarity, we drop the time dependence and introduce the delay operator $\delay{ij}$, defined by
\begin{equation}
\delay{ij} x(t) = x(t - d_{ij}(t))
\qc
\end{equation}
for any signal $x(t)$. We shall also use the compact notation for chained delay operators, formally defined by
\begin{equation}
\delay{i_1 i_2 \dots i_n} = \delay{i_1 i_2} \delay{i_2 i_3} \dots \delay{i_{n-1} i_n}
\qc
\end{equation}
such that we have, \textit{e.g.}, in the case of two delay operators,
\begin{equation}
\begin{split}
\delay{ijk} x(t) &= \delay{ij} \delay{jk} x(t) = \delay{ij} x(t - d_{jk}(t)) \\
&= x\Big(t - d_{ij}(t) - d_{jk}\big(t - d_{ij}(t)\big)\Big)
\,\text{.}
\end{split}
\end{equation}
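For intuition, the action of these time-dependent delay operators on sampled data can be mimicked in a few lines of code. The sketch below is purely illustrative, not part of any \gls{lisa} pipeline: it uses linear interpolation where real processing would use high-order (e.g., Lagrange) interpolating kernels, and all numerical values are placeholder assumptions.
\begin{verbatim}
import numpy as np

def delay_op(x, d, t):
    """Delay operator: (D_ij x)(t) = x(t - d_ij(t)).

    x : samples of x(t) on the regular grid t
    d : samples of d_ij(t) on the same grid (seconds)
    """
    # Linear interpolation for brevity only.
    return np.interp(t - d, t, x)

# Chained delays act right to left: D_ijk x = D_ij (D_jk x).
t = np.arange(0.0, 1000.0, 0.25)          # s, 4 Hz sampling (toy)
x = np.random.default_rng(0).standard_normal(t.size)
d_ij = 8.3 + 1e-8 * t                     # slowly drifting arm (toy)
d_jk = 8.3 - 1e-8 * t
x_ijk = delay_op(delay_op(x, d_jk, t), d_ij, t)
\end{verbatim}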
Using these conventions, \cref{eq:distant-beam-phase-explicit} becomes
\begin{equation}
\Phi_{i \leftarrow j} = \delay{ij} \Phi_j - (\delay{ij} \nu_j) H_{ij}
\,\text{.}
\label{eq:distant-beam-phase}
\end{equation}
The frequency of the local laser beam on optical bench~$ij$ is simply the derivative of the total phase $\nu_i = \dot \Phi_i$. Similarly, the frequency of the distant beam is obtained by Taylor-expanding the derivative of \cref{eq:distant-beam-phase-explicit-full},
\begin{equation}
\begin{split}
&\nu_{i \leftarrow j}(t) = \dot \Phi_{i \leftarrow j}(t) = [1 - \dot{d}_{ij}(t) - \dot H_{ij}(t)] \\
&\qquad \times [\nu_j(t - d_{ij}(t)) - \dot{\nu}_j(t - d_{ij}(t)) H_{ij}(t)]
\,\text{.}
\end{split}
\end{equation}
In the following, we neglect all terms in $\dot{\nu}_j H_{ij}$. Indeed, the rate of change of $\nu_j$ is driven by laser noise\footnote{We expect that laser frequencies also vary due to the frequency plan, by \si{\mega\hertz} over the timescale of months. This yields terms of the same order of magnitude, so that our reasoning holds.}. Using the expected level of laser noise and integrating it over the \gls{lisa} frequency band, $\dot{\nu}_j \approx \SI{E2}{\hertz\per\second}$. Therefore, $\dot{\nu}_j H_{ij} \approx \SI{E-18}{\hertz} \ll \nu_j \dot{H}_{ij} \approx \SI{E-7}{\hertz}$. Dropping the time dependence and using our delay operator,
\begin{equation}
\nu_{i \leftarrow j} = (1 - \dot d_{ij}) \delay{ij} \nu_j - (\delay{ij} \nu_j) \dot H_{ij}
\,\text{.}
\label{eq:distant-beam-freq}
\end{equation}
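As a quick sanity check of the orders of magnitude invoked to neglect the $\dot{\nu}_j H_{ij}$ terms, one can redo the arithmetic explicitly; the strain value below is a typical assumption for in-band \gls{gw} signals, not a precise instrument parameter.
\begin{verbatim}
nu     = 281e12   # Hz, average laser frequency
nu_dot = 1e2      # Hz/s, in-band drift driven by laser noise
h      = 1e-21    # typical in-band GW strain (assumption)
d      = 8.3      # s, one-arm light travel time
H      = h * d    # s, GW-induced extra delay
H_dot  = h        # GW-induced fractional frequency shift

print(f"nu_dot * H ~ {nu_dot * H:.0e} Hz")  # ~1e-18 Hz, negligible
print(f"nu * H_dot ~ {nu * H_dot:.0e} Hz")  # ~3e-7 Hz, the signal
\end{verbatim}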
The factor $\dot d_{ij}(t) \delay{ij} \nu_j$ is often referred to as the \textit{Doppler shift}, and is proportional to the time derivative of the light \gls{tt}. \Cref{fig:orbits} shows the time variations of such quantities for realistic orbits \cite{Joffre:2020,Martens:2021phh}, of the order of \num{E-8} (or \SI{3}{\meter\per\second}).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/orbits}
\caption{Light travel time derivatives for realistic orbits.}
\label{fig:orbits}
\end{figure*}
The inter-spacecraft interferometer mixes the local and distant beams. The beatnote phase $\Phi^\text{isc}_{ij}$ can easily be expressed as the difference of the beam phases,
\begin{equation}
\Phi^\text{isc}_{ij} = \Phi_{i \leftarrow j} - \Phi_i
= \delay{ij} \Phi_j - \Phi_i - (\delay{ij} \nu_j) H_{ij}
\,\text{.}
\label{eq:isc-phase}
\end{equation}
In units of frequency, we have
\begin{equation}
\begin{split}
&\nu^\text{isc}_{ij} = (1 - \dot d_{ij}) \delay{ij} \nu_j - \nu_i - (\delay{ij} \nu_j) \dot H_{ij}
\qc
\label{eq:isc-freq}
\end{split}
\end{equation}
where the term $\dot d_{ij} \delay{ij} \nu_j$ is the Doppler shift.
In \cref{eq:isc-freq}, the main in-band contribution is laser noise, which does not cancel out\footnote{Even if lasers are locked such that there is only one laser noise, it is not sufficiently suppressed due to the large delays.} and remains orders of magnitude above the gravitational-wave signal $(\delay{ij} \nu_j) \dot H_{ij} \approx \SI{E-7}{\hertz}$. In order to detect and extract gravitational information from the measurements, laser noise must be reduced by at least 8 orders of magnitude.
\section{Residual noise due to Doppler shifts in TDI}
\label{sec:residual-in-tdi}
\Gls{tdi} is a technique proposed to reduce instrumental noises, including laser noise, to acceptable levels. The starting point for the main \gls{tdi} algorithm is usually to compute the so-called intermediary variables $\xi$ and $\eta$, which are used to remove spacecraft jitter noise and reduce the number of lasers to three. While we already consider only one laser per spacecraft, we will further neglect spacecraft jitter noise, such that we can directly write $\eta_{ij} = \Phi^\text{isc}_{ij}$ in phase, or $\eta_{ij} = \nu^\text{isc}_{ij}$ in frequency.
The next step is to reduce laser noise. Several laser-noise-reducing combinations have been proposed; for example, the second-generation Michelson variable $X_2$ reads \cite{Tinto:2021aa}
\begin{equation}
\begin{split}
&X_2 = (1 - \delay{121} - \delay{12131} + \delay{1312121}) (\eta_{13} + \delay{13} \eta_{31}) \\
&\quad - (1 - \delay{131} - \delay{13121} + \delay{1213131}) (\eta_{12} + \delay{12} \eta_{21})
\,\text{.}
\label{eq:X2}
\end{split}
\end{equation}
The two other Michelson variables $Y_2, Z_2$ are obtained by circular permutation of the indices $1 \rightarrow 2 \rightarrow 3 \rightarrow 1$.
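For concreteness, the polynomial of delay operators in \cref{eq:X2} can be encoded by folding chained delays over the $\eta$ measurements. The sketch below reuses the toy \texttt{delay\_op} defined earlier and is not the PyTDI interface; \texttt{d} is assumed to be a dictionary mapping index pairs such as \texttt{"12"} to sampled \glspl{tt}.
\begin{verbatim}
import numpy as np

def apply_chain(x, chain, d, t):
    """Apply D_{i1 i2 ... in} to x; rightmost delay acts first."""
    pairs = [chain[k:k + 2] for k in range(len(chain) - 1)]
    for ij in reversed(pairs):
        x = np.interp(t - d[ij], t, x)
    return x

def michelson_X2(eta, d, t):
    """Second-generation Michelson X2, cf. the equation above."""
    left  = eta["13"] + apply_chain(eta["31"], "13", d, t)
    right = eta["12"] + apply_chain(eta["21"], "12", d, t)
    out = np.zeros_like(t)
    for arm_sign, chains, arm in (
            (+1, ("", "121", "12131", "1312121"), left),
            (-1, ("", "131", "13121", "1213131"), right)):
        for k, c in enumerate(chains):
            coeff = +1 if k in (0, 3) else -1   # (1 - D - D + D)
            term = arm if c == "" else apply_chain(arm, c, d, t)
            out += arm_sign * coeff * term
    return out
\end{verbatim}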
In the following, we shall ignore any technical reasons for imperfect laser noise reduction, such as flexing-filtering coupling~\cite{Bayle:2018hnm}, interpolation errors or ranging errors, and only consider the maximum theoretical laser noise reduction achievable.
In the case of phase, we know that the residual laser noise in this variable is given by the non-commutation of delay operators~\cite{Bayle:2018hnm},
\begin{equation}\label{eq:commut}
X^\Phi_2 = \comm{\comm{\delay{131}}{\delay{121}}}{\delay{12131}} \Phi_1
\,\text{.}
\end{equation}
Expanding this expression to second order in the average \gls{tt} derivatives $\dot{d}$ and to first order in the average \gls{tt} second derivatives $\ddot{d}$, and assuming that these quantities are symmetric in $i$, $j$, the difference of the delays applied to the phase $\Phi_1$ in the two terms from \cref{eq:commut} reads
\begin{equation}
\Delta d = 8 \bar d \qty(\bar{\dot d}_{12}^2 - \bar{\dot{d}}_{31}^2) - 16 \bar{d}^2 \qty(\bar{\ddot{d}}_{12} - \bar{\ddot{d}}_{31})
\qc
\end{equation}
where the first term matches the results of~\cite{Bayle:2018hnm}. In terms of \gls{psd}, we have
\begin{equation}
\psd{X^\Phi_2}(\omega) = \omega^2 \Delta d^2 \psd{\Phi}(\omega)
\qc
\label{eq:X2-laser-psd-phase}
\end{equation}
where $\psd{\Phi}(\omega)$ is dominated by the \gls{psd} of the laser noise expressed in cycles.
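Plugging rough numbers into the expression for $\Delta d$ gives a feel for the size of this mismatch; the \gls{tt} second derivatives below are order-of-magnitude guesses consistent with the slow yearly breathing of the constellation, not values taken from a specific orbit file.
\begin{verbatim}
d_bar  = 8.3       # s, mean light travel time
dd_12  = 1.0e-8    # TT derivative, arm 12 (order of magnitude)
dd_31  = 0.5e-8    # TT derivative, arm 31 (assumption)
ddd_12 = 1e-15     # 1/s, TT second derivative (rough guess)
ddd_31 = 0.0

delta_d = (8 * d_bar * (dd_12**2 - dd_31**2)
           - 16 * d_bar**2 * (ddd_12 - ddd_31))
print(f"Delta d ~ {delta_d:.0e} s")  # picosecond-level mismatch
\end{verbatim}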
Now, let us assess the impact of Doppler shifts if one naively uses the traditional second-generation \gls{tdi} algorithm with measurements in units of frequency. For this, we can insert \cref{eq:isc-freq} in \cref{eq:X2}. The only structural difference between \cref{eq:isc-freq} and \cref{eq:isc-phase} is the additional Doppler term $\dot d_{ij} \delay{ij} \nu_j$. Because \gls{tdi} is a linear operation, we can immediately give the residual laser noise in terms of frequency when applying the same algorithm,
\begin{equation}
X^\nu_2 = \comm{\comm{\delay{131}}{\delay{121}}}{\delay{12131}} \nu_1 + \delta X^\nu_2
\qc
\label{eq:X2-freq}
\end{equation}
where $\delta X^\nu_2$ is a function of the Doppler shifts,
\begin{equation}
\begin{split}
\delta X^\nu_2 &= (1 - \delay{131} - \delay{13121} + \delay{1213131}) \\
&\qquad\qquad\qquad \times (\dot d_{12} \delay{12} \nu_2 + \dot d_{21} \delay{121} \nu_1) \\
&\quad - (1 - \delay{121} - \delay{12131} + \delay{1312121}) \\
&\qquad\qquad\qquad \times (\dot d_{13} \delay{13} \nu_3 + \dot d_{31} \delay{131} \nu_1)
\,\text{.}
\label{eq:delta-X2}
\end{split}
\end{equation}
A rough estimation of this Doppler coupling can be computed from $\delta X^\nu_2 \approx \bar{\dot d} \nu$. Plugging in orders of magnitude for the \gls{tt} derivatives and laser noise yields a Doppler coupling at $\SI{E-6}{\hertz}$, above the expected level for our \gls{gw} signals ($\SI{E-7}{\hertz}$). It is also above the level of the traditional residuals of \gls{tdi}, given by the first term of \cref{eq:X2-freq} and shown in \cref{fig:analytical-curves}. As a consequence, the \gls{psd} of the residual noise for the $X_2^\nu$ \gls{tdi} variable is dominated by the Doppler coupling,
\begin{equation}\label{eq:PSD-X2-freq}
\psd{ X^\nu_2}(\omega) \approx \psd{\delta X^\nu_2}(\omega)
\,\text{.}
\end{equation}
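The rough estimate $\delta X^\nu_2 \approx \bar{\dot d}\,\nu$ can be reproduced in two lines; the band-integrated laser frequency fluctuation used below is an order-of-magnitude assumption, not a precise noise budget entry.
\begin{verbatim}
d_dot = 1e-8   # typical TT derivative, cf. the orbits figure
nu_ln = 1e2    # Hz, band-integrated laser freq. noise (assumption)

print(f"Doppler coupling ~ {d_dot * nu_ln:.0e} Hz")  # ~1e-6 Hz
print("vs GW signal      ~ 1e-07 Hz")                # one order below
\end{verbatim}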
Assuming that all laser frequencies are uncorrelated, a more precise computation yields the \gls{psd} of this extra residual noise,
\begin{equation}
\begin{split}
\psd{\delta X^\nu_2}(\omega) &\approx 16 \psd{\nu} \sin[2](\omega \bar d) \sin[2](2 \omega \bar d)
\\
&\qquad \times \qty(\bar{\dot d}_{12}^2 + \bar{\dot d}_{31}^2 + (\bar{\dot d}_{12} - \bar{\dot d}_{31})^2)
\,\text{.}
\label{eq:delta-X2-psd}
\end{split}
\end{equation}
This is to be compared with the residual laser noise in terms of frequency when one disregards Doppler effects. It is given by replacing $\psd{\Phi}$ with $\psd{\nu}$ in \cref{eq:X2-laser-psd-phase},
\begin{equation}
\psd{[X^{\nu}_2]}(\omega)= \omega^2 \Delta d^2 \psd{\nu}(\omega)
\,\text{.}
\label{eq:X2-without-dopplers}
\end{equation}
In \cref{fig:analytical-curves}, we show those analytical curves alongside the usual \gls{lisa} Performance Model's \SI{1}{\pico\meter}-noise allocation curve, given by
\begin{equation}
\begin{split}
&\psd{X_2^\text{alloc}}(\omega) = 64 \omega^2 \sin[2](\omega \bar d) \sin[2](2 \omega \bar d) \\
&\qquad \times \qty(\frac{\SI{1}{\pico\meter\hertz^{-1/2}}}{\lambda})^2 \qty[1 + \qty(\frac{\SI{2e-3}{\hertz}}{\omega / 2 \pi})^4]
\,\text{.}
\label{eq:noise-allocation}
\end{split}
\end{equation}
The extra residual laser noise due to Doppler terms is above or at the same level as the \gls{gw} signal, and far above the usual laser noise residual when one disregards the Doppler effect. Therefore, a procedure to mitigate this effect is required if one wishes to use frequency measurements.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/analytical-curves}
\caption{Amplitude spectral density of the second generation \gls{tdi} combination when using measurements expressed in units of frequency. The blue curve shows the amplitude of Doppler-related terms, cf.~\cref{eq:delta-X2-psd}, the orange curve shows the amplitude of the delay commutators, cf.~\cref{eq:X2-without-dopplers}, while the red curve presents the usual \gls{lisa} \SI{1}{\pico\meter}-noise allocation, cf.~\cref{eq:noise-allocation}. The light travel times used in this simulation are presented in \cref{fig:orbits}.}
\label{fig:analytical-curves}
\end{figure}
\section{Adapting time-delay interferometry for Doppler shifts}
\label{sec:doppler-tdi}
As mentioned in the previous section, accounting for the Doppler effect in the inter-spacecraft beatnote frequency comes down to replacing the delay operator $\delay{ij}$ in \cref{eq:isc-phase} by $(1 - \dot d_{ij}) \delay{ij}$. We can formalize it by introducing the Doppler-delay operator,
\begin{equation}
\dotdelay{ij} = (1 - \dot d_{ij}) \delay{ij}
\qc
\label{eq:doppler-delay}
\end{equation}
such that laser noise entering \cref{eq:isc-freq} takes the same algebraic form as its phase counterpart \cref{eq:isc-phase},
\begin{equation}
\nu^\text{isc}_{ij} = \dotdelay{ij} \nu_j - \nu_i - (\delay{ij} \nu_j) \dot H_{ij}
\,\text{.}
\label{eq:isc-freq-doppler-delay}
\end{equation}
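In code, the Doppler-delay operator is a one-line modification of the toy \texttt{delay\_op} sketched earlier: delay first, then scale by the Doppler factor. Building $\dot{X}_2$ then amounts to using this operator everywhere in the combination, accumulating the $(1 - \dot d)$ factors along each chain.
\begin{verbatim}
import numpy as np

def doppler_delay_op(x, d, d_dot, t):
    """Doppler-delay operator:
    (dotD_ij x)(t) = (1 - d_dot_ij(t)) * x(t - d_ij(t))."""
    return (1.0 - d_dot) * np.interp(t - d, t, x)
\end{verbatim}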
We now introduce a new type of second generation \gls{tdi} combination by considering the standard expression from \cref{eq:X2} but using the Doppler-delay operators introduced in \cref{eq:doppler-delay}. The new \gls{tdi} variable reads
\begin{equation}
\begin{split}
&\dot{X}_2 = (1 - \dotdelay{121} - \dotdelay{12131} + \dotdelay{1312121}) (\eta_{13} + \dotdelay{13} \eta_{31}) \\
&\quad - (1 - \dotdelay{131} - \dotdelay{13121} + \dotdelay{1213131}) (\eta_{12} + \dotdelay{12} \eta_{21})
\,\text{.}
\label{eq:X2-for-dopplers}
\end{split}
\end{equation}
The algebraic form of this expression is now identical in phase and frequency, and we immediately recover the residual noise given in \cref{eq:X2-laser-psd-phase},
\begin{equation}
\dot{X}^\nu_2 = \comm{\comm{\dotdelay{131}}{\dotdelay{121}}}{\dotdelay{12131}} \nu_1
\,\text{.}
\end{equation}
A direct comparison with \cref{eq:X2-freq} demonstrates that the new \gls{tdi} variable introduced in \cref{eq:X2-for-dopplers} is not impacted by the Doppler noise $\delta X_2^\nu$.
To compute the \gls{psd} of the $\dot X_2^\nu$ residual laser noise, we study the commutator of Doppler-delay operators
\begin{equation}
y = \comm{\dotdelay{i_1 j_1} \dots \dotdelay{i_n j_n}}{\dotdelay{k_1 l_1} \dots \dotdelay{k_n l_n}}
\,\text{.}
\label{eq:doppler-delay-commutator}
\end{equation}
As one can observe in \cref{fig:orbits}, the light \gls{tt} derivatives evolve slowly with time, with $\ddot d \Delta t \sim 10^{-14} \ll \dot d \sim 10^{-8}$ if $\Delta t \sim \SI{10}{\second}$ is the timescale of the \glspl{tt} considered here. Therefore, we can assume that $\dot d$'s are constant when computing $y$. \Cref{eq:doppler-delay-commutator} can then be factored as
\begin{equation}
\begin{split}
y = \qty(\prod_{m=1}^n{(1-\dot{d}_{i_m j_m})})\qty(\prod_{m=1}^n{(1-\dot{d}_{k_m l_m})}) \times \\
\comm{\delay{i_1 j_1} \dots \delay{i_n j_n}}{\delay{k_1 l_1} \dots \delay{k_n l_n}}
\,\text{.}
\end{split}
\end{equation}
The factor that contains the \gls{tt} derivatives is a constant, which, to first order, deviates from 1 by $2 \bar{\dot d}n \approx \num{E-7}$. We can therefore neglect it when estimating the \gls{psd}. For this reason, the \gls{psd} of the laser noise residual for the new \gls{tdi} variable introduced in \cref{eq:X2-for-dopplers} is then given by
\begin{equation}\label{eq:psd-dot-X2}
\psd{ \dot X^\nu_2}(\omega)=\psd{[X^{\nu}_2]}(\omega)\, ,
\end{equation}
whose expression is explicitly given in \cref{eq:X2-without-dopplers}. A direct comparison with \cref{eq:PSD-X2-freq} shows that the \gls{psd} of the new ${\dot X}_2^\nu$ \gls{tdi} variable is not impacted by the unacceptably large contribution from $\delta X_2^\nu$.
The method presented in this section, which consists of replacing $\delay{ij}$ by $\dotdelay{ij}$ in the usual \gls{tdi} combinations to remove the effect of Doppler shifts, is very general and can be applied to any \gls{tdi} combination.
\section{Simulation results}
\label{sec:simulation}
Using LISANode~\cite{BayleThesis} and \texttt{lisainstrument}, a Python simulator based on LISANode, we simulated the interferometric measurements as frequency deviations from the average beatnote frequencies. These frequency deviations include only laser noise, which is Doppler-shifted during propagation. We assumed three free-running lasers for this study, and used a high sampling rate, such that the effects of onboard filtering appear off band. We used the same realistic orbits and light travel times as presented in \cref{fig:orbits}, and simulated \num{E7} samples, i.e., a bit less than \num{12} days.
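A heavily stripped-down version of such a simulation, reduced to a single link with white noise standing in for the cavity-stabilized laser noise shape, is sketched below; all values are placeholders and do not reflect the actual LISANode configuration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
fs, n = 4.0, 2**20                 # Hz, number of samples (toy)
t = np.arange(n) / fs
d_12 = 8.3 + 3e-8 * t              # s, linearly drifting arm (toy)
d_dot_12 = 3e-8                    # constant TT derivative

# White noise scaled to ~30 Hz/sqrt(Hz), cf. the cavity level
sigma = 30.0 * np.sqrt(fs / 2.0)
nu_1 = sigma * rng.standard_normal(n)
nu_2 = sigma * rng.standard_normal(n)

# Single-link beatnote in frequency (no GW term), cf. eq. above
nu_isc_12 = (1 - d_dot_12) * np.interp(t - d_12, t, nu_2) - nu_1
\end{verbatim}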
The \gls{tdi} processing was performed using PyTDI. In \cref{fig:simulation}, we compare two different scenarios using the same input data. The blue curve shows the \gls{asd} of the residual laser noise when the standard second-generation Michelson $X^\nu_2$ variable is used. We superimpose the model for the expected excess of noise $\delta X^\nu_2$ due to the Doppler effect given in \cref{eq:delta-X2-psd}, and check that it matches our simulated results. Alternatively, the orange curve shows the \gls{asd} of the residual laser noise when the Doppler-corrected second-generation Michelson $\dot{X}^\nu_2$ variable is used. It is superimposed with the analytical expectation given in \cref{eq:psd-dot-X2} in most of the band, until we reach a noise floor around \SI{2E-12}{\hertz\per\sqrt{\hertz}}. This noise floor is in agreement with the numerical accuracy typically achieved in our simulations.
These simulations confirm the analytical results developed in the previous section. In particular, they show that the residual noise of the new \gls{tdi} variable introduced in \cref{eq:X2-for-dopplers} is similar to the one obtained with the standard \gls{tdi} combinations when the Doppler effect is neglected. In other words, the new \gls{tdi} variable efficiently corrects for the Doppler contribution, which otherwise induces an unacceptably large noise.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/simulation}
\caption{ Amplitude spectral density of the residual laser noise in $X_2^\nu$ obtained using data in units of frequency, with the traditional algorithms (in blue) and Doppler correction (in orange). The theoretical models from Eqs.~(\ref{eq:delta-X2-psd}) and (\ref{eq:X2-without-dopplers}) are superimposed as black dashed lines. These curves need to be compared with the $\SI{1}{\pico\meter}$-noise allocation (in red).}
\label{fig:simulation}
\end{figure*}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we show that the \gls{tdi} combinations found in the literature \cite{Tinto:2021aa} do not reduce laser noise to required levels when applied to data in units of frequency, and we provide an analytical formulation of the additional residual noise. We then propose a technique to adapt existing \gls{tdi} combinations to data in units of frequency. We show through analytical studies, as well as with numerical simulations, that we recover the original laser-noise reduction performance, compatible with the requirements to detect and exploit \gls{gw} signals.
\Gls{tdi} is required to suppress primary noises in the interferometric measurements to levels below that of \gls{gw} signals. Existing formulations are based on the assumption that these measurements are expressed in terms of phase, or disregard the impact of Doppler shifts when data in frequency are used \cite{Tinto:2021aa}. However, applying these \gls{tdi} algorithms to data in units of frequency yields extra noise residuals due to the Doppler shift induced by the time variation of the arm lengths. These extra noise residuals are larger than the \gls{gw} signals. To account for Doppler shifts, we reformulate the \gls{tdi} combinations by replacing delay operators by their Doppler equivalents, which not only shift measurements in time but also scale them by the corresponding Doppler factor, see \cref{eq:doppler-delay}. We show that this general procedure yields new \gls{tdi} combinations, whose performance when applied to measurements in frequency matches that of the traditional combinations when working in units of phase.
This is a major result for the study of the impact of different physical units in \gls{lisa} data processing. We show that laser noise reduction can reach similar levels using phase or frequency measurements. Nevertheless, computing the \gls{tdi} variables using frequency measurements requires knowledge of both the \glspl{tt} and their time derivatives, while only the \glspl{tt} are needed to construct \gls{tdi} variables using phase measurements. This might impact the development of a Kalman filter whose goal is to provide an estimate of the \glspl{tt} \cite{Wang:2014aa,Wang:2015aa}. In addition, it is known that the clocks of the various spacecraft will drift with respect to each other because of relativistic effects \cite{pireaux:2007sh} and because of clock noise. Therefore, the \gls{lisa} pre-processing will also include a synchronization of the clocks of the 3 spacecraft \cite{Tinto:2021aa}. How this synchronization will impact the construction of \gls{tdi} variables is currently under exploration and might differ if one uses phase or frequency units. A detailed study of the interplay of \gls{tdi} with clock synchronization is left for a dedicated work. Finally, let us mention that using frequency units to perform the data analysis of \gls{lisa} may also impact the source-parameter inference, since the \gls{tdi} response function used in Bayesian algorithms may have to include the currently neglected Doppler correction.
\section*{Acknowledgments}
The authors are grateful to the SYRTE Theory and Metrology Group, in particular Aurélien Hees, Marc Lilley, Peter Wolf, and Christophe Le Poncin-Lafitte, for the useful discussions and suggestions to improve the presentation of the article.
JBB was supported by an appointment to the NASA Postdoctoral Program at the Jet Propulsion Laboratory, California Institute of Technology, administered by Universities Space Research Association under contract with NASA. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
OH and MS gratefully acknowledge support by the Deutsches Zentrum für Luft- und Raumfahrt (DLR) with funding from the Bundesministerium für Wirtschaft und Technologie (Project Ref. Number 50 OQ 1801, based on work done under Project Ref. Number 50 OQ 1301 and 50 OQ 0601). This work was also supported by the Max-Planck-Society within the LEGACY (“Low-Frequency Gravitational Wave Astronomy in Space”) collaboration (M.IF.A.QOP18098).
Photoemission \textcolor{black}{spectroscopy} is an essential \textcolor{black}{experimental} tool to characterize the electronic structure of a system. In particular, it can be used to trace phase transitions, which are especially important in strongly correlated systems. Indeed, one of the most
fascinating phenomena characterizing the physics of these systems is undoubtedly the Mott-Hubbard metal-to-insulator
transition (MIT)\cite{Mott_RMP68}. Here, the appearance of an
insulating state is a direct consequence of the strong
Coulomb repulsion, rather than of the underlying electronic
band structure. Systems at the edge of a metal-insulator
transition exhibit a wealth of exotic properties thanks to their
high sensitivity to external parameters (carrier
concentration, temperature, external magnetic
field), which makes them easy to manipulate. Therefore, besides the
interesting fundamental physics, also the possible
technological applications are plentiful.
Nowadays very accurate and detailed photoemission spectra can be measured.
\textcolor{black}{On the other hand,} theory is crucial for the analysis of the experiments as well as the prediction of material properties.
In particular, so-called first-principles
methods, such as Density Functional Theory (DFT) \cite{HK} and Many-Body Perturbation Theory (MBPT) based on Green's functions \cite{fetterwal}, \textcolor{black}{have the potential} to be predictive, since no empirical or adjustable parameters are involved.
However, standard implementations of these methods are known to work reasonably well for weakly to moderately correlated materials, such as metals and standard semiconductors (e.g., Si or GaAs) \cite{Schilfgaarde} but to fail for most strongly correlated systems \cite{stefano}. A paradigmatic example of this kind of materials is paramagnetic NiO, which is predicted to be a metal by standard approximations. This of course sets limits on the description and prediction of metal-insulator-phase transitions. Going beyond existing approximations is a challenge both from a fundamental \cite{romaniello2009,romaniello2012} and a practical point of view \cite{stan,Tandetzky}.
We have recently investigated the extended Koopmans' theorem (EKT) \cite{morrell_JCP1975,smith_JCP1975} as a promising \textcolor{black}{method} to describe photoemission in solids and, in particular, in strongly correlated systems \cite{stefano_JCP2015,stefano,stefano_JCTC,stefano_PRR2021,frontiers_2021,stefano_PRB2022}. The EKT can be used with any theory \textcolor{black}{that yields} the one- and two-body reduced density matrices (1-RDM and 2-RDM, respectively), which are the essential ingredients of this approach. \cite{kent_PRB1998,
doi:10.1021/acs.jctc.1c00100,
pernal_CPL2005,
Leiva200645} In particular within reduced density matrix functional theory (RDMFT) \cite{PhysRev.97.1474,PhysRevB.12.2111,Pernal_TOPCURRCHEM2015}, the EKT approach is based on a simple matrix diagonalization. However, even with exact density matrices, the EKT tends to overestimate band gaps, with the deviation from experiment increasing with increasing electron correlation. This error is amplified by the use of approximate density matrices \cite{frontiers_2021}. Improvements can be obtained by designing better density matrix approximations, or by going beyond the ``quasiparticle ansatz" at the core of the EKT equations, or both.
In general, designing new approximate density matrices for solids is a difficult task because most of the available approximations are designed for molecules and their extension to solids is not straightforward.
We have recently proposed to introduce electron screening in standard density matrix approximations available for solids \textcolor{black}{since it is crucially important to describe many-electron systems}. For example, in the context of many-body perturbation
theory (MBPT) based on Green’s functions, the improvement
of the $GW$ approximation over Hartree-Fock is precisely \textcolor{black}{thanks}
to the screening of the Coulomb interaction.
However, although \textcolor{black}{the inclusion of} screening in standard density matrix approximations reduces the gap, its effect is too large, which results in a zero gap in semiconductors and insulators \cite{stefano_PRB2022} (as an example, the PES of bulk Si is reported in the Supporting Information).
Instead, in this article, we focus on the improvement of the EKT \textcolor{black}{itself by directly including electron screening in the EKT equations}.
\textcolor{black}{We will show that this approach leads to much improved photoemission spectra for both weakly and strongly correlated materials.}
Using the EKT within the basis of natural orbitals, i.e., the orbitals which diagonalize the one-body density matrix, the spectral function, which is related to photoemission spectra, can be written as
$A(\omega)=\sum_i \left[n_i\delta(\omega-\epsilon_i^R)+(1-n_i)\delta(\omega-\epsilon_i^A)\right]$,
with $n_i$ the occupation number of state $i$ \cite{frontiers_2021}. The removal and addition energies $\epsilon^{R}_i$ and $\epsilon^{A}_i$, respectively, are given by \footnote{To be precise Eqs \eqref{eq_meetrem} and \eqref{eq_meetadd}, and the corresponding spectral function, are obtained within the so-called diagonal approximation to the EKT (DEKT). We have shown that within the available approximations to the 1-RDM and 2-RDM the DEKT and EKT give essentially the same result in solids \cite{frontiers_2021}.}
\begin{eqnarray}
\label{eq_meetrem}
\epsilon^{R}_i &=& h_{ii} + \sum_jV_{ijij}n_j + \frac{1}{n_i}\sum_{jkl}V_{ijkl}\Gamma^{(2)}_{\text{xc},klji}, \\
\label{eq_meetadd}
\epsilon^{A}_i &=& h_{ii} + \sum_jV_{ijij}n_j \nonumber\\ &&-\frac{1}{1-n_i}\left[\sum_jV_{ijji}n_j -\sum_{jkl}V_{ijkl}\Gamma^{(2)}_{\text{xc},klji}\right],
\end{eqnarray}
where $h_{ij}=\int d\mathbf{r} \phi_i^{*}(\mathbf{r}) h(\mathbf{r}) \phi_j(\mathbf{r})$ and $V_{ijkl}=\int d \mathbf{r} d \mathbf{r}' \phi_i^{*}(\mathbf{r})\phi_j^{*}(\mathbf{r}')V(\mathbf{r}-\mathbf{r}')\phi_k(\mathbf{r})\phi_l(\mathbf{r}')$ are the matrix elements of the single-particle hamiltonian $h(\mathbf{r})=-\nabla_{\mathbf{r}}^2/2+V_{\text{ext}}(\mathbf{r})$, with $V_{\text{ext}}(\mathbf{r})$ the external potential created by atomic nuclei, and the Coulomb interaction $V(\mathbf{r})=1/|\mathbf{r}|$, respectively. The 2-RDM is defined as $\Gamma_{klji}^{(2)}=\bra{\Psi_0}c_i^{\dagger}c_j^{\dagger}c_kc_l\ket{\Psi_0}$, where $c_i$ ($c_i^{\dagger}$) is the annihilation (creation) operator of an electron in orbital $i$ and $\ket{\Psi_0}$ is the ground-state many-body wavefunction. The exchange-correlation part of the 2-RDM reads $\Gamma_{\text{xc},klji}^{(2)}=\Gamma_{klji}^{(2)}-n_in_j\delta_{ik}\delta_{jl}$ and has to be approximated in practice. In this paper we use the power functional (PF) $\Gamma_{\text{xc},klji}^{(2)}=-n_i^{\alpha}n_j^{\alpha}\delta_{il}\delta_{jk}$, where $0.5\le\alpha\le1$. This functional provides an interpolation between the so-called M\"uller functional ($\alpha=0.5$), which has a tendency to overcorrelate, and Hartree-Fock ($\alpha=1$), which neglects correlation. The values suggested in the literature usually vary between 0.55 and 0.7 \cite{PhysRevB.78.201103, PhysRevA.79.040501}. In most of the works in literature a value of $\alpha=0.65$ is used for real solids.
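To make the structure of Eqs \eqref{eq_meetrem} and \eqref{eq_meetadd} with the PF concrete, a minimal numerical sketch is given below. With the PF, the xc contraction collapses to $-n_i^\alpha \sum_j V_{ijji} n_j^\alpha$, which the code exploits; array layouts and variable names are our own conventions, and occupation numbers at exactly 0 or 1 would need special handling.
\begin{verbatim}
import numpy as np

def ekt_pf(h_diag, V_H, V_x, n, alpha=0.65):
    """Diagonal EKT removal/addition energies with the PF.

    h_diag : (M,) one-body matrix elements h_ii
    V_H    : (M, M) Hartree-like elements V_ijij
    V_x    : (M, M) exchange-like elements V_ijji
    n      : (M,) natural occupation numbers in (0, 1)
    """
    hartree = V_H @ n                       # sum_j V_ijij n_j
    xc = -(n**alpha) * (V_x @ n**alpha)     # sum_jkl V Gamma_xc
    eps_R = h_diag + hartree + xc / n
    eps_A = h_diag + hartree - (V_x @ n - xc) / (1.0 - n)
    return eps_R, eps_A
\end{verbatim}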
Equations \eqref{eq_meetrem} and \eqref{eq_meetadd}, within the PF approximation to $\Gamma_{\text{xc}}$, give the qualitatively correct picture in correlated solids, but the fundamental band gap is very much overestimated \cite{stefano,stefano_PRR2021}.
\textcolor{black}{We note that the EKT is designed to capture quasiparticle peaks in the photoemission spectra but not satellites because it only explicitly considers one-hole and one-electron excitations. However, the EKT can be generalized to two electrons-one hole and two holes-one electron excitations (EKT-3) (and beyond) to also describe satellites. The explicit inclusion of electron-hole excitations can also improve the quasiparticle energies as these excitations capture part of the screening of the added hole or electron~\cite{Lee_JCTC2021}.
However, an important drawback of the EKT-3 approach is that it yields equations that depend also on the 3-RDM and 4-RDM, which makes EKT-3 computationally very expensive. Moreover, it requires practical approximations to the 3-RDM and 4-RDM, which are not available for solids.}
\textcolor{black}{In this work we propose a method that includes the screening of the added particle (hole or electron) while using only the 1-RDM and 2-RDM.
We achieve this in a similar way as one can obtain the $GW$ approximation from the HF approximation, i.e., we replace the bare Coulomb potential in the exchange-correlation part of the EKT equations by the screened Coulomb potential. This leads to the screened extended Koopmans' theorem (SEKT). The SEKT equations are thus given by}
\begin{eqnarray}
\epsilon^{R}_i &=& h_{ii} + \sum_jV_{ijij}n_j + \frac{1}{n_i}\sum_{jkl}W_{ijkl}\Gamma^{(2)}_{\text{xc},klji},\label{Eqn:EKT_R_mod} \\
\label{Eqn:EKT_A_mod}
\epsilon^{A}_i &=& h_{ii} + \sum_jV_{ijij}n_j\nonumber
\\ &&-\frac{1}{1-n_i}\left[\sum_jW_{ijji}n_j -\sum_{jkl}W_{ijkl}\Gamma^{(2)}_{\text{xc},klji}\right],
\end{eqnarray}
where $W=\varepsilon^{-1} V$ is the statically screened Coulomb interaction, with $\varepsilon$ the dielectric function.
\textcolor{black}{The SEKT is further motivated by the following two arguments}:
i) a general screening of the form $W_{ijkl}=\beta_iV_{ijkl}$ ($0<\beta_i<1$) can reproduce some of the effects of higher order RDMs \cite{stefano}; ii) Eqs \eqref{Eqn:EKT_R_mod}-\eqref{Eqn:EKT_A_mod} reduce to the screened exchange (SEX) equations of MBPT for single Slater determinants. In this case, indeed, the exchange-correlation part of the 2-RDM can be factorized as $\Gamma_{xc,klji}^{(2)}=-n_in_j\delta_{il}\delta_{jk}$ with the natural occupation numbers $n_i$ being zero or one, and this results in $\epsilon^{R}_i=\epsilon^{A}_i= h_{ii} + \sum_jV_{ijij}n_j - \sum_{j}W_{ijji}n_j$, which corresponds to the poles of the one-body Green's function
obtained using the (static) screened exchange self-energy. It therefore becomes clear that, with the power functional approximation to the 2-RDM, Eqs \eqref{Eqn:EKT_R_mod}-\eqref{Eqn:EKT_A_mod} tend to the SEX energy equations for weakly correlated systems, which are characterized by occupation numbers close to zero or one. We will now show that the SEKT, besides describing correctly the PES of weakly correlated systems, can reproduce reasonably good PES (although some important deviations remain) for strongly correlated systems, which are characterized by highly fractional natural occupation numbers.
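The SEKT of Eqs \eqref{Eqn:EKT_R_mod} and \eqref{Eqn:EKT_A_mod} then only replaces the bare exchange matrix by its screened counterpart, while the Hartree term stays bare; a sketch reusing the conventions of the previous snippet:
\begin{verbatim}
def sekt_pf(h_diag, V_H, W_x, n, alpha=0.65):
    """SEKT energies: bare Hartree term, screened xc/exchange.

    W_x : (M, M) screened exchange elements W_ijji
    """
    hartree = V_H @ n
    xc = -(n**alpha) * (W_x @ n**alpha)
    eps_R = h_diag + hartree + xc / n
    eps_A = h_diag + hartree - (W_x @ n - xc) / (1.0 - n)
    return eps_R, eps_A
\end{verbatim}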
We have implemented the EKT and SEKT equations in a modified version of the full-potential linearized augmented plane-wave code Elk \cite{elk, PhysRevB.78.201103}.
In order to build the screened Coulomb exchange matrix elements $W_{ijji}$, we first calculate the static screening matrix in reciprocal space using the random-phase approximation (RPA); the matrix elements in the NO basis are then obtained as
\eq{
W_{ijji} =\frac{1}{\Omega N_q}\sum_{\mathbf{q}\mathbf{G}\bfG'} W_{\mathbf{G}\bfG'} (\mathbf{q}) \bra{j} e^{-i(\mathbf{q}+\mathbf{G})\cdot\mathbf{r}}\ket{i}^{*} \\
\times \bra{j} e^{-i(\mathbf{q}+\mathbf{G}')\cdot\mathbf{r}}\ket{i} \delta_{\mathbf{q},\mathbf{k}_i-\mathbf{k}_j}, \nonumber
}
where $i=(\tilde{i},\mathbf{k}_i)$ is a generalized index that comprises the band
index $\tilde{i}$ and the wave vector $\mathbf{k}_i$, $\Omega$ and $N_q$ are the unit cell volume and the number of points in the Brillouin zone sampling, $\mathbf{G}$ is a reciprocal lattice vector, $\mathbf{q}$ \textcolor{black}{is a vector that} belongs to the first Brillouin zone, $W_{\mathbf{G},\mathbf{G}'} (\mathbf{q})$ is the Fourier
transform of the statically screened Coulomb interaction $W(\mathbf{r},\mathbf{r}')$, and the oscillator strengths are \eq{
\bra{i} e^{-i(\mathbf{q}+\mathbf{G})\cdot\mathbf{r}}\ket{j} = \int \d\mathbf{r} \phi_i^{*}(\mathbf{r})e^{-i(\mathbf{q}+\mathbf{G})\cdot\mathbf{r}}\phi_j(\mathbf{r}). \nonumber
}
The plane-wave cut-off $G_{\text{max}}$ is chosen by requiring $rG_{\text{max}}=10\text{ a.u.}$, where $r$ is the muffin-tin radius.
More details about the protocol used for the calculations can be found in Ref.~\onlinecite{frontiers_2021}.
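Schematically, each allowed ($\mathbf{q} = \mathbf{k}_i - \mathbf{k}_j$) contribution to $W_{ijji}$ is a quadratic form of the oscillator strengths with the screened Coulomb matrix, as sketched below; the oscillator strengths are assumed precomputed, and the function and variable names are illustrative only.
\begin{verbatim}
import numpy as np

def w_ijji(W_GG, M_G, omega_cell, n_q):
    """W_ijji at fixed q = k_i - k_j.

    W_GG : (nG, nG) statically screened Coulomb W_{G,G'}(q)
    M_G  : (nG,) oscillator strengths <j| e^{-i(q+G).r} |i>
    """
    # (1 / (Omega N_q)) sum_{G,G'} W_{G,G'} conj(M_G) M_{G'}
    val = np.vdot(M_G, W_GG @ M_G) / (omega_cell * n_q)
    return val.real  # real for Hermitian static screening
\end{verbatim}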
We apply our method to two classes of systems: bulk LiH and Si as examples of weakly correlated systems, and paramagnetic (PM) and antiferromagnetic (AFM) NiO as examples of strongly correlated systems. We note that the paramagnetic phase is modelled as nonmagnetic (NM); therefore, in the following, paramagnetic NiO will be referred to as NM NiO.
For the simple semiconductors, LiH and Si, we use the local-density approximation (LDA) energies and wavefunctions to calculate the random-phase approximation (RPA) screening. For AFM NiO the LDA band gap is too small. One can hence envisage using a self-consistent procedure, as is done in \textcolor{black}{eigenvalue self-consistent} $GW$: starting from the LDA to build the screening used in the \textcolor{black}{S}EKT equations, then using the \textcolor{black}{S}EKT band structure to rebuild the screening, and so on. Since our purpose is to show the validity of the SEKT equations, in this work we build the RPA screening by employing LDA+$U$ \textcolor{black}{and} a scissors correction that gives a reasonable band gap compared to experiment.
We use the around mean field double-counting correction \cite{Bultmark_PRB2009} and a $U$ parameter of 5 eV for the Ni $d$ electrons.
The scissors correction is 2 eV.
In the case of NM NiO we cannot construct a good RPA screening using LDA+$U$, since this approach \textcolor{black}{does not} open a gap in the partially filled $e_g$ bands. Therefore, we use the screening of the AFM phase also for the NM \textcolor{black}{phase}, such that all the calculations on NM NiO are performed in the AFM unit cell.
This is a reasonable approximation since the magnetic order has little effect on the photoemission spectrum of NiO ~\cite{Tjernberg_PRB96,Hughes_2008,PhysRevB.66.064434}.
The lattice parameters used in this work are
4.07 $\textup{\AA}$ for LiH,
5.43 $\textup{\AA}$ for Si, and
8.34 $\textup{\AA}$ for NiO.
\begin{figure}
\centering
\includegraphics[width=0.96\columnwidth]{DOS.pdf}
\caption{Spectral function of bulk LiH, Si, NM NiO and AFM NiO: comparison of the EKT$@$PF and SEKT$@$PF. We used $\alpha=0.65$ in the PF. Note that the SEKT@PF result for AFM NiO is plotted only up to $\approx$15 eV since we used only a few empty bands for computational reasons. The experimental band gap of LiH \cite{PhysRevB.75.035204} is indicated with a dashed vertical line. The experimental spectra are taken from Refs.~\onlinecite{PhysRevB.40.9644} and \onlinecite{PhysRevLett.53.2339}.}
\label{fig_semi}
\end{figure}
In Fig.~\ref{fig_semi} we report the spectral functions of bulk LiH, Si, NM NiO and AFM NiO.
\textcolor{black}{We observe that} the EKT gives a large overestimation of the band gap for all these systems, but the valence part of the spectrum is well reproduced.
\textcolor{black}{The inclusion of screening in our SEKT equations dramatically improves the results.}
With the SEKT we obtained the following values for the fundamental band gap:
5.25 (4.99) eV for LiH,
1.63 (1.12) eV for Si,
1.90 (4.3) eV for NiO NM, and
2.45 (4.3) eV for NiO AFM, with the corresponding experimental gap given in parentheses \cite{PhysRevB.75.035204, PhysRevLett.53.2339}.
\begin{figure*}
\centering
\includegraphics[width=0.96\columnwidth]{PDOS.pdf}
\includegraphics[width=0.96\columnwidth]{PDOS_2.pdf}
\caption{Projected spectral function of bulk LiH, Si, NM NiO and AFM NiO for the SEKT@PF and EKT@PF results. The spectral function is projected onto $s$, $p$ and $d$ states for LiH and Si. For NiO $d$ states are resolved into $t_{2g}$ and $e_g$ states.}
\label{fig_PDOS}
\end{figure*}
We observe that the introduction of the screening has no significant effect on the valence bandwidth of LiH, while for Si we obtain a reduction of the bandwidth, which gives better agreement with experiment.
For NiO the situation is quite different: the screening produces a stretching of the valence bands. Moreover, we observe a separation of the O-$2p$ and Ni-$d$ bands in the valence region.
The band gap is underestimated, since Ni-$s$ states are ``lowered" in energy while Ni-$e_g$ states remain too high in energy. It is interesting to analyze these two different trends: while in LiH and Si the screening introduces a kind of rigid shift of all the bands, which have predominantly $s$/$p$ character, in the case of NiO it acts differently on the various bands in the band-gap region, which is a mixture of Ni $s$, $p$, $d$ orbitals and O-$2p$ orbitals.
This can be explained by analyzing the two main contributions to the SEKT equations, namely, the contribution from the occupation numbers and the contribution from the Coulomb matrix elements. Fractional occupation numbers can make the second (negative) term in Eq. (\ref{Eqn:EKT_R_mod}) large, which, upon application of the screening, induces a larger shift than in the case of occupation numbers close to 1. Large Coulomb matrix elements have a similar effect (one can reasonably assume that matrix elements are larger for localised states); indeed, the relative position of contributions from bands with similar occupation numbers but different nature (e.g., localized or delocalized) changes upon applying the screening, which indicates the importance of the Coulomb matrix elements. A similar analysis can be done for the addition energies. This suggests improving the screening in strongly correlated materials by going beyond the RPA or introducing corrections to the SEKT based on the nature of the bands. For example, one could separate the bands into strongly occupied (occupancies larger than 0.5) and weakly occupied (occupancies smaller than 0.5), in the same spirit as the corrections proposed by Gritsenko \textit{et al.} to remedy the overcorrelation of the M\"{u}ller functional \cite{bbc}, and use a different screening for these two classes of orbitals (RPA for weakly occupied and beyond-RPA for strongly occupied \cite{Kresse_PRL07}). This work is currently in progress.
As a final remark, we notice that the SEKT opens an unphysical band gap in the homogeneous electron gas (HEG) (as shown in the Supporting Information), which we expect to be closed using more advanced approximate density matrices. This also suggests looking for better approximations to the 1- and 2-RDM.
In conclusion, we presented an approach which can describe the band-gap opening in weakly as well as strongly correlated gapped materials. Although improvements are still needed, this is a remarkable result for \textit{ab-initio} methods and opens the way to a unified description of photoemission spectra in weakly as well as strongly correlated systems.
This study has been supported through the EUR grant NanoX ANR-17-EURE-0009 in the framework of the ``Programme des Investissements d'Avenir" and by ANR (project ANR-18-CE30-0025 and ANR-19-CE30-0011).
\section{Spectral function of bulk Si within the (S)EKT}
In Fig.~\ref{fig_comp} we compare the experimental photoemission spectra of bulk Si with the spectral function calculated at various levels of approximation, namely the EKT within the power functional approximation to the 2-RDM (EKT$@$PF), the screened EKT within the power functional approximation to the 2-RDM (SEKT$@$PF), and the EKT within the screened power functional approximation to the 2-RDM (EKT$@$WPF). The EKT$@$PF spectral function shows a large overestimation of the band gap, whereas the EKT$@$WPF spectral function has no band gap. The SEKT$@$PF spectral function instead compares very well with experiment.
\begin{figure*}
\centering
\includegraphics[width=0.43\columnwidth]{DOS_2.pdf}
\caption{Spectral function of bulk Si: comparison of the EKT$@$PF, SEKT$@$PF and EKT$@$WPF. We used $\alpha=0.65$ in the PF and WPF. The experimental spectrum is taken from Ref.~\citenum{PhysRevB.40.9644}}
\label{fig_comp}
\end{figure*}
\section{Spectral function of the HEG within the EKT}
\begin{figure*}
\centering
\includegraphics[width=0.96\columnwidth]{SEKT_HEG.pdf}
\caption{Quasiparticle dispersion $\epsilon(k)/k_F^2$ for the HEG at $r_s = 3$: EKT@W-PF, EKT@PF, and SEKT@PF are compared with EKT@QMC results
extracted from Ref. \citenum{PhysRevB.90.035125} and QMC quasiparticle dispersion from Ref. \onlinecite{PhysRevLett.127.086401}. We used $\alpha=0.55$ both for PF and W-PF. The free-electron and Hartree-Fock dispersions are also reported.}
\label{fig_HEG}
\end{figure*}
In Fig.~\ref{fig_HEG} we compare the
quasiparticle dispersion of the HEG for EKT@W-PF, EKT@PF, and SEKT@PF with EKT@QMC results extracted from Ref.~\onlinecite{PhysRevB.90.035125} and QMC quasiparticle dispersion from Ref.~\citenum{PhysRevLett.127.086401}.
We note that EKT@PF opens an unphysical band gap, which is much reduced, but still present, in SEKT@PF.
\IEEEPARstart{W}{ith} the rapid development of sensing, computing, and communication technologies, the internet of things (IoT) is a popular solution to problems in industry, agriculture, energy, transportation, etc. However, privacy issues in IoT are a significant concern, often raised due to the intrusive behavior of sensors \cite{yang2017survey}. Specifically for the internet of vehicles (IoV), it massively parallels each vehicle and the various sensors it carries, including global positioning system (GPS), radar, camera, light detection and ranging (LiDAR), etc., enabling pedestrian detection \cite{cao2021handcrafted}, automated driving \cite{kuutti2018survey}, mobility digital twins \cite{wang2022mobility}, and other transportation applications. Federated learning (FL) has received extensive attention for protecting user privacy by sharing only model weights and not exposing users' raw data. FL is widely known for its successful business case in Google mobile keyboard prediction \cite{hard2018federated}. Nowadays, it has also become one of the mainstream and thriving solutions for privacy protection and efficient learning.
\subsection{Federated Learning and Related Work}
\label{Sec. Federated Learning and Related Work}
FL is a potentially feasible solution to the privacy problem in IoT, as it avoids the proliferation, distribution, and exchange of local client data by sharing only the model parameters obtained after training on local data. FL frameworks are widely used in healthcare \cite{dayan2021federated,rieke2020future}, industrial \cite{hao2019efficient,lu2019blockchain}, IoV \cite{du2020federated,kong2021federated}, etc., due to their usage of large-scale and personalized data in an efficient and privacy-preserving way. Although FL has significant contributions to massively parallel devices and computations, it still has a notable drawback in that it cannot efficiently handle non-independent and identically distributed (non-i.i.d.) data. It is required to customize the applicable FL framework according to the features, resources, and constraints possessed by users, data, clients, and servers.
Non-i.i.d. data and heterogeneity have always been a challenge and a key research topic in FL \cite{sattler2019robust,karimireddy2020scaffold,horvath2021fjord}. Non-i.i.d. data is a common phenomenon for real-world clients that are scattered and not interoperable: taking IoV as an example, each driver is heterogeneous as a client. FedAvg \cite{mcmahan2017communication}, as one of the first proposed feasible methods, has been the subject and center of research. FedAvg averages all local models to obtain the global model, so a local model may deviate far from the global optimum in the parameter space, which leads to some limitations of FedAvg. It is necessary to ensure that the local model does not deviate from the global model (to prevent overfitting) and, simultaneously, that the local model can effectively learn the local client dataset (to prevent underfitting). Based on FedAvg, FedProx \cite{li2020federatedFedProx} was proposed to limit the deviation of the local model from the global model by adding a proximal term.
Besides considering accuracy, the FL framework in IoT should not underestimate communication and training resource constraints, cybersecurity, and ubiquity. Some of the recent surveys summarized challenges, threats, and solutions of the FL decentralization paradigm for IoT, including limited computing power, unreliable and limited availability, local training, accuracy, communication overhead, etc. \cite{ghimire2022recent,li2020federatedchallenge,niknam2020federated,lyu2020threats,kairouz2021advances,li2021survey}.
Transfer and edge learning are popular solutions to reduce communication resource consumption in FL frameworks. Zhang \textit{et al.} \cite{zhang2022privacy} performed a federated transfer learning framework to detect driver drowsiness, where transfer learning was employed to save communication cost in the FL framework. Su \textit{et al.} \cite{su2021secure} introduced edge servers as a collaborative mechanism, where local models were aggregated in the edge server and then sent to the global server to aggregate the global model. The benefit of the additional edge server was that the communication overhead between massively parallel clients and the edge server was reduced, because the edge server was geographically close to the clients. High latency and intermittent connections could thus be mitigated. In addition, the edge server could also provide personalized aggregated local models due to the similarity of geographically adjacent clients.
Cyber attack is a problem that cannot be ignored for FL frameworks. Sun \textit{et al.} \cite{sun2021data} developed an attack method for FL framework in IoT, in which a bi-level optimization framework was proposed to compute optimal poisoning attacked FL framework, including direct, indirect, and hybrid attacks. Meanwhile, Zhang \textit{et al.} \cite{zhang2020poisongan} utilized a generative adversarial network (GAN)-based approach to attack the FL framework, especially since the attacker did not need any prior knowledge to carry out the attack.
Personalization is a common approach for FL frameworks to improve applicability for diverse users \cite{tan2022towards}. Fallah \textit{et al.} \cite{fallah2020personalized} proposed a personalized variant of the FL, which allowed clients to perform several gradient descent iterations on an initial global model using local data to obtain a personalized local model. Wu \textit{et al.} \cite{wu2020fedhome} explored a cloud edge-based personalized FL framework for in-home health monitoring, which addressed the problem that a single global model performed poorly on a specific client. Since the global model could only capture the common features of all clients, it lacked the ability to analyze fine-grained information of specific clients.
\subsection{Federated Learning in Driver Monitoring Applications}
\label{Sec. Federated Learning in Driver Monitoring Applications}
Driver monitoring application (DMA) in IoV is adopted as the research direction in this paper due to its real and visual image data, valuable application scenarios, and relatively blank research area. DMA also has challenges in terms of driver privacy issues, communication, and diversity and personalized driver behavior. Related DMA literature covers a wide variety of devices with algorithms to achieve different purposes, such as dangerous state detection \cite{kashevnik2019methodology}, driver emotion recognition \cite{zepf2020driver}, driver lane change inference \cite{xing2019driver}, etc. Compared to other methods \cite{masood2020security,kuutti2020survey,ramzan2019survey}, FL not only highlights efficient learning but also effectively protects the privacy of driver, passenger, and pedestrian biometric information, driving routes, and confidential driving areas such as military installations.
In this paper, we introduce and adapt FL to DMA. Although some FL frameworks exist for DMA, they all suffer from some critical problems. Doshi \textit{et al.} \cite{doshi2022federated} proposed a FL edge-device framework to obtain a global model by aggregating feature representations and obtained considerable accuracy in recognizing driver activities. For the i.i.d. setting, the dataset was partitioned for each edge node in a random way, while for the non-i.i.d. setting, the dataset was assigned selectively. Zhao \textit{et al.} \cite{zhao2023fedsup} proposed a FL framework to monitor fatigue driving, where the non-i.i.d. setting was simulated by controlling the number of images per client. The above FL frameworks for DMA did not really take into account the actual situation of the application but artificially created a simulation scenario. Therefore, there is an urgent need for realistic analysis and research on real-world DMA, considering that each user (driver) exists independently and that data is non-interoperable across clients (vehicles). Moreover, in addition to the necessity of test datasets, the test client is also a critical evaluation criterion, which can reflect the universality of the FL framework. We summarize the existing neglects and challenges in current FL for DMA frameworks as follows.
\begin{itemize}
\item Clients in FL for DMA frameworks are often defined in unreasonable and incomprehensible forms. A real and natural definition of a client should be a driver or a vehicle.
\item No existing paper proposes testing on a testing client (not involved in the training process), so universal testing of the FL framework is lacking.
\item For the DMA scenario, there is great diversity and individuality in driver behaviors, postures, and facial expressions, which calls for more personalized studies than other general IoV scenarios.
\item Similarly, DMA also has diverse scenarios, including diverse vehicle models, interior colors, seat positions, etc., which will greatly increase the learning difficulty.
\end{itemize}
\subsection{Proposed Solution and Contribution}
\label{Sec. Proposed Solution and Contribution}
In this paper, we aim to propose a FL framework applicable and specific to practical applications in IoV, especially DMA; a conceptual FL framework for IoV is illustrated in Fig. \ref{Fig. FedTOP structure}. Each local client, i.e., vehicle, includes a training module and a perception module. The training module uploads the model parameters to the server after learning and training on the local data. After aggregating and optimizing the parameters of the local client models, the server downloads the global model parameters to the perception module in the local client. Moreover, transfer learning can be used to reduce the number of trainable parameters, resulting in reduced communication consumption. The server can save different global models for different scenarios, such as road types, weather types, and vehicle types, so that the model can have better applicability.
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=\linewidth]{Figure/FedTOP.png}}
\caption{Structure illustration of a FL framework for IoV. The server interacts with the local client and saves different scenarios as different models. Transparent neurons are non-trainable parameters, and non-transparent neurons are trainable parameters.}
\label{Fig. FedTOP structure}
\end{figure}
Therefore, a federated transfer-ordered-personalized learning (FedTOP) framework is proposed to address the problems of accuracy, cybersecurity, communication resources, and diversified scenarios. In addition to the transfer-extension shown in Fig. \ref{Fig. FedTOP structure}, the FedTOP framework also enhances robustness and cybersecurity by orderly dropping out clients whose data may be overfitted or poisoned. Furthermore, the FedTOP framework is able to remarkably improve accuracy by adapting to all clients through the personalized-extension. The contributions of this paper are:
\begin{itemize}
\item For realistic problems and usage scenarios in DMA, we propose a feasible FL framework FedTOP, realizing privacy protection, high accuracy, low communication requirements, cybersecurity, and pervasiveness. To the best of our knowledge, this is one of the first papers to establish a feasible FL framework for DMA.
\item The proposed FedTOP framework is tested on two real-world driver monitoring datasets with and without system heterogeneity, systematically characterizing system heterogeneity in real-world datasets and achieving considerable accuracies of 92.32$\%$ and 95.96$\%$, respectively.
\item The experiments highlight a realistic and natural client setup, i.e., drivers and vehicles are naturally formed as clients. Moreover, we innovatively propose evaluation criteria for training and testing clients to test the generalization ability of the proposed FedTOP on different clients.
\item Through an ablation study, we demonstrate the performance and utility of the transfer, ordered, and personalized extensions. These detachable extensions can be selectively installed according to the task description, and the FL framework combined with different extensions can effectively adapt to different IoT application scenarios.
\end{itemize}
The rest of this paper is organized as follows. The problem statement and proposed solution are described in Section \ref{Sec. Methodologies}. The experimental setup, heterogeneity, and results are presented in Section \ref{Sec. Experiment and Results}. Section \ref{Sec. Discussion} discusses the performances of the three extensions of the proposed framework, followed by Section \ref{Sec. Conclusion}, which summarizes the paper and expounds on future work.
\section{Methodologies}
\label{Sec. Methodologies}
\subsection{Problem Statement}
\label{Sec. Problem Statement}
The FL framework protects privacy, increases training efficiency, and saves communication resources by sharing only model parameters in IoT. In this paper, the FL framework is used to solve a driver activity classification task in DMA. Clients in real-world IoT are independent and heterogeneous due to the presence of only a minimal number of users per client. Considering more general application scenarios, the global model $\omega$ aggregated from the training clients $C$ needs to be compatible with the non-training clients $C'$ in addition to $C$. The data of each client $D_c$ is non-i.i.d. when the data is not interoperable. We can consider a nested model
\begin{equation}
L_c = \omega_c(D_c),
\label{Eq. nested model}
\end{equation}
where $\omega_c$ is the classifier model corresponding to client $c \in C$, $D_c \in \mathbb{R}^{n_c \times i \times j \times d}$ is the image set with $n_c$ samples, $i$ rows, $j$ columns, and $d$ channels, and $L_c \in \mathbb{Z}^{n_c}$ is the corresponding label set. The global model $\omega$ is obtained by aggregating, e.g., averaging, the weights of the local models,
\begin{equation}
\omega = \sum_{c \in C} p_c\omega_c = \mathbb{E}[\omega_c | c \in C],
\label{Eq. aggregating}
\end{equation}
where $p_c \in [0,1]$ is the aggregation weight of client $c$, with $\sum_{c \in C} p_c=1$; in practice, $p_c$ is assigned according to the number of samples. Therefore, the optimization problem of the FL algorithm can be formulated as minimizing the global loss, which is equivalent to minimizing the weighted sum of the local losses,
\begin{equation}
\min_\omega \mathcal{L}(\omega) = \sum_{c \in C}p_c\mathcal{L}(\omega_c) = \mathbb{E}[\mathcal{L}(\omega_c) | c \in C],
\label{Eq. global loss}
\end{equation}
where $\mathcal{L}$ is the loss function to be specified.
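As an illustrative sketch only (the function and variable names below are hypothetical and not taken from our released implementation), the sample-weighted aggregation in (\ref{Eq. aggregating}) can be written in PyTorch as follows:
\begin{verbatim}
import torch

def aggregate(local_states, sample_counts):
    # p_c = n_c / sum_c n_c, i.e., weights proportional to sample counts
    total = float(sum(sample_counts))
    weights = [n / total for n in sample_counts]
    global_state = {}
    for key in local_states[0]:
        global_state[key] = sum(
            w * state[key].float() for w, state in zip(weights, local_states))
    return global_state
\end{verbatim}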
For real-world classification tasks, we assume that the distribution of the local models in the parameter space follows a multivariate Normal distribution $\omega_c \sim \mathcal{N}\left(\mu_\omega, \sigma^2_\omega \right)$, where $\mu_\omega$ is the mean of all local models and $\sigma^2_\omega$ is their variance. Fig. \ref{Fig. parameter space} shows the process of the FL algorithm finding the optimal solution of the global model in the parameter space. After the initial model is trained locally, communicated, and aggregated globally, the final global model is obtained by averaging and can be estimated as $\hat{\omega} = \mu_\omega$. Especially in the large-scale parallel application scenarios of IoT, according to the law of large numbers, $\hat{\omega} = \mu_\omega = \omega^\ast$ is an unbiased estimate.
However, there are still some defects in obtaining the global model through average aggregation. First, there is enormous system heterogeneity in IoT, and the global model cannot ensure high accuracy for all clients. Second, we inevitably need a measure to counter system heterogeneity and potential attacks and poisoning. As shown in Fig. \ref{Fig. parameter space}, the farther the optimal local model is from the global model, the lower the accuracy, and vice versa. Therefore, it is conceivable that in an FL problem with heterogeneity, the clients' accuracies will also obey a Normal distribution.
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=\linewidth]{Figure/Parameter_Space.png}}
\caption{Illustration of how the FL algorithm finds the optimal global model solution in the parameter space. The shaded areas are accuracy contours. The farther the optimal local model is from the global model, the lower the client accuracy. Local models enclosed by the same shaded area have similar accuracies.}
\label{Fig. parameter space}
\end{figure}
\subsection{Proposed Solution}
\label{Sec. Proposed Solution}
According to the problem statement, we propose the FedTOP algorithm to address all of the following issues. First, the aggregation of the global model needs to be more stable, which can be achieved by preventing the overfitting of local models. Second, considering the actual communication conditions in IoT, we adopt transfer learning to reduce the trainable parameters and hence the communication requirements. Third, the global model should be able to resist interference, attacks, and data poisoning, which can be achieved by orderly dropping out local models with large losses. Fourth, a single global model cannot take into account the situation of all clients, especially in the presence of data and system heterogeneity. Therefore, we personalize the global model to suit all training and testing clients.
Following FedProx \cite{li2020federatedFedProx}, we use a proximal term to prevent the local models $\omega_c$ from deviating from the global model $\omega$. Specifically, the proximal term $\mathcal{L}_p$, which measures the distance between the local and global models, is added to the loss function,
\begin{equation}
\mathcal{L}_p= \frac{\mu}{2}\|\omega_c-\omega\|^2,
\label{Eq. loss proximal term}
\end{equation}
where $\mu$ is the deviation coefficient, $\omega_c$ denotes the local client model parameters, and $\omega$ denotes the global model parameters. The overall loss function can then be written as
\begin{equation}
\mathcal{L} = \mathcal{L}_l + \mathcal{L}_p,
\label{Eq. loss function}
\end{equation}
where $\mathcal{L}_l$ is the loss between the true labels and the predicted labels, such as the negative log-likelihood loss used in our experiments.
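For concreteness, a minimal sketch of one local epoch with the proximal term of (\ref{Eq. loss proximal term})--(\ref{Eq. loss function}) is given below; it assumes a classifier with a LogSoftmax output layer, and all names are hypothetical rather than excerpts from our implementation:
\begin{verbatim}
import torch.nn.functional as F

def local_epoch(model, global_model, loader, optimizer, mu):
    global_params = [p.detach().clone() for p in global_model.parameters()]
    for images, labels in loader:
        optimizer.zero_grad()
        loss_l = F.nll_loss(model(images), labels)   # L_l
        prox = sum(((p - g) ** 2).sum()
                   for p, g in zip(model.parameters(), global_params))
        (loss_l + 0.5 * mu * prox).backward()   # L = L_l + (mu/2)||w_c - w||^2
        optimizer.step()
\end{verbatim}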
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=\linewidth]{Figure/System_Diagram.png}}
\caption{The global model is shared with training and testing clients after iterative training and optimization on massively parallel training clients. Both training and testing clients personalize the model locally and then evaluate it on their respective testing sets. Clients that attack or poison the model, such as Client 2 with its large loss, are discarded.}
\label{Fig. System Diagram}
\end{figure}
\textit{Transfer-extension} is a common and popular solution in many learning frameworks. It is particularly favored in FL frameworks because it effectively reduces local client training resources and communication resources. In our experiments, the base model is ResNet34 \cite{he2016deep} pre-trained on ImageNet, where only the last residual block and the fully connected layer are trainable. Although ImageNet is a large object classification dataset far from DMA images, the lower layers of convolutional neural networks (CNNs) are similar across tasks and are used to extract generic image features. Therefore, more attention is given to the upper layers, which extract high-level features and representations. The relative reduction of the communication resource requirement in the network is approximately equal to the ratio of non-trainable parameters to total parameters,
\begin{equation}
\text{Commun}_\downarrow \approx \frac{|\omega_{\text{non-trainable}}|}{|\omega|} = 37.46\%,
\label{Eq. communication resources}
\end{equation}
where $\text{Commun}_\downarrow$ is the reduced communication resource requirement, $|\omega_{\text{non-trainable}}|$ is the number of non-trainable model parameters, and $|\omega|$ is the total number of the model parameters. Therefore, the transfer-extension reduces the communication requirement by 37.46$\%$ by decreasing the trainable parameters.
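The 37.46$\%$ figure can be reproduced with the following sketch, assuming a recent torchvision implementation of ResNet34 in which \texttt{layer4} is the last residual block (this layer split is our reading, and the snippet is illustrative rather than an excerpt from our code):
\begin{verbatim}
import torchvision

model = torchvision.models.resnet34(weights="IMAGENET1K_V1")
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("layer4", "fc"))
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"non-trainable: {frozen / total:.2%}")   # approximately 37.5%
\end{verbatim}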
\textit{Ordered-extension} orderly drops out clients with enormous variance, which may be subject to malicious attacks and poisoning, extensive data and system heterogeneity, or model underfitting. Local clients with large losses should be discarded to enhance the applicability of the global model. The ordered-extension not only enhances accuracy and robustness but also secures the global model. After all clients upload their local model parameters and final training losses to the server, the server aggregates only the $q$ ($q \in \mathbb{N}$, $q \leq |C|$) local models with the lowest losses into the global model. The set of these $q$ local models can be expressed as
\begin{equation}
C_q = q-\arg\min_{c \in C} \mathcal{L}(\omega_c).
\label{Eq. ordered-extension}
\end{equation}
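A minimal sketch of this selection step (hypothetical names) is:
\begin{verbatim}
def select_ordered(client_losses, q):
    # indices of the q clients with the lowest final training loss
    ranked = sorted(range(len(client_losses)),
                    key=lambda c: client_losses[c])
    return ranked[:q]
\end{verbatim}
Only the models of the returned clients enter the average in the aggregation step.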
\begin{algorithm}[t]
\small
\caption{\small{FedTOP}}
\label{Alg FedTOP}
\begin{algorithmic}
\STATE {\bfseries Input:} Communication rounds ($T$), training client set ($C$), training epoch ($E$), initial global model ($\omega^1$), loss function ($\mathcal{L}_l$), deviation coefficient ($\mu$), number of ordered clients ($q$)
\STATE {\bfseries Output:} Trained global model ($\omega^{T}$)
\FOR{$t=1$ {\bfseries to} $T-1$}
\FOR{$c \in C$ {\bfseries in parallel}}
\FOR{$e=1$ {\bfseries to} $E-1$}
\STATE Backpropagate the loss function and update the local model $\omega_c^{t^{e+1}} \gets \arg\min_{\omega_c^{t^{e}}} \mathcal{L}_l(\omega_c^{t^{e}}) + \frac{\mu}{2}\|\omega_c^{t^{e}}-\omega^t\|^2 $.
\ENDFOR
\STATE Update the local model $\omega_c^{t} \gets \omega_c^{t^{E}}$.
\STATE Client sends $\omega^{t}_c$ to the server.
\ENDFOR
\STATE Find the set $C^t_q$ of top-$q$ clients in $C^t$ in terms of
loss values: $C^t_q = q-\arg\min_{c \in C^t} \mathcal{L}(\omega^{t}_c)$.
\STATE Server aggregates the global model as $\omega^{t+1} \gets \frac{1}{q}\sum_{c \in C^t_q} \omega^{t}_c$.
\ENDFOR
\STATE Send $\omega^{T}$ to all clients $c \in C \cup C'$ for personalization.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\small
\caption{\small{Personalized-extension}}
\label{Alg Personalization}
\begin{algorithmic}
\STATE {\bfseries Input:} Training client set ($C$), testing client set ($C'$), personalization epoch ($E$), Trained global model ($\omega^{T}$), loss function ($\mathcal{L}_l$)
\STATE {\bfseries Output:} Personalized local model ($\omega_c$)
\FOR{$c \in C \cup C'$}
\FOR{$e=1$ {\bfseries to} $E-1$}
\STATE Backpropagate the loss function and update the local model $\omega_c^{T^{e+1}} \gets \arg\min_{\omega_c^{T^{e}}} \mathcal{L}_l(\omega_c^{T^{e}})$.
\ENDFOR
\STATE Update the personalized local model $\omega_c \gets \omega_c^{T^{E}}$.
\ENDFOR
\end{algorithmic}
\end{algorithm}
\textit{Personalized-extension} promotes, popularizes, and adapts the global model to the heterogeneity of all clients. As shown in Fig. \ref{Fig. parameter space}, the global model cannot be applied to all clients due to the ubiquitous heterogeneity. The region of interest (ROI) of the model may vary depending on system heterogeneity, such as different camera angles, seat positions, and vehicle structures, resulting in differences in the relative position of the driver in the image. The personalized-extension therefore trains the global model for a few epochs on each client to obtain a more personalized local model and improve accuracy. On the one hand, compared with the traditional FL algorithm, the personalized-extension significantly and effectively improves accuracy and confidence. On the other hand, compared with purely local training, the personalized FL algorithm improves training efficiency and avoids overfitting of the local model. In particular, the personalized FL algorithm generalizes to the non-training clients $C'$, which may have minimal training resources. After receiving the global model, the non-training clients $C'$ can obtain a highly accurate and reliable local model with minimal training. The system diagram of the proposed FedTOP is shown in Fig. \ref{Fig. System Diagram}.
For the proposed FedTOP framework, the clients communicate with the server for $T$ rounds, and all clients $C$ train for $E$ epochs in parallel between communications. For our preliminary experiments, we set $T=10$ and $E=5$. For the transfer-extension, the local model is a transfer learning model based on ResNet34 pre-trained on ImageNet, where only the last residual block and the fully connected layer are trainable. In addition, we add an extra fully connected layer to match the number of classification categories. Following FedProx, the activation function of the last layer is LogSoftmax, and the loss function $\mathcal{L}_l$ is the negative log-likelihood loss. $\omega^1$ denotes the initial model parameters. The proposed FedTOP is described in Algorithm \ref{Alg FedTOP}, and the personalization process is described in Algorithm \ref{Alg Personalization}.
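A compact sketch of the personalization stage of Algorithm \ref{Alg Personalization} is given below (hypothetical names; the proximal term is dropped here, and each client fine-tunes its own copy of the trained global model):
\begin{verbatim}
import copy
import torch.nn.functional as F

def personalize(global_model, client_loaders, epochs, make_optimizer):
    personalized = {}
    for cid, loader in client_loaders.items():
        model = copy.deepcopy(global_model)  # start from trained global model
        opt = make_optimizer(model)
        for _ in range(epochs):              # E = 5 in our experiments
            for images, labels in loader:
                opt.zero_grad()
                F.nll_loss(model(images), labels).backward()  # plain L_l
                opt.step()
        personalized[cid] = model
    return personalized
\end{verbatim}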
\begin{figure*}[t]
\centering
\subfloat[SFDDD texting - right 1]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_1.jpg}%
\label{Fig. SFDDD 1}}
\hfill
\subfloat[SFDDD texting - right 2]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_2.jpg}%
\label{Fig. SFDDD 2}}
\subfloat[SFDDD texting - right 3]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_3.jpg}%
\label{Fig. SFDDD 3}}
\hfill
\subfloat[SFDDD texting - right 4]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_4.jpg}%
\label{Fig. SFDDD 4}}
\\
\subfloat[DriveAct magazine 1]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_1.jpg}%
\label{Fig. DriveAct 1}}
\hfill
\subfloat[DriveAct magazine 2]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_2.jpg}%
\label{Fig. DriveAct 2}}
\subfloat[DriveAct magazine 3]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_3.jpg}%
\label{Fig. DriveAct 3}}
\hfill
\subfloat[DriveAct magazine 4]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_4.jpg}%
\label{Fig. DriveAct 4}}
\caption{Example activities of four drivers in each of the SFDDD and DriveAct datasets.}
\label{Fig. SFDDD and DriveAct dataset}
\end{figure*}
\begin{figure}[t]
\centering
\subfloat[SFDDD]{\includegraphics[width=0.5\linewidth]{Figure/Histogram_of_SFDDD.png}%
\label{Fig. Histogram of SFDDD}}
\hfill
\subfloat[DriveAct]{\includegraphics[width=0.5\linewidth]{Figure/Histogram_of_DriveAct.png}%
\label{Fig. Histogram of DriveAct}}
\caption{Histograms of sample client images from the SFDDD and DriveAct datasets.}
\label{Fig. Histogram}
\end{figure}
\section{Experiment and Results}
\label{Sec. Experiment and Results}
Considering the data and system heterogeneity, experiments are conducted on two open real-world driver monitoring datasets, namely State Farm Distracted Driver Detection (SFDDD) \cite{farm_2016} and DriveAct \cite{martin2019drive}. In addition to comparing with FedProx as a baseline, this paper also compares the performance of the transfer, ordered, and personalized extensions through an ablation study.
\subsection{Experiment Setup}
\label{Sec. Experiment Setup}
To compare the impact of system heterogeneity on FL frameworks, the proposed FedTOP is tested on driver monitoring datasets with and without system heterogeneity. The SFDDD dataset includes 26 drivers and 10 activities, and the DriveAct dataset includes 15 drivers and 12 activities. The SFDDD dataset exhibits system heterogeneity, that is, different drivers have different vehicles, seat positions, camera angles, etc., as shown in Figs. \ref{Fig. SFDDD 1}, \ref{Fig. SFDDD 2}, \ref{Fig. SFDDD 3}, and \ref{Fig. SFDDD 4}. The DriveAct dataset does not exhibit system heterogeneity, i.e., the data of all subjects were collected in the same system: recorded from the same camera angle, different drivers read the same magazine in the same vehicle, as shown in Figs. \ref{Fig. DriveAct 1}, \ref{Fig. DriveAct 2}, \ref{Fig. DriveAct 3}, and \ref{Fig. DriveAct 4}.
To show the heterogeneity between different clients in the two datasets more clearly and visually, Fig. \ref{Fig. Histogram} shows histograms of sample images from the two datasets. It can be seen that the SFDDD dataset, with system heterogeneity, exhibits considerably larger differences in the histogram distributions than the DriveAct dataset without system heterogeneity, and the mean pixel value of the SFDDD images is larger. A possible reason is that the vehicle interior in the DriveAct dataset view is darker, so most of the pixel values are lower. Therefore, the FL framework may be more challenged by scene information, such as different vehicle interiors, when training on the SFDDD dataset.
Clients are naturally divided based on the drivers. In order to better demonstrate the role of the personalized-extension, the datasets are first divided into training clients and testing clients at a ratio of about 0.8:0.2, with $|C_{\text{SFDDD}}|=20$, $|C'_{\text{SFDDD}}|=6$, $|C_{\text{DriveAct}}|=12$, and $|C'_{\text{DriveAct}}|=3$. Then, the data of each client are divided into a training set, validation set, and testing set at a ratio of 0.7:0.15:0.15. After the global model is trained on the training sets of the training clients, the final trained global model is shared with all clients for personalization. The personalization of the global model is performed only on the training sets, while the personalized local model is tested on the unseen testing sets. The FL architectures are implemented in PyTorch and trained on an Intel(R) Core(TM) i9-10850K CPU @ 3.60GHz and an Nvidia GeForce RTX(TM) 3080 GPU.
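The splits above can be reproduced, for example, with the following sketch (a hypothetical helper, not part of our released code; the client counts are passed explicitly to match the text):
\begin{verbatim}
import random

def split_clients(driver_ids, n_train, seed=0):
    ids = list(driver_ids)
    random.Random(seed).shuffle(ids)
    return ids[:n_train], ids[n_train:]    # training vs. testing clients

def split_samples(samples, seed=0):
    data = list(samples)
    random.Random(seed).shuffle(data)
    n = len(data)
    a, b = int(0.7 * n), int(0.85 * n)
    return data[:a], data[a:b], data[b:]   # train / validation / test
\end{verbatim}
For instance, \texttt{split\_clients(sfddd\_drivers, n\_train=20)} yields the 20 training and 6 testing clients used for SFDDD.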
\subsection{Ablation Study and Results}
\label{Sec. Ablation Study and Results}
We explore the role of each FedTOP extension on two real-world datasets through an ablation study. FedProx is used as a baseline for comparison. According to the experimental setup described in the previous subsection, the experimental results are shown in Table \ref{Table Experimental results}.
\begin{table*}[ht]
\caption{Performance of FedTOP and ablation study on SFDDD and DriveAct datasets.}
\label{Table Experimental results}
\centering
\begin{tabular}{llccccccccc}
\hline
Dataset & Method \textsuperscript{1} & $|C|$ & $q$ & $\mu$ & Transfer &\multicolumn{2}{c}{Accuracy ($\%$) \textsuperscript{2}} & $\text{Time}_\downarrow$ ($\%$) \textsuperscript{3} & $\text{Commun}_\downarrow$ ($\%$) \textsuperscript{4} & Cybersecurity \\
~ & ~ & ~ & ~ & ~ & ~ & Training & Testing & ~ & ~ & ~ \\
\hline
SFDDD & FedProx (baseline) & 20 & 20 & 1 & No & 54.63 & 16.44 & $\sim$ & $\sim$ & $\sim$ \\
~ & FedOP & 20 & 15 & 1 & No & 97.69 & 96.37 & 1.45 $\downarrow$ & $\sim$ & $\uparrow$ \\
~ & FedTP & 20 & 20 & 1 & Yes & 94.76 & 92.80 & 17.30 $\downarrow$ & 37.46 $\downarrow$ & $\sim$ \\
~ & FedTO & 20 & 15 & 1 & Yes & 46.16 & 16.43 & 18.91 $\downarrow$ & 37.46 $\downarrow$ & $\uparrow$ \\
~ & \textbf{FedTOP} & \textbf{20} & \textbf{15} & \textbf{1} & \textbf{Yes} & \textbf{94.65} & \textbf{92.32} & \textbf{18.91} $\boldsymbol{\downarrow}$ & \textbf{37.46} $\boldsymbol{\downarrow}$ & \text{ } $\boldsymbol{\uparrow}$ \\
DriveAct & FedProx (baseline) & 12 & 12 & 1 & No & 73.18 & 23.96 & $\sim$ & $\sim$ & $\sim$ \\
~ & FedOP & 12 & 10 & 1 & No & 98.07 & 97.97 & 0.44 $\downarrow$ & $\sim$ & $\uparrow$ \\
~ & FedTP & 12 & 12 & 1 & Yes & 97.00 & 95.71 & 16.83 $\downarrow$ & 37.46 $\downarrow$ & $\sim$ \\
~ & FedTO & 12 & 10 & 1 & Yes & 62.30 & 22.89 & 19.18 $\downarrow$ & 37.46 $\downarrow$ & $\uparrow$ \\
~ & \textbf{FedTOP} & \textbf{12} & \textbf{10} & \textbf{1} & \textbf{Yes} & \textbf{97.04} & \textbf{95.96} & \textbf{19.18} $\boldsymbol{\downarrow}$ & \textbf{37.46} $\boldsymbol{\downarrow}$ & \text{ } $\boldsymbol{\uparrow}$ \\
\hline
\end{tabular}
\noindent{\textsuperscript{1} FedOP, FedTP, and FedTO refer to ablating the transfer, ordered, and personalized extensions of the FL framework, respectively.}
\noindent{\textsuperscript{2} Accuracy refers to the testing sets of training clients and testing clients, which is described in Section \ref{Sec. Experiment Setup}.}
\noindent{\textsuperscript{3} $\text{Time}_\downarrow$ refers to the ratio of reduced training time per client to the baseline.}
\noindent{\textsuperscript{4} $\text{Commun}_\downarrow$ refers to the ratio of reduced communication consumption to the baseline, which is described in (\ref{Eq. communication resources}).}
\end{table*}
\begin{figure*}[t]
\centering
\subfloat[FedProx]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_FedProx.png}%
\label{Fig. SFDDD FedProx}}
\hfill
\subfloat[FedT]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_TP.png}%
\label{Fig. SFDDD TP}}
\hfill
\subfloat[FedO]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_OP.png}%
\label{Fig. SFDDD OP}}
\hfill
\subfloat[FedTO]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_TO.png}%
\label{Fig. SFDDD TO}}
\\
\subfloat[FedProx]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_FedProx.png}%
\label{Fig. DriveAct FedProx}}
\hfill
\subfloat[FedT]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_TP.png}%
\label{Fig. DriveAct TP}}
\hfill
\subfloat[FedO]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_OP.png}%
\label{Fig. DriveAct OP}}
\hfill
\subfloat[FedTO]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_TO.png}%
\label{Fig. DriveAct TO}}
\caption{Accuracy and loss curves of the FL framework and its extensions on the SFDDD and DriveAct datasets, corresponding to the training process of Algorithm \ref{Alg FedTOP}. Personalization does not affect the convergence of the global model in the FL framework.}
\label{Fig. SFDDD and DriveAct Results}
\end{figure*}
\begin{figure}[t]
\centering
\subfloat[SFDDD TOP]{\includegraphics[width=0.5\linewidth]{Figure/SFDDD_TOP.png}%
\label{Fig. SFDDD TOP}}
\hfill
\subfloat[DriveAct TOP]{\includegraphics[width=0.5\linewidth]{Figure/DriveAct_TOP.png}%
\label{Fig. DriveAct TOP}}
\caption{Testing accuracy of the training and testing clients on the SFDDD and DriveAct datasets as a function of the personalization epoch, corresponding to the testing results of Algorithm \ref{Alg Personalization}.}
\label{Fig. Personalization Results}
\end{figure}
The results and comparisons for the two datasets and three extensions are shown in Fig. \ref{Fig. SFDDD and DriveAct Results}, which demonstrates Algorithm \ref{Alg FedTOP}. By observing the accuracy and loss curves on the two datasets, it can be concluded that the SFDDD dataset with system heterogeneity behaves fundamentally differently from the DriveAct dataset without system heterogeneity. It can be clearly seen that the SFDDD dataset requires more communication rounds to converge, while the DriveAct dataset converges quickly, especially in the first communication round. Therefore, for real-world datasets, system heterogeneity can be mitigated by more communication rounds.
By observing Figs. \ref{Fig. SFDDD OP}, \ref{Fig. SFDDD TO}, \ref{Fig. DriveAct OP}, and \ref{Fig. DriveAct TO}, it can be found that the ordered-extension diminishes the stability of the system. Although the anomalous large-loss local models are discarded to reduce the bias of the global model, this also increases the variance of the global model, resulting in reduced generalizability. By observing Figs. \ref{Fig. SFDDD TP}, \ref{Fig. SFDDD TO}, \ref{Fig. DriveAct TP}, and \ref{Fig. DriveAct TO}, we can see that the effect of the transfer-extension differs between datasets with and without system heterogeneity. On the one hand, the transfer-extension increases the variance of the model on the SFDDD dataset and leads to slower and unstable model convergence. On the other hand, the transfer-extension improves the speed of model convergence on DriveAct, and the convergence is more stable. A possible reason is that the transfer-extension retains only a small number of trainable parameters, so the neural network cannot effectively learn human behavioral features in the SFDDD dataset with system heterogeneity. For the DriveAct dataset without system heterogeneity, however, all factors except the driver are constant, and the local model does not need to focus on these identical pixels, but only on the changing pixels, including objects such as drivers, computers, and magazines. Therefore, for the DriveAct dataset, the transfer-extension can effectively improve convergence and stability. The proposed FedTOP framework obtains 92.32$\%$ and 95.96$\%$ accuracy on the SFDDD and DriveAct datasets, respectively, with five epochs of personalization training. Compared with the FedProx baseline, FedTOP effectively improves the accuracy by 462$\%$ while also reducing communication resources by 37.46$\%$. The results demonstrate the feasibility of the proposed FedTOP in terms of communication resource saving, accuracy improvement, robustness, and cybersecurity.
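For concreteness, the quoted improvement corresponds, in our reading of Table \ref{Table Experimental results}, to the relative gain over the baseline on the SFDDD testing clients,
\begin{equation}
\frac{92.32 - 16.44}{16.44} \approx 462\%,
\label{Eq. relative gain}
\end{equation}
while the corresponding gain on DriveAct is $(95.96 - 23.96)/23.96 \approx 301\%$.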
\begin{figure*}[t]
\centering
\subfloat[Trained global model $\omega^T$]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_P_0.png}%
\label{Fig. SFDDD P 0}}
\hfill
\subfloat[Personalization Epoch 1 $\omega^{T^1}$]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_P_1.png}%
\label{Fig. SFDDD P 1}}
\hfill
\subfloat[Personalization Epoch 3 $\omega^{T^3}$]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_P_3.png}%
\label{Fig. SFDDD P 3}}
\hfill
\subfloat[Personalization Epoch 5 $\omega^{T^5}$]{\includegraphics[width=0.25\linewidth]{Figure/SFDDD_P_5.png}%
\label{Fig. SFDDD P 5}}
\\
\subfloat[Trained global model $\omega^{T}$]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_P_0.png}%
\label{Fig. DriveAct P 0}}
\hfill
\subfloat[Personalization Epoch 1 $\omega^{T^1}$]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_P_1.png}%
\label{Fig. DriveAct P 1}}
\hfill
\subfloat[Personalization Epoch 3 $\omega^{T^3}$]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_P_3.png}%
\label{Fig. DriveAct P 3}}
\hfill
\subfloat[Personalization Epoch 5 $\omega^{T^5}$]{\includegraphics[width=0.25\linewidth]{Figure/DriveAct_P_5.png}%
\label{Fig. DriveAct P 5}}
\\
\caption{CAMs of the test clients in the SFDDD and DriveAct datasets during the personalization process. (a), (b), (c), and (d) show a test client in the SFDDD dataset, the same as in Fig. \ref{Fig. SFDDD 1}. (e), (f), (g), and (h) show a test client in the DriveAct dataset, the same as in Fig. \ref{Fig. DriveAct 1}.}
\label{Fig. Class activation map}
\end{figure*}
\subsection{Performance of Personalized-Extension}
\label{Sec. Performance of Personalized-extension}
The personalized-extension, as the most effective approach to improve accuracy, needs to be further discussed and analyzed. Based on the division of training and testing clients in Section \ref{Sec. Experiment Setup}, in this subsection we further discuss how the trained and aggregated global model is adapted to both training and testing clients. The results of the personalized-extension on the two datasets are shown in Fig. \ref{Fig. Personalization Results} for different personalization epochs, which demonstrates Algorithm \ref{Alg Personalization}. It can be seen that the personalization process differs significantly between the datasets with and without system heterogeneity, similar to the results in Fig. \ref{Fig. SFDDD and DriveAct Results}. The clients in the DriveAct dataset have faster convergence, smaller accuracy variance, and higher final accuracy. On the contrary, the clients in the SFDDD dataset not only converge more slowly but also include an anomalous client with relatively low accuracy. A possible reason is that the anomalous client has huge data and system heterogeneity, causing its optimal model to deviate significantly from the aggregated global model.
Fig. \ref{Fig. Class activation map} further demonstrates, via class activation maps (CAM) \cite{omeiza2019smooth}, that the trained global model repositions the ROI during the personalized training process. The test client of the SFDDD dataset can be seen struggling with the personalization process. The trained global model focuses the ROI on the seat backrest, the driver's chest, hand, and knee, and the vehicle door. Due to the system heterogeneity present in the SFDDD dataset, the positions of the driver, seat, and steering wheel shown in Fig. \ref{Fig. SFDDD P 0} are different from those of other clients, as shown in Figs. \ref{Fig. SFDDD 2}, \ref{Fig. SFDDD 3}, and \ref{Fig. SFDDD 4}. Therefore, the initial ROI likely corresponds to the driver's position in other clients. During the five personalization training epochs, the local model effectively repositions the ROI onto the driver, which is exactly what the personalized-extension is intended to achieve. Moreover, the personalization process also reduces the number of ROIs while focusing more attention on a specific area.
On the contrary, for the test clients in the DriveAct dataset, the adjustment of the ROI is negligible. Note that the ROI does not necessarily have to cover the driver's body or an object such as the magazine. The ROI should cover those pixels that can distinguish between different activities, such as static activities like reading the magazine and dynamic activities like fastening a seatbelt in the DriveAct activity setting. These ROIs focus on areas where large differences are likely to occur. The fact that the ROIs in the DriveAct dataset cover almost the same pixels during the personalization process further demonstrates the negative impact of system heterogeneity on the FL framework.
\section{Discussion}
\label{Sec. Discussion}
The two datasets used, SFDDD and DriveAct, still have some limitations. First, although the SFDDD dataset takes system heterogeneity into account, quite a few drivers collected data in the same vehicle, that is, the number of clients is greater than the number of vehicles. Therefore, there are still some differences between the dataset and real-world data, so the proposed FedTOP may need more communication rounds to achieve similar accuracy on a real-world dataset. Second, no driver monitoring dataset with real poisoned data currently exists, so the effect of the ordered-extension cannot be fully reflected. Different modalities, positions, and angles of the camera, or generated fake data, may approximate poisoned data, but they cannot be regarded as real. Moreover, due to road safety guidelines, the current datasets only contain driving on safe roads or simulated driving. Therefore, the driver's posture, demeanor, facial concentration, etc., are far from real driving behavior. There is thus an urgent need for a more realistic dataset that includes camera images from different positions and angles, different vehicle scenes, and more drivers driving on real roads.
For an FL framework in IoT, in addition to accuracy, factors like communication requirements, robustness, fairness, and cybersecurity also need to be considered as evaluation criteria. Although the transfer and ordered extensions may not improve accuracy, and may even reduce it in the current experimental results, they can potentially improve the overall performance of the FL framework. Therefore, we keep these two extensions and leave their further study as future work. The personalized-extension is an approach similar to transfer learning and incremental learning. On the one hand, the local client learns incrementally based on the trained global model, but it does not intentionally retain the previously learned knowledge. On the other hand, the global model is transferred to the client dataset as in the transfer-extension, but the low-level non-trainable weights are still those pre-trained on ImageNet. Therefore, the proposed personalized-extension effectively uses the trained global model weights to fit different client data, e.g., by repositioning the ROIs. Although the personalized-extension requires additional local training for each client, it brings many benefits, including high accuracy, applicability to non-training clients, and customization. Conceivably, the personalized-extension can effectively address the problem of system heterogeneity, e.g., it can adapt to different cameras, camera angles, vehicle interiors, etc.
\section{Conclusion}
\label{Sec. Conclusion}
In this paper, we propose an FL framework, FedTOP, for DMA to address the issues of privacy preservation, efficient training, communication resource saving, poisoned data, and diversified scenarios. Through the ablation study, the impact, role, and performance of the three extensions, i.e., transfer, ordered, and personalized, are disclosed. Moreover, the experiments demonstrate dramatic differences between datasets with and without system heterogeneity. In addition to exhibiting 92.32$\%$ and 95.96$\%$ accuracy on the testing clients of the two datasets, FedTOP also reduces communication consumption by 37.46$\%$ and potentially improves cybersecurity. The experimental results show that the proposed FedTOP is a highly accurate, lightweight, privacy-preserving, robust, cybersecure, and universally applicable FL framework for potential DMA.
Future work lies in the continued research of the extensions. For the ordered-extension, a possible plan is to introduce some malicious local clients that attack and poison the global model. For example, subjects may not place the camera on the side as instructed but place it at the front or behind instead. Such outliers may cause the global model to deviate significantly from the optimal solution; in this case, the ordered-extension can prevent the deviation of the global model by discarding the models with larger losses. For the transfer-extension, there is currently a lack of a general driver monitoring model, so we used a model pre-trained on ImageNet. In future work, a driver monitoring model can be pre-trained as the base model, which is expected to yield better performance in DMA. Fig. \ref{Fig. FedTOP structure} shows the envisioned FL framework for IoV, but the datasets used do not contain scenario information such as road, weather, and vehicle models. Therefore, we expect a well-developed real-world dataset to include such scenario information, data and system heterogeneity, etc.
\section{Introduction}
\IEEEPARstart{Q}{uadrature} squeezed states are quantum resources in continuous-variable quantum information processing with quadrature amplitudes of light \cite{Squeeze:Walls,CVQI:PvL,EntanglementFromSq}. Especially, squeezed vacua are used as ancillary inputs for quantum operations such as quantum teleportation \cite{FurusawaTeleportation}, a quantum non-demolition gate \cite{QND:Shiozawa} and quantum key distribution \cite{KeydistributionSq}.
To realize a large-scale quantum circuit, it is important to utilize guided-wave optical components. Nowadays, with the development of telecommunication, highly reliable fiber-coupled optical components such as lasers, modulators and optical beamsplitters have become commercially available. However, generation of highly squeezed vacua still relies on optical parametric oscillators (OPOs) with bulk optics \cite{30yearsSq:Leuchs,7dBSq:Sasaki,15dBSq:Schnabel}. This is because guided-wave components have larger losses and lower durability against intense pump beams compared to free-space optics. OPOs with the capability of direct coupling to optical fibers \cite{Fiber:Fabre,takanashi} and a compact OPO on a breadboard \cite{CompactOPO} have been proposed, but the need to control and adjust the cavity length could be an obstacle to scaling up quantum circuits in the future.
By reducing propagation loss and improving the durability against intense pump beams, optical parametric amplification (OPA) in $\chi^{(2)}$ waveguides could take the place of OPOs. A practical advantage of OPAs is that, unlike OPOs, they do not require troublesome optical length control of the cavities. Additionally, OPAs can achieve THz-order operational bandwidths limited only by dispersion or phase-matching conditions \cite{BroadbandOPA,FiberBroadbandOPA,WaveguideBroadbandOPA,OPAFurusawa}, while the cavity structures of OPOs limit the bandwidth of the process.
Although it is still below the level required for quantum information processing, for example 3 dB, which is a condition for entanglement swapping \cite{3dBentanglementTheory}, the performance of fiber-coupled OPAs as sources of squeezed vacua has made remarkable progress in recent years. The squeezing level obtained from these components reached 1.83 dB in 2016 \cite{fiberedOPA18dB} and 2.00 dB in 2019 \cite{fiberOPA20dB0}.
In this letter, we report the generation and detection of a 4.0-dB squeezed vacuum from our newly developed fiber-coupled single-pass OPA module based on a dry-etched periodically poled LiNbO${}_{3}$ (PPLN) waveguide. The module consists of the PPLN ridge waveguide, lenses for fiber coupling, four optical fiber pigtails, and dichroic beamsplitters.
The high durability of the waveguide and the good separation provided by the dichroic beamsplitters allow us to inject an intense pump beam, resulting in a high squeezing level. We detect $-$4.0$\pm$0.1 dB of squeezing and 14.1$\pm$0.1 dB of anti-squeezing at 10 MHz with pump powers up to 330 mW.
We use a fiber-optic beamsplitter for homodyne detection, assuming applications of the module in fiber systems. Correcting for the deterioration caused by the measurement system, the squeezing and anti-squeezing levels at the output port of the module are estimated to be $-$5.7$\pm$0.1 dB and 14.9$\pm$0.1 dB.
\section{Device Design and Experimental Setup}
\begin{figure*}[!t]
\centering\includegraphics[width=13cm]{fig_waveguide.eps}
\caption{Design of our OPA module. (a) Photograph of the module. (b) Schematic of the module. The module consists of a 45-mm long PPLN waveguide, four dichroic beamsplitters, six lenses and four pigtails. In this experiment, the input pigtail for a 1.55 $\mu$m beam is not used. (c) Schematic of the ridge waveguide. The substrate is LiTaO${}_{3}$, and the core is trapezoidal ZnO-doped LiNbO${}_3$. The waveguide has periodic poling for quasi-phase matching. (d) A picture of a waveguide end face taken with a scanning electron microscope. (e) A graph of the electric field amplitude distribution at the end face of the waveguide calculated by a computer simulation.}
\label{waveguide}
\end{figure*}
Figs. \ref{waveguide}(a) and (b) show the external appearance and the internal schematic of our OPA module. The module consists of a 45-mm long PPLN ridge waveguide, four dichroic beamsplitters, six collimation lenses and four pigtails. The temperature of the waveguide is controlled for quasi-phase matching. Dichroic beamsplitters (high transmission at 0.78 $\mu$m, high reflection at 1.5 $\mu$m) are used to separate the squeezed vacuum from the pump beam. For better separation, the squeezed vacuum is reflected twice by the beamsplitters. The good separation allows the pump intensity to be increased without the transmitted pump beam disturbing the homodyne detection. The transmittance of the module is 56\% at 1.55 $\mu$m and 60\% at 0.78 $\mu$m, which is mainly due to propagation loss in the waveguide and coupling loss at both ends. The propagation loss is considered to be mainly caused by surface roughness on both sidewalls. The bandwidth of the parametric process in the PPLN waveguide is considered to be limited to THz order by its quasi-phase matching condition.
Fig. \ref{waveguide}(c) shows the schematic of our ridge waveguide. A core layer of PPLN is directly bonded to a LiTaO${}_{3}$ substrate. The waveguide is fabricated by dry etching with argon gas, using photoresist patterned by photolithography as an etching mask \cite{umeki-how2make}. Unlike diffusion methods, the direct bonding method does not cause defects in the crystal, which helps to increase the durability of the waveguide \cite{Kashiwazaki}. The dry etching method makes it possible to fabricate waveguides of various shapes. For instance, it enables the core size to be changed along the propagation direction, whereas the mechanical saws used to make diced waveguides can only move in straight lines. Taking advantage of this, we create tapered structures at both ends of the waveguide to obtain better coupling with optical fibers. The tapered structures make the spot of a propagating beam more circular, which optimizes the coupling efficiency with optical fibers under the constraint that the waveguide cross section is trapezoidal.
Fig. \ref{waveguide}(d) is a picture of a waveguide end face taken with a scanning electron microscope and Fig. \ref{waveguide}(e) is a graph of the electric field amplitude distribution at the end face calculated by a simulation using a finite-difference method (Optiwave Systems Inc., OptiBPM 12). The mode-match between the calculated mode and a TEM${}_{00}$ mode is 98\%, although the actual coupling efficiency with the fiber is considered to be slightly lower due to manufacturing and assembling errors.
\begin{figure}[!t]
\centering\includegraphics[width=8.5cm]{fig_setup.eps}
\caption{Schematic of the experimental setup. The seed laser is a single-frequency laser at 1553.3 nm. The output of the seed laser is amplified by a fiber amplifier. The amplified beam is split into two beams by a 10 dB (90:10) coupler, and the main output of the coupler pumps a frequency doubler after passing through a bandpass filter. The frequency-doubled beam pumps the OPA after passing through a variable optical attenuator consisting of a half-wave plate and a polarizing beamsplitter. The intensity of the frequency-doubled beam is monitored after transmission through the OPA. The tapped output of the 10 dB coupler is used as the local oscillator of the homodyne measurement after passing through a variable optical attenuator and a phase modulator. The squeezed vacuum from the OPA and the local oscillator interfere in a 3 dB coupler, and the difference of the intensities of the output beams of the coupler is detected by a balanced photodetector. Note that only the important elements are depicted. BPF, bandpass filter; VOA, variable optical attenuator; PBS, polarizing beamsplitter; HWP, half-wave plate; NC, not connected; PD, photodetector; PM, phase modulator; LO, local oscillator; AR, anti-reflection; BPD, balanced photodetector.}
\label{setup}
\end{figure}
Fig. \ref{setup} shows a schematic of the experiment. The source of continuous-wave laser light at 1553.3 nm is a narrow-linewidth, low-noise seed laser (RIO-OptaSense Inc., ORION module). The output of the seed laser is amplified by an Erbium-doped fiber amplifier (Keopsys, CEDA-C-PB-HP). The output of the fiber amplifier is split into two beams, which pass through bandpass filters (Alnair Labs, TFF-15-1-PM-L-100-SS-SA) to reduce noise due to amplified spontaneous emission from the fiber amplifier.
The brighter output of the beamsplitter pumps a fiber-coupled frequency doubler (NTT Electronics, WH-0776-000-F-B-C). The frequency-doubled beam passes through a custom-made fiber-pigtailed variable optical attenuator consisting of a half-wave plate (Casix, WPZ1210) and a polarizing beamsplitter (Sigma Koki, PBS-5-7800). This beam is used as the pump beam of the OPA module. The less intense output of the beamsplitter passes through a variable attenuator (Thorlabs, VOA50PM-FC) and is used as the local oscillator (LO) for homodyne detection. The effect of phase noise of the fiber system is reduced by matching the optical length of the LO path with that of the path for generating the squeezed vacuum. As a result, the phase fluctuation is negligible during the measurement period.
The OPA module has two output ports. One is for 0.78 $\mu$m and is used for monitoring the intensity of the pump beam. The intensity is measured by a Si photodetector (Newport 818-SL), and we estimate the incident pump power by dividing the monitored power by 0.6, the transmittance of the module at 0.78 $\mu$m. The other is for 1.5 $\mu$m, namely the port for the squeezed vacuum. The squeezed vacuum interferes with the LO in a 3 dB coupler (Thorlabs, PN1550R5F2). The phase of the LO is scanned by a phase modulator (Thorlabs, LN53-10-P-S-S-BNL). The output ports of the 3 dB coupler are spliced to anti-reflection (AR) coated fibers (Thorlabs, P1-1550PMAR-2). The fibers are connected to a homemade fiber-receptacle InGaAs balanced photodetector consisting of lenses (Thorlabs, TC25FC-1550 and LA1134-C), mirrors (Sigma Koki, TFVM-25.4C05-1550), photodiodes (Laser Components, IGHQEX0100-1550-10-1.0-SPAR-TH-40) and an operational amplifier (Analog Devices, AD829) with 18 k$\Omega$ of transimpedance. The signal from the detector is measured by a spectrum analyzer (Keysight, N9010A).
The spectrum analyzer is set to zero-span mode at the measurement frequency of 10 MHz. The resolution and video bandwidths are set to 3 MHz and 510 Hz, respectively. The measurement frequency is the highest frequency at which the detector's performance does not deteriorate. The large resolution bandwidth and small video bandwidth help to obtain a clear signal. Since the bandwidth of a single-pass OPA is on the order of terahertz, the frequency dependence of the squeezing level is negligible on the order of megahertz.
\section{Result and discussion}
\begin{figure}[!t]
\centering\includegraphics[width=8cm]{fig_200scan_single.eps}
\caption{Raw data of noise power as a function of the phase of the LO beam (scanned by a 1-Hz triangle wave). The symmetric structure around 0.56 s is due to the reversal of the scanning direction of the triangular wave. The intensity of the incident pump beam is 330 mW. The center frequency is set to 10 MHz. The resolution bandwidth is set to 3 MHz and the video bandwidth is set to 510 Hz. (a) Noise of a squeezed vacuum. (b) Shot noise.}
\label{200scan}
\end{figure}
\begin{figure}[!t]
\centering\includegraphics[width=8cm]{fig_pumpdep_60per.eps}
\caption{Pump power dependence of the squeezed and anti-squeezed noise levels normalized by the shot noise level. Note that the incident pump power is calculated by dividing the transmitted pump power measured at the output port of the module by 0.6, the transmittance of the module at 0.78 $\mu$m. Circles are measured values and the curves are theoretical fits.}
\label{pumpdep}
\end{figure}
Fig. \ref{200scan} shows the squeezed and anti-squeezed noise together with the shot noise. The measurement frequency is set to 10 MHz. The intensity of the pump beam is 330 mW. The transmittance of the module at 0.78 $\mu$m is 60\%. The intensity of the LO beam is 3.0 mW.
The measured squeezing and anti-squeezing levels are $-$4.0$\pm$0.1 dB and 14.1$\pm$0.1 dB.
Fig. \ref{pumpdep} shows the pump power dependence of the squeezing and anti-squeezing levels. The squeezing and anti-squeezing levels $R_{\pm}$ with total detection loss $L$ are described as \cite{SqSHGPPLN}:
\begin{eqnarray}
R_{\pm} &=& L + (1-L) \exp(\pm 2\sqrt{ap}).
\end{eqnarray}
Here, $p$ is the incident pump power and $a$ is the efficiency of second harmonic generation. $L$ and $a$ are fitted to be 38.6\% and 1034\% W${}^{-1}$, respectively.
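As a cross-check, the fit can be reproduced with a short script of the following form; the data arrays below are noiseless placeholders generated from the fitted values themselves, standing in for the measured points of Fig. \ref{pumpdep}:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def r_pm(p, L, a, sign):        # model above in linear units; p in W, a in 1/W
    return L + (1 - L) * np.exp(sign * 2 * np.sqrt(a * p))

def stacked(p, L, a):           # squeezing and anti-squeezing together
    return np.concatenate([r_pm(p, L, a, -1.0), r_pm(p, L, a, +1.0)])

pump = np.array([0.05, 0.10, 0.20, 0.33])    # pump power (W)
meas = stacked(pump, 0.386, 10.34)           # placeholder "data"
(L_fit, a_fit), _ = curve_fit(stacked, pump, meas, p0=[0.3, 8.0],
                              bounds=([0.0, 0.0], [1.0, 50.0]))
print(L_fit, a_fit)             # ~0.386 and ~10.34 (= 1034% / W)
\end{verbatim}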
To obtain the breakdown of the total detection loss $L$, the transmittance of each element on the path of the squeezed vacuum is measured. The transmittance of the OPA module is measured to be 56\%, and that of the 3 dB coupler including a fiber joint loss is measured to be 45\%. Since the squeezed vacuum is generated inside the OPA module, assuming that it is generated in the middle of the waveguide, the effective loss can be taken to be $1-\sqrt{0.56}$, namely 25\%. For the 3 dB coupler, since the transmittance of a lossless coupler is 50\%, the effective loss is the excess loss of $1-0.45/0.50$, namely 10\%. The responsivity of the fiber-receptacle detector is measured to be 1.16 A/W, which can be regarded as an effective loss of 7\%. The equivalent loss of the electronic noise is 2\%. Thus, the total detection loss is calculated to be 38\%, which matches the fitted value well.
The coefficient $a$ represents the nonlinear efficiency of the waveguide. The fitted value is consistent with that of a similar waveguide, 1160\% W${}^{-1}$ \cite{PPLNOPA6dB}.
Excluding the degradation due to the detection loss, the original squeezing and anti-squeezing levels at the output port of the module can be estimated to be $-$5.7$\pm$0.1 dB and 14.9$\pm$0.1 dB, respectively, which are consistent with those of a similar waveguide measured in a free-space setup \cite{PPLNOPA6dB}. The loss of the squeezed vacuum in the module is estimated to be as low as 25\%, which is considered to be mainly due to the propagation loss in the waveguide and the coupling mismatch with the output fiber. The propagation loss could be reduced by improving the dry etching process \cite{DryImprove} or by performing wet etching after dry etching \cite{WetAfterDry}.
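The inferred values can be checked with a short calculation; here we assume (our reading, not stated explicitly above) that the loss downstream of the module output is the combination of the coupler excess loss (10\%), the detector responsivity loss (7\%), and the electronic-noise-equivalent loss (2\%):
\begin{verbatim}
import numpy as np

def undo_loss(r_db, loss):
    # invert R = loss + (1 - loss) * R0 for the downstream loss
    r = 10 ** (r_db / 10)         # measured level in linear units
    return 10 * np.log10((r - loss) / (1 - loss))

eta = 0.90 * 0.93 * 0.98          # downstream transmission (our assumption)
print(undo_loss(-4.0, 1 - eta))   # ~ -5.7 dB squeezing at module output
print(undo_loss(14.1, 1 - eta))   # ~ 14.9 dB anti-squeezing at module output
\end{verbatim}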
\section{Conclusion}
Measurement of a squeezed vacuum from a newly developed fiber-coupled single-pass OPA module was demonstrated in a fiber-optic setup. The PPLN ridge waveguide was fabricated by dry etching, which makes it possible to fabricate a highly durable waveguide and to create tapered structures at the ends of the waveguide to improve the coupling efficiency. The measured squeezing level is $-$4.0$\pm$0.1 dB, which is, to our knowledge, the best squeezing with a fiber-coupled single-pass OPA to date. The module has input and output fibers for both the fundamental and the second harmonic. The good separation by the dichroic beamsplitters and the high durability of the waveguide enable injection of an intense pump beam of over 300 mW without any problems in the optical path for the fundamental. We performed homodyne measurements with a fiber-optic beamsplitter and a fiber-receptacle balanced detector, looking toward fiber-optic applications. The original squeezing level at the output port of the module is estimated to be $-$5.7$\pm$0.1 dB excluding the detection loss, which is consistent with that of a similar waveguide measured in a free-space setup \cite{PPLNOPA6dB}. A modularized, alignment-free, fiber-coupled squeezer with high-level noise reduction would play an important role in implementing quantum information processing with light in the near future.
\section*{Acknowledgments}
We thank Carlo Page, Taichi Yamashima and Asuka Inoue for feedback on the manuscript.
\bibliographystyle{IEEEtran}
\section{Introduction}
Even though in the general theory of relativity (GR) the geometry of spacetime is modelled by a \mbox{(pseudo-)Riemannian} metric of Lorentzian signature,
there is no clear physical principle, nor experimental evidence, that tells us that this spacetime geometry should necessarily be \mbox{(pseudo-)Riemannian}. In fact, as suggested already in 1985 by Tavakol and Van den Bergh \cite{TAVAKOL198523, Tavakol_1986, Tavakol2009}, the axiomatic approach by Ehlers, Pirani and Schild (EPS) \cite{Ehlers2012} is compatible with Finsler geometry, a natural extension of \mbox{(pseudo-)Riemannian} geometry. This was originally overlooked due to too restrictive differentiability assumptions, as recently pointed out in \cite{Lammerzahl:2018lhw} and then worked out in detail in \cite{Bernal_2020}. Other axiomatic approaches also allow for types of geometry more general than the type used in GR, see e.g. \cite{Bubuianu2018}. This indicates that such types of geometries should not a priori be excluded from our theories and motivates the study of extensions of general relativity based on more general spacetime geometries. \\
In this regard Finsler geometry is the natural candidate as it provides the most general geometric framework that is still compatible with the clock postulate in the usual sense, namely that the proper time interval measured by an observer between two events can be defined as the length of its worldline connecting these events, in this case the Finslerian length rather than the \mbox{(pseudo-)Riemannian} length. We remark that Weyl geometry, another generalization of Lorentzian geometry, is also compatible with the clock postulate, but in that case the definition of proper time has to be revised \cite{Perlick1987}. \\
Further motivation for the study of Finsler spacetime geometry comes from quantum gravity phenomenology \cite{Addazi_2022}.
Inspired by various approaches to quantum gravity, a generic feature of phenomenological or effective quantum gravity models is the presence of Planck-scale modified dispersion relations (MDR), related to departure from (local) Lorentz symmetry \cite{Addazi_2022, Amelino_Camelia_2013, Mattingly_2005}, which may manifest either in the sense of Lorentz invariance violation (LIV) or in the sense of deformed Lorentz symmetry. It turns out that such MDRs generically induce a Finsler geometry on spacetime \cite{Girelli:2006fw}. The mathematical details of this were investigated in \cite{Raetzel:2010je,Rodrigues:2022mfj}; see e.g. \cite{Amelino-Camelia:2014rga,Lobo_2017, Letizia:2016lew} for applications to specific quantum gravity phenomenology models. \\
Here we consider the (action-based) approach to Finsler gravity outlined in \cite{Pfeifer:2011xi,Hohmann_2019}. Structurally the theory is completely analogous to general relativity, but Einstein's field equation is replaced by Pfeifer and Wohlfarth's field equation. For \mbox{(pseudo-)Riemannian} spacetimes the latter reduces to the former. Although any solution to the field equations of GR is a solution in Finsler gravity, not many exact, properly Finslerian solutions are known as of yet. To the best of our knowledge the only ones currently known in the literature are the ($m$-Kropina type) Finsler pp-waves \cite{Fuster:2015tua} and their generalization as Very General Relativity (VGR) spacetimes \cite{Fuster:2018djw}, and the Randers pp-waves \cite{Heefer_2021}.\\
Here we introduce a large class of exact vacuum solutions that contains most of the aforementioned solutions as special cases, the only exception being those solutions in \cite{Fuster:2018djw} that are not of pp-wave type. Namely, we prove that any Finsler metric constructed from a \mbox{(pseudo-)Riemannian} metric $\alpha$ and a 1-form $\beta$ that is covariantly constant with respect to $\alpha$, is an exact vacuum solution in Finsler gravity if $\alpha$ is a vacuum solution in general relativity. We classify all such solutions, leading to two possibilities: either $\alpha$ is flat Minkowski space, or $\alpha$ is a pp-wave. Our solutions are $(\alpha,\beta)$-metrics of Berwald type.\\
The natural question that arises is whether and how such spacetimes can be physically distinguished from their general relativistic counterparts. To answer this question we consider the linearized versions of our exact solutions, which may be interpreted as Finslerian gravitational waves, and we study their physical effect. More precisely, we ask what would be observed in an interferometer experiment when such a Finslerian gravitational wave passes the earth, and what the difference with a classical general relativistic gravitational wave would be. The relevant observable measured in interferometer experiments is essentially the radar distance, so we first recall the calculation of this radar distance in the case of a standard GR gravitational wave, reproducing the known results \cite{Rakhmanov_2009}. Then we repeat the calculation in the case of a Finslerian gravitational wave. Although at first sight the expression for the Finsler radar length looks different from the corresponding expression in GR, we show that this is nothing but a coordinate artifact. Remarkably, when the two expressions are interpreted correctly in terms of observable quantities, it becomes clear that there is in fact no observational difference between the Finsler and GR case, at least as far as radar distance measurements are concerned. We discuss the significance of this. To the best of our knowledge this is the first time an explicit expression for the Finslerian radar length has been obtained in the case of finite spacetime separations, and as such our work may be seen as a proof of concept. In contrast, the radar length for infinitesimal separations has been studied in \cite{Pfeifer_2014,Gurlebeck:2018nme}. \\
We do point out that our results rely on the assumption that the amplitude of the gravitational wave, as well as the parameter $\lambda$ that characterizes the departure from (pseudo)-Riemannian geometry, are sufficiently small, so that a certain perturbative expansion is valid. This nevertheless seems physically justified. We argue in a heuristic manner that up to first order in $\lambda$, any physically viable $(\alpha,\beta)$-metric can be equivalently described by a slightly modified version of a standard Randers metric.\\
Indeed, the causal structure of the standard Randers metric does not in general have a straightforward physical interpretation. We therefore propose to modify the Randers metric slightly, only changing some relative signs in different subsets of the tangent bundle. We then prove that these modified Randers metrics have the nice property that their causal structure is completely equivalent to the causal structure of some auxiliary \mbox{(pseudo-)Riemannian} metric. This analysis is done in full generality, i.e. not just for our exact solutions. In the special case, however, that the defining 1-form of the Randers metric is covariantly constant (as is the case for our solutions) we prove that not only the causal structure, but also the affine structure of the Finsler metric coincides with that of the auxiliary \mbox{(pseudo-)Riemannian} metric, i.e. the timelike, spacelike and null geodesics of the Finsler metric can be understood, respectively, as the timelike, spacelike and null geodesics of the auxiliary \mbox{(pseudo-)Riemannian} metric. This leads to the particularly nice property that the existence of radar neighborhoods is guaranteed \cite{Perlick2008}, i.e. that given an observer and any event in spacetime, there is (at least locally) exactly one future pointing light ray and one past pointing light ray that connect the event to the worldline of the observer. This is of essential importance in our work, because without this property the notion of radar distance would not even make sense.
\subsection{Structure of this article}
The paper is organized as follows. We begin in Section \ref{sec:Finsler_gravity} with a discussion of Finsler geometry and the core ideas behind Finsler gravity. Then in Section \ref{sec:ab_metrics} we introduce $(\alpha,\beta)$-metrics, and in particular Randers metrics, and discuss their relevance to Finsler gravity. We then introduce our new solutions to the field equations and show that after linearization these solutions may be interpreted as Finslerian gravitational waves. \\
Next, in Section \ref{sec:Randers} we propose our modification of the standard Randers metric and prove that it has very satisfactory properties with respect to its causal structure, affine structure, Lorentzian signature, etc.\\
Section \ref{sec:radar_distance} is devoted to the calculation of the radar distance at the moment a Finsler gravitational wave passes, say, the Earth. We start by recalling the analogous calculation for a standard gravitational wave in general relativity, and subsequently we compute the radar distance in the Finsler setting, clearly pointing out the differences with the general relativity case.\\
We conclude in Section \ref{sec:discussion}.
\section{Finsler gravity}
\label{sec:Finsler_gravity}
Before we introduce the basic notions in Finsler geometry, some remarks about notation are in order. We will usually work in local coordinates, i.e., given a smooth manifold $M$ we assume that some chart $\phi:U\subset M\to \mathbb R^n$ is provided, and we identify any $p\in U$ with its image $\phi(p)\in\mathbb R^n$. For $p\in U$ each $Y\in T_pM$ (the tangent spaces to $M$) can be written as $Y = y^i\partial_i\big|_p$, where the tangent vectors $\partial_i \equiv \frac{\partial}{\partial x^i}$ furnish the chart-induced basis of $T_pM$. This provides natural local coordinates on the tangent bundle $TM$ via the chart
\begin{align}
\tilde\phi: \tilde U \to \mathbb R^n\times\mathbb R^n,\qquad \tilde U = \bigcup_{p\in U} \left\{p\right\}\times T_p M\subset TM,\qquad \tilde\phi(p,Y) = (\phi(p),y^1,\dots,y^n)\eqqcolon (x,y).
\end{align}
These local coordinates on $TM$ in turn provide a natural basis of its tangent spaces $T_{(x,y)}TM$, namely
\begin{align}
\bigg\{\frac{\partial}{\partial x^i} = \partial_i, \frac{\partial}{\partial y^i} = \bar{\partial}_i\bigg\}.
\end{align}
Below we start by introducing the basic notions of Finsler geometry in the positive definite case. The generalization to Lorentzian signature contains some technicalities and will be introduced subsequently. Our spacetime signature convention is $(-,+,+,+)$.
\subsection{Finsler spaces of positive definite signature}\label{sec:FinslerSpaces}
Before discussing Finsler spacetimes, we first introduce the theory in the positive definite case, which is certainly simpler and cleaner. In section \ref{sec:Finser_spacetimes} we discuss what is different in the case of Finsler spacetimes with Lorentzian signature.\\
A Finsler space is a pair $(M,F)$, where $M$ is a smooth manifold and $F$, the so-called Finsler function, is a map $F:TM\to[0,\infty)$ that satisfies the following axioms:
\begin{itemize}
\item $F$ is smooth on $TM\setminus 0$;
\item $F$ is (positively) homogeneous of degree one with respect to $y$:
\begin{align}
F(x,\lambda y) =\lambda F(x, y)\,,\quad \forall \lambda>0\,;
\end{align}
\item The \textit{fundamental tensor}, with components $g_{ij} = \bar\partial_i\bar\partial_j \left(\frac{1}{2}F^2\right)$, is positive definite.
\end{itemize}
For each $x\in M$ the map $y\mapsto F(x,y)$ is what is known as a Minkowski norm\footnote{Not to be confused with the flat Lorentzian Minkowski metric.} on $T_xM$. The homogeneity condition ensures that the length of any curve $\gamma$, defined as
\begin{align}
L(\gamma)=\int F(\dot{\gamma})\,\text{d} \lambda = \int F(x,\dot{x})\,\text{d} \lambda,\qquad \dot{\gamma}=\frac{d\gamma}{d\lambda},
\end{align}
is independent of its parameterization. A fundamental result that is essential for doing computations in Finsler geometry is Euler's theorem for homogeneous functions. It says that if $f:\mathbb R^n\to\mathbb R$ is (positively) homogeneous of degree $r$, i.e., $f(\lambda y) =\lambda^r f(y)$ for all $\lambda>0$, then $y^i\frac{\partial f}{\partial y^i}(y) = r f(y)$. In particular, this implies the identity
\begin{align}
g_{ij}(x,y)y^i y^j = F(x,y)^2.
\end{align}
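For completeness we spell out how this identity follows. Since $\tfrac{1}{2}F^2$ is (positively) homogeneous of degree two in $y$, each derivative $\bar\partial_j\left(\tfrac{1}{2}F^2\right)$ is homogeneous of degree one, so applying Euler's theorem twice yields
\begin{align}
g_{ij}(x,y)y^iy^j = y^iy^j\,\bar\partial_i\bar\partial_j\left(\tfrac{1}{2}F^2\right) = y^i\,\bar\partial_i\left(\tfrac{1}{2}F^2\right) = F(x,y)^2.
\end{align}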
Hence the length of curves is formally identical to the length in Riemannian geometry, the difference being that now the metric tensor may depend on the direction in addition to position.\\
The fundamental theorem of Riemannian geometry says that any Riemannian manifold admits a unique torsion-free affine connection that is compatible with the metric, the Levi-Civita connection. A similar result is true in Finsler geometry, and this is sometimes called the fundamental lemma of Finsler geometry: it states that any Finsler space can be endowed with a canonical connection. An essential difference with Riemannian geometry is that the connection on a Finsler space is in general not a linear one. Let us therefore briefly recall the notion of a non-linear connection. A non-linear (or Ehresmann) connection is a smooth decomposition of $TTM$ into a horizontal and a vertical subbundle,
\begin{align}
TTM = HTM \oplus VTM,
\end{align}
where $\oplus$ denotes the Whitney sum of vector bundles. This provides the most general notion of parallel transport of vectors between tangent spaces, and, in particular, it allows one to define whether a curve $\gamma:I=(a,b)\to M$ is autoparallel (`straight'). Intuitively, we would like to call a curve straight whenever the velocity $\dot\gamma:I\to TM$ is `constant'. However, there is no unique way to say, \textit{a priori}, what `constant' means in this context, as each image point of $\dot\gamma$ lies in a different tangent space. As a matter of fact, as $\dot\gamma$, living in the tangent bundle, also contains all information about the base point $\gamma$, it could never be truly constant. Indeed, all we can ask is that $\dot\gamma$ change only `parallel to $M$', and not in the direction of the fibres of $TM$. The rate of change of $\dot\gamma$, i.e. $\ddot\gamma$, is an element of $TTM$. Therefore, in order to be able to say what we mean by a straight line we should split the directions in $TTM$ into a space $HTM$ of directions parallel to $M$ and a space of directions $VTM$ along the fibers of $TM$. We then say that a curve $\gamma:I\to M$ is autoparallel if $\ddot\gamma(\lambda)\in H_{\dot\gamma(\lambda)}TM$ for all $\lambda\in I$. The vertical subbundle $VTM$ is canonically defined on any smooth manifold, namely
\begin{align}
VTM = \text{span}\left\{\bar\partial_i\right\}.
\end{align}
However, there is in general no canonical choice of the horizontal subbundle, so in order to be able to speak about straight curves in the most general sense, one needs to select one. To do so, a set of functions $N^i_j(x,y)$, the connection coefficients, may be specified, leading to the following horizontal subbundle of $TTM$:
\begin{align}
HTM = \text{span}\left\{\delta_i\equiv \partial_i - N^j_i\bar\partial_j\right\}.
\end{align}
Parallel transport of a vector $V$ along $\gamma$ is then characterized by the parallel transport equation\footnote{Note that the parallel transport map is in general nonlinear. Some authors (e.g. \cite{Bucataru}) choose to define parallel transport differently, namely by requiring a priori that parallel transport should be linear, which leads to the alternative parallel transport equation $\dot V^i + N^i_j(\gamma,\dot\gamma) V^j=0$. This approach, however, seems unnatural to us. Here we follow e.g. \cite{Szilasi}, where parallel transport of a vector is defined via its unique horizontal lift along a given curve. In this case parallel transport is linear if and only if the connection is linear.}
\begin{align}
\label{eq:nonlinear.parallel.transport.eq}
\dot V^i + N^i_j(\gamma,V)\dot \gamma^j = 0\,,
\end{align}
and consequently, autoparallels are precisely the curves that satisfy
\begin{align}
\label{eq:nonlinear.geodesic.eq}
\ddot \gamma^i + N^i_j(\gamma,\dot \gamma)\dot \gamma^j = 0\,.
\end{align}
As mentioned, on a generic smooth manifold there is no canonical choice of the connection\footnote{From now on we will refer to the connection coefficients $N^i_j$ simply as the connection.} $N^i_j$, but any Finsler metric induces one, the \textit{Cartan non-linear connection}. This is the unique homogeneous (non-linear) connection on $TM$ that is smooth on $TM\setminus\{0\}$, torsion-free and compatible with $F$. Torsion-freeness is the property that $\bar\partial_iN^k_j = \bar\partial_jN^k_i$, and metric-compatibility is the property that $\delta_i F^2 =0$, in terms of the \textit{horizontal derivative} induced by the connection, $\delta_i \equiv \partial_i-N^j_i\bar\partial_j$. Alternatively, metric compatibility can be defined as the property that $\nabla g_{ij} \equiv y^k\delta_k g_{ij} - N^k_i g_{kj} - N^k_j g_{ki} = 0$, in terms of the so-called dynamical covariant derivative $\nabla$. For torsion-free homogeneous connections the latter definition of metric-compatibility is equivalent to the former. The Cartan non-linear connection is given in terms of the Finsler function $F$ by
\begin{align}
N^i_j(x,y) = \frac{1}{4}\bar{\partial}_j \bigg(g^{ik}\big(y^l\partial_l\bar{\partial}_k F^2 - \partial_k F^2\big)\bigg)\,
\end{align}
and may be viewed as a generalization of the Levi-Civita connection to Finsler spaces. The autoparallel curves of the non-linear connection coincide with the geodesics (locally length-minimizing curves) of $F$. The curvature tensor, curvature scalar and the Finsler Ricci tensor
of $(M,F)$ are defined, respectively, as
\begin{align}\label{eq:definition_curvatures}
R^i{}_{jk}(x,y) = -[\delta_j,\delta_k]^i = \delta_j N^i_k(x,y)-\delta_k N^i_j(x,y),\quad \text{Ric}(x,y) = R^i{}_{ij}(x,y)y^j,\quad R_{ij}(x,y) = \frac{1}{2}\bar\partial_i \bar\partial_j\text{Ric}.
\end{align}
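As a concrete illustration of the formula for $N^i_j$ above, the following short symbolic sketch (our own, using the open-source Python package \texttt{sympy}; all function and variable names are ours) computes the Cartan non-linear connection directly from $F^2$ and verifies, for a simple Riemannian example, that it reduces to the Levi-Civita form $N^i_j = \Gamma^i_{jk}y^k$:
\begin{verbatim}
import sympy as sp

def cartan_nonlinear_connection(F2, xs, ys):
    # fundamental tensor g_ij = (1/2) d^2 F^2 / dy^i dy^j
    n = len(xs)
    g = sp.Matrix(n, n, lambda i, j: sp.Rational(1, 2)*sp.diff(F2, ys[i], ys[j]))
    ginv = g.inv()
    # geodesic spray coefficients G^i = (1/4) g^{ik} (y^l d_l dbar_k F^2 - d_k F^2)
    G = [sp.Rational(1, 4)*sum(
            ginv[i, k]*(sum(ys[l]*sp.diff(F2, xs[l], ys[k]) for l in range(n))
                        - sp.diff(F2, xs[k]))
            for k in range(n))
         for i in range(n)]
    # connection coefficients N^i_j = dbar_j G^i
    return [[sp.simplify(sp.diff(G[i], ys[j])) for j in range(n)] for i in range(n)]

# Riemannian example: F^2 = (y1)^2 + exp(2*x1)*(y2)^2
x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
N = cartan_nonlinear_connection(y1**2 + sp.exp(2*x1)*y2**2, [x1, x2], [y1, y2])
print(N)  # [[0, -exp(2*x1)*y2], [y2, y1]], i.e. N^i_j = Gamma^i_jk y^k
\end{verbatim}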
\subsection{Berwald spaces and the Riemannian limit}\label{sec:Berwald}
A Berwald space is a Finsler space $(M,F)$ for which the Cartan non-linear connection is in fact a linear connection on $TM$.\footnote{See \cite{Szilasi2011} for an overview of the various equivalent characterizations of Berwald spaces and \cite{Pfeifer:2019tyy} for a more recent equivalent characterization.} What this means is that the connection coefficients are of the form
\begin{align}
N^i_j(x,y) = \Gamma^i_{jk}(x)y^k
\end{align}
for a set of functions $\Gamma^i_{jk}:M\to\mathbb R$. From the transformation behavior of $N^i_j$ it can be inferred that the functions $\Gamma^i_{jk}$ have the correct transformation behavior to be the Christoffel symbols of a (torsion-free) affine connection on $M$. We will refer to this affine connection as the associated affine connection, or simply \textit{the} affine connection on the Berwald space.
The parallel transport \eqref{eq:nonlinear.parallel.transport.eq} and autoparallel equations \eqref{eq:nonlinear.geodesic.eq} reduce in this case to the familiar equations
\begin{align}
\dot V^i + \Gamma^i_{jk}(\gamma)\dot \gamma^j V^k = 0, \qquad \ddot \gamma^i + \Gamma^i_{jk}(\gamma)\dot \gamma^j \dot \gamma^k = 0
\end{align}
in terms of the Christoffel symbols. A straightforward calculation reveals that the curvature tensors of a Berwald space can be written as follows
\begin{align}
\label{eq:symm_ricci}
R^j{}_{kl} = \bar R_i{}^j{}_{kl}(x)y^i, \qquad \text{Ric} = \bar R_{ij}(x)y^i y^j, \qquad R_{ij} = \frac{1}{2}\left(\bar R_{ij}(x) + \bar R_{ji}(x)\right),
\end{align}
in terms of $\bar R_l{}^i{}_{jk}= 2\partial_{[j} \Gamma^i_{k]l} + 2\Gamma^i_{m[j}\Gamma^m_{k]l}$ and $\bar R_{lk} = \bar R_l{}^i{}_{ik}$, the Riemann tensor and Ricci tensor, respectively, of the associated affine connection\footnote{We use the notations $T_{[ij]} = \frac{1}{2}\left(T_{ij}-T_{ji}\right)$ and $T_{(ij)} = \frac{1}{2}\left(T_{ij}+T_{ji}\right)$ for (anti-)symmetrization.}, defined in the usual way. In fact, for positive definite Finsler spaces, it follows by Szab\'o's metrization theorem that $R_{ij} = \frac{1}{2}\left(\bar R_{ij} + \bar R_{ji}\right) = \bar R_{ij} $, but this does not extend to Finsler spacetimes in general \cite{Fuster_2020}.\\
Finsler geometry reduces to Riemannian geometry when the fundamental tensor $g_{ij}(x,y)=g_{ij}(x)$ is independent of the direction $y$, i.e., if the fundamental tensor is a Riemannian metric. Equivalently, the space is Riemannian if $F^2$ is quadratic in the $y$-coordinates. In this case the non-linear connection is actually linear, so that, in particular, any Riemannian manifold is Berwald. In fact, the associated linear connection is in this case just the Levi-Civita connection of the Riemannian metric.
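As a quick check, if $F^2 = a_{kl}(x)y^ky^l$ then
\begin{align}
g_{ij}(x,y) = \bar\partial_i\bar\partial_j\left(\tfrac{1}{2}a_{kl}(x)y^ky^l\right) = a_{ij}(x),
\end{align}
so the fundamental tensor indeed reproduces the Riemannian metric itself.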
\subsection{Finsler spacetimes}
\label{sec:Finser_spacetimes}
The generalization of positive definite Finsler geometry to indefinite signatures is by no means a trivial matter, and there is as yet no consensus on what the proper definition should be. To see the basic issue, note that if the fundamental tensor $g_{\mu\nu}$ has Lorentzian signature then there will be (non-zero) null vectors $v\in T_x M$ for which $g_{\mu\nu}v^\mu v^\nu=0$. Then $F(x,v)=\sqrt{g_{\mu\nu}(x,v)v^\mu v^\nu}=0$ even though $v\neq 0$, and since $F^2$ changes sign across such null directions, $F$ cannot be smooth everywhere on $TM\setminus 0$, violating one of the axioms introduced in the previous section. Moreover, for spacelike (or timelike, depending on the convention) directions $w$, $F(x,w)$ would even be imaginary. Thus some things clearly need to be modified in order to give an acceptable definition of a Finsler spacetime, and various approaches are possible. One classical approach \cite{Beem} is to work with $L = F^2$ instead of $F$. Another is to restrict the domain of definition of $F$, for instance to those $(x,y)$ for which $F(x,y)^2 = g_{\mu\nu}(x,y)y^\mu y^\nu>0$ \cite{Asanov}. Several combinations of the two approaches and other variations have been proposed \cite{Pfeifer:2011tk, Pfeifer:2011xi,Lammerzahl:2012kw, Javaloyes2014-1, Javaloyes2014-2}. \\
Which of these general definitions should be the `correct' one is not terribly relevant for our present purposes. Therefore our approach here will be to simply replace the subbundle $TM\setminus 0\subset TM$ by a generic conic subbundle $\mathcal A\subset TM\setminus 0$, i.e. a conic\footnote{The property of being conic means that if $(x,y)\in\mathcal A$ then also $(x,\lambda y)\in\mathcal A$, for any $\lambda>0$.} open subset of $TM\setminus 0$. Furthermore we will not restrict $F$ to have only positive values. Thus we will be using the following definition. \\
A Finsler spacetime is a triple $(M,\mathcal A,F)$, where $M$ is a smooth manifold, $\mathcal A$ is a conic subbundle of $TM\setminus 0$ (with `non-empty' fibers) and $F$, the so-called Finsler function, is a map $F:\mathcal A\to \mathbb R$ that satisfies the following axioms:
\begin{itemize}
\item $F$ is smooth on $\mathcal A$;
\item $F$ is (positively) homogeneous of degree one with respect to $y$:
\begin{align}
F(x,\lambda y) =\lambda F(x, y)\,,\quad \forall \lambda>0\,;
\end{align}
\item The \textit{fundamental tensor}, with components $g_{\mu\nu} = \bar\partial_\mu\bar\partial_\nu \left(\frac{1}{2}F^2\right)$, has Lorentzian signature on $\mathcal{A}$.
\end{itemize}
The discussion and results (for the connection, curvature tensors, etc.) treated in sections \ref{sec:FinslerSpaces} and \ref{sec:Berwald} apply verbatim for Finsler spacetimes, with the understanding that we only consider points $(x,y)\in\mathcal A$. Throughout the article we will also assume that the spacetime dimension is $1+3$.\\
The definition of a Finsler spacetime given above is a very \textit{weak} one in the sense that most other definitions appearing in the literature are more restrictive. Accordingly, our definition admits a large class of geometries, many of which will not be physically viable. This is, in our opinion, a feature rather than a bug, as most of the results in this article can be proven without further restrictions. It should be understood, however, that in order to guarantee that a viable physical interpretation is possible, the geometry should be subjected to more stringent requirements.
\\
\subsection{A note about causal structure and physical interpretation}
\label{sec:Finsler_spacetimes_interpretation}
Given a Finsler spacetime geometry, it is natural to postulate, in analogy with GR, that matter travels along timelike geodesics and light travels on null geodesics. The generalization of the notion of \textit{null} direction is mathematically straightforward. A vector $y^\mu$ at a point $x^\mu$ is said to be null (or lightlike) if $F(x,y)^2 = g_{\mu\nu}(x,y)y^\mu y^\nu=0$. However, the structure of the light cone, composed of such null vectors, may be non-trivial. In GR it is always the case that the light cone separates the tangent space at each point into three connected components, which we may interpret as forward-pointing timelike vectors, backward-pointing timelike vectors, and spacelike vectors, respectively. It is then a consequence that a timelike vector is one that has negative (or positive, depending on the convention) norm with respect to the Lorentzian metric. For a generic Finsler spacetime geometry these properties of the lightcone structure are by no means guaranteed and as such it is not obvious in general how to even define what one means by a timelike vector. It certainly does not suffice to define them as vectors of positive length. We do not discuss this issue any further in its full generality here. Only in the specific case of the Randers metric, in Section \ref{sec:Randers}, will we dive into the details. We argue that the causal structure of the standard Randers metric does not have a straightforward physical interpretation, but we prove that, by modifying the definition only slightly, the causal structure of such a modified Randers metric has exactly the desirable properties mentioned above in the case of GR, allowing for a straightforward physical interpretation. This will be exploited in Section \ref{sec:radar_distance}, where we compute the radar distance for a Finslerian gravitational wave of (modified) Randers type passing an interferometer.\\
It is worth mentioning that in the ideal case the (forward and backward) timelike cones should be contained in the subbundle $\mathcal A$. This statement is essentially the condition that geometry is well-defined for all timelike infinitesimal spacetime separations. This property is satisfied by our modified Randers metrics (up to a set of measure zero). It can be argued that it is not strictly necessary for spacelike vectors to be contained in $\mathcal A$, as it would not be possible, not even in principle, to perform any physical experiment that probes such directions. Whether the lightcone should be contained in $\mathcal A$ is a more delicate question, which we will not further explore here.
\subsection{The field equations}
In the context of Finsler gravity, arguably the simplest and cleanest proposal for a vacuum field equation was the one by Rutz \cite{Rutz}. The Rutz equation, Ric $= 0$, can be derived from the geodesic deviation equation in complete analogy to the way Einstein's vacuum field equation, $R_{\mu\nu}=0$ (to which it reduces in the classical \mbox{(pseudo-)Riemannian} setting), can be derived by considering geodesic deviation. \\
However, it turns out that Rutz's equation is \textit{not} variational, i.e. it cannot be obtained by extremizing an action functional. In fact, its variational completion (i.e. the variational equation that is \textit{as close as possible to it}, in a well-defined sense \cite{Voicu_2015}) turns out to be the field equation that was proposed by Pfeifer and Wohlfarth in \cite{Pfeifer:2011xi} using a Finsler extension of the Einstein-Hilbert action \cite{Hohmann_2019}. This is again in complete analogy to the situation in GR, where the vacuum Einstein equation in the form $R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R = 0$ is also precisely the variational completion of the equation $R_{\mu\nu}=0$ \cite{Voicu_2015}. While in the GR case the completed equation happens to be equivalent to the former, this is not true any longer in the Finsler setting.\\
Although several other proposals have been made as well \cite{Horvath1950,Horvath1952, Ikeda1981, Asanov1983, Chang:2009pa,Kouretsis:2008ha,Stavrinos2014,Voicu:2009wi,Minguzzi:2014fxa}, we consider the Pfeifer-Wohlfarth equation\footnote{In the positive definite setting a similar field equation has been obtained by Chen and Shen \cite{Chen-Shen}.} \cite{Pfeifer:2011xi} to be by far the most promising, and from here onwards we will refer to it simply as \textit{the} vacuum field equation in Finsler gravity. We do not show the field equation in full generality here, as its general form is not required for our present purposes. In the case of Berwald spacetimes it can be expressed relatively simply as \cite{Fuster:2018djw}
\begin{align}\label{eq:BEFEs}
\left( F^2 g^{\mu\nu} - 3 y^\mu y^\nu \right) R_{\mu\nu} = 0\,,
\end{align}
where $R_{\mu\nu}$ is the Finsler Ricci tensor and since we are in a Berwald setting, $R_{\mu\nu} = R_{\mu\nu}(x)$ only depends on $x$. Clearly the vanishing of the Finsler Ricci tensor is a sufficient condition for a Berwald spacetime to be a solution to Eq.\,\eqref{eq:BEFEs}. In general it is not a necessary condition, except in specific cases, like for Randers metrics. Indeed for Randers metrics of Berwald type the field equations reduce to Rutz's equation \cite{Heefer_2021}, or equivalently, to the vanishing of the Finsler Ricci tensor,
\begin{align}\label{eq:berwald_randers_field_eq}
R_{\mu\nu}=0.
\end{align}
\section{$(\alpha,\beta)$-metrics}
\label{sec:ab_metrics}
\subsection{$(\alpha,\beta)$-metrics -- basic definitions}
An important class of Finsler geometries is given by the so-called $(\alpha,\beta)$-metrics. Here
$\alpha = \sqrt{|a_{\mu\nu}\dot x^\mu\dot x^\nu|}$ and $\beta = b_\mu \dot x^\mu$ are scalar variables defined in terms of a \mbox{(pseudo-)Riemannian} metric $a_{\mu\nu}$ on $M$ and a 1-form $b_\mu$ on $M$, and an $(\alpha,\beta)$-metric is simply a Finsler metric that is constructed only from $\alpha$ and $\beta$, i.e. $F = F(\alpha, \beta)$. Due to homogeneity it follows that any such $F$ can be written in the standard form $F = \alpha\phi(\beta/\alpha)$ for some function $\phi$, at least whenever $\alpha\neq 0$. Well-known examples of $(\alpha,\beta)$-metrics are:
\begin{itemize}
\item Pseudo-Riemannian Finsler metrics $F = \alpha$;
\item Randers metrics $F = \alpha + \beta$;
\item Kropina metrics $F = \frac{\alpha^2}{\beta}$;
\item Generalized Kropina (or $m$-Kropina) metrics $F = \alpha^{1+m}\beta^{-m}$ with $m$ some real number.
\end{itemize}
For each of these types of $(\alpha,\beta)$-metrics certain conditions need to be fulfilled in order to satisfy the definition of a Finsler space(time). In the standard form $F = \alpha\phi(\beta/\alpha)$ the examples above correspond to $\phi(s) = 1$, $\phi(s) = 1+s$, $\phi(s) = 1/s$ and $\phi(s) = s^{-m}$, respectively; a worked example follows below.
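As a worked example in the positive definite case (this is a standard computation, see e.g. \cite{ChernShen_RiemannFinsler}), consider the Randers metric $F = \alpha + \beta$. Using $\bar\partial_i\alpha = y_i/\alpha$ and $\bar\partial_i\beta = b_i$, where $y_i\equiv a_{ij}y^j$, one finds
\begin{align}
g_{ij} = \bar\partial_i\bar\partial_j\left(\tfrac{1}{2}F^2\right) = \frac{F}{\alpha}\left(a_{ij}-\frac{y_iy_j}{\alpha^2}\right)+\left(\frac{y_i}{\alpha}+b_i\right)\left(\frac{y_j}{\alpha}+b_j\right),
\end{align}
which is manifestly direction-dependent unless $\beta = 0$.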
\subsection{Exact $(\alpha,\beta)$-metric solutions in Finsler gravity}
\label{sec:exact_ab_sols}
From the physical viewpoint, $(\alpha,\beta)$-metrics allow us to deform a GR spacetime $\alpha$ into a Finsler spacetime by means of the 1-form $\beta$. And it turns out, as we will prove below, that these types of metrics can be used to generalize some of the vacuum solutions to Einstein's field equations to properly Finslerian vacuum solutions in Finsler gravity. This procedure is possible whenever such a solution admits a covariantly constant vector field, or equivalently, a covariantly constant 1-form. Namely: if the Lorentzian metric $\alpha$ solves the classical Einstein equations and the 1-form $\beta$ is covariantly constant with respect to $\alpha$ then any $(\alpha,\beta)$-metric constructed from the given $\alpha$ and $\beta$ is a solution to the Finslerian field equations. To see why this is true, we first recall the following well-known result (see e.g. section 6.3.2. in \cite{handbook_Finsler_vol2}):
\begin{prop}
\label{prop:coinciding_spray}
Let $F$ be an $(\alpha,\beta)$-metric. If $\beta$ is covariantly constant with respect to $\alpha$ then $F$ is of Berwald type and the affine connection of $F$ coincides with the Levi-Civita connection of $\alpha$.
\end{prop}
If the affine connection of $F$ is the same as the connection of $\alpha$, the associated curvature tensors and (affine) Ricci tensors are also the same. So if $\alpha$ happens to be a vacuum solution to Einstein gravity, i.e. its Ricci tensor vanishes, then it follows that the affine Ricci tensor of $F$ vanishes as well, which implies, by eq. \eqref{eq:BEFEs}, that $F$ is a vacuum solution to Pfeifer and Wohlfarth's field equation in Finsler gravity. We may summarize this result in the following theorem.
\begin{theor}\label{theor:(alpha,beta)solutions}
Let $F$ be any $(\alpha,\beta)$-metric such that $\alpha$ solves the classical vacuum Einstein equations and $\beta$ is covariantly constant with respect to $\alpha$. Then $F$ is a vacuum solution to the field equation in Finsler gravity.
\end{theor}
In this way $(\alpha,\beta)$-metrics provide a mechanism to \textit{Finslerize} any vacuum solution to Einstein's field equations, as long as the solution admits a covariantly constant 1-form, or equivalently a covariantly constant vector field. The theorem generalizes some of the results obtained in \cite{Heefer_2021} for Randers metrics and in \cite{Fuster:2015tua,Fuster:2018djw} for $m$-Kropina metrics (i.e. VGR spacetimes) to arbitrary Finsler spacetimes with $(\alpha,\beta)$-metric. In particular, all pp-wave type solutions in Finsler gravity currently known in the literature are of this type.\\
Let us investigate this type of solution in some more detail. It turns out that if a vacuum solution $\alpha$ to Einstein's field equations admits a covariantly constant 1-form $\beta$, then either $\alpha$ is flat, or $\beta$ is necessarily null \cite{EhlersKundt1960} (see also \cite{Hall_2000,Batista_2014}). We remark that this result assumes that the spacetime dimension is $1+3$ and generally is not true in higher dimensions. This leads to two classes of solutions.\\
\noindent\textbf{First class of solutions}\\
The first of these possibilities, where $\alpha$ is flat, leads to a class of solutions that can always be written in suitable coordinates in the following way.\\
\fbox{\begin{minipage}{0.9\textwidth}
\textbf{$\bm{(\alpha,\beta)}$-metric solutions (Class 1).} Let the metric $A$ and 1-form $\beta$ be given by
\begin{align}\label{eq:class1_ab_sols}
A = -(\text{d} x^0)^2 + (\text{d} x^1)^2 + (\text{d} x^2)^2 + (\text{d} x^3)^2 , \qquad \beta = b_\mu \text{d} x^\mu,
\end{align}
where $b_\mu=$ const. Then any $(\alpha,\beta)$-metric constructed from $\alpha=\sqrt{|A|}$ and $\beta$ is a vacuum solution to the field equations in Finsler gravity. The resulting geometry is of Berwald type with all affine connection coefficients vanishing identically in these coordinates.
\end{minipage}}\\\\
Right below Eq. \eqref{eq:class1_ab_sols} we have used the notation $\alpha = \sqrt{|A|} = \sqrt{|a_{ij}\text{d} x^i \text{d} x^j|}$. This should be understood pointwise, i.e.
\begin{align}
\alpha = \alpha(y) = \sqrt{|a_{ij}\text{d} x^i \text{d} x^j|}(y) = \sqrt{|a_{ij}\text{d} x^i(y) \text{d} x^j(y)|} = \sqrt{|a_{ij} y^i y^j|}.
\end{align}
In other words, we sometimes write $\alpha$ for the function $\sqrt{|a_{ij}\text{d} x^i \text{d} x^j|}:y\mapsto \sqrt{|a_{ij} y^i y^j|}$, and at other times we write $\alpha$ for its value $\sqrt{|a_{ij} y^i y^j|}$ at $y$. It should always be clear from context what is meant.\\
\noindent\textbf{Second class of solutions}\\
The second possibility, that $\beta$ is null, leads to a class of solutions that seems to be more interesting. In this case $\alpha$ is a CCNV spacetime metric, meaning that it admits a covariantly constant null vector (CCNV), namely in this case $\beta$, or rather its vector equivalent via the isomorphism induced by $\alpha$. CCNV metrics are also known as \textit{pp-waves} (plane-fronted gravitational waves with parallel rays) and have been studied in detail in \cite{EhlersKundt1960,ehlers1962exact} (see section 24.5 in \cite{stephani_kramer_maccallum_hoenselaers_herlt_2003} for a summary). \\
It is an elementary result that by choosing suitable coordinates $(u,v,x^1,x^2)$, such $\alpha$ and $\beta$ can always be expressed in the form
\begin{align}
A &= -2\text{d} u \left(\text{d} v + H(u,x)\, \text{d} u + \,W_a(u,x)\,\text{d} x^a\right) +h_{ab}(u,x) \text{d} x^a \text{d} x^b, \label{eq:original_pp_wave_metric} \\
\beta &= \text{d} u,\label{eq:original_1form}
\end{align}
where $x^a=x^1,x^2$ and $h_{ab}$ is a two-dimensional Riemannian metric. This holds irrespective of whether $\alpha$ is a solution to Einstein's field equations or not. If $\alpha$ is additionally assumed to be a vacuum solution, as in Theorem \ref{theor:(alpha,beta)solutions}, it turns out that the expression \eqref{eq:original_pp_wave_metric} for $A$ can be simplified even more \textit{without changing the form \eqref{eq:original_1form} of $\beta$}. To see this, we first consider only the metric $A$. Since $A$ is a vacuum solution to Einstein's field equations, it follows that the functions $W_a$ can be eliminated and $h_{ab}$ may be chosen as $\delta_{ab}$, by a suitable coordinate transformation (section 24.5 in \cite{stephani_kramer_maccallum_hoenselaers_herlt_2003}). The metric then takes the form
\begin{align}
A = -2\text{d} u \left(\text{d} v + H(u,x)\, \text{d} u\right) +\delta_{ab} \text{d} x^a \text{d} x^b. \label{eq:reduced_pp_wave_metric}
\end{align}
We are, however, not only interested in the transformation behaviour of $A$ alone, but also in that of $\beta$, because an $(\alpha,\beta)$-metric is composed of both. To see why we may assume without loss of generality that the form of $\beta = \text{d} u$ remains invariant we use the fact that any coordinate transformation
\begin{align}
(u,v,x^1,x^2)\mapsto (\bar u,\bar v,\bar x^1,\bar x^2)
\end{align}
that leaves the generic form of the metric \eqref{eq:original_pp_wave_metric} invariant, while in general changing the expressions for the metric functions $H, W_a, h_{ab}\mapsto \bar H, \bar W_a, \bar h_{ab}$, has the specific property that $u = \phi(\bar u)$ for some function $\phi$ depending on $\bar u$ alone (see section 31.2 in \cite{stephani_kramer_maccallum_hoenselaers_herlt_2003}). This applies in particular to the transformation that relates \eqref{eq:original_pp_wave_metric} and \eqref{eq:reduced_pp_wave_metric}. We can therefore express the 1-form as $\beta = \text{d} u = \phi'(\bar u)\text{d} \bar u$, or equivalently $\bar b_\mu = \phi'(\bar u)\delta_\mu^u$. However, since $\beta$ is covariantly constant with respect to $A$, we must have $\bar \nabla_\mu\bar b_\nu=0$. All Christoffel symbols $\bar \Gamma^u_{\mu\nu}$ of the metric \eqref{eq:reduced_pp_wave_metric} with upper index $u$ vanish identically, however. Hence
\begin{align}
\bar \nabla_{\bar u} \bar b_{\bar u} = \partial_{\bar u} \bar b_{\bar u} - \bar \Gamma^{\bar u}_{\bar u\bar u}\,\bar b_{\bar u} = \phi''(\bar u)\stackrel{!}{=}0.
\end{align}
It follows that $\phi'(\bar u)= C =$ constant, i.e. $\beta = C \text{d} \bar u$. In this case it is easily seen that scaling $\bar u$ by $C$ and scaling $\bar v$ by $1/C$ leaves the metric \eqref{eq:reduced_pp_wave_metric} invariant and brings the 1-form back into its original form, proving that we may assume without loss of generality that the 1-form remains invariant under the coordinate transformation.\\
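Explicitly, writing $u' = C\bar u$ and $v' = \bar v/C$ and redefining $H' = \bar H/C^2$, we have
\begin{align}
-2\,\text{d} u'\left(\text{d} v' + H'\,\text{d} u'\right) = -2\,\text{d}\bar u\left(\text{d}\bar v + \bar H\,\text{d}\bar u\right),\qquad \beta = C\,\text{d}\bar u = \text{d} u',
\end{align}
so the form \eqref{eq:reduced_pp_wave_metric} of the metric is preserved, while the 1-form is again a pure coordinate differential.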
Finally, the metric \eqref{eq:reduced_pp_wave_metric} is a vacuum solution to Einstein's field equations if and only if $(\partial_{x^1}^2+\partial_{x^2}^2)H = 0$. We may therefore characterize the second class of solutions in the following way.\\
\fbox{\begin{minipage}{0.9\textwidth}
\textbf{$\bm{(\alpha,\beta)}$-metric solutions (Class 2).} Let $\alpha = \sqrt{|A|}$ and $\beta$ be given by
\begin{align}
A &= -2\text{d} u \left(\text{d} v + H(u,x)\, \text{d} u\right) +\delta_{ab} \text{d} x^a \text{d} x^b, \label{eq:reduced_pp_wave_metric2} \\
\beta &= \text{d} u \label{eq:reduced_pp_wave_1_form2},
\end{align}
such that $\delta^{ab}\partial_a\partial_b H = 0$. Then any $(\alpha,\beta)$-metric constructed from the pair ($\alpha,\beta$) is a vacuum solution to the field equations in Finsler gravity. The resulting geometry is of Berwald type with affine connection identical to the Levi-Civita connection of $\alpha$.
\end{minipage}}\\\\
Note that when $H=0$ the geometries in \textit{Class 2} are also contained in \textit{Class 1}. It is not the case, however, that \textit{Class 1} is a subset of \textit{Class 2}, because in \textit{Class 1} the 1-form $\beta$ need not be null. The preceding line of argument shows that these two classes of solutions in fact exhaust all possibilities, which we encapsulate in the following theorem.
\begin{theor}\label{theor:(alpha,beta)solutions2}
\textit{Any} vacuum solution of the type of Theorem \ref{theor:(alpha,beta)solutions} must belong to one of the two classes introduced above.
\end{theor}
Before we move on to $(\alpha,\beta)$-type solutions of plane-wave type, we end this section by noting that for specific types of $(\alpha,\beta)$-metrics, stronger results have been obtained than the ones derived above:
\begin{itemize}
\item For Randers metrics of Berwald type \textit{any} vacuum solution to \eqref{eq:BEFEs} must be of the type described in theorem \ref{theor:(alpha,beta)solutions}, that is, $\alpha$ is necessarily a vacuum solution in Einstein gravity and $\beta$ is necessarily covariantly constant \cite{Heefer_2021}. Any such solution is therefore either of \textit{Class 1} or \textit{Class 2} in the terminology introduced above.
\item For $m$-Kropina metrics some vacuum solutions of a more general type than the one in theorem \ref{theor:(alpha,beta)solutions} have been obtained in the context of \textit{Very General Relativity (VGR)} \cite{Fuster:2018djw}.
\item Any pseudo-Riemannian Finsler metric $F = \alpha$ is trivially a vacuum solution in Finsler gravity if and only if it is a vacuum solution in Einstein gravity.
\end{itemize}
To the best of our knowledge this list comprises all exact solutions in Finsler gravity currently known in the literature.
\subsection{Plane wave solutions in Brinkmann and Rosen coordinates}
Eq. \eqref{eq:reduced_pp_wave_metric2} expresses the pp-wave metric in Brinkmann form \cite{Brinkmann:1925fr}. For the description of the physical effects of (plane) gravitational waves in general relativity, it is sometimes more convenient to use a different coordinate system, known as Rosen coordinates \cite{rosen1937plane}. This remains true in the Finsler case. When we compute the effect on the radar distance of a passing Randers gravitational wave in section \ref{sec:radar_distance}, our starting point will be the expression for the gravitational wave in Rosen coordinates. Therefore we briefly review the relation between the two coordinate systems here.\\
Rosen coordinates can be introduced for the subclass of pp-waves known as \textit{plane waves}. These can be characterized by the property that the curvature tensor does not change (i.e. is covariantly constant) along the Euclidean `wave surfaces' given in Brinkmann coordinates by $\text{d} u = \text{d} v = 0$, i.e.
\begin{align}\label{eq:invariance_of_Riemann_curvature}
\nabla_{\partial_{x^1}} R^\rho{}_{\sigma\mu\nu} = \nabla_{\partial_{x^2}} R^\rho{}_{\sigma\mu\nu} = 0.
\end{align}
We note that $\nabla_{\partial_v} R^\rho{}_{\sigma\mu\nu} = 0$ always holds, identically, so it would be equivalent to require invariance along the surfaces $\text{d} u = 0$. The conditions \eqref{eq:invariance_of_Riemann_curvature} are equivalent to the statement that $\partial_a\partial_b\partial_c H = 0$ in Brinkmann coordinates \eqref{eq:reduced_pp_wave_metric2}, i.e. that $H(u,x)$ is a second order polynomial in $x^a$. In that case there always exists a coordinate transformation that removes the linear and constant terms (section 24.5 in
\cite{stephani_kramer_maccallum_hoenselaers_herlt_2003}) so that the metric can be written as
\begin{align}\label{eq:plane_wave_brinkmann}
A = -2\text{d} u \text{d} v + A_{ab}(u)x^ax^b\, \text{d} u^2+\delta_{ab}\, \text{d} x^a \text{d} x^b
\end{align}
This is the standard expression for a plane-wave metric in Brinkmann form. Moreover, an argument very similar to the one given in the previous subsection shows that we may assume without loss of generality that the 1-form $\beta = \text{d} u$ remains unchanged under this transformation. \\
Any such plane wave metric can also be written in Rosen form
\begin{align}\label{eq:Rosen_form}
\text{d} s^2 = -2\text{d} U\text{d} V + h_{ij}(U)\text{d} y^i \text{d} y^j,
\end{align}
where $h_{ij}$ is a two-dimensional Riemannian metric. And conversely, any metric of Rosen form \eqref{eq:Rosen_form} can be cast in the form \eqref{eq:plane_wave_brinkmann}. The two coordinate systems are related via
\begin{align}
U = u,\quad V = v - \dfrac{1}{2}\dot E_{ai} E^i{}_b x^a x^b, \quad x^a = E^a{}_iy^i,
\end{align}
where $A_{ab} = \ddot E_{ai}E^i{}_b$ and $E^a{}_i$ is a vielbein for $h_{ij}$ in the sense that $h_{ij} = E^a{}_i E^b{}_j \delta_{ab}$, satisfying the additional symmetry condition $\dot E_{ai} E^i{}_b = \dot E_{bi} E^i{}_a$. Such a vielbein can always be chosen. For details we recommend the lecture notes \cite{Blau2011} by Matthias Blau and references therein (see also the Appendix of \cite{Blau_2003}). Note that we have momentarily labelled the $y$-coordinates by indices $i,j,k,\dots$ so as to distinguish them from indices $a,b,c,\dots$ in order that we may apply the usual notation with regards to the vielbein indices: $E^i{}_a$ represents the (matrix) inverse of $E^a{}_i$ and indices $a,b,c\dots$ are raised and lowered with $\delta_{ab}$, whereas indices $i,j,k,\dots$ are raised and lowered with $h_{ij}$. The dot that sometimes appears above the vielbein represents a $U$-derivative. Since the vielbein depends only on $U$, this derivative is equivalent to a $u$-derivative, and moreover the raising and lowering of the $a,b,c,\dots$ indices commutes with taking such a derivative of the vielbein.\\
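As a concrete illustration of this dictionary, the following \texttt{sympy} sketch (a toy example of our own, not taken from the references above) starts from the hypothetical vielbein $E^a{}_i = \mathrm{diag}(\cos u,\cosh u)$, checks the symmetry condition, and recovers the corresponding constant, trace-free Brinkmann profile $A_{ab} = \mathrm{diag}(-1,1)$:
\begin{verbatim}
import sympy as sp

u = sp.symbols('u')
# toy vielbein: E^a_i = diag(cos u, cosh u), i.e. h_ij = diag(cos^2 u, cosh^2 u)
E = sp.diag(sp.cos(u), sp.cosh(u))
Einv = E.inv()

h = E.T * E                                      # h_ij = E^a_i E^b_j delta_ab
S = sp.simplify(E.diff(u) * Einv)                # Edot_ai E^i_b
assert sp.simplify(S - S.T) == sp.zeros(2, 2)    # symmetry condition holds

Aab = sp.simplify(E.diff(u, 2) * Einv)           # A_ab = Eddot_ai E^i_b
print(Aab)          # diag(-1, 1): constant and trace-free
print(Aab.trace())  # 0, i.e. H is harmonic and the plane wave is vacuum
\end{verbatim}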
It is again the case that, after relabeling $U,V\mapsto u,v$, the 1-form $\beta = \text{d} u = \text{d} U$ remains unchanged under this transformation, which in this case is easy to see. After also relabelling $y\mapsto x$, we conclude that we can express any \textit{Class 2} solution of plane-wave type in Rosen coordinates as follows,
\begin{align}
F = \alpha\, \phi(\beta/\alpha), \qquad A = -2\text{d} u\text{d} v + h_{ij}(u)\text{d} x^i \text{d} x^j, \qquad \beta = \text{d} u \label{eq:ab_plane_wave},
\end{align}
where $\alpha = \sqrt{|A|}$. And conversely, for any choice of $\phi,h_{ij}(u)$, this is a vacuum solution to the field equations in Finsler gravity if $A$ is a vacuum solution to Einstein's field equation. The resulting geometry is of Berwald type with affine connection identical to the Levi-Civita connection of $\alpha$.
\subsection{Linearized gravitational wave solutions}
\label{sec:linearized_Randers_sols}
The exact vacuum field equation for plane-wave metrics does not have a particularly nice expression in Rosen coordinates \eqref{eq:ab_plane_wave}. The linearized field equation, however, turns out to be very simple. Let us therefore consider the scenario in which the pseudo-Riemannian metric $\alpha$ is very close to the Minkowski metric. In this case we may write $h_{ij}(u) = \delta_{ij}+\varepsilon f_{ij}(u)$ with $\varepsilon \ll 1$. The linearized field equations (i.e. to first order in $\varepsilon$) for $\alpha$ then simply read\footnote{The full linearized vacuum field equation \eqref{eq:BEFEs} for $F$ is more complicated in general, but as discussed extensively above, if the vacuum field equation for $\alpha$ is satisfied then so is the field equation for $F$. In the case of Randers metrics, to which we will turn momentarily, the field equation for $F$ is even equivalent to the field equation for $\alpha$. Hence for our present purposes the field equations for $\alpha$ suffice.}
\begin{align}
f_{11}''(u) + f_{22}''(u)=0.
\end{align}
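Indeed, a short computation shows that with $h_{ij} = \delta_{ij}+\varepsilon f_{ij}(u)$ the only component of the Ricci tensor of $A$ that survives at first order in $\varepsilon$ is
\begin{align}
R_{uu} = -\frac{\varepsilon}{2}\,\delta^{ij}f''_{ij}(u) + \mathcal O(\varepsilon^2) = -\frac{\varepsilon}{2}\left(f''_{11}(u)+f''_{22}(u)\right) + \mathcal O(\varepsilon^2).
\end{align}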
Hence $f_{11}$ and $-f_{22}$ must be equal up to an affine function of $u$. Here we will focus on the case where $f_{11}=-f_{22}$, which can always be achieved by means of the transverse traceless gauge\footnote{We leave open the question whether the form of the 1-form $\beta=\text{d} u$ always remains invariant under such a transformation to the transverse traceless gauge.}. Conventionally one writes the subscripts as $f_{11} = -f_{22} \eqqcolon f_+$ and $f_{12} \eqqcolon f_\times$, denoting the plus and cross polarization of the gravitational wave, so we will stick to that notation from here onwards.
That brings us to the following expression that describes Finslerian gravitational waves of $(\alpha,\beta)$-type:
\begin{align}\label{eq:ab_grav_waves}
F = \alpha\, \phi(\beta/\alpha), \qquad\left\{\begin{array}{ll}
A = -2\text{d} u \text{d} v + (1+\varepsilon f_+(u)) \text{d} x^2 + (1-\varepsilon f_+(u))\text{d} y^2 + 2\varepsilon f_\times (u) \text{d} x\,\text{d} y \\
\beta = \text{d} u
\end{array}\right.
\end{align}
Note that if we substitute $u= (t-z)/\sqrt{2}$ and $v = (t+z)/\sqrt{2}$, then $A$ reduces to the standard expression for a gravitational wave metric in GR, i.e.
\begin{align}
F = \alpha\, \phi(\beta/\alpha), \qquad\left\{\begin{array}{ll}
A = -\text{d} t^2 + (1+\varepsilon f_+(t-z)) \text{d} x^2 + (1-\varepsilon f_+(t-z))\text{d} y^2 + 2\varepsilon f_\times (t-z) \text{d} x\,\text{d} y+ \text{d} z^2 \\
\beta = \frac{1}{\sqrt{2}}\left(\text{d} t - \text{d} z\right)
\end{array}\right. ,
\end{align}
for any choice of the function $\phi$.
\subsection{Linearized $(\alpha,\beta)$-metrics are Randers metrics}
\label{sec:lin_ab_metric_is_randers}
It is natural to linearize not only in $\varepsilon$, characterizing the departure from flatness, but to also use a perturbative expansion in the `size' of the 1-form, characterizing the departure from GR and pseudo-Riemannian geometry. The physical intuition here is that, seeing how well GR works in most regimes, the most interesting class of Finsler spacetimes consists of those that are very close to GR spacetimes. The purpose of this section is to highlight that any $(\alpha,\beta)$-metric is perturbatively equivalent to a Randers metric, to first order, so that from the physics point of view, Randers metrics are actually quite a bit more general than they might seem at first glance. After pointing this out we will turn our focus exclusively to Randers metrics for the remainder of the article.\\
So consider an $(\alpha,\beta)$-metric constructed from a pseudo-Riemannian metric $\alpha$ and a 1-form $\beta$ such that $\beta\ll 1$. To see what happens in such a scenario, we replace $\beta$ with $\lambda\beta$ and expand to first order in $\lambda$. Then we obtain
\begin{align}
F = \alpha \phi\left(\frac{\lambda\beta}{\alpha}\right) \approx \alpha \left(\phi(0)+ \lambda \phi'(0)\frac{\beta}{\alpha}\right) = \alpha \phi(0) + \lambda\phi'(0)\beta = \tilde\alpha + \tilde\beta.
\end{align}
Hence to first order in $\lambda$, any $(\alpha,\beta)$-metric is indeed equivalent to a Randers metric\footnote{Actually this is not true for \textit{all} $(\alpha,\beta)$-metrics but only those which allow an expansion around $s = \beta/\alpha = 0$. This excludes Kropina metrics, for instance, because they are not well-behaved in the limit $\beta\to 0$.}. Consequently, by replacing $\text{d} u$ by $\lambda\,\text{d} u$ in \eqref{eq:ab_grav_waves}, which technically can be achieved by a coordinate transformation that scales $u$ by $\lambda$ and $v$ by $1/\lambda$, it follows that to first order in $\lambda$ the Finsler metric of the $(\alpha,\beta)$-type gravitational waves takes the form,
\begin{align}\label{eq:metric_naive_Randers_grav_wave0}
F = \alpha + \beta, \qquad\left\{\begin{array}{ll}
A = -2\text{d} u \text{d} v + (1+\varepsilon f_+(u)) \text{d} x^2 + (1-\varepsilon f_+(u))\text{d} y^2 + 2\varepsilon f_\times (u) \text{d} x\,\text{d} y \\
\beta = \lambda\,\text{d} u
\end{array}\right. .
\end{align}
The parameter $\lambda$ then characterizes the departure from GR and pseudo-Riemannian geometry. We will assume without loss of generality that $\lambda> 0$. Finally, replacing also $u$ and $v$ by $t$ and $z$, according to $u= (t-z)/\sqrt{2}$ and $v = (t+z)/\sqrt{2}$, we can write the metric in the following way, which we will take as the starting point for the calculation of the radar distance in Section \ref{sec:radar_distance}.
\begin{align}\label{eq:metric_naive_Randers_grav_wave}
F = \alpha + \beta, \qquad\left\{\begin{array}{ll}
A = -\text{d} t^2 + (1+\varepsilon f_+(t-z)) \text{d} x^2 + (1-\varepsilon f_+(t-z))\text{d} y^2 + 2\varepsilon f_\times (t-z) \text{d} x\,\text{d} y+ \text{d} z^2 \\
\beta = \frac{\lambda}{\sqrt{2}}\left(\text{d} t - \text{d} z\right)
\end{array}\right.
\end{align}
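As a consistency check, the following \texttt{sympy} sketch (a minimal verification script of our own, with component conventions as in \eqref{eq:metric_naive_Randers_grav_wave}) confirms that the metric $A$ is Ricci-flat to first order in $\varepsilon$ for arbitrary wave profiles $f_+$ and $f_\times$:
\begin{verbatim}
import sympy as sp

t, x, y, z, eps = sp.symbols('t x y z epsilon')
fp = sp.Function('fp')(t - z)   # f_+ profile (arbitrary smooth function)
fx = sp.Function('fx')(t - z)   # f_x profile

X = [t, x, y, z]
g = sp.Matrix([[-1, 0,          0,          0],
               [ 0, 1 + eps*fp, eps*fx,     0],
               [ 0, eps*fx,     1 - eps*fp, 0],
               [ 0, 0,          0,          1]])
ginv = g.inv()

# Christoffel symbols Gamma^r_mn of the Levi-Civita connection of g
Gam = [[[sum(sp.Rational(1, 2)*ginv[r, s]*(sp.diff(g[s, m], X[n])
             + sp.diff(g[s, n], X[m]) - sp.diff(g[m, n], X[s]))
             for s in range(4))
         for n in range(4)] for m in range(4)] for r in range(4)]

# Ricci tensor R_mn = d_r Gam^r_mn - d_n Gam^r_mr
#                     + Gam^r_rl Gam^l_mn - Gam^r_nl Gam^l_mr
def ricci(m, n):
    return sum(sp.diff(Gam[r][m][n], X[r]) - sp.diff(Gam[r][m][r], X[n])
               + sum(Gam[r][l][r]*Gam[l][m][n] - Gam[r][n][l]*Gam[l][m][r]
                     for l in range(4))
               for r in range(4))

# the first-order coefficient in eps of every component vanishes
for m in range(4):
    for n in range(4):
        c1 = sp.series(ricci(m, n), eps, 0, 2).removeO().coeff(eps, 1)
        assert sp.simplify(c1) == 0
\end{verbatim}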
\section{Modified Randers metrics}
\label{sec:Randers}
Motivated by the argument above we will now turn our focus to the simplest properly Finslerian $(\alpha,\beta)$-metric, the Randers metric, conventionally defined as $F = \alpha+\beta$. We will argue that in order to have a physically acceptable causal structure, the conventional definition must be modified slightly. It might seem to the reader that modifying the Randers metric would be in conflict with the spirit of the previous section, since to first order any $(\alpha,\beta)$-metric should reduce to a Randers metric. It is important to note, however, that different Randers metrics may in principle correspond to different regions of the tangent bundle. More precisely, we could define one $(\alpha,\beta)$-metric $F_1$ on a conic subbundle $\mathcal A_1\subset TM\setminus 0$ and another $(\alpha,\beta)$-metric, $F_2$, on a different conic subbundle $\mathcal A_2\subset TM\setminus 0$. If the two subbundles do not overlap then this defines a perfectly valid $(\alpha,\beta)$-type Finsler spacetime on the union $\mathcal A = \mathcal A_1\cup\mathcal A_2$. To first order in the deviation from \mbox{(pseudo-)Riemannian} geometry this Finsler metric would reduce to a certain Randers metric on $\mathcal A_1$ and to a different Randers metric on $\mathcal A_2$. Our modification of the Randers metric, introduced below, is therefore completely consistent with the previous results.\\
After a heuristic argument that motivates the desired modification, we show that our proposed version of the modified Randers metric has a very satisfactory causal structure. As a result a clear (future and past) timelike cone can be identified, and within these timelike cones the signature of the fundamental tensor is Lorentzian everywhere. The only constraint is that $b^2\equiv a^{\mu\nu}b_\mu b_\nu>-1$, which, interestingly, is in some sense the opposite of the condition $b^2<1$ that appears in the well-known positive definite case, see e.g. \cite{ChernShen_RiemannFinsler}. If one were to adopt the opposite signature convention to ours, however, the constraint in the Lorentzian case would also turn out to be $b^2<1$, matching the positive definite case.
\subsection{Motivation and definition}
\label{sec:Randers_modified1}
First of all, let us review why the definition of a Randers metric is not as clear in Lorentzian signature as it is in Euclidean signature. The original definition of a Randers metric, in positive definite Finsler geometry, is just $F = \alpha + \beta$, with $\alpha = \sqrt{a_{ij}y^i y^j}$ a Riemannian metric and $\beta = b_i y^i$ any 1-form\footnote{In order to satisfy all the axioms of a Finsler space, the 1-form must satisfy $|b|^2<1$, see e.g. \cite{ChernShen_RiemannFinsler}.}. This is well-defined as long as $\alpha$ is positive-definite, because in that case $A \equiv a_{ij}y^iy^j$ is always positive. If we allow $a_{ij}$ to be a Lorentzian metric, however, the quantity $A$ can become negative, in which case $\sqrt{A}$ is ill-defined, as we want $F$ to be a real function. One way to remedy this, at least at a technical level, is to restrict the conic subbundle $\mathcal A\subset TM\setminus 0$ to those vectors for which $a_{ij}y^iy^j>0$.
This was the approach in e.g. \cite{Heefer_2021}, where it was shown that if $\mathcal A$ is defined as the forward timecone\footnote{We note that the signature convention in \cite{Heefer_2021} is the opposite of the one employed here, so in that case the condition $a_{ij}y^iy^j>0$ precisely selects the timelike, not spacelike, vectors.} corresponding to $\alpha$, then under certain conditions on the 1-form $\beta$, such a Randers spacetime satisfies all axioms of a Finsler spacetime. The fact that $\mathcal A$ is restricted in this way, however, leads to issues when it comes to the physical interpretation. Here we take a different approach.\\
The obvious first alternative to restricting $\mathcal A$ to vectors with positive norm is to simply replace $A$ by $|A|$ and define $\alpha = \sqrt{|A|}$, as we have done throughout this article. In that case there is no need to restrict $\mathcal A$ to the timecone anymore. This leads to a Randers metric of the form $F = \sqrt{|A|} +\beta$. An undesirable consequence of this definition, however, is that light rays can only propagate into one half of the tangent space, namely the half given by $\beta<0$, which follows immediately from the null condition $F=0$. In fact, the light cone separates the tangent space into only two connected components\footnote{This can be checked easily in suitable coordinates adapted to $\beta$.} and there is consequently not a straightforward interpretation in terms of timelike, spacelike and lightlike directions, at least not in the conventional way\footnote{We note that in the approach by Javaloyes and S\'anchez \cite{Javaloyes2014-1, Javaloyes2014-2} a single, future pointing (by definition) cone is sufficient, though.}.
We therefore take the viewpoint that outside of the half plane $\beta<0$ in each tangent space, this version of the Randers metric cannot be valid, and we need to modify it in that region.
It is possible to lift the restriction of light propagation to the half space $\beta\leq 0$, extending the lightcone to the complementary half space $\beta>0$, by changing $F$ to $F=\text{sgn}(A)\sqrt{|A|}+\text{sgn}(\beta)\beta = \text{sgn}(A)\sqrt{|A|}+|\beta|$. The result of this is that, under some mild assumptions (details will follow below) the single lightcone (from the $\beta<0$ half space) is mirrored to the complementary ($\beta\geq 0$) half space, whereas in the original half space intersected with the original cone of definition consisting of $\alpha$-timelike vectors, $F$ reduces to the standard Randers metric with an overall minus sign, $F = -(\alpha+\beta)$. This minus sign is not of any relevance, though, as the geometry is essentially determined by $F^2$. In particular, $F$ is now reversible, i.e. invariant under $y\to-y$. Notice also that we could have chosen a minus sign instead of a plus sign in the modified definition of $F$, but it turns out that in that case the resulting Finsler metric would not be guaranteed to have Lorentzian signature everywhere inside of the timelike cones\footnote{In case one employs the opposite signature convention $(+,-,-,-)$ the converse would be true. In that case the preferable choice would be $F=\text{sgn}(A)\alpha-|\beta|$ rather than $F=\text{sgn}(A)\alpha+|\beta|$.}. The present metric \textit{does} have this property as long as $b^2>-1$, and we discuss this in detail below.\\
\begin{defi}
Motivated by the preceding heuristic argument we define the \textit{modified Randers metric} as follows,
\begin{align}
F = \text{sgn}(A)\alpha + |\beta|,\label{eq:modified_randers_metric}
\end{align}
where we recall for completeness that $\alpha = \sqrt{|A|}$, $A = a_{ij}y^i y^j$ and $\beta = b_i y^i$.
\end{defi}
Both $\alpha$ and $A$ will sometimes be referred to as the (pseudo-)Riemannian metric, by a slight abuse of language, but it should always be clear from context what is meant.
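To get some feeling for the definition, consider the simple example where $A$ is the Minkowski metric and $\beta = c\,\text{d} t$ with constant $0<c<1$ (so that $b^2 = -c^2 > -1$). For $y = \partial_t$ we find $A = -1$ and hence $F = -1 + c < 0$, whereas for $y = \partial_x$ we find $A = 1$, $\beta = 0$ and hence $F = 1 > 0$. The null directions, $F = 0$, are characterized by $(y^1)^2+(y^2)^2+(y^3)^2 = (1-c^2)(y^0)^2$, i.e. by a lightcone that is slightly narrowed compared to that of $A$. As we will see below, this is precisely the lightcone of the Lorentzian metric $a_{\mu\nu}+b_\mu b_\nu$.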
\subsection{Causal structure}
\label{sec:Randers_causality}
Next we will show that the modified Randers metric \eqref{eq:modified_randers_metric} indeed has very nice properties. By definition, the light cone is given by
\begin{align}
F = 0 \qquad \Leftrightarrow \qquad A\leq 0 \ \text{ and } \ |A| = \beta^2 \qquad \Leftrightarrow \qquad A = -\beta^2.
\end{align}
It therefore follows that
\begin{align}
F=0 \qquad \Leftrightarrow\qquad (a_{\mu\nu}+ b_\mu b_\nu)y^\mu y^\nu = 0,
\end{align}
meaning that the light cone of $F$ is just the light cone of the auxiliary Lorentzian metric $\tilde a_{\mu\nu}(x) = a_{\mu\nu}+b_\mu b_\nu$. Indeed, the matrix determinant lemma guarantees that as long as $b^2 = a_{\mu\nu} b^\mu b^\nu>-1$ the metric $a_{\mu\nu}+ b_\mu b_\nu$ has Lorentzian signature, provided that $a_{\mu\nu}$ has Lorentzian signature. (For a proof see appendix \ref{sec:proof_of_signature}.) This shows that as long as $b^2>-1$ the light cone separates the tangent space at each point into three connected components, which we can naturally interpret in the usual manner as the forward time cone, backward timecone, and the remainder consisting of spacelike vectors. In addition we note that
\begin{align}
F<0 \qquad \Leftrightarrow\qquad (a_{\mu\nu}+ b_\mu b_\nu) y^\mu y^\nu < 0,
\end{align}
and hence it also follows that
\begin{align}
F>0 \qquad \Leftrightarrow\qquad (a_{\mu\nu}+ b_\mu b_\nu)y^\mu y^\nu > 0.
\end{align}
This leads to the additional convenience that $F$-timelike vectors are precisely given by $F<0$, and $F$-spacelike vectors by $F>0$, in addition to the null vectors being given, by definition, by $F=0$. We summarize these results in the following proposition.\\
\begin{prop}
As long as $b^2>-1$, the causal structure of the modified Randers metric $F = \text{sgn}(A)\alpha + |\beta|$ is identical to the causal structure of the \textit{Lorentzian} metric $a_{\mu\nu}+ b_\mu b_\nu$, with null vectors given by $F=0$, timelike vectors given by $F<0$, and spacelike vectors by $F>0$.
\end{prop}
As a result of these nice features of the causal structure of the modified Randers metric, it is possible to define time orientations in the usual manner, by means of a nowhere vanishing timelike vector field $T$. This is not possible for Finsler spacetimes in general. Such $T$ selects one of the two timelike cones as the `forward' one, namely the one that contains $T$. Then another timelike vector $y$ is future oriented (i.e. lies in the same forward cone as $T$) if and only if $(a_{\mu\nu}+ b_\mu b_\nu) T^\mu y^\nu <0$. \\
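As a concrete example, for the Finslerian gravitational wave \eqref{eq:metric_naive_Randers_grav_wave} the 1-form $\beta$ is null with respect to $\alpha$, so the condition $b^2 = 0 > -1$ is automatically satisfied, and the auxiliary Lorentzian metric takes the form
\begin{align}
\tilde a_{\mu\nu}\text{d} x^\mu\text{d} x^\nu = A + \frac{\lambda^2}{2}\left(\text{d} t - \text{d} z\right)^2,
\end{align}
which is again a metric of pp-wave type.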
In the special case that $\beta$ is covariantly constant with respect to $\alpha$ we have even more satisfactory results. In that case not only the causal structure but also the affine structure of $F$ can be understood in terms of $a_{\mu\nu}+b_\mu b_\nu$.
\begin{prop}
If $\beta$ is covariantly constant with respect to $\alpha$ and satisfies $b^2>-1$ then the causal structure and the affine structure of the modified Randers metric $F = \text{sgn}(A)\alpha + |\beta|$ are identical to those of the Lorentzian metric $\tilde a_{\mu\nu} = a_{\mu\nu} + b_\mu b_\nu$. In other words, the timelike, spacelike and null geodesics of $F$ coincide with the timelike, spacelike and null geodesics of $\tilde a_{\mu\nu}$.
\end{prop}
\begin{proof}
The discussion above indicates that the causal structures coincide. It remains to show that also the affine structures coincide in the case of a covariantly constant 1-form. This is again a result of the properties of $\tilde a_{\mu\nu}$. It can be shown (see Appendix \ref{sec:proof_of_signature}) that the Christoffel symbols of $\tilde a_{\mu\nu}$ can be expressed in terms of the Christoffel symbols of $a_{\mu\nu}$ as
\begin{align}
\widetilde\Gamma^\rho_{\mu\nu} &= \Gamma^\rho_{\mu\nu} +\frac{1}{1+b^2}b^\rho \nabla_{(\mu}b_{\nu)} - \left(a^{\rho\lambda} - \frac{1}{1+b^2}b^\rho b^\lambda\right)\left( b_\mu\nabla_{[\lambda} b_{\nu]} + b_\nu\nabla_{[\lambda} b_{\mu]}\right).
\end{align}
Hence it follows immediately that if $b_\mu$ is covariantly constant then $\widetilde\Gamma^\rho_{\mu\nu} = \Gamma^\rho_{\mu\nu}$ and the affine structure of $\tilde a_{\mu\nu}$ is the same as that of $a_{\mu\nu}$. We also know, by Prop. \ref{prop:coinciding_spray}, that the affine structure of $F$ is the same as that of $a_{\mu\nu}$. Hence the affine structure of $F$ is the same as that of $\tilde a_{\mu\nu}$.
\end{proof}
From this it immediately follows that the existence of radar neighborhoods is guaranteed \cite{Perlick2008}. More precisely, given an observer and any event in spacetime, there is (at least locally) exactly one future pointing light ray and one past pointing light ray that connect the event to the worldline of the observer. This is of essential importance in our work, because what it essentially says is that the radar distance, calculated in Section \ref{sec:radar_distance}, is a well-defined notion.
\subsection{Regularity and signature}
Given an $(\alpha,\beta)$-metric of the form $F = \alpha \phi(s)$, with $s = \beta/\alpha$ and $\alpha = \sqrt{|A|}$, it can be shown that the determinant of the fundamental tensor is given by
\begin{align}\label{eq:ab_det_formula_main_text}
\det g_{ij} = \phi^{n+1}(\phi-s\phi')^{n-2}(\phi-s\phi' + (\text{sgn}(A) b^2-s^2)\phi'')\det a_{ij}.
\end{align}
The proof can be found in Appendix \ref{sec:ab_determinant}. Because of the appearance of $\text{sgn}(A)$ the expression is slightly different from the well-known positive definite analogue, to which it reduces when $A>0$, i.e. $\text{sgn}(A)=1$. For a modified Randers metric of the form $F = \text{sgn}(A)\alpha + |\beta|$ the function $\phi$ is given by $\phi(s) = \text{sgn}(A) + |s|$, so this reduces to
\begin{align}
\frac{\det g}{\det a} = \text{sgn}(A)^{n-1}\left(\text{sgn}(A)+|s|\right)^{n+1} = \left(\text{sgn}(A)\frac{F}{\alpha}\right)^{n+1}.
\end{align}
Assuming the spacetime dimension $n$ is even, this means that $g$ has Lorentzian signature\footnote{The argument is the same as in the positive definite case, using the same methods as those employed in Appendix \ref{sec:proof_of_signature}.} if and only if $\text{sgn}(A)F>0$. Let us see what this entails. First note that $F<0$ trivially implies $A<0$. Hence $F<0$ implies Lorentzian signature. Before we move on, we should point out that this is a very satisfactory result. It means that within the entire timelike cone of $F$, the signature of the fundamental tensor is Lorentzian. Similarly, $A>0$ implies $F>0$. Hence $A>0$ also implies Lorentzian signature. What remains is the region where $A\leq 0$ and $F\geq 0$. Equivalently, $A\leq 0$ and $A+\beta^2\geq 0$. In this region, the determinant of the fundamental tensor either is undefined, is positive, or vanishes, so in any case the signature is not Lorentzian. But as this region lies outside the timelike cone, this is not a problem, as argued in section \ref{sec:Finsler_spacetimes_interpretation}.\\
It is helpful to think in terms of both the light cone of the metric $a_{ij}$ and the light cone of the metric $a_{ij}+b_ib_j$ (i.e. that of $F$). As mentioned previously, as long as $b^2>-1$, the latter metric is Lorentzian, provided the former is. That means its light cone is a conventional one, familiar from GR, just like the light cone of $a_{ij}$. The only region where the signature is \textit{not} Lorentzian is precisely the region in between these two lightcones. Note that since $F<0$ implies $A<0$, the $F$-lightcone can never reach outside of the $a_{ij}$-light cone. The details depend on the causal character of the 1-form $\beta$ and are listed below. These properties can be checked easily by noting that we may always choose coordinates such that at a given point $x\in M$ the metric $A$ has the form of the Minkowski metric and the 1-form $\beta$ has only one component (in the timelike or spacelike case) or two components (in the null case)\footnote{We recall that this can be seen as follows. First, since $a_{ij}$ is Lorentzian, it is always possible to choose coordinates such that $A$ is just the Minkowski metric at a given point $x\in M$. Writing $b^\mu = (b^0,b^1,\dots, b^{n-1})$ in these coordinates, we may do a spatial rotation on the coordinates $b^1,\dots,b^{n-1}$, such that they are transformed into $(b^1,0,\dots,0)$, leaving the metric at $x$ unchanged. Then $b^\mu = (b^0,b^1,0,\dots,0)$. Now we separate the three cases. If $b^2=0$, it follows that $b^1=\pm b^0$ and by applying if necessary a spatial reflection in the $x^1$ direction we may choose either sign. If $b^2<0$ then we may go to the local rest frame by a Lorentz transformation, making $b^1=0$. If on the other hand $b^2>0$ we may perform a Lorentz transformation making $b^0=0$.}.
\begin{itemize}
\item If $\beta$ is null it is easily seen that the two lightcones intersect only for $y^\mu$ that are multiples of $b^\mu$. Thus their intersection spans a single line in the tangent space.
\item If $\beta$ is timelike and $b^2>-1$ then the light cones do not intersect (apart from the trivial intersection in the origin).
\item If $\beta$ is spacelike (and assuming $\dim M>2$), then $a_{ij}$ induces a Lorentzian metric
on the $(\dim M-1)$-dimensional hypersurface defined by $\beta=0$. In this case the two light cones intersect along the light cone of this induced Lorentzian metric.
\item If $b^2 = -1$ there is only a single cone, namely the one corresponding to $\alpha$. The `light cone' corresponding to $F=0$ is now in fact a line, consisting of all multiples of $b^\mu$. This case therefore does not have a viable physical interpretation.
\item If $b^2 < -1$ there is only a single cone, namely the one corresponding to $\alpha$. The `light cone' corresponding to $F=0$ is now non-existent, as $F=0$ has no solutions. This case therefore does not have a viable physical interpretation either.
\end{itemize}
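As a quick spot-check of Eq.~\eqref{eq:ab_det_formula_main_text} and of the signature statements above (an illustrative aside, not part of the derivation), the following Python/SymPy sketch computes the fundamental tensor of the modified Randers metric for a flat $\alpha$ and a timelike 1-form, and verifies the determinant formula and the Lorentzian signature at a sample vector inside the timelike cone.
\begin{verbatim}
import numpy as np
import sympy as sp

y = sp.symbols('y0:4', real=True)
rho = sp.Rational(13, 20)            # timelike 1-form with b^2 = -rho^2 > -1

A    = -y[0]**2 + y[1]**2 + y[2]**2 + y[3]**2    # flat alpha^2, det a = -1
beta = rho*y[0]

# Inside the forward timelike cone A < 0 and beta > 0, so there
# sgn(A) = -1 and |beta| = beta, and F is smooth:
F = -sp.sqrt(-A) + beta

# Fundamental tensor g_ij = (1/2) d^2(F^2)/dy^i dy^j
g = sp.Matrix(4, 4, lambda i, j: sp.diff(F**2, y[i], y[j])/2)

pt = {y[0]: 2, y[1]: sp.Rational(1, 2), y[2]: 0, y[3]: 0}  # F-timelike vector
gn, Fv, av = g.subs(pt), F.subs(pt), sp.sqrt(-A).subs(pt)

# det g = (sgn(A) F/alpha)^(n+1) det a; here n = 4, det a = -1, sgn(A) = -1,
# so det g should equal (Fv/av)^5:
print(float(gn.det() - (Fv/av)**5))                # ~ 0 up to rounding

# Lorentzian signature inside the cone: one negative, three positive eigenvalues
print(np.sort(np.linalg.eigvalsh(np.array(gn, dtype=float))))
\end{verbatim}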
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figL1}
\caption{Null 1-form with $\rho=0.6$}
\label{fig:L1}
\end{subfigure}
\hspace{20px}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figL2}
\caption{Null 1-form with $\rho=1$}
\label{fig:L2}
\end{subfigure}
\hspace{20px}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figL3}
\caption{Null 1-form with $\rho=1.4$}
\label{fig:L3}
\end{subfigure}
%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figT1}
\caption{Timelike 1-form with $\rho=0.65$}
\label{fig:T1}
\end{subfigure}
\hspace{20px}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figT2}
\caption{Timelike 1-form with $\rho=0.8$}
\label{fig:T2}
\end{subfigure}
\hspace{20px}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figT3}
\caption{Timelike 1-form with $\rho=0.9$}
\label{fig:T3}
\end{subfigure}
%
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figS1}
\caption{Spacelike 1-form with $\rho=0.8$}
\label{fig:S1}
\end{subfigure}
\hspace{20px}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figS2}
\caption{Spacelike 1-form with $\rho=1.4$}
\label{fig:S2}
\end{subfigure}
\hspace{20px}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figS3}
\caption{Spacelike 1-form with $\rho=2$}
\label{fig:S3}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figLwithNormEqualToOne}
\caption{Timelike 1-form with $b^2=-1$.}
\label{fig:T5}
\end{subfigure}
\hspace{20px}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figLwithNormGreaterThanOne}
\caption{Timelike 1-form with $b^2<-1$.}
\label{fig:T4}
\end{subfigure}
\caption{The figures show the lightcone and the signature of the fundamental tensor of $F = \text{sgn}(A)\alpha + |\beta|$, where $A = -(y^0)^2+(y^1)^2+(y^2)^2+(y^3)^2$ and $\beta = \rho(y^0+y^1)$ (in the null case) or $\beta = \rho \,y^0$ (in the timelike case) or $\beta = \rho \,y^1$ (in the spacelike case) for several representative values of $\rho$, shown in the tangent space $T_xM$ at any point $x\in M$, at $y^3=0$. Green regions correspond to Lorentzian signature, red regions to non-Lorentzian signature. Figs. \ref{fig:L1} - \ref{fig:S3} show the physically reasonable scenarios, where $b^2>-1$. In that case two cones can be observed. The inner cone is the true light cone of $F$ (i.e. the set $F=0$), and the outer cone is the light cone of $a_{ij}$ (i.e. the set $A=0$). The only region with non-Lorentzian signature is precisely the gap in between the two cones. If on the other hand $b^2=-1$ (Fig. \ref{fig:T5}) then the light `cone' of $F$ is the line $y^1=y^2=y^3=0$. And if $b^2<-1$ (Fig. \ref{fig:T4}) then the light `cone' of $F$ consists only of the origin. Therefore we deem the latter two cases not physically interesting.}
\label{fig:lightcones_and_signatures}
\end{figure}
To get a better idea, Fig. \ref{fig:lightcones_and_signatures} displays the lightcones and the regions (in green) where the signature of the fundamental tensor is Lorentzian, for the modified Randers metric
\begin{align}
F = \text{sgn}(A)\alpha + |\beta| ,\qquad A = -(\text{d} x^0)^2+(\text{d} x^1)^2+(\text{d} x^2)^2,\qquad \beta =\left\{\begin{array}{ll}
\rho \,\text{d} x^0 & \text{if \textit{timelike}}\\
\rho\,(\text{d} x^0+\text{d} x^1) &\text{if \textit{null}}\\
\rho \,\text{d} x^1 & \text{if \textit{spacelike}}
\end{array}\right. ,
\end{align}
for a number of representative values of the parameter $\rho$. In each subfigure, the inner lightcone is that of $F$ and the outer lightcone that of $A$. Note that for any $a_{ij}$ and $b_i$, it is always possible at any given point $x\in M$, to choose coordinates in such a way that $A$ and $\beta$ have the above form (or rather their analog in the relevant spacetime dimensionality). The following proposition summarizes these results.\\
\begin{prop}
As long as $b^2>-1$, the signature of the fundamental tensor of $F = \text{sgn}(A)\alpha + |\beta|$ is Lorentzian within the entire timelike cone, which is given by $F<0$. Immediately outside of the timelike cone there is a region that does not have Lorentzian signature, and further away (namely when $A>0$) the signature becomes Lorentzian again. When $b^2\leq -1$ the timelike cone is the empty set, so this case is not physically interesting.
\end{prop}
Since we only require Lorentzian signature within the timelike cone, these results are very satisfactory. Finally, regarding the regularity of $F$, clearly $F$ is smooth everywhere except when $A=0$ or $\beta=0$. In particular, the set where $F$ is not smooth has measure zero.
\section{Radar distance for a Finsler gravitational wave}
\label{sec:radar_distance}
Now we are finally in the position to analyze the physical effects of a passing Finslerian gravitational wave of $(\alpha,\beta)$-type. We have seen in Section \ref{sec:lin_ab_metric_is_randers} that, to first order, an $(\alpha,\beta)$-metric is equivalent to a Randers metric. And we have argued in section \ref{sec:Randers} that this should not be the standard Randers metric but rather our modified Randers metric. Thus our starting point will be the linearized gravitational wave solution of modified-Randers type. That is, we are interested in the solution \eqref{eq:metric_naive_Randers_grav_wave} but with the conventional Randers metric $F=\alpha+\beta$ replaced by the modified Randers metric $F = \text{sgn}(A)\alpha + |\beta|$, where $\alpha = \sqrt{|A|}$, $A = a_{ij}y^iy^j$. Note that this modification does not change any of the results pertaining to the classification of solutions to the field equations, by the argument given at the beginning of section \ref{sec:Randers} that a modified Randers metric is `locally' equivalent to a standard Randers metric, in a certain precise sense. The relevant Finsler metric is therefore given by
\begin{align}\label{eq:metric_Randers_grav_wave}
F = \text{sgn}(A)\alpha + |\beta|, \qquad\left\{\begin{array}{ll}
A = -\text{d} t^2 + (1+\varepsilon f_+(t-z)) \text{d} x^2 + (1-\varepsilon f_+(t-z)) \text{d} y^2 + 2\varepsilon f_\times (t-z) \text{d} x\,\text{d} y+ \text{d} z^2 \\
\beta = \frac{\lambda}{\sqrt{2}}\left(\text{d} t - \text{d} z\right)
\end{array}\right.
\end{align}
Since actual gravitational wave measurements are done with interferometers, which effectively measure the \textit{radar distance}, the aim of this section is to compute that radar distance during the passing of a gravitational wave of the form \eqref{eq:metric_Randers_grav_wave}. \\
The setup is as follows. A light ray is emitted from some spacetime location with coordinates $(t_0, x_0, y_0, z_0)$, travels to another location in spacetime with coordinates $(t_0+\Delta t, x_0+\Delta x, y_0 + \Delta y, z_0+\Delta z)$, where it is reflected, after which it travels back to the original (spatial) location, with spacetime coordinates $(t_0+\Delta t_{\text{tot}}, x_0, y_0, z_0)$, where it is received again. We are interested in the amount of proper time that passes between emission and reception of the light ray, as measured by an `inertial' observer\footnote{In this context, we say that an observer is inertial if it would be considered an inertial observer in the absence of the wave (i.e. when $f_+ = f_\times=0$). In other words, thinking of the gravitational wave as having a finite duration as it passes the Earth, an observer is inertial precisely if it is inertial before and after the wave passes.} located at spatial coordinates $(x_0, y_0, z_0)$. Because light travels forwards \textit{and} backwards during this time interval, one half of the time interval is usually called the \textit{radar distance} between the two spacetime points (sometimes the value is multiplied by the velocity of light, $c$, which we have set to 1, so that it has the dimensions of distance). In other words, the radar distance can be expressed as $R = \Delta \tau/2$.\\
We will compute the radar distance first for a classical GR gravitational wave
and then we repeat the calculation for the Randers gravitational wave, so that it is clear where each of the Finslerian effects enters precisely. In Section \ref{sec:geodesics} we derive the explicit form of the geodesics. Conveniently, the geodesics in the Finsler setting are the same as those in the GR setting, so these results are general. (Null geodesics are different, though, because the null conditions are not the same.) Then, using the form of the geodesics, we first recall the calculation of the radar distance in the GR setting \cite{Rakhmanov_2009} in Section \ref{sec:radar_GR}, and then in Section \ref{sec:radar_Finsler} we compute the radar distance in the Finsler setting. Remarkably, the results, when interpreted correctly, turn out to be identical to the ones for GR.
\subsection{Geodesics}\label{sec:geodesics}
The first important observation here is that the geodesics in a Randers gravitational wave spacetime with Finsler metric $F = \text{sgn}(A)\alpha + |\beta|$ as in Eq. \eqref{eq:metric_Randers_grav_wave} coincide with the geodesics of the GR spacetime with metric $\text{d} s^2 = A$, because the affine connection of $F$ coincides with the Levi-Civita connection of $A$, by Prop. \ref{prop:coinciding_spray}. For the derivation of the general form of geodesics, we may therefore assume the geometry is given by $\text{d} s^2 = A$. The results then apply both to the GR scenario as well as the Randers one. Thus our point of departure is the metric
\begin{align}
\text{d} s^2 = A =-\text{d} t^2 + (1+\varepsilon f_+(t-z)) \text{d} x^2 + (1-\varepsilon f_+(t-z))\text{d} y^2 + 2\varepsilon f_\times (t-z) \text{d} x\,\text{d} y + \text{d} z^2, \qquad \varepsilon\ll 1.
\end{align}
Throughout this section and the next, we essentially follow \cite{Rakhmanov_2009}, although our notation and presentation are sometimes different. \\%The metric above satisfies the linearized Einstein field equations for any choice of the functions $f_+$ and $f_\times$, representing the two independent polarizations of the wave. \\
If we use coordinates $u= (t-z)/\sqrt{2}$ and $v = (t+z)/\sqrt{2}$ the geodesic equations to first order in $\varepsilon$ are given by
\begin{align}
-\dot u \eqqcolon p_v &= \text{const}\\
(1+\varepsilon f_+(u))\dot x +\varepsilon f_\times(u)\dot y\eqqcolon p_ x &= \text{const}\\
(1-\varepsilon f_+(u))\dot y +\varepsilon f_\times(u)\dot x\eqqcolon p_y &= \text{const}\\
\ddot v +\frac{1}{2} \varepsilon \left(\dot{x}^2-\dot{y}^2\right) f_+'(u) + \varepsilon f_\times '(u)\dot x\dot y &= 0
\end{align}
The first three equations can be rewritten to first order as
\begin{align}
\dot u = -p_v,\qquad \dot x = (1-\varepsilon f_+(u))p_x - \varepsilon f_\times(u)p_y,\qquad \dot y = (1+\varepsilon f_+(u))p_y- \varepsilon f_\times(u)p_x,
\end{align}
and can be integrated (with respect to an affine parameter $\sigma$, chosen without loss of generality such that $\dot u = 1$) to
\begin{align}
u = u_0 + \sigma,\quad x = x_0 + \sigma\left[\left(1 - \varepsilon \bar f_+(\sigma)\right)p_x- \varepsilon \bar f_\times(\sigma)p_y\right],\quad y = y_0 + \sigma\left[\left(1 + \varepsilon \bar f_+(\sigma)\right)p_y- \varepsilon \bar f_\times(\sigma)p_x\right],
\end{align}
where
\begin{align}
\bar f_{+,\times}(\sigma) \equiv \frac{1}{\sigma}\int_0^\sigma f_{+,\times}(u_0+\sigma')\,\text{d}\sigma'
\end{align}
is the averaged value of $f_{+,\times}$. The equation for $v$ can be integrated to
\begin{align}
\dot v = -\tilde p_{u0}-\frac{1}{2} \varepsilon \left(p_x^2-p_y^2\right) f_+(u_0+\sigma) - \varepsilon f_\times (u_0+\sigma)p_x p_y
\end{align}
where $\tilde p_{u0} = p_{u0}-\frac{\varepsilon}{2}(p_x^2-p_y^2)f_+(u_0)- \varepsilon f_\times (u_0)p_x p_y$, $p_u = -\dot v$ (not necessarily constant) and $p_{u0}$ is its initial value at $\sigma = 0$. Integrating once again, we obtain
\begin{align}
v = v_0 - \tilde p_{u0}\sigma-\frac{1}{2}\varepsilon (p_x^2-p_y^2)\sigma \bar f_+(\sigma)- \varepsilon \sigma\bar f_\times (\sigma)p_x p_y.
\end{align}
Any geodesic emanating from a given point $x_0^\mu$ can then be described by the following parameterized path, for certain values of $p_x, p_y$ and $\tilde p_{u0}$:
\begin{align}
u(\sigma) &= u_0 + \sigma, \label{eq:geod_prim_u}\\
x(\sigma) &= x_0 + \sigma\left[\left(1 - \varepsilon \bar f_+(\sigma)\right)p_x- \varepsilon \bar f_\times(\sigma)p_y\right],\label{eq:geod_prim_x}\\
y(\sigma) &= y_0 + \sigma\left[\left(1 + \varepsilon \bar f_+(\sigma)\right)p_y- \varepsilon \bar f_\times(\sigma)p_x\right],\label{eq:geod_prim_y}\\
v(\sigma) &= v_0- \tilde p_{u0}\sigma-\frac{1}{2}\varepsilon (p_x^2-p_y^2)\sigma \bar f_+(\sigma)- \varepsilon \sigma\bar f_\times (\sigma)p_x p_y.\label{eq:geod_prim_v}
\end{align}
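As a numerical sanity check of the closed-form geodesics \eqref{eq:geod_prim_u}-\eqref{eq:geod_prim_v} (an illustrative aside), the following Python sketch integrates the exact transverse geodesic equation for a toy profile $f_+(u)=\cos u$ with $f_\times=0$ and $u_0=0$, and compares the result with Eq.~\eqref{eq:geod_prim_x}; the deviation is of the expected order $\varepsilon^2$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps, px = 1e-3, 0.7
f = lambda u: np.cos(u)              # toy profile f_+(u); f_x = 0

# Exact transverse equation, with u as affine parameter (u0 = 0):
#   p_x = (1 + eps f_+(u)) xdot = const  =>  xdot = p_x/(1 + eps f_+(u))
sol = solve_ivp(lambda u, x: [px/(1 + eps*f(u))], (0, 10), [0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

# First-order closed form: x = sigma (1 - eps fbar(sigma)) p_x,
# with fbar(sigma) = (1/sigma) int_0^sigma cos(s) ds = sin(sigma)/sigma
sig = np.linspace(0.1, 10, 50)
xcf = sig*(1 - eps*np.sin(sig)/sig)*px

print(np.max(np.abs(sol.sol(sig)[0] - xcf)))   # O(eps^2), a few times 1e-6
\end{verbatim}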
\subsection{Null geodesics and radar distance - GR case}\label{sec:radar_GR}
To find the radar distance, we need to know the expression for \textit{null} geodesics. In the GR case, null curves satisfy $-2\dot u\dot v + (1+\varepsilon f_+(u))\dot x^2 + (1-\varepsilon f_+(u))\dot y^2 +2\varepsilon f_\times(u) \dot x\dot y= 0$. Substituting the general form of $u,x,y,v$ for geodesics, Eqs. \eqref{eq:geod_prim_u}-\eqref{eq:geod_prim_v}, into the null condition, and evaluating it at $u=u_0$ (i.e. $\sigma=0$), the condition reduces to $2\tilde p_{u0} + p_x^2+p_y^2=0$. We may therefore eliminate $\tilde p_{u0}$ and directly substitute this into the expression \eqref{eq:geod_prim_v} for $v(\sigma)$. A null geodesic starting at $(u_0,x_0,y_0,v_0)$ at $\sigma=0$ can therefore be described by the following parameterized path,
\begin{align}
u &= u_0+\sigma, \\
x &= x_0 + \sigma\left[\left(1 - \varepsilon \bar f_+(\sigma)\right)p_x- \varepsilon \bar f_\times(\sigma)p_y\right],\\
y &= y_0 + \sigma\left[\left(1 + \varepsilon \bar f_+(\sigma)\right)p_y- \varepsilon \bar f_\times(\sigma)p_x\right],\\
v &= v_0 + \frac{\sigma}{2}(p_x^2+p_y^2)-\frac{1}{2}\varepsilon (p_x^2-p_y^2)\sigma \bar f_+(\sigma)- \varepsilon \sigma \bar f_\times (\sigma)p_x p_y
\end{align}
Next we plug in the boundary conditions at the receiving point, $(u_0+\Delta u, x_0+\Delta x, y_0 + \Delta y, v_0+\Delta v)$. Note that $\sigma = \Delta u$ at that point, and hence from the middle two equations we infer that
\begin{align}
p_x = \frac{\Delta x}{\Delta u}\left(1 + \varepsilon \bar f_+(\Delta u)\right) + \varepsilon \bar f_\times(\Delta u)\frac{\Delta y}{\Delta u}, \qquad p_y = \frac{\Delta y}{\Delta u}\left(1 - \varepsilon \bar f_+(\Delta u)\right)+ \varepsilon \bar f_\times(\Delta u)\frac{\Delta x}{\Delta u}.
\end{align}
Plugging this into the $v$ equation yields
\begin{align}
2\Delta u\Delta v = \Delta x^2 (1+\varepsilon \bar f_+(\Delta u)) + \Delta y^2 (1-\varepsilon \bar f_+(\Delta u)) + 2\varepsilon \bar f_\times(\Delta u) \Delta x\Delta y,
\end{align}
or equivalently,
\begin{align}
\Delta t^2 = \left(1 + \varepsilon \bar f_+(\Delta u)\right)\Delta x^2 + \left(1 - \varepsilon \bar f_+(\Delta u)\right)\Delta y^2 + 2\varepsilon \bar f_\times(\Delta u) \Delta x\Delta y +\Delta z^2 ,
\end{align}
where we have used that $-2\Delta u\Delta v = -\Delta t^2+\Delta z^2$.
Hence to first order in $\varepsilon$ we have
\begin{align}\label{eq:GR_delta_t_outgoing}
\Delta t = \Delta \ell + \left(\frac{\Delta x^2 - \Delta y^2 }{2\Delta\ell}\right)\varepsilon \bar f_+(\Delta u) + \left(\frac{\Delta x \Delta y }{\Delta\ell}\right)\varepsilon \bar f_\times(\Delta u),
\end{align}
where $\Delta \ell \equiv \sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2}$.\\
The right hand side in principle still depends on $t$ though, via $\bar f(\Delta u)$ (here and in what follows we write $\bar f$ for either of $\bar f_+$ and $\bar f_\times$), so this is not yet a closed formula for $\Delta t$. However, since $\bar f$ only appears together with $\varepsilon$, and since we are only interested in the first order expression for $\Delta t$, any zeroth order expression for $\bar f$ suffices in this formula. We have
\begin{align}
\bar f(\Delta u) &= \frac{1}{\Delta u}\int_0^{\Delta u} f(u_0+\sigma)\text{d}\sigma = \frac{\sqrt{2}}{\Delta t - \Delta z}\int_0^{(\Delta t - \Delta z)/\sqrt{2}} f(u_0+\sigma)\text{d}\sigma \\
&= \frac{\sqrt{2}}{\Delta \ell - \Delta z}\int_0^{(\Delta \ell - \Delta z)/\sqrt{2}} f(u_0+\sigma)\text{d}\sigma + \mathcal O(\varepsilon)\\
&= \frac{\sqrt{2}}{\Delta \ell - \Delta z}\int_0^{(\Delta \ell - \Delta z)/\sqrt{2}} f\left(\frac{1}{\sqrt{2}}(t_0-z_0)+\sigma\right)\text{d}\sigma + \mathcal O(\varepsilon)\label{eq:avg_perturbation_zeroth_order}
\end{align}
since $\Delta t = \Delta \ell + \mathcal O(\varepsilon)$. We introduce another symbol for this expression, namely
\begin{align}\label{eq:avg_perturbation_zeroth_order_tz}
\bar f(\Delta \ell, \Delta z, t_0-z_0)\equiv \frac{\sqrt{2}}{\Delta \ell - \Delta z}\int_0^{(\Delta \ell - \Delta z)/\sqrt{2}} f\left(\frac{1}{\sqrt{2}}(t_0-z_0)+\sigma\right)\text{d}\sigma,
\end{align}
where the explicit display of the arguments serves to remind us that $\bar f$ depends only on $\Delta \ell, \Delta z$ and the initial value of $t-z$. Since $\varepsilon \bar f(\Delta u) = \varepsilon \bar f(\Delta \ell, \Delta z, t_0-z_0) + \mathcal O(\varepsilon^2)$, it follows that we can rewrite Eq. \eqref{eq:GR_delta_t_outgoing}, to first order, as
\begin{align}
\Delta t = \Delta \ell + \left(\frac{\Delta x^2 - \Delta y^2 }{2\Delta\ell}\right)\varepsilon \bar f_+(\Delta \ell, \Delta z, t_0-z_0) + \left(\frac{\Delta x \Delta y }{\Delta\ell}\right)\varepsilon\bar f_\times(\Delta \ell, \Delta z, t_0-z_0),\label{eq:GR_time_elapsed_single_trip}
\end{align}
which is a closed expression for the elapsed coordinate time $\Delta t$ interval for a light ray traveling a certain spatial distance, in terms of the spatial coordinate separations and the initial value of $t-z$. \\
Now let's consider the complete trip, from $x^\mu_0$ to $x^\mu_0 + \Delta x^\mu$ and `back'. The total coordinate time elapsed during this trip is the sum of the forward trip and the backward trip time intervals. Schematically:
\begin{align}
\Delta t_\text{tot} &= \Delta t(\Delta x,\Delta y,\Delta z,t_0-z_0) + \Delta t(-\Delta x, -\Delta y, -\Delta z,t_0+\Delta t-(z_0+\Delta z))
\end{align}
since the spatial interval on the backward trip is simply minus the forward spatial interval, and the `initial' value of $t-z$ for the backward trip is just the final value $t_0 -z_0 + \Delta t -\Delta z$ corresponding to the forward trip. Plugging in \eqref{eq:GR_time_elapsed_single_trip} yields
\begin{align}\label{eq:GR_time_elapsed_total_trip}
\Delta t_\text{tot} = 2\Delta \ell &+ \varepsilon\left(\frac{\Delta x^2 - \Delta y^2 }{2\Delta\ell}\right) \bar f_{+,\text{tot}} + \varepsilon \left(\frac{\Delta x \Delta y }{\Delta\ell}\right) \bar f_{\times,\text{tot}},
\end{align}
where $\bar f_{+,\text{tot}} = \bar f_{+,\text{forward}} + \bar f_{+,\text{backward}}$ and similarly for the $\times$-polarization, in terms of the forward and backward averaged amplitudes, respectively, given by
\begin{align}
\bar f_{+,\times,\text{forward}} &= \bar f_{+,\times}(\Delta \ell, \Delta z, t_0-z_0) \\
&= \frac{\sqrt{2}}{\Delta \ell - \Delta z}\int_0^{(\Delta \ell - \Delta z)/\sqrt{2}} f_{+,\times}\left(\frac{1}{\sqrt{2}}(t_0-z_0)+\sigma\right)\text{d}\sigma, \label{eq:bar_f_forward}\\
\bar f_{+,\times,\text{backward}} &= \bar f_{+,\times}(\Delta \ell, -\Delta z, t_0-z_0 + \Delta t - \Delta z) \\
&= \frac{\sqrt{2}}{\Delta \ell + \Delta z}\int_0^{(\Delta \ell + \Delta z)/\sqrt{2}} f_{+,\times}\left(\frac{1}{\sqrt{2}}(t_0+\Delta\ell-z_0 - \Delta z)+\sigma\right)\text{d}\sigma, \label{eq:bar_f_backward}
\end{align}
where in the last expression we have replaced $\Delta t$ by $\Delta \ell$ in the argument of $f_{+,\times}$, because to zeroth order this makes no difference, and only the zeroth order expression for $\bar f_{+,\times,\text{backward}}$ is relevant because $\bar f_{+,\times,\text{backward}}$ always appears multiplied by $\varepsilon$ in the expressions we care about, like $\Delta t_\text{tot}$.\\
Equation \eqref{eq:GR_time_elapsed_total_trip} gives the total coordinate time elapsed during the trip forward and back. Recall that the radar distance is defined as $R = \Delta \tau/2$ in terms of the proper time measured by the stationary observer local to the emission and reception of the light ray. For such a stationary observer $x=y=z=const$, so in fact the proper time coincides with coordinate time. The radar distance is therefore given by $R = \Delta t_\text{tot}/2$, that is
\begin{align}
\boxed{
R = \Delta \ell + \varepsilon\left(\frac{\Delta x^2 - \Delta y^2 }{4\Delta\ell}\right) \bar f_{+,\text{tot}} + \varepsilon \left(\frac{\Delta x \Delta y }{2\Delta\ell}\right) \bar f_{\times,\text{tot}} + \mathcal O (\varepsilon^2).\label{eq:GR_radar_distance}
}
\end{align}
This agrees with the result obtained in \cite{Rakhmanov_2009}.
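For concreteness (an illustrative aside with toy parameter values), Eq.~\eqref{eq:GR_radar_distance} is straightforward to evaluate numerically. The sketch below does so for a monochromatic plus-polarized wave, $f_+(u)=\cos u$ and $f_\times=0$, with an interferometer arm along the $x$-axis, using the averaged amplitudes \eqref{eq:bar_f_forward} and \eqref{eq:bar_f_backward}; the first-order change in $R$ is computed directly, rather than as a difference, to avoid rounding it away against $\Delta\ell$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

eps = 1e-21                          # illustrative strain amplitude
f   = lambda u: np.cos(u)            # monochromatic f_+; f_x = 0

dx, dy, dz = 4.0, 0.0, 0.0           # arm along the x-axis, units with c = 1
t0_z0 = 0.0                          # initial value of t - z
dl = np.sqrt(dx**2 + dy**2 + dz**2)

def fbar(dl, dz, tz):                # averaged amplitude, zeroth order
    L = (dl - dz)/np.sqrt(2)
    return quad(lambda s: f(tz/np.sqrt(2) + s), 0, L)[0]/L

fp_fwd = fbar(dl,  dz, t0_z0)                  # forward trip
fp_bwd = fbar(dl, -dz, t0_z0 + dl - dz)        # backward trip

dR = eps*(dx**2 - dy**2)/(4*dl)*(fp_fwd + fp_bwd)
print(dR)                                      # first-order change R - dl
\end{verbatim}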
\subsection{Null geodesics and radar distance - Finsler case}\label{sec:radar_Finsler}
Now we consider the full Randers metric \eqref{eq:metric_Randers_grav_wave}. The 1-form appearing in the metric is $\beta = \lambda\,\text{d} u$, with $0<\lambda\ll 1$. The result \eqref{eq:GR_radar_distance} can be regarded as the radar distance in the special case that $\lambda= 0$. Our aim in this section is to find the corresponding expression for non-zero values of $\lambda$. As argued at the end of Section \ref{sec:linearized_Randers_sols}, in addition to linearizing in $\varepsilon$, we also use a perturbative expansion in $\lambda$. In fact, instead of working to first order in $\lambda$, we will work to second order in the Finslerian parameter, as certain important Finslerian effects only enter at second order, as we will see. We also neglect terms of combined order $\varepsilon\lambda^2$ and higher.\\
Recall that the geodesic structure of the Randers metric is equivalent to that of the GR metric characterized by $\lambda=0$. Hence the expressions \eqref{eq:geod_prim_u}-\eqref{eq:geod_prim_v} for the explicit form of the geodesics still apply in the current scenario.\\
The first place where the Finsler character enters is due to the modified null condition $F=0$, that one may equivalently think of as a modified dispersion relation (MDR) for massless particles. According to Section \ref{sec:Randers_causality}, null curves now satisfy $A = -\beta^2$, i.e. $-2\dot u\dot v + (1+\varepsilon f_+(u))\dot x^2 + (1-\varepsilon f_+(u))\dot y^2 +2\varepsilon f_\times(u) \dot x\dot y= -\beta^2 = -\lambda^2\dot u^2$, which, after substitution of the form of our geodesics above, becomes $2\tilde p_{u0} + p_x^2+p_y^2=-\lambda^2$. Here we can make two important observations:
\begin{enumerate}
\item The effect due to the MDR or modified null condition enters at order $\lambda^2$;
\item In the limit $\lambda\to 0$ we recover the standard null condition used in the previous section.
\end{enumerate}
As before, using the null condition, we may eliminate $\tilde p_{u0}$ and directly substitute this into the expression \eqref{eq:geod_prim_v} for $v(\sigma)$. It follows that a null geodesic starting at $(u_0,x_0,y_0,v_0)$ at $\sigma=0$ can be described by the following parameterized path:
\begin{align}
u &= u_0+\sigma, \\
x &= x_0 + \sigma\left[\left(1 - \varepsilon \bar f_+(\sigma)\right)p_x- \varepsilon \bar f_\times(\sigma)p_y\right],\\
y &= y_0 + \sigma\left[\left(1 + \varepsilon \bar f_+(\sigma)\right)p_y- \varepsilon \bar f_\times(\sigma)p_x\right],\\
v &= v_0 + \frac{\sigma}{2}(p_x^2+p_y^2+\lambda^2)-\frac{1}{2}\varepsilon (p_x^2-p_y^2)\sigma \bar f_+(\sigma)- \varepsilon \sigma \bar f_\times (\sigma)p_x p_y
\end{align}
We can now use exactly the same methods as in the previous section for the GR wave to find the coordinate time interval between the emission and reception of a light ray travelling between two points. From the middle two equations it follows again that
\begin{align}
p_x = \frac{\Delta x}{\Delta u}\left(1 + \varepsilon \bar f_+(\Delta u)\right) + \varepsilon \bar f_\times(\Delta u)\frac{\Delta y}{\Delta u}, \qquad p_y = \frac{\Delta y}{\Delta u}\left(1 - \varepsilon \bar f_+(\Delta u)\right)+ \varepsilon \bar f_\times(\Delta u)\frac{\Delta x}{\Delta u},
\end{align}
and substituting this into the $v$ equation yields
\begin{align}
2\Delta u\Delta v = \Delta x^2 (1+\varepsilon \bar f_+(\Delta u)) + \Delta y^2 (1-\varepsilon \bar f_+(\Delta u)) + 2\varepsilon \bar f_\times(\Delta u) \Delta x\Delta y + \lambda^2\Delta u^2,
\end{align}
or equivalently,
\begin{align}
\left(1-\frac{\lambda^2}{2}\right)\Delta t^2 = \left(1 + \varepsilon \bar f_+(\Delta u)\right)\Delta x^2 + \left(1 - \varepsilon \bar f_+(\Delta u)\right)\Delta y^2 + 2\varepsilon \bar f_\times(\Delta u) \Delta x\Delta y +\left(1+\frac{\lambda^2}{2}\right)\Delta z^2 - \lambda^2\Delta z\Delta t,
\end{align}
where we have used that $-2\Delta u\Delta v = -\Delta t^2+\Delta z^2$. This equation is solved to first order in $\varepsilon$ and $\lambda^2$ (neglecting $\varepsilon\lambda^2$ terms) by\footnote{In addition to this solution there is, formally, another solution to the equation. However, this other solution has the wrong zeroth order term, namely a negative one, which renders it physically irrelevant.}
\begin{align}
\Delta t = \Delta \ell + \left(\frac{\Delta x^2 - \Delta y^2 }{2\Delta\ell}\right)\varepsilon \bar f_+(\Delta u) + \left(\frac{\Delta x \Delta y }{\Delta\ell}\right)\varepsilon \bar f_\times(\Delta u)+ \frac{1}{2}\left(\frac{\Delta x^2 + \Delta y^2 + 2 \Delta z^2}{2\Delta\ell} - \Delta z\right)\lambda^2,
\end{align}
where $\Delta \ell \equiv \sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2}$. Again, as in the GR case, we may replace $\bar f$ by its zeroth order equivalent, which is still given by Eq. \eqref{eq:avg_perturbation_zeroth_order_tz}. To see why this is still the case, observe that $\bar f$ can be expressed as a zeroth order term plus an $\varepsilon$ correction, plus a $\lambda^2$ correction. Since $\bar f$ only appears in $\Delta t$ as the product $\varepsilon \bar f$, the two correction terms in $\bar f$ result in a $\mathcal O(\varepsilon^2)$ term and a $\mathcal O(\varepsilon\lambda^2)$ term, respectively, both of which we may neglect in our current perturbative expansion. It follows that, to first order in $\varepsilon$ and $\lambda^2$ we have the closed expression
\begin{align}
\Delta t = \Delta \ell &+ \left(\frac{\Delta x^2 - \Delta y^2 }{2\Delta\ell}\right)\varepsilon \bar f_+(\Delta \ell, \Delta z, t_0-z_0)+ \left(\frac{\Delta x \Delta y }{\Delta\ell}\right)\varepsilon \bar f_\times(\Delta \ell, \Delta z, t_0-z_0) \\
&+ \frac{1}{2}\left(\frac{\Delta x^2 + \Delta y^2 + 2 \Delta z^2}{2\Delta\ell} - \Delta z\right)\lambda^2, \label{eq:Randers_time_elapsed_single_trip}
\end{align}
where the expressions for $\bar f_{+,\times}(\Delta \ell, \Delta z, t_0-z_0)$ are identical to their GR counterparts, Eq. \eqref{eq:avg_perturbation_zeroth_order_tz}. This is the coordinate time interval needed for a light ray to travel a spatial distance $(\Delta x,\Delta y,\Delta z)$. \\
The total coordinate time elapsed during the complete trip from $x^\mu_0$ to $x^\mu_0 + \Delta x^\mu$ and `back' is the sum of the forward trip and the backward trip time intervals. Just as in the GR case we can write this schematically as
\begin{align}
\Delta t_\text{tot} &= \Delta t(\Delta x,\Delta y,\Delta z,t_0-z_0) + \Delta t(-\Delta x, -\Delta y, -\Delta z,t_0+\Delta t-(z_0+\Delta z))
\end{align}
Plugging in \eqref{eq:Randers_time_elapsed_single_trip} yields
\begin{align}\label{eq:Randers_time_elapsed_total_trip}
\Delta t_\text{tot} = 2\Delta \ell &+ \varepsilon\left(\frac{\Delta x^2 - \Delta y^2 }{2\Delta\ell}\right) \bar f_{+,\text{tot}} + \varepsilon \left(\frac{\Delta x \Delta y }{\Delta\ell}\right) \bar f_{\times,\text{tot}} \nonumber\\
&+ \frac{1}{2}\lambda^2\left(\frac{\Delta x^2 + \Delta y^2 + 2 \Delta z^2}{\Delta\ell} \right),
\end{align}
where the expressions for $\bar f_{+,\text{tot}} = \bar f_{+,\text{forward}} + \bar f_{+,\text{backward}}$ and $\bar f_{\times,\text{tot}} = \bar f_{\times,\text{forward}} + \bar f_{\times,\text{backward}}$ are again identical to their GR counterparts, Eqs. \eqref{eq:bar_f_forward},\eqref{eq:bar_f_backward}.\\
The last step in the computation of the radar distance $R = \Delta \tau/2$ is to convert the coordinate time interval to a proper time interval. This is where a second Finslerian effect enters. We consider again a `stationary' observer located at the point where the light ray is emitted and later received. Such an observer has a 4-velocity given by $(\dot t,0,0,0)$, where we will assume without loss of generality that $\dot t>0$. The proper time measured by an observer is given by the Finslerian length of its worldline $\Delta \tau =-\int F\, \text{d} \sigma$. If we use $\sigma = \tau$ as our curve parameter, differentiating with respect to it shows that $F$ should be normalized as $F=-1$. This is the Finsler equivalent of the fact that in GR the worldline of a particle parameterized by proper time should always satisfy $g_{\mu\nu}\dot x^\mu \dot x^\nu=-1$ (or $+1$, depending on the signature convention). In the case of our observer the condition becomes
\begin{align}
F = \text{sgn}(A)\alpha + |\beta| = \text{sgn}(-\dot t^2)\sqrt{|\dot t^2|} + \frac{|\lambda\dot t|}{\sqrt{2}} = -|\dot t| + \frac{|\lambda\dot t|}{\sqrt{2}} = \left(-1+\frac{\lambda }{\sqrt{2}}\right)\dot t \stackrel{!}{=} -1.
\end{align}
It follows that
\begin{align}
\Delta \tau = \left(1-\frac{\lambda }{\sqrt{2}}\right)\Delta t_\text{tot}
\end{align}
along the worldline of the stationary observer. Plugging in Eq. \eqref{eq:Randers_time_elapsed_total_trip} we conclude that, to first order in $\varepsilon$ and second order in $\lambda$, the radar distance is given by
\begin{align}
R = \left(1 - \frac{\lambda }{\sqrt{2}}\right)\Delta \ell &+ \varepsilon\left(1 - \frac{\lambda }{\sqrt{2}}\right)\left(\frac{\Delta x^2 - \Delta y^2 }{4\Delta\ell}\right) \bar f_{+,\text{tot}} +\left(1 - \frac{\lambda }{\sqrt{2}}\right)\left(\frac{\Delta x \Delta y }{2\Delta\ell}\right)\varepsilon \bar f_{\times,\text{tot}} + \frac{\lambda^2}{4}\left(\Delta\ell + \frac{\Delta z^2}{\Delta\ell} \right).\label{eq:Randers_Radar_Distance}
\end{align}
This expresses the radar distance as a function of the spatial coordinate distances and the initial value of $t-z$ (the latter enters the expression via $\bar f_{+,\times,\text{tot}}$). Before we move on, let us summarize in what ways the Finslerian parameter $\lambda$ has entered our derivation so far:
\begin{enumerate}
\item The null trajectories are altered due to the fact that the Finsler metric induces a modified null condition or MDR. As a result, it takes a \textit{larger} coordinate time interval for a light ray to travel a given spatial coordinate distance. This effect works in all spatial directions, even the direction parallel to the propagation direction of the light ray. This effect enters at order $\lambda^2$.
\item The ratio of proper time and coordinate time is altered, with the result that \textit{less proper time is experienced per unit coordinate time}. This effect enters at order $\lambda$.
\end{enumerate}
There is, however, a third way in which the parameter enters, namely in the relation between the coordinate distance and radar distance \textit{in the absence of the wave}. For a gravitational wave in GR these conveniently coincide; in the case of our Randers waves they don't. The formula for the radar distance derived above refers merely to coordinates. In order to make sense of the result, we would like to express the right hand side in terms of measurable quantities, like the radar distances in the various directions in the absence of the wave. Employing Eq. \eqref{eq:Randers_Radar_Distance} we write
\begin{align}
\Delta X = \left(1 - \frac{\lambda }{\sqrt{2}}\right)\Delta x + \frac{\lambda^2}{4}\Delta x, \\
\Delta Y = \left(1 - \frac{\lambda }{\sqrt{2}}\right)\Delta y + \frac{\lambda^2}{4}\Delta y , \\
\Delta Z = \left(1 - \frac{\lambda }{\sqrt{2}}\right)\Delta z + \frac{\lambda^2}{2}\Delta z ,
\end{align}
for the radar distance in the $x,y$ and $z$ direction \textit{in the absence of the wave}, and
\begin{align}
R_0 = \left(1 - \frac{\lambda }{\sqrt{2}}\right)\Delta \ell + \frac{\lambda^2}{4}\left(\Delta\ell + \frac{\Delta z^2}{\Delta\ell} \right),
\end{align}
for the radar distance \eqref{eq:Randers_Radar_Distance} in the relevant direction \textit{in the absence of the wave}. Eliminating the coordinate distances in favour of the physical radar distances by virtue of the inverse transformations, valid to second order in $\lambda$,
\begin{align}
\Delta x &= \Delta X\left(1 + \frac{\lambda }{\sqrt{2}} + \frac{\lambda^2}{4}\right)\\
\Delta y &= \Delta Y\left(1 + \frac{\lambda }{\sqrt{2}} + \frac{\lambda^2}{4}\right)\\
\Delta z &= \Delta Z\left(1 + \frac{\lambda }{\sqrt{2}}\right) \\
\Delta \ell &= R_0\left(1 + \frac{\lambda }{\sqrt{2}} + \frac{\lambda^2}{4}\right) - \frac{\Delta z^2}{4 R_0}\lambda^2 \\
&= R_0\left(1 + \frac{\lambda }{\sqrt{2}} + \frac{\lambda^2}{4}\right) - \frac{\Delta Z^2}{4 R_0}\lambda^2
\end{align}
we can express the radar distance in the presence of the wave as
\begin{align}
\boxed{
R = R_0 + \varepsilon\left(\frac{\Delta X^2 - \Delta Y^2 }{4R_0}\right)\bar f_{+,\text{tot}} + \varepsilon\left(\frac{\Delta X\Delta Y }{2R_0}\right)\bar f_{\times,\text{tot}} + \mathcal O(\varepsilon^2, \lambda^3, \varepsilon\lambda^2).
}
\end{align}
This is a remarkable result. By expressing the radar distance in terms of the physical observables $\Delta X,\Delta Y$ and $R_0$ rather than merely coordinates,
all dependence on $\lambda$ has disappeared to the desired order and the expression is identical to its GR counterpart, Eq. \eqref{eq:GR_radar_distance}! We must conclude, therefore, that the effect of a Randers gravitational wave on interferometer experiments is virtually indistinguishable from that of a conventional GR gravitational wave. \\
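This cancellation is easy to verify numerically. The following sketch (an illustrative aside with arbitrary toy values for the coordinate separations and the averaged amplitudes) evaluates Eq.~\eqref{eq:Randers_Radar_Distance} together with the wave-free radar distances and confirms that the residual with respect to the GR formula is of the neglected order $\varepsilon\lambda^2$.
\begin{verbatim}
import numpy as np

lam, eps = 1e-3, 1e-6
dx, dy, dz = 1.0, 2.0, 0.5
fp_tot, fx_tot = 0.3, -0.2           # toy values of the averaged amplitudes
dl = np.sqrt(dx**2 + dy**2 + dz**2)
c = 1 - lam/np.sqrt(2)

# Radar distance in the presence of the wave
R = (c*dl + eps*c*(dx**2 - dy**2)/(4*dl)*fp_tot
     + eps*c*(dx*dy)/(2*dl)*fx_tot + lam**2/4*(dl + dz**2/dl))

# Wave-free radar distances
dX = c*dx + lam**2/4*dx
dY = c*dy + lam**2/4*dy
R0 = c*dl + lam**2/4*(dl + dz**2/dl)

R_GR = R0 + eps*(dX**2 - dY**2)/(4*R0)*fp_tot + eps*(dX*dY)/(2*R0)*fx_tot
print(R - R_GR)                      # residual of order eps*lam^2 (~1e-13)
\end{verbatim}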
It is important to remark that this by no means implies that all phenomena in such a Finsler spacetime are identical to their GR counterparts. It might be possible to detect the presence of a non-vanishing $\lambda$ by some other means. This is a very interesting and important question; however, it is beyond the scope of this article and something to explore in future work. Our results pertain merely to gravitational wave effects as observed by interferometers.
\section{Discussion}
\label{sec:discussion}
The main aim of this paper was to study the physical effect of Finslerian gravitational waves and, in particular, to investigate whether, and how, such waves can be distinguished, observationally, from the classical gravitational waves of general relativity. To this effect we have derived an expression for the radar distance at the moment a Finsler gravitational wave passes, say, the earth. This radar distance is the main observable that is measured by interferometers. Remarkably, we have found that the expression for the radar distance is indistinguishable from its non-Finslerian counterpart, leading us to conclude that interferometer experiments would not be able to distinguish between a general relativistic and a Finslerian gravitational wave, at least not with regards to the radar distance. This is on the one hand disappointing, since it means we cannot use such measurements to test the Finslerian character of our spacetime. On the other hand, though, it means that the current gravitational wave measurements are all compatible with the idea that spacetime has a Finslerian nature. To the best of our knowledge this is the first time an explicit expression for the Finslerian radar length has been obtained for the case of finite spacetime separations, and as such our work may be seen as a proof of concept. Repeating the analysis for other Finsler spacetime geometries may lead to additional insight as to the observational signature of Finsler gravity. \\
The other parts of the article, leading up to the calculation of the radar length, were more mathematical in nature. We have introduced a class of exact solutions to the field equation in Finsler gravity that have a close resemblance to the well-known general relativistic pp-waves, and that generalize all of the pp-wave-type solutions currently known in the literature \cite{Fuster:2015tua, Fuster:2018djw, Heefer_2021}. These solutions are $(\alpha,\beta)$-metrics, where $\alpha$ is a classical pp-wave and $\beta$ is its defining covariantly constant null 1-form. Consequently our solutions are of Berwald type. Their linearized versions, we have shown, may be interpreted as Finslerian gravitational waves of modified Randers type.\\
Indeed, along the way we have introduced a small modification to the standard definition of the Randers metric, motivated by the observation that the physical interpretation of the causal structure of the standard Randers metric is not immediately obvious. In contrast, we have shown that our modified Randers metrics have the nice property that their causal structure is completely equivalent to the causal structure of some auxiliary \mbox{(pseudo-)Riemannian} metric, hence leading to a perfectly clear physical interpretation. We stress that this auxiliary metric is different from the \textit{defining} \mbox{(pseudo-)Riemannian} metric $\alpha$. In the special case that the defining 1-form of the Randers metric is covariantly constant (which is the case, for example, for our solutions) we have even more satisfactory results. In this case not only the causal structure, but also the affine structure of the Randers metric coincides with that of the auxiliary \mbox{(pseudo-)Riemannian} metric, i.e. the timelike, spacelike and null geodesics of the Finsler metric can be understood, respectively, as the timelike, spacelike and null geodesics of the auxiliary \mbox{(pseudo-)Riemannian} metric. A particularly nice consequence of this is the guaranteed existence of radar neighborhoods, i.e. that given an observer and any event in spacetime, there is (at least locally) exactly one future pointing light ray and one past pointing light ray that connect the worldline of the observer to the event. This is of essential importance in our work, because without this property it would not have been possible to perform the calculation of the radar distance in the last part of the article, simply because the notion of radar distance would not even make sense in that case. \\
Let us now point out some of the limitations of our investigation. First of all, it is by no means expected that the Finslerian gravitational waves discussed here should be the only possible ones. Although much larger than even the complete class of \textit{all} Lorentzian (i.e. non-Finslerian) geometries, the class of $(\alpha,\beta)$-metrics of Berwald type, to which we have restricted our analysis, is still quite restrictive in the larger scheme of (Finsler geometric) things. So even though our results suggest that there is no observable difference between the Finslerian gravitational waves discussed in this article and their GR counterparts, there might be more general types of Finslerian gravitational waves that \textit{could} be distinguished observationally from the general relativistic ones by means of interferometer experiments. Furthermore, radar distance experiments are by no means the only way of probing our spacetime geometry. It might be possible to detect the Finslerian character of spacetime in some other way. We have not explored this possibility here, but we plan to investigate this in the future.\\
Moreover, we have assumed in our calculations that the amplitude of the gravitational waves as well as the Finslerian deviation from general relativity are sufficiently small such that a perturbative approach to first order in the former and second order in the latter is valid. It would be of interest to repeat the calculation to higher order in perturbation theory. We expect that this would in principle be a straightforward, yet possibly tedious, exercise.
\begin{acknowledgments}
S.H. wants to thank Rick Sengers and Nicky van den Berg for fruitful discussions and for their input with regard to the figures. S.H. also wants to thank Luc Florack for fruitful discussions, in particular his suggestions with regard to perturbation theory. We would like to acknowledge networking support by the COST Action CA18108, supported by COST (European Cooperation in Science and Technology).
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
A \emph{sparse model} is one in which signals of a given type $\vec{\data}
\in \reals^{\ndims}$ can be represented accurately as sparse linear combinations
of the columns (atoms) of a learned dictionary $\mat{\dict} \in \reals^{\ndims{\times}\natoms}$,
$\vec{\data} = \mat{\dict}\vec{\coef} + \vec{\err},$ where by accurate we mean that
$\norm{\vec{\err}}\ll \norm{\vec{\data}}$ (in some norm), and by sparse we mean that
the number of non-zero elements in $\vec{\coef}$, denoted by $\norm{\vec{\coef}}_0$,
is small compared to its dimension $p$. These concepts will be
formalized in the next section.
Such models, especially when $\mat{\dict}$ is learned from training samples, are
by now a well established tool in a variety of fields and applications,
see~\cite{bruckstein09,rubinstein10ieee,wright10ieee} for recent reviews.
When sparsity is a modeling device and not a hypothesis about the nature of
the analyzed signals, parameters such as the \emph{desired} sparsity in the
solutions, or the size $p$ of the dictionaries to be learned, play a
critical role in the effectiveness of sparse models for the data and tasks
at hand. However, lacking theoretical guidelines for such parameters,
published applications based on learned sparse models often rely on either
cross-validation or ad-hoc methods for determining such critical parameters
(an exception for example being the Bayesian approach,
e.g.,~\cite{carin11}). Clearly, such techniques can be impractical and/or
ineffective in many cases. This in turn hinders the further application of
such models to new types of data and applications, or their evolution into
different, possibly more sophisticated, models.
At the bottom of the aforementioned problem lie fundamental questions such
as: \emph{How rich or complex is a sparse model? How does this depend on the
required sparsity of the solutions, or the size of the dictionaries? What is
the best model for a given data class and a given task?}
The general problem of answering such questions and, in particular, the
latter, is known as \emph{model selection}. Popular model selection
techniques such as Akaike's Information Criterion
(AIC)~\cite{akaike74}, Bayes Information Criterion
(BIC)~\cite{schwartz78}, and the Minimum Description Length principle
(MDL)~\cite{rissanen78,rissanen84,barron98}, work by building a cost
function which balances a measure of \emph{goodness of fit} with one of \emph{model
complexity}, and search for a model that minimizes such cost. In this sense,
these tools can be regarded as practical implementations of the Occam's
razor principle, which states that, given two (equally accurate)
descriptions for a given phenomenon, the simpler one is usually the best.
In the Minimum Description Length principle, given a family or model class
$\mathcal{M}$ of candidate models indexed by a parameter $M$, and a data
sample $\vec{\data}$, the best model $\opt{M} \in \mathcal{M}$ is the one
that can be used to describe $\vec{\data}$ completely (including the parameters
$M$ themselves) with the fewest number of bits,
\begin{equation}
\opt{M} = \arg\min_{M \in \mathcal{M}} L(\vec{\data},M),
\label{eq:mdl-gen}
\end{equation}
where $L(\vec{\data},M)$ is a \emph{codelength assignment function} which
defines the theoretical codelength required to describe $(\vec{\data},M)$
\emph{uniquely}, and which is a key component of any MDL-based
framework. The underlying idea of MDL is that \emph{compressibility is a
good indirect way of measuring the ability of a model to capture
regularity from the data}. Common practice in MDL uses the \emph{Ideal
Shannon Codelength Assignment} \cite[Chapter~5]{cover06} to define
$L(\vec{\data},M)$ in terms of a \emph{probability assignment}
$\dpdf{p}(\vec{\data},M)$ as $L(\vec{\data},M)=-\log
\dpdf{p}(\vec{\data},M)$ (all logarithms are assumed to be in base $2$
hereafter). In this way, the problem of choosing $L(\cdot)$ becomes one of
choosing a suitable probability model for $(\vec{\data},M)$. Note here how
MDL considers probability models not as a statement about the true nature of
the data, but only as a modeling tool. If we now write
$\dpdf{p}(\vec{\data},M)=\dpdf{p}(\vec{\data}|M)\dpdf{p}(M)$, we obtain
the more familiar \emph{penalized likelihood} form,
\begin{equation}
\opt{M} = \arg\min_{M \in \mathcal{M}} -\log \dpdf{P}(\vec{\data}|M) -\log \dpdf{P}(M),
\label{eq:mdl-twoparts}
\end{equation}
with $-\log \dpdf{P}(M)$ representing the model complexity, or
\emph{model cost}, term.
The use of MDL for sparse signal modeling has been explored for example in
the context of wavelet-based denoising (where $M=\vec{\coef} \in
\ensuremath{\mathbb{R}}^\ndims$, $p=\ndims$ and $\mat{\dict} \in
\ensuremath{\mathbb{R}}^{\ndims{\times}\ndims}$ is \emph{fixed}) of images corrupted by
additive white Gaussian noise (AWGN) \cite{saito94, krim95, moulin99,
rissanen00, roos09}. In \cite{saito94,krim95,moulin99}, the data is
described using \refeq{eq:mdl-twoparts} with $-\log\dpdf{p}(\vec{\data}|\vec{\coef})$
assumed to be \emph{solely due to noise}, and an $L(\vec{\coef})$ term which
exploits sparsity,
\begin{equation}
\opt{\vec{\coef}} = \arg\min_{\vec{\coef} \in \mathcal{M}} \frac{1}{2\sigma^2_e}\norm{\vec{\data}-\mat{\dict}\vec{\coef}}_2^2 + L(\vec{\coef}).
\end{equation}
Here the first term corresponds to the ideal codelength, up to a constant,
of an IID Gaussian sequence of zero mean and known variance
$\sigma^2_e$. The difference between \cite{saito94, krim95, moulin99} lies
in the definition of $L(\vec{\coef})$. The line of work \cite{rissanen00,roos09}
follows the modern MDL approach by using sophisticated tools from coding
theory, the so-called \emph{one-part universal codes}, which encode
$(\vec{\data},\vec{\coef})$ jointly, and reduce the arbitrariness in defining
$L(\vec{\coef})$. However, such tools can only be applied for certain choices of
$\dpdf{p}(\vec{\data}|\vec{\coef})$ and $\dpdf{p}(\vec{\coef})$. In the case of
\cite{rissanen00,roos09}, the choice is to use continuous Gaussian models
for both. As Gaussian models are \emph{not well suited to the typically
observed statistical properties of such data}, the performance of the
resulting denoisers, for example, is very poor compared to the current
state-of-the-art.
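To fix ideas, a two-part codelength of the type discussed above can be sketched in a few lines of Python; the support and value codes below are simple toy stand-ins (not the codes used in the cited works), chosen only to illustrate how a fit term and a model cost term are combined into a single number of bits.
\begin{verbatim}
import numpy as np
from math import comb, log2

# Two-part codelength L(x, a): ideal Shannon codelength of the residual
# under an IID Gaussian noise model, plus a cost for the quantized sparse
# coefficient vector (support + magnitudes). Toy choices throughout.
def codelength(x, D, a, sigma_e, delta=1e-2):
    p = D.shape[1]
    err_bits = np.sum((x - D @ a)**2)/(2*sigma_e**2)*log2(np.e)
    supp = np.flatnonzero(a)
    k = len(supp)
    supp_bits = log2(comb(p, k)) if k else 0.0   # which atoms are active
    q = np.round(np.abs(a[supp])/delta)          # quantized magnitudes
    val_bits = np.sum(2*np.log2(q + 2) + 1)      # Elias-gamma-like toy code
    return err_bits + supp_bits + val_bits
\end{verbatim}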
The present work extends and/or improves on the aforementioned work in the
following ways:\footnote{This paper extends preliminary results reported in
\cite{ramirez11icassp}. In particular, new dictionary learning algorithms
are developed which include $\cost{1}$ atom regularization, forward and
backward dictionary size adaptation. We also develop a new model for the
low-rank matrix approximation problem.}
\begin{itemize}
\item MDL-based sparse coding is extended to the case of
\emph{non-orthonormal, possibly over-complete and learned dictionaries}
$\mat{\dict}$. As we will see in Section~\ref{sec:encoding-algorithms}, this
extension, critical to deal with modern, very successful sparse modeling
approaches, poses not only new design problems but also significant
computational challenges compared to the orthonormal case.
\item Efficient codelengths (probability distributions) for the different
components to encode (error, coefficients, dictionary) are obtained by
\emph{applying universal coding schemes to priors that are suited to the
typically observed statistics of such data.}
\item As a particular point of the above item, systematic model-fit
deviations are naturally taken into account in
$\dpdf{p}(\vec{\data}|\vec{\coef})$. The resulting fitting terms fall into the
category of robust estimators (see~\cite{huber64}), thus marrying robust
statistics with information theory and with sparse modeling (dictionary
learning).
\item We comply with the basic MDL sanity check, meaning that \emph{the
theoretical codelengths obtained are smaller than a ``raw''
description of the data}. We do so by including quantization in our models,
and treating its effect rigorously.
\item Dictionary learning within the MDL framework allows us to
\emph{optimize both the number of atoms $p$, as well as their
structure}, resulting in a natural and objective form of regularization
for $\mat{\dict}$.
\item Structure is naturally added to the sparse models in the form of
Markovian dependencies between adjacent data samples. We also show an
extension of the model to the problem of low-rank matrix completion.
\end{itemize}
As a result of the above features, we obtain for the first time an
MDL-based, parameter-free framework for signal modeling that is able to
yield state-of-the-art results.
At the theoretical level, this brings us a step closer to the fundamental
understanding of \emph{learned} sparse models and brings a different
perspective, that of MDL, into the sparse modeling world.
The remainder of this paper is organized as follows. Sparse models, and the
associated notation, are described in detail in
Section~\ref{sec:background}. Section~\ref{sec:mdl-model-selection}
introduces MDL, and its application to sparse models. In
Section~\ref{sec:encoding-scheme} we present the probability models used to
assign codelengths to different parts of the encoded data, while
sections~\ref{sec:encoding-algorithms} and~\ref{sec:learning-algorithms}
describe the actual sparse coding and dictionary learning algorithms
developed. Experimental results follow in Section~\ref{sec:results}, and the
paper is concluded in Section~\ref{sec:conclusion}.
\section{Background on sparse modeling}
\label{sec:background}
Assume we are given $n$ $\ndims$-dimensional data samples ordered as
columns of a matrix $\mat{\data}=[\vec{\data}_1|\vec{\data}_2|\ldots|\vec{\data}_n] \in
\ensuremath{\mathbb{R}}^{\ndims{\times}n}$. Consider a linear model for $\mat{\data}$,
$\mat{\data} = \mat{\dict}\mat{\coef} + \mat{\err},$ where
$\mat{\dict}=[\vec{\dict}_1|\vec{\dict}_2|\ldots|\vec{\dict}_p]$ is an
$\ndims{\times}p$ dictionary consisting of $p$ atoms,
$\mat{\coef}=[\vec{\coef}_1|\vec{\coef}_2|\ldots|\vec{\coef}_n] \in
\ensuremath{\mathbb{R}}^{p{\times}n}$ is a matrix of coefficients where each
$\si$-th column $\vec{\coef}_\si$ specifies the linear combination of columns of
$\mat{\dict}$ that approximates $\vec{\data}_\si$, and
$\mat{\err}=[\vec{\err}_1|\vec{\err}_2|\ldots|\vec{\err}_n] \in
\ensuremath{\mathbb{R}}^{\ndims{\times}n}$ is a matrix of approximation errors.
We define the support, or active set, of a vector $\vec{\coef} \in \reals^{\natoms}$
as $\ensuremath{\Gamma}\xspace{\vec{\coef}}= \setdef{\ai:\vec{\coef}_\ai \neq 0}$. Let
$\Gamma=\ensuremath{\Gamma}\xspace{\vec{\coef}}$. We also represent the support of $\vec{\coef}$ as a
binary vector $\vec{\supp} \in \setdef{0,1}^{\natoms}$ such that $z_i=1$ for $i \in
\Gamma$, and $0$ otherwise. We refer to the sub-vector in
$\ensuremath{\mathbb{R}}^{|\Gamma|}$ of non-zero elements of $\vec{\coef}$ as either
$\vec{\coef}\svec{\Gamma}$ or $\vec{\coef}\svec{\vec{\supp}}$. Both conventions are
extended to refer to sets of columns of matrices, for example,
$\mat{\dict}\svec{\Gamma}$ is the matrix formed by the $|\Gamma|$ columns of
$\mat{\dict}$ indexed by $\Gamma$. We will use the pseudo-norm
$\norm{\vec{\coef}}_0:= |\Gamma| =\sum\vec{\supp}$ to denote the number of
non-zero elements of $\vec{\coef}$. We say that the model is \emph{sparse} if we
can achieve $\norm{\vec{\err}_\si}_2 \ll \norm{\vec{\data}_\si}_2$ and
$\norm{\vec{\coef}}_0 \ll p$ simultaneously for all or most
$\si=1,\ldots,n$.
The result of quantizing a real-valued variable $y$ to precision
$\delta$ is denoted by $\quant{y}_\delta$. This notation is extended to
denote element-wise quantization of vector (e.g., $\quant{\vec{\err}}$) and
matrix operands (e.g., $\quant{\mat{\err}}$).
\subsection{Sparse coding}
\label{sec:background:coding}
One possible form of expressing the \emph{sparse coding problem} is given
by
\begin{equation}
{\opt{\vec{\coef}}_\si} \!=\! \arg\min_{\vec{\aux} \in \reals^{\natoms}}
\norm{\vec{\data}_\si\!-\!\mat{\dict}\vec{\aux}}_2
\ensuremath{\quad\mathrm{s.t.}\quad}\norm{\vec{\aux}}_0 \leq \gamma,
\label{eq:l0-sparse-coding}%
\end{equation}
where $\gamma \ll p$ indicates the desired \emph{sparsity level} of
the solution. Since problem \refeq{eq:l0-sparse-coding} is non-convex and
NP-hard, approximate solutions are sought. This is done either by using
greedy methods such as Matching Pursuit (MP) \cite{mallat93}, or by solving a
convex approximation to \refeq{eq:l0-sparse-coding}, commonly known as the
\emph{lasso}~\cite{tibshirani96},
\begin{equation}
\opt{\vec{\coef}}_\si = \arg\min_{\vec{\aux} \in \reals^{\natoms}} \frac{1}{2}\norm{\vec{\data}_\si-\mat{\dict}\vec{\aux}}_2^2
\!\!\ensuremath{\quad\mathrm{s.t.}\quad}\! \norm{\vec{\aux}}_1 \leq \tau.
\label{eq:l1-sparse-coding-lasso}
\end{equation}
There exists a body of results showing that, under certain conditions on
$\gamma$ and $\mat{\dict}$, the problem \refeq{eq:l0-sparse-coding} can be solved
exactly via \refeq{eq:l1-sparse-coding-lasso} or MP (see for example~\cite{bruckstein09,candes06}).
In other cases, the objective is not to solve \refeq{eq:l0-sparse-coding},
but to guarantee some property of the estimated $\opt{\vec{\coef}}_\si$. For
example, in the above-mentioned case of AWGN denoising in the wavelet
domain, the parameter $\tau$ can be chosen so that the resulting estimators
are universally optimal with respect to some class of signals
\cite{donoho94}. However, if $\mat{\dict}$ is arbitrary, no such choice
exists. Also, if $\mat{\dict}$ is orthonormal, the problem
\refeq{eq:l1-sparse-coding-lasso} admits a closed form solution
obtained via the so-called \emph{soft thresholding}
\cite{donoho94}. However, again, for general $\mat{\dict}$, no such solution
exists, and the search for efficient algorithms has been an active research
topic, e.g., \cite{friedman08,beck09siam,efron04}.
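As a concrete illustration, the following sketch (ours, and purely
illustrative; the function name and parameter choices are not taken from any
of the cited works) solves the penalized form of
\refeq{eq:l1-sparse-coding-lasso}, $\min_{\vec{\aux}}
\frac{1}{2}\norm{\vec{\data}-\mat{\dict}\vec{\aux}}_2^2 +
\lambda\norm{\vec{\aux}}_1$, via iterative soft-thresholding (ISTA):
\begin{verbatim}
import numpy as np

def ista(y, D, lam, n_iter=500):
    # Sketch: penalized lasso, min 0.5||y - D a||^2 + lam*||a||_1,
    # solved by iterative soft-thresholding (ISTA). Illustrative only.
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz const. of the gradient
    for _ in range(n_iter):
        g = D.T @ (D @ a - y)                # gradient of the quadratic term
        a = a - g / L                        # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a
\end{verbatim}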
\subsection{Dictionary learning}
\label{sec:background:learning}
When $\mat{\dict}$ is an optimization variable, we refer to the resulting problem
as \emph{dictionary learning}:
\begin{equation}
(\opt\mat{\coef},\opt\mat{\dict}) =
\arg\min_{\mat{\coef},\mat{\dict}} \sum_{\si=1}^{n}
\frac{1}{2}\norm{\vec{\data}_\si-\mat{\dict}\vec{\coef}_\si}_2^2
\ensuremath{\quad\mathrm{s.t.}\quad} \norm{\vec{\coef}_\si}_r \leq\tau\;\forall\,j,\;\;
\norm{\vec{\dict}_k}_2 \leq 1\;\forall\,k,
\label{eq:traditional-dictionary-learning}
\end{equation}
with $0 \leq r \leq 1$. The constraint $\norm{\vec{\dict}_k}_2 \leq
1\,,\;k=1,\ldots,p$, is necessary to avoid an arbitrary decrease of the
cost function by setting $\mat{\dict} \leftarrow \alpha\mat{\dict}$, $\mat{\coef} \leftarrow
\frac{1}{\alpha}\mat{\coef}$, for any $\alpha > 1$.
The cost function in \refeq{eq:traditional-dictionary-learning} is
non-convex in $(\mat{\coef},\mat{\dict})$, so that only local convergence can be
guaranteed. This is usually achieved using alternate optimization in
$\mat{\dict}$ and $\mat{\coef}$. See for example \cite{aharon06,mairal10jmlr} and
references therein.
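A minimal sketch of this alternate scheme is given below, assuming an
$\cost{1}$ sparse coding step such as the \texttt{ista()} sketch above; the
closed-form dictionary step and the renormalization enforcing
$\norm{\vec{\dict}_k}_2 \leq 1$ are illustrative choices, not a reproduction
of \cite{aharon06} or \cite{mairal10jmlr}:
\begin{verbatim}
import numpy as np

def dict_learn(Y, p, lam, n_iter=20):
    # Sketch of alternate minimization: sparse-code A with D fixed (via the
    # ista() sketch above), then update D by least squares and rescale its
    # atoms so that ||d_k||_2 <= 1. Illustrative only.
    m, n = Y.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((m, p))
    D /= np.linalg.norm(D, axis=0)               # unit-norm initialization
    for _ in range(n_iter):
        A = np.column_stack([ista(Y[:, j], D, lam) for j in range(n)])
        D = Y @ A.T @ np.linalg.pinv(A @ A.T)    # least-squares dictionary step
        D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)  # ||d_k||_2 <= 1
    return A, D
\end{verbatim}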
\subsection{Issues with traditional sparse models: a motivating example}
\label{sec:background:issues}
Consider the K-SVD-based~\cite{aharon06} sparse image restoration framework
\cite{mairal08a}. This is an \cost{0}-based dictionary learning framework,
which approximates \refeq{eq:traditional-dictionary-learning} for the case
$r=0$ by alternate minimization. In the case of image denoising, the general
procedure can be summarized as follows:
\begin{enumerate}
\item An initial, \emph{global} dictionary $\mat{\dict}_0$ is learned
using training samples for the class of data to be processed (in this case
small patches of natural images). The user must supply a patch width $w$,
a dictionary size $p$ and a value for $\tau$.
\item The noisy image is decomposed into overlapping $w{\times}w$ patches (one patch
per pixel of the image), and its noisy patches are used to further adapt $\mat{\dict}$ using the following
\emph{denoising} variant of \refeq{eq:traditional-dictionary-learning},
\begin{align}
(\opt\mat{\dict},\opt\mat{\coef}) =
\arg\min_{\mat{\dict},\mat{\coef}} \sum_{\si=1}^{n}
\norm{\vec{\coef}_\si}_0
\ensuremath{\quad\mathrm{s.t.}\quad}
&\frac{1}{2}\norm{\vec{\data}_\si-\mat{\dict}\vec{\coef}_\si}_2^2 \leq C\sigma^2\;\forall\,j,\;\;
\norm{\vec{\dict}_k}_2 = 1\,,\;k=1,\ldots,p.
\label{eq:traditional-dictionary-learning-denoising}
\end{align}
Here the user must further supply a constant $C$ (in \cite{mairal08a}, it is
$1.32$), the noise variance $\sigma^2$, and the number of iterations $J$ of
the optimization algorithm, which is usually kept
small to avoid over-fitting (the algorithm is \emph{not} allowed to converge).
\item The final image is constructed by assembling the patches in
$\opt\mat{\data}=\opt\mat{\dict}\opt\mat{\coef}$ into the corresponding original
positions of the image. The final pixel value at each location is an
average of all the patches to which it belongs, plus a small fraction $0
\leq \lambda \leq 1$ of the original noisy pixels ($\lambda=30/\sigma$ in \cite{mairal08a}); a sketch of this averaging step is given after this list.
\end{enumerate}
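The following sketch illustrates only the final averaging of step 3); the
function and variable names are ours, and the blending formula is the
per-pixel average of overlapping patch estimates combined with the
$\lambda$-weighted noisy pixels:
\begin{verbatim}
import numpy as np

def assemble_patches(patches, noisy, w, lam):
    # Sketch of step 3): average the overlapping w-by-w denoised patches back
    # into the image and blend in a fraction lam of the noisy pixels.
    # patches[i, j] holds the estimate whose top-left corner is pixel (i, j).
    H, W = noisy.shape
    acc = np.zeros((H, W))                    # sum of patch estimates
    cnt = np.zeros((H, W))                    # number of patches per pixel
    for i in range(H - w + 1):
        for j in range(W - w + 1):
            acc[i:i + w, j:j + w] += patches[i, j]
            cnt[i:i + w, j:j + w] += 1.0
    return (lam * noisy + acc) / (lam + cnt)  # per-pixel weighted average
\end{verbatim}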
Despite the good results obtained for natural
images, several aspects of this method are not satisfactory:
\begin{itemize}
\item Several parameters ($w$, $p$, $\tau$, $C$, $J$, $\lambda$) need to be tuned. \emph{There
is no interpretation, and therefore no justifiable choice for these
parameters, other than maximizing the empirical performance of the
algorithm (according to some metric, in this case PSNR) for the data at
hand.}
\item The effect of such parameters on the result is obscured by the effects
of later stages of the algorithm and their associated parameters
(e.g., overlapping patch averaging). \emph{There is no fundamental way to
optimize each stage separately.}
\end{itemize}
As a partial remedy to the first problem, Bayesian sparse models were
developed (e.g., \cite{carin11}) where these parameters are assigned prior
distributions which are then learned from the data. However, this approach
still does not provide objective means to compare different models (with
different priors, for example). Further, the Bayesian technique implies
having to repeatedly solve possibly costly optimization problems, increasing
the computational burden of the application.
As mentioned in the introduction, this work proposes to address the above
practical issues, as well as to provide a new angle into dictionary
learning, by means of the MDL principle for model selection. The details on
how this is done are the subject of the following sections.
\section{Sparse model selection and MDL}
\label{sec:mdl-model-selection}
Given data $\mat{\data}$, a maximum support size $\gamma$ and a dictionary size
$p$, traditional sparse modeling provides means to estimate the best
model $M=(\mat{\coef},\mat{\dict})$ for $\mat{\data}$ within the set
$\mathcal{M}(\gamma,p)$ defined as
\begin{equation}
\mathcal{M}(\gamma,p) := \setdef{(\mat{\coef},\mat{\dict}): \norm{\vec{\coef}_\si}_0 \leq \gamma, \si=1,\ldots,n, \mat{\dict} \in \reals^{\ndims{\times}\natoms} }.
\label{eq:traditional-sparse-modeling}
\end{equation}
We call such set a \emph{sparse model class} with hyper-parameters
$(\gamma,p)$. Such classes are nested in the following way: first, for
a fixed dictionary size $p$ we have $\mathcal{M}(\gamma-1,p)
\subset \mathcal{M}(\gamma,p)$. Also, for fixed $\gamma$, if we
consider $\mathcal{M}(\gamma,p-1)$ to be a particular case of
$\mathcal{M}(\gamma,p)$ where the $p$-th atom is all-zeroes and
$a_{p\si}=0,\,\forall j$, then we also have that
$\mathcal{M}(\gamma,p-1) \subset \mathcal{M}(\gamma,p)$.
If one wants to choose the best model among all possible classes
$\mathcal{M}(\gamma,p)$, the problem becomes one of \emph{model
selection}. The general objective of model selection tools is to define
an objective criterion for choosing such a model. In particular, MDL model
selection uses codelength as such criterion. More specifically,
this means first computing the best model within each family as
\[
(\mat{\coef}(\gamma,p),\mat{\dict}(\gamma,p)) = \arg\min \{ L(\mat{\data},\mat{\coef},\mat{\dict}): {(\mat{\coef},\mat{\dict}) \in \mathcal{M}(\gamma,p)}\},
\]
and then choosing $(\opt{\gamma},\opt{p}) = \arg\min
\setdef{L(\mat{\data},\mat{\coef}(\gamma,p),\mat{\dict}(\gamma,p)):
0\leq\gamma\leq p,\,p > 0}$.
When $\mat{\dict}$ is fixed, which is the case of sparse coding, the only model
parameter is $\mat{\coef}$, and we have $p+1$ possible classes,
$\mathcal{M}(\gamma) = \setdef{\mat{\coef}: \norm{\vec{\coef}_\si}_0 \leq \gamma,
\si=1,\ldots,n}$, one for each $0 \leq \gamma \leq p$. If
each data sample $\vec{\data}_\si$ from $\mat{\data}$ is encoded independently, then,
as with traditional sparse coding (the framework can also be extended to
\emph{collaborative} models), the model selection problem can be broken into
$n$ sub-problems, one per sample, by redefining the model class
accordingly as $\mathcal{M}(\gamma) = \setdef{\vec{\coef}: \norm{\vec{\coef}}_0 \leq
\gamma}$. Clearly, in the latter case, the optimum $\gamma$ can vary from
sample to sample.
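In its simplest form, per-sample MDL selection is an exhaustive sweep over
$\gamma$; the sketch below assumes two hypothetical helpers,
\texttt{best\_coef()} (the within-class optimization) and
\texttt{codelength()} (the function $L(\vec{\data},\vec{\coef})$ developed in
the following sections):
\begin{verbatim}
def select_model(y, D, best_coef, codelength):
    # Sketch: choose the support size gamma, and the code a, minimizing the
    # description length L(y, a). best_coef() and codelength() stand for the
    # routines developed in the following sections.
    p = D.shape[1]
    best_L, best_a = float("inf"), None
    for gamma in range(p + 1):
        a = best_coef(y, D, gamma)       # best model within class M(gamma)
        L = codelength(y, a, D)
        if L < best_L:
            best_L, best_a = L, a
    return best_a, best_L
\end{verbatim}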
Compared to the algorithm in Section~\ref{sec:background:issues}, we have a
\emph{fundamental, intrinsic} measure of the quality of each model, the
codelength $L(\mat{\data},\mat{\coef},\mat{\dict})$, to guide our search through the
models, and which is not obscured by the effects of possible later stages of
the application. In contrast, there is no obvious intrinsic measure of
quality for models learned through \refeq{eq:traditional-sparse-modeling},
making comparisons between models learned for different parameters (patch
width $w$, regularization parameter $\tau$, norm $r$, constants $C,\lambda$) possible only in terms
of the observed results of the applications where they are embedded.
The second advantage of this framework is that it allows us to select, in a
fundamental fashion, the best model parameters \emph{automatically}, thus
resulting in parameter-free algorithms.\footnote{For the case of image
processing, the patch width $w$ is also a relevant parameter that could be
automatically learned with the same MDL-based framework presented
here. However, since it is specific to image processing, and due to space
constraints and for clarity of the exposition, it will not be considered
as part of the model selection problem hereafter.}
Such advantages will be of practical use only if the resulting computational
algorithms are not orders of magnitude slower than the traditional ones;
efficient algorithms are thus a critical component of this framework (see
Section~\ref{sec:encoding-algorithms}).
\subsection{A brief introduction to MDL}
\label{sec:mdl-model-selection:intro}
For clarity of the presentation, in this section we will consider $\mat{\dict}$
fixed, and a single data sample $\vec{\data}$ to be encoded. The Minimum
Description Length principle was pioneered by Rissanen~\cite{rissanen78} in
what is called ``early MDL,'' and later refined by Rissanen himself~\cite{rissanen84}
and other authors to form what is today known as ``modern MDL'' (see
\cite{grunwald07} for an up-to-date extensive reference on the subject). The
goal of MDL is to provide an objective criterion to select the model $M$,
out of a family of competing models $\mathcal{M}$, that gives the best
description of the \emph{given} data $\vec{\data}$. In this case of sparse coding
with fixed dictionary we have $M=\vec{\coef}$.
The main idea of MDL is that the best model for the data at hand is the one
that is able to capture more \emph{regularity} from it. The more regularity
a model captures, the more succinct the description of the data will be
under that model (by avoiding redundancy in the description). Therefore, MDL
will select the best model as the one that produces the shortest (most
efficient) description of the data, which in our case is given by
$L(\vec{\data},\vec{\coef})$.
As mentioned in Section~\ref{sec:introduction}, MDL translates the problem
of choosing a codelength function $L(\cdot)$ to one of choosing probability
models by means of the ideal Shannon codelength assignment
$L(\vec{\data},\vec{\coef}) = -\log \dpdf{P}(\vec{\data},\vec{\coef})$. It is common to extend
such ideal codelength to continuous random variables $x$ with probability
density function $\cpdf{p}(x)$ as $L(x) = -\log \cpdf{p}(x)$, by assuming
that they will be quantized with sufficient precision so that
\begin{equation}
\dpdf{P}(\quant{x}_\delta) \approx \cpdf{p}(x)\delta,
\label{eq:approx-codelength}
\end{equation}
and disregarding the
constant term $-\log \delta$ in $L(x)$, as it is inconsequential for model
selection. However, in our framework, the optimum quantization levels will
often be large enough so that such approximations are no longer valid.
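The following sketch contrasts the approximation
\refeq{eq:approx-codelength} with the exact codelength of a quantized
Gaussian sample (the Gaussian is an illustrative choice of density); for
small $\delta$ the two agree, while for coarse $\delta$ the exact bin
probability must be used:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def codelength_bits(x, delta, sigma=1.0):
    # Sketch: approximate vs. exact codelength of a Gaussian sample
    # quantized to step delta. The approximation -log2(p(x)*delta) is only
    # accurate when delta is small relative to the density's scale.
    approx = -np.log2(norm.pdf(x, scale=sigma) * delta)
    xq = np.round(x / delta) * delta          # quantized value
    bin_prob = (norm.cdf(xq + delta / 2, scale=sigma)
                - norm.cdf(xq - delta / 2, scale=sigma))
    exact = -np.log2(bin_prob)                # exact bin probability
    return approx, exact
\end{verbatim}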
To produce a complete description of the data $\vec{\data}$, the best model
parameters $\opt{M}$ used to encode $\vec{\data}$ need to be included in the
description as well. If the only thing we know is that $\opt{M}$ belongs to
a given class $\mathcal{M}$, then the cost of this description will depend
on how large and complex $\mathcal{M}$ is. MDL will penalize more those
models that come from larger (more complex) classes. This is summarized in
one of the fundamental results underlying MDL~\cite{rissanen84}, which
establishes that the minimum number of bits required for encoding \emph{any}
data vector $\vec{\data}$ using a model from a class $\mathcal{M}$ has the form
$
L_\mathcal{M}(\vec{\data}) = \mathcal{L}_\mathcal{M}(\vec{\data}) + \mathcal{C}(\mathcal{M}),
$
where $\mathcal{L}_\mathcal{M}(\vec{\data})$ is called the \emph{stochastic
complexity}, which depends only on the particular instance of $\vec{\data}$
being encoded, and $\mathcal{C}(\mathcal{M})$ is an unavoidable
\emph{parametric complexity} term, which depends \emph{solely} on the
structure, geometry, etc., of the model class $\mathcal{M}$.
In the initial version of MDL~\cite{rissanen78}, the parameter $\opt{M}$ was
first encoded separately using $L(\opt{M})$ bits, and then $\vec{\data}$ was
described given $\opt{M}$ using $L(\vec{\data}|\opt{M})$ bits, so that the
complete description of $\vec{\data}$ required $L(\vec{\data}|\opt{M}) + L(\opt{M})$
bits. This is called a \emph{two-parts code}. An asymptotic expression of
this MDL was developed in~\cite{rissanen78}, which coincides with the BIC
criterion~\cite{schwartz78} in the asymptotic regime.
As we will see next, modern MDL departs significantly from this two-parts
coding scheme.
\subsection{Modern MDL and universal coding}
\label{sec:mdl-model-selection:intro2}
The main difference between ``early''~\cite{rissanen78} and
``modern''~\cite{rissanen84,barron98} MDL is the introduction of
\emph{universal codes} as the main building blocks for computing
codelengths. In a nutshell, universal coding can be regarded as an extension
of the original Shannon theory to the case where the probability model
$\dpdf{P}(\cdot)$ of the data to be encoded is not fully specified, but only
known to belong to a certain class of candidate probability models
$\mathcal{M}$ (recall that classic Shannon theory assumes that
$\dpdf{P}(\cdot)$ is perfectly known). For example, $\mathcal{M}$ can be a
family of parametric distributions indexed by some parameter $M$. Akin to
Shannon theory, for an encoding scheme to be called \emph{universal}, the
codelengths it produces need to be optimal, in some sense, with respect to
the codelengths produced by all the models in $\mathcal{M}$.
Various universality criteria exist. For example, consider the
\emph{codelength redundancy} of a model $\dpdf{Q}(\cdot)$,
$
\mathcal{R}(\vec{\data};Q) = -\log \dpdf{Q}(\vec{\data}) - \min_{\dpdf{P} \in \mathcal{M}} \left\{-\log \dpdf{P}(\vec{\data})\right\}.
$
In words, this is the codelength overhead obtained with $\dpdf{Q}(\cdot)$
for describing an instance $\vec{\data}$, compared to the best model in
$\mathcal{M}$ that could be picked for $\vec{\data}$, \emph{with hindsight} of
$\vec{\data}$. For example, if $\mathcal{M}$ is a parametric family, such model
is given by the maximum likelihood (ML) estimator of $M$.
A model $\dpdf{Q}(\cdot)$ is called \emph{minimax universal}, if it
minimizes the \emph{worst case redundancy},
$
\mathcal{R}(Q) = \max_{\vec{\data} \in \reals^{\ndims}} \mathcal{R}(\vec{\data};Q).
$
One of the main techniques in universal coding is \emph{one-part coding},
where the data $\vec{\data}$ and the best class parameter $\hat{M}$ are encoded
jointly. Such codes are used in the line of work of ``MDL denoising'' due
to Rissanen and his collaborators~\cite{rissanen00,roos09}. However,
applying one-part codes at this level restricts the probability models to be
used.\footnote{In particular, those used in~\cite{rissanen00,roos09} are
based on the Normalized Maximum Likelihood (NML) universal
model~\cite{shtarkov87}, which requires closed-form MLE estimators for its
evaluation, something that cannot be obtained for example with a Laplacian
prior on $\vec{\coef}$ and non-orthogonal dictionaries.} As a consequence, the
results obtained with this approach in such works are not competitive with
the state-of-the-art. Therefore, in this work, we maintain a two-parts
encoding scheme (or three parts, if $\mat{\dict}$ is to be encoded as well),
where we separately describe $\vec{\coef}$, $\mat{\dict}$, and $\vec{\data}$ given
$(\vec{\coef},\mat{\dict})$. We will however use universal codes to describe each of
these parts as efficiently as possible. Details on this are given in the
next section.
\section{Encoding scheme}
\label{sec:encoding-scheme}
We now define
the models and encoding schemes used to describe each of the parts that
comprise a sparse model for a data sample $\vec{\data}$; that is, the dictionary
$\mat{\dict}$, the coefficients $\vec{\coef}$, and the approximation error
$\vec{\err}=\vec{\data}-\mat{\dict}\vec{\coef}$ (which can include both the noise and the model
deviation), which can be regarded as the conditional description of $\vec{\data}$
given the model parameters $(\vec{\coef},\mat{\dict})$. The result will be a cost
function $L(\vec{\data})$ of the form (note that $\vec{\data}=\mat{\dict}\vec{\coef}+\vec{\err}$ can
be fully recovered from $(\vec{\err},\vec{\coef},\mat{\dict})$),
\[
L(\vec{\data},\vec{\coef},\mat{\dict}) = L(\vec{\err}|\vec{\coef},\mat{\dict}) + L(\vec{\coef}|\mat{\dict}) + L(\mat{\dict}).
\]
While computing each of these parts, three main issues need to be dealt
with:
\begin{enumerate}
\item {\bf Define appropriate probability models.} Here, it is fundamental
to incorporate as much prior information as possible, so that no cost is
paid in learning (and thereby coding) already known statistical features
of the data. Examples of such prior information include sparsity itself,
invariance to certain transformations or symmetries, and (Markovian)
dependencies between coefficients.
\item {\bf Deal with unknown parameters.} We
will use universal encoding strategies to encode data efficiently
in terms of families of probability distributions.
\item {\bf Model the effect of quantization.} All
components, $\vec{\err},\vec{\coef},\mat{\dict}$ need to be quantized to some precisions,
respectively $\delta_e,\delta_a,\delta_d$, in order to obtain finite,
realistic codelengths for describing $\vec{\data}$ (when the precision variable
is obvious from the argument to which it is applied, we drop it to
simplify the notation, for example, we will write
$\quant{e}_{\delta_e}$ as $\quant{e}$). This quantization introduces several
complications, such as optimization over discrete domains, increase of
sparsity by rounding to zero, increase of approximation error, and
working with discrete probability distributions. \end{enumerate}
All such issues need to be considered with efficiency of computation in
mind. The discussion will focus first on the traditional, single-signal case
where each sample $\vec{\data}$ is encoded separately from the rest. At the end
of this section, we will also discuss the extension of this framework to a
multi-signal case, which has several algorithmic and modeling advantages
over the single-signal case, and which forms the basis for the dictionary
learning algorithms described later.
\subsection{Encoding the sparse coefficients}
\label{sec:encoding:coefficients}
\noindent {\bf Probability model:} Each coefficient in $\vec{\coef}$ is modeled
as the product of three (non-independent) random variables, $\alpha =
\zeta\phi(\nu+\delta_a)$, where $\zeta=1$ implies $\alpha
\neq 0$, $\phi =\mathrm{sgn}(\alpha)$, and
$\nu=\max\{\abs{\alpha}-\delta_a,0\}$ is the absolute value of
$\alpha$, corrected for the fact that $\abs{\alpha} \geq \delta_a$ whenever
$\zeta=1$.\footnote{Note that it is necessary to encode $\phi$ and
variable, so that the sign of $\alpha$ can be recovered when
$|\nu|=0$.}
We model $\zeta$ as a Bernoulli variable with
$\dpdf{p}(\zeta=1)=\rho_a$. Conditioned on $\zeta=0$,
$\phi=\nu=0$ with probability $1$, so no encoding is
needed.\footnote{We can easily extend the proposed model beyond
$\phi=\nu=0$ and consider a distribution for $\phi,\nu$ when
$\zeta=0$. This will naturally appear as part of the coding cost. This
extends standard sparse coding to the case where the components of the vector
outside the support are not necessarily zero.} Conditioned on $\zeta=1$, we
assume $\dpdf{P}(\phi=-1)=\dpdf{P}(\phi=1)=1/2$, and $\nu$ to be a
(discretized) exponential, $\ensuremath{\mathrm{Exp}}(\theta_{a})$. With these
choices, $\dpdf{P}(\phi\nu|\zeta=1)$ is a (discretized) Laplacian
distribution, which is a standard model for transform (e.g., DCT, Wavelet)
coefficients. This encoding scheme is depicted in
Figure~\ref{fig:coef-model}(a,b). The resulting model is a particular case
of the ``spike and slab'' model used in statistics (see \cite{ishwaran05}
and references therein). A similar factorization of the sparse coefficients
is used in the Bayesian framework as well~\cite{carin11}.
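The decomposition can be sketched as follows (an illustrative helper; the
names are ours):
\begin{verbatim}
import numpy as np

def decompose(a, delta):
    # Sketch: split a coefficient vector, quantized to precision delta, into
    # the three encoded parts of the model alpha = zeta*phi*(nu + delta):
    # support z, signs s (meaningful on the support), corrected magnitudes v.
    aq = np.round(a / delta) * delta          # quantize to precision delta
    z = (aq != 0).astype(int)                 # Bernoulli support indicators
    s = np.sign(aq)                           # signs (0 off the support)
    v = np.maximum(np.abs(aq) - delta, 0.0)   # magnitudes corrected by delta
    return z, s, v
\end{verbatim}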
\begin{figure*}[p]
\begin{center}%
\includegraphics[height=1.5in]{coef-model.png}
\vspace{-0ex}\caption{\label{fig:coef-model} Encoding of the sparse
code. (a) After quantization (here $\delta_a\!=\!1$), each coefficient
$a_\ai$ is decomposed into three variables,
$z_\ai\!=\!\ensuremath{\mathbf{1}}(a_\ai)$,
$s_\ai\!=\!\mathrm{sgn}(a_\ai)$ and
$v_\ai\!=\!\max\{|a_\ai|-\delta_a,0\}$. These are respectively
modeled by random variables $\zeta \sim \ensuremath{\mathrm{Ber}}(\rho_a)$,
$\phi \sim \ensuremath{\mathrm{Ber}}(1/2)$, $\nu \sim \ensuremath{\mathrm{Exp}}(\theta_a)$
(only the shaded numbers are actually encoded). (b) Scheme of the mapping
from continuous coefficients (random variable $\alpha$) into $\zeta$,
$\phi$ and $\nu$. (c) Ideal codelength for the MOE model
for $\nu$, $-\log \cpdf{q}_\nu(\nu;\kappa,\beta)$. This is a
smooth, concave function.}
\end{center}
\end{figure*}
\noindent {\bf Unknown parameters:} According to the above
model, the resulting encoding scheme for the coefficients (sparse code) is a three-parts
code: $L(\vec{\coef}) = L(\vec{\supp}) + L(\vec{\sign}|\vec{\supp}) + L(\vec{\coef}|\vec{\sign},\vec{\supp})$.
The support $\vec{\supp}$ is described using the \emph{enumerative two-parts
code} \cite{cover73}, which first describes its size, $\norm{\vec{\coef}}_0$,
using $\log p$ bits, and then the particular arrangement of the ones
in $\vec{\supp}$ using $\log {p \choose \norm{\vec{\coef}}_0}$ bits. The total
codelength for coding $\vec{\supp}$ is then
$
L(\vec{\supp}) = \log p + \log {p \choose \norm{\vec{\coef}}_0}.
$
This is a universal encoding scheme, and as such is more efficient than
those used previously in \cite{saito94,moulin99}. Then,
$L(\vec{\sign}|\vec{\supp})=\norm{\vec{\coef}}_0$ bits are needed to encode
$\vec{\sign}\svec{\vec{\supp}}$, the actual signs of the non-zero
coefficients. Finally, we need to encode the magnitudes of the
$\norm{\vec{\coef}}_0$ non-zero coefficients, $\vec{\coef}\svec{\vec{\supp}}$. We do so by
considering it first as a sequence of exponentially-distributed continuous
random variables, to which quantization is applied later. Since the
parameter $\theta_a$ of the exponential is unknown,\footnote{This
parameter is related to the sparsity level, and as discussed in
Section~\ref{sec:background:issues}, is usually assumed known or
determined via cross-validation. Following~\cite{ramirez10tip}, here we
use tools from universal modeling, which permit to also automatically
handle the non-stationarity of this parameter and its expected variability
for different non-zero entries of $\vec{\coef}$.} we use a universal model
$\cpdf{q}_\nu(\cdot)$ for the class of continuous exponential
distributions instead. We obtain such universal model
$\cpdf{q}_\nu(\nu)$ via a convex mixture, one of the standard techniques
for this,
\begin{align}
\cpdf{q}_\nu(\nu;\kappa_a,\beta_a) &=\!\!
\int_{0}^{+\infty}{\!\!\!\!\!\!\!\!\Gamma(\theta;\kappa_a,\beta_a)\,\theta e^{-\theta\nu}d\theta},
\label{eq:val-mixture}
\end{align}
where the mixing function
$\Gamma(\theta;\kappa,\beta)={\Gamma(\kappa)}^{-1}\theta^{\kappa-1}\beta^{\kappa}e^{-\beta\theta},$
is the Gamma density function of (non-informative) shape and scale parameters $\kappa$ and
$\beta$. With this choice, \refeq{eq:val-mixture} has a closed form
expression, and the degenerate cases $\theta=0$ and $\theta=\infty$ are
given zero weight. The resulting \emph{Mixture of Exponentials} (MOE)
density function $\cpdf{q}_\nu(\nu)$ is given by (see
\cite{ramirez10tip} for details),
\[
\cpdf{q}_\nu(\nu;\beta_a,\kappa_a) = \kappa_a\beta_a^{\kappa_a}(\nu+\beta_a)^{-(\kappa_a+1)},\;\nu \in \ensuremath{\mathbb{R}}^{+}.
\]
Note that the universality of this mixture model does not depend on the
values of the parameters $\kappa_a,\beta_a$, and guided by
\cite{ramirez10tip}, we set $\kappa_a=3.0$ and $\beta_a=50$. The
ideal Shannon codelength for this density is given by
$
-\log \cpdf{q}_\nu(\nu;\kappa_a,\beta_a) = -\log \kappa_a -\kappa_a\log \beta_a + (\kappa_a+1)\log(\nu+\beta_a).$
This function, shown in Figure~\ref{fig:coef-model}(c), is non-convex, but
continuous and differentiable for $\nu > 0$.
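Putting the three parts together, a sketch of the resulting codelength
$L(\vec{\supp}) + L(\vec{\sign}|\vec{\supp}) +
L(\vec{\val}|\vec{\sign},\vec{\supp})$ is given below (ours; it uses the
density-times-$\delta_a$ approximation for the MOE part, which, per the
discussion above, may be coarse for large $\delta_a$):
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def log2_binom(n, k):
    # log2 of the binomial coefficient C(n, k), via log-Gamma.
    return (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)) / np.log(2)

def coef_codelength(z, v, delta, kappa=3.0, beta=50.0):
    # Sketch: L(z) + L(s|z) + L(v) for one sparse code, with the enumerative
    # two-parts code for the support and the MOE model for the magnitudes.
    p, k = len(z), int(z.sum())
    L_z = np.log2(p) + log2_binom(p, k)   # size, then arrangement of ones
    L_s = k                               # one bit per non-zero sign
    vk = v[z == 1]
    log2_q = (np.log2(kappa) + kappa * np.log2(beta)
              - (kappa + 1) * np.log2(vk + beta))     # log2 of MOE density
    L_v = float(np.sum(-log2_q - np.log2(delta)))     # -log2(q(v)*delta)
    return L_z + L_s + L_v
\end{verbatim}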
\noindent {\bf Quantization:} On one hand, quantizing the coefficients
to a finite precision $\delta_a$ increases the approximation/modeling error, from
$\vec{\data}-\mat{\dict}\vec{\coef}$ to $\vec{\data}-\mat{\dict}\quant{\vec{\coef}}_{\delta_a}$. This
additional error, $\mat{\dict}(\vec{\coef}-\quant{\vec{\coef}}_{\delta_a})$, will
clearly increase with $\delta_a$. On the other hand, larger
$\delta_a$ will reduce the description length of the non-zero values of
the coefficients, $\quant{\vec{\val}}_{\delta_a}$.
In practice, for reasonable quantization steps, the error added by such
quantization is negligible compared to the approximation error. For
example, for describing natural images divided in patches of $8{\times}8$
pixels, our experiments indicate that there is no practical advantage in
using a value smaller than $\delta_a=16$. Consequently, our current
algorithms do not attempt to optimize the codelength on this parameter, and
we have kept this value fixed throughout all the experiments of
Section~\ref{sec:results}.
\subsection{Encoding the error}
\label{sec:encoding:error}
\noindent {\bf Probability model:} Most sparse coding frameworks, including
all the mentioned MDL-based ones, assume the error $\vec{\err}$ to be solely due
to measurement noise, typically of the AWGN type. However, $\vec{\err}$ actually
contains a significant component which is due to a systematic deviation of
the model from the clean data. Following this, we model the elements of
$\vec{\err}$ as samples of an IID random variable $\epsilon$ which is the sum
of two independent variables,
$\epsilon=\hat{\epsilon}+N$. Here $N \sim
\ensuremath{\mathcal{N}}(0,\sigma^2_e)$ represents random measurement noise in $\vec{\data}$.
We assume the noise variance $\sigma^2_e$ known, as it can be easily and
reliably estimated from the input data (for example, taking the minimum
empirical variance over a sufficient number of sub-samples). The
distribution of the second variable, $\hat{\epsilon} \sim
\ensuremath{\mathrm{Lap}}(0,\theta_e)$ is a Laplacian of unknown parameter
$\theta_e$, which represents the error component due to the model itself.
The resulting continuous distribution $\cpdf{p}_\epsilon(\epsilon)$, which we
call ``LG,'' is the convolution of the distributions of both components
(see~\cite{ramirez10dude} for details on the derivation),
\begin{align}
\cpdf{p}_\epsilon(\epsilon;\sigma^2_e,\theta_e) =&
\int_{\zeta=-\infty}^{+\infty}{\GaussianPDF[\sigma_e]{\zeta}\LaplacianPDF[\theta_e]{\epsilon-\zeta}d\zeta} \nonumber \\
=& \frac{1}{4\theta_e} e^{ \frac{\sigma^2_e}{2\theta_e^2} }
\left[
e^{ \epsilon/\theta_e}\mathrm{erfc}\left( \frac{ \epsilon+\sigma^2_e/\theta_e}{\sqrt{2}\sigma_e} \right)+
e^{-\epsilon/\theta_e}\mathrm{erfc}\left( \frac{-\epsilon+\sigma^2_e/\theta_e}{\sqrt{2}\sigma_e} \right)
\right],
\label{eq:lg-convolution}
\end{align}
where $\mathrm{erfc}(u) =
\frac{2}{\sqrt{\pi}}\int_{u}^{+\infty}{e^{-t^2}dt}$ is the
\emph{complementary Gauss error function}. The ideal codelength, $-\log
\cpdf{p}_\epsilon(\epsilon)$, is shown in Figure~\ref{fig:error-model}(a) for
various parameter values. This function is convex and differentiable on
$\ensuremath{\mathbb{R}}$, which is convenient for optimization
purposes. Figure~\ref{fig:error-model}(b) shows its derivative, or the so-called
``influence function'' in robust statistics. It can be verified that $-\log
\cpdf{p}_\epsilon(\epsilon)$ behaves like a Laplacian with parameter $\theta_e$
for large values of $\epsilon$. Further, since its derivative is bounded, the
influence of outliers is diminished. In fact, $-\log \cpdf{p}_\epsilon(\epsilon)$
is easily verified to be a $\psi$-type M-estimator, a family of functions
used in robust statistics (see \cite{huber64}). Thus, using this model, we
obtain an information-theoretic robust estimator, which is consistent with
the motivations leading to its use in our framework, and which has a
significant practical impact in the experimental results.
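A direct transcription of the LG ideal codelength is sketched below; note
that for large $|\epsilon|$ the exponential factors can overflow, so a
practical implementation would combine the terms in the log domain (omitted
here for clarity):
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def lg_codelength(e, sigma, theta):
    # Sketch: ideal codelength (in nats) of the "LG" residual model, the
    # convolution of N(0, sigma^2) with a Laplacian of parameter theta.
    c = sigma ** 2 / theta
    pos = np.exp( e / theta) * erfc(( e + c) / (np.sqrt(2) * sigma))
    neg = np.exp(-e / theta) * erfc((-e + c) / (np.sqrt(2) * sigma))
    p = (pos + neg) * np.exp(sigma ** 2 / (2 * theta ** 2)) / (4 * theta)
    return -np.log(p)
\end{verbatim}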
\begin{figure*}[p]
\begin{center}
\includegraphics[width=0.95\textwidth]{error-model.png}
\vspace{-0ex}\caption{\label{fig:error-model}Residual probability model. (a) Ideal
codelength function of the ``LG'' distribution, $-\log
\cpdf{p}_\epsilon(\epsilon)$, (b) LG influence function, that is, $(-\log
\cpdf{p}_\epsilon(\epsilon))'$, (c) universal mixture for the LG model
(MOEG), (d) MOEG influence function.}
\end{center}
\end{figure*}
\noindent {\bf Unknown parameters:} Since $\theta_e$ is unknown, encoding
$\vec{\err}$ efficiently calls for the use of universal codes. In this case,
again, we employ a mixture model. Since the parameter $\theta_e$ comes
from the underlying Laplacian component, we again use a Gamma for the mixing
function,
\begin{align}
\cpdf{q}_\epsilon(\epsilon;\sigma^2_e,\kappa_e,\beta_e) &= \!\! \int_{0}^{+\infty}{\!\!\!\!\!\!\!\!\Gamma(\theta;\kappa_e,\beta_e)\cpdf{p}_\epsilon(\epsilon;\sigma^2_e,\theta)d\theta}.
\label{eq:lg-mixture}
\end{align}
We call this model MOEG. As with the MOE model, the universality of this
model is guaranteed by the theory for the choice of its underlying mixing
function, for any (non-informative) $\kappa_e$ and $\beta_e$. In this case, we use
$\kappa_e=3.0$ and $\beta_e=\delta_e$. Also, we know from the
discussion above that $\sigma^2_e$ can be easily and reliably estimated
from the data. Thus, we can say that the model for $\epsilon$ is
parameter-free in this case as well. Figure~\ref{fig:error-model}(c) shows
the numerical evaluation of the ideal Shannon codelength $-\log
\cpdf{q}_\epsilon(\epsilon;\sigma^2_e,\kappa_e,\beta_e)$, which is
non-convex. However, it is twice differentiable
everywhere, again a desirable property for optimization purposes (more on
this in Sections~\ref{sec:encoding-algorithms}
and~\ref{sec:learning-algorithms}). As with the LG distribution, $-\log
\cpdf{q}_\epsilon(\epsilon)$ is a $\psi$-type M-estimator, in this case, a
\emph{redescending} M-estimator, since its derivative
(Figure~\ref{fig:error-model}(d)) decays to $0$ at $\infty$. As such,
$-\log \cpdf{q}_\epsilon(\epsilon)$, derived from the universal model
corresponding to $\cpdf{p}_\epsilon(\epsilon)$, can reject outliers even more
aggressively than $-\log \cpdf{p}_\epsilon(\epsilon)$, again marrying robust
statistics with information theory in a natural way.
\noindent {\bf Quantization:} To losslessly encode finite-precision input
data such as digital images, the quantization step of the error
coefficients need not be finer than that of the data itself,
$\delta_y$, and we simply quantize the error coefficients uniformly with
step $\delta_e=\delta_y$. For example, for $8$-bit digital images, we
set $\delta_e=\delta_y=1$.
\subsection{Model for the dictionary}
\label{sec:encoding:dictionary}
\noindent {\bf Probability model:} Dictionary learning practice shows that
learned atoms, unsurprisingly, present features that are similar to
those of the original data. For example, the piecewise smoothness
of small image patches is to be expected in the atoms of learned
dictionaries for such data. This prior information, often neglected in dictionary learning algorithms, needs to be taken into
account for encoding such atoms efficiently.
We embody such information in the form of \emph{predictability}. That is, we
will encode an atom $\vec{\dict} \in \reals^{\ndims}$ as a sequence of causal
prediction residuals, $\vec{\dpred} \in \reals^{\ndims}$, with $b_{i+1} =
d_{i+1}-\tilde{d}_{i+1}(d_1,d_2,\ldots,d_i)$, $1 \leq i <
\ndims$, where $\tilde{d}_{i+1}(\cdot)$ is a function of the previously encoded elements in $\vec{\dict}$. In
particular, if we restrict $\tilde{d}_{i+1}$ to be a linear function, the
residual vector can be written as $\vec{\dpred} = \mat{W}\vec{\dict}$, where $\mat{W}
\in \ensuremath{\mathbb{R}}^{\ndims{\times}\ndims}$ is lower triangular due to the causality
constraint (this aspect has important efficiency consequences in the
algorithms to be developed in Section~\ref{sec:learning-algorithms}). This
is depicted in Figure~\ref{fig:dict-model}, along with the specific
prediction scheme that we adopted for the image processing examples in
Section~\ref{sec:results}. In this case we consider an atom $\vec{\dict}$ to
be a $\sqrt{\ndims}{\times}\sqrt{\ndims}$ image patch, and use a causal
bi-linear predictor where the prediction of each pixel in the dictionary
atom is given by $\mathrm{north\_pixel} + \mathrm{west\_pixel} -
\mathrm{northwest\_pixel}$.
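For concreteness, the following sketch builds the matrix $\mat{W}$
corresponding to this causal bilinear predictor on a patch scanned in raster
order (the code is ours; only the $2{\times}2$ prediction template is taken
from the text):
\begin{verbatim}
import numpy as np

def bilinear_W(side):
    # Sketch: lower-triangular prediction matrix W for a side-by-side patch
    # in raster order, with prediction north + west - northwest and
    # zero-padding at the borders, so that b = W d are the residuals.
    m = side * side
    W = np.eye(m)
    for r in range(side):
        for c in range(side):
            i = r * side + c                  # raster index of pixel (r, c)
            if r > 0:
                W[i, i - side] -= 1.0         # minus north neighbor
            if c > 0:
                W[i, i - 1] -= 1.0            # minus west neighbor
            if r > 0 and c > 0:
                W[i, i - side - 1] += 1.0     # plus northwest neighbor
    return W
\end{verbatim}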
As a general model for linear prediction residuals, we assume $\vec{\dpred}$ to
be a sequence of IID Laplacian samples of parameter $\theta_d$. In
principle, $\theta_d$ is also unknown. However, describing $\mat{\dict}$ is
only meaningful for dictionary learning purposes, and, in that case,
$\mat{\dict}$ is updated iteratively, so that when computing an iterate
$\mat{\dict}\iter{t}$ of $\mat{\dict}$, we can use $\mat{\dict}\iter{t-1}$ to estimate and
fix $\theta_d$ via ML (more on $\theta_d$ later in
Section~\ref{sec:learning-algorithms}). Thus, we consider $\theta_d$ to
be known.
\begin{figure*}[p]
\begin{center}
\includegraphics[width=0.95\textwidth]{dict-model.png}%
\caption{\label{fig:dict-model} Prediction scheme used for learning natural
image patches dictionaries (in this example, $3{\times}3$ patches, and
$\ndims=9$). An atom $\vec{\dict}_\ai$ is arranged as a ${3{\times}3}$ patch,
and a causal bi-linear predictor (shown as a $2{\times}2$ template) with
zero-padding (pixels outside of the patch are assumed $0$) is applied to
it, producing a predicted atom $\hat\vec{\dict}_\ai$ and a residual
$\vec{\dpred}_\ai=\vec{\dict}_\ai-\hat\vec{\dict}_\ai$. The previous operation can be
written as $\vec{\dpred}_\ai=\mat{W}\vec{\dict}_\ai$, with $\mat{W} \in
\ensuremath{\mathbb{R}}^{9{\times}9}$ the linear mapping from atom to prediction
residuals corresponding to this example.}
\end{center}
\end{figure*}
\noindent {\bf Quantization:} When $\mat{\coef}$ is fixed during a dictionary
learning iteration (which consists of an alternate descent between $\mat{\dict}$
and $\mat{\coef}$), we can view $(\mat{\coef},\mat{\data})$ as $n$ input-output
training pairs, and $\mat{\dict}$ as the ML estimator of the linear coefficients
describing such mapping via $\mat{\data}=\mat{\dict}\mat{\coef}+\mat{\err}$. Based on this, we
use the quantization step $\delta_d=1/\sqrt{n}$, which is an
optimal step for encoding the ML parameter in two-part codes, as described
in \cite[Theorem~1]{rissanen84}.
\noindent {\bf Computation:} Computing $L(\mat{\dict})$ is only relevant for
learning purposes. In general, since $\norm{\vec{\dict}_k}_2 \leq 1$, and
$\norm{\vec{\dict}_k}_1 \leq \sqrt{\ndims}\norm{\vec{\dict}_k}_2$, we have that
$\hat\theta_d=(p\ndims)^{-1}\sum_\ai\norm{\vec{\dict}_\ai }_1 \leq
\ndims^{-1/2}$, which is much larger than $\delta_d=1/\sqrt{n}$ for
the sample sizes $n$ used in practice, so that the error
of using the approximation \refeq{eq:approx-codelength} is not significant,
\begin{align}
L(\mat{\dict}) =& \sum_{\ai=1}^{p} L(\vec{\dict}_\ai) \approx \sum_{\ai=1}^{p}\left\{ -\log \cpdf{p}(\mat{W}\vec{\dict}_\ai;\theta_d) - \ndims\log \delta_d \right\}
= \frac{1}{\theta_d}\sum_{\ai=1}^{p}{\norm{\mat{W}\vec{\dict}_\ai}_1} + \frac{\ndims p}{2}\log n + c,
\end{align}
where $\cpdf{p}(\mat{W}\vec{\dict}_\ai)$ is the IID Laplacian distribution over
the $\ai-$th atom prediction residual vector $\mat{W}\vec{\dict}_\ai$, and $c$
is a fixed constant. For $p$ fixed (we will later see how to learn the
dictionary size $p$ as well), the above expression is simply an
\cost{1} penalty on the atom prediction residual coefficients. As we will
see in Section~\ref{sec:learning-algorithms}, this allows us to use
efficient convex optimization tools to update the atoms.
\subsection{Extension to sequential (collaborative) coding}
\label{sec:encoding:collaborative}
One natural assumption that we can often make on the set of data samples
$\mat{\data}$ is that, besides all being sparsely representable in terms of the
learned dictionary $\mat{\dict}$, they share other statistical properties. For example,
we can assume that the underlying unknown model parameters, $\theta_e$,
$\rho_a$, $\theta_a$, $\theta_d$, are the same for all columns of the
sparse data decomposition ($\mat{\err}$, $\mat{\coef}$).
Under such assumption, if we encode each column of $\mat{\data}$ sequentially%
, we can learn statistical information from the ones already encoded and
apply it to estimate the unknown parameters of the distributions used for
encoding the following ones. The general idea is depicted in
Figure~\ref{fig:collaborative-encoding}(a). Concretely, suppose we have
already encoded $j-1$ samples. We can then use
$[\vec{\err}_1,\vec{\err}_2,\ldots,\vec{\err}_{(j-1)}]$ to estimate $\theta_e$, and
$[\vec{\coef}_1,\vec{\coef}_2,\ldots,\vec{\coef}_{(j-1)}]$ to estimate $\theta_a$ and
$\rho_a$, and ``plug-in'' these parameters to encode the $\si$-th
sample. This justifies the name of this encoding method, which is known in
the coding literature as \emph{sequential plug-in} encoding. This encoding
strategy has several advantages: 1) For common parameter estimators such as
ML, this method can be shown to be universal; 2) Since all distribution
parameters are fixed (pre-estimated) when encoding the $\si$-th sample, we
can use the ``original,'' non-universal distributions assumed for modeling
$\vec{\err}_j$ (LG) and $\vec{\coef}_j$ (Laplacian), which have closed forms and are
usually faster to compute (together with \refeq{eq:approx-codelength}) than
their universal mixture counterparts; 3) Furthermore, these original
distributions are convex, so that in this case, given a fixed support, we
are able to exactly minimize the codelength over the non-zero coefficient
values; 4) With many samples available for parameter estimation, we can
potentially afford more complex models.
\begin{figure*}[p]
\begin{center}
\includegraphics[width=\textwidth]{collaborative-encoding.png}%
\caption{\label{fig:collaborative-encoding}Collaborative encoding
scheme. (a) In this example, $3$ samples have already been encoded, and we
are about to encode sample $4$. The formulas for estimating the various
model parameters are shown for $j=4$, in particular those for the error
and the coefficients associated to the $\ai$-th atom (the $\ai$-th row of
$\mat{\coef}$). (b) Markov model for the coefficients support matrix
$\mat{\supp}$. Here, a sample patch $\vec{\data}$ is about to be encoded. In this
example the first atom was used only by the pixel to the west, so that the Markov
state for modeling $z_1$ is $(n,w,nw)=(0,1,0)$, and
$P(z_1=1)=\rho_{(0,1,0)}^{1}$. As for the $\ai$-th atom, only the $nw$
pixel has used it, so that the Markov state for $z_\ai$ is $(0,0,1)$,
that is, $P(z_\ai=1)=\rho_{(0,0,1)}^{\ai}$. }
\end{center}
\end{figure*}
\noindent{\bf Residual:} We estimate $\theta_e$ in two steps. First,
since the random variable $\epsilon$ is the sum of two independent random
variables, $\epsilon=\hat{\epsilon}+N$, we have that
$\fun{var}(\epsilon)=\fun{var}(\hat\epsilon)+
\fun{var}(N)=\fun{var}(\hat{\epsilon})+\sigma_e^2$. Now, since
$\hat{\epsilon}$ is Laplacian, we have that
$\fun{var}(\hat{\epsilon})=2\theta_e^2$. Combining both equations we have
that $\theta_e=\sqrt{(\fun{var}(\epsilon)-\sigma_e^2)/2}$. With the
noise variance $\sigma_e^2$ assumed known, and using the standard
empirical variance estimator,
$\hat{\fun{var}}(\epsilon)=(\ndims(\si-1))^{-1}\norm{\mat{\err}\svec{1,\ldots,(\si-1)}}_F^2$,
we obtain
\[
\hat\theta_e=\sqrt{\mathrm{max}\left\{(\ndims(\si-1))^{-1}\norm{\mat{\err}\svec{1,\ldots,(\si-1)}}_F^2-\sigma_e^2,\,0\right\}/2},
\]
where the maximization guarantees that $\hat\theta_e \in \ensuremath{\mathbb{R}}^+$.
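A sketch of this estimator, following the variance decomposition above
(function and argument names are ours):
\begin{verbatim}
import numpy as np

def estimate_theta_e(E_prev, sigma):
    # Sketch: plug-in estimate of theta_e from the already-encoded residual
    # columns E_prev, using var(eps) = 2*theta_e^2 + sigma^2 as derived above.
    v = np.sum(E_prev ** 2) / E_prev.size    # empirical variance (zero mean)
    return np.sqrt(max(v - sigma ** 2, 0.0) / 2.0)
\end{verbatim}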
\noindent{\bf Coefficients:} In the case of $\vec{\coef}$, we have in principle
two unknown parameters, the probability of an element being non-zero,
$\rho_a$, and the scale parameter of the Laplacian governing the
non-zero values, $\theta_a$ (both previously handled with universal
models). Here, however, we extend the model, drawing on the well-known
empirical fact that coefficients associated to different atoms can have very
different statistics, both in frequency and variance. This is typical of DCT
coefficients for example (see \cite{lam00}), and has been consistently
observed for learned dictionaries as well~\cite{ramirez10tip}. Therefore, we
will consider a separate set of parameters
$(\rho_a^\ai,\theta_a^\ai)$ for each \emph{row} $\ai$ of $\mat{\coef}$,
$\vec{\coef}^\ai$. We update such parameters from the coefficients observed in
the respective row for the already-computed samples,
$(a_{k1},a_{k2},\ldots,a_{k(j-1)})$, and encode each $\ai$-th
coefficient in $\vec{\coef}_j$ (more specifically, in $\vec{\supp}_j$, and $\vec{\val}_j$),
as the $\si$-th sample of the respective row. Concretely, let $n_1^\ai =
\sum_{\si'=1}^{(\si-1)}{z_{\ai\si'}}$ be the number of non-zero
coefficients observed so far in the $\ai$-th row. For $\rho_a^\ai$, we
use the Krichevsky-Trofimov (KT) estimator~\cite{krichevsky81},
\begin{equation}
\hat\rho_a^\ai = \frac{n_1^\ai + 0.5}{j},
\label{eq:kt}
\end{equation}
which is a universal plug-in encoding scheme for Bernoulli sequences of
unknown parameter. For encoding $v_{\ai\si}$, we apply the ML estimator for
the exponential distribution to the non-zero coefficients observed so far in the
$\ai$-th row. Recalling that
$v_{\ai\si}=\max\{|a_{\ai\si}|-\delta_a,0\}$, the resulting
estimator is given by
\[ \hat\theta_a^\ai = \frac{\sum_{\si'=1}^{(\si-1)} \max\{|a_{\ai\si'}|-\delta_a,0\}}{n_1^\ai }.
\]
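Both per-row estimators can be sketched jointly as follows (ours;
\texttt{Z\_prev} and \texttt{A\_prev} denote the supports and coefficients of
the $\si-1$ samples already encoded):
\begin{verbatim}
import numpy as np

def plugin_coef_params(Z_prev, A_prev, delta):
    # Sketch: per-row plug-in estimates for encoding the j-th sample:
    # the KT estimator for rho_a, and exponential ML for theta_a.
    j = Z_prev.shape[1] + 1                       # index of sample to encode
    n1 = Z_prev.sum(axis=1)                       # non-zeros seen per row
    rho = (n1 + 0.5) / j                          # Krichevsky-Trofimov
    v = np.maximum(np.abs(A_prev) - delta, 0.0)   # corrected magnitudes
    theta = v.sum(axis=1) / np.maximum(n1, 1)     # ML; guard empty rows
    return rho, theta
\end{verbatim}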
\noindent {\bf Markovian dependencies:} In many applications,
spatially/temporally adjacent samples are statistically dependent. For
example, we may assume that an atom is more likely to occur for a sample
$\si$ if it has been used by, say, the $(\si-1)$-th sample (see also
\cite{zhou11aistats}). In that case, we may consider different estimations
of $\rho^\ai$ depending on the value of $z_{\ai(\si-1)}$, $\rho^\ai_1 =
\dpdf{P}(z_{\ai\si}=1|z_{\ai(\si-1)}=1)$, and $\rho^\ai_0 =
\dpdf{P}(z_{\ai\si}=1|z_{\ai(\si-1)}=0)$. In particular, for the
image processing results of Section~\ref{sec:results}, we use a Markovian
model which depends on three previous samples, corresponding to the
(causal) neighboring west, north, and northwest patches of the one being
encoded. Thus, for each atom $\ai$ we will have $8$ possible parameters,
$\rho^\ai_{(n,w,nw)}, (n,w,nw) \in \setdef{0,1}^3$, where each value of
$(n,w,nw)$ indicates a possible \emph{Markov state} in which a sample may
occur. This is depicted in Figure~\ref{fig:collaborative-encoding}(b). For
each state $(n,w,nw)$, we estimate $\rho^\ai_{(n,w,nw)}$ using \refeq{eq:kt},
with the counts taken over the samples which occur in that same state $(n,w,nw)$.
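A sketch of the state-conditional estimate (ours; \texttt{counts} is a
hypothetical table of per-state statistics):
\begin{verbatim}
def markov_state_rho(used_n, used_w, used_nw, counts):
    # Sketch: KT estimate of P(z_k = 1) conditioned on whether the north,
    # west and northwest neighbors used atom k. counts[state] holds the
    # pair (ones, total) accumulated over samples seen in that state.
    state = (used_n, used_w, used_nw)        # one of the 8 Markov states
    ones, total = counts[state]
    return (ones + 0.5) / (total + 1.0)      # KT over this state's samples
\end{verbatim}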
\section{MDL based sparse coding}
\label{sec:encoding-algorithms}
For the encoding problem, $\mat{\dict}$ is fixed (it has already been learned),
and we consider encoding a single data sample $\vec{\data}$. The model selection
problem here is that of choosing the model (indexed by the sparse code
$\vec{\coef}$) among all the models belonging to the nested family of model
classes $\mathcal{M}(\gamma) = \setdef{\vec{\coef} \in \reals^{\natoms}:
\norm{\vec{\coef}}_0 \leq \gamma}, \gamma=0,\ldots,p$, that yields the
smallest codelength for describing $\vec{\data}$. In principle, this calls for
finding the best model $\vec{\coef}(\gamma)$ within each model class
$\mathcal{M}(\gamma)$, and then selecting $\opt\vec{\coef} = \arg\min_{0\leq
\gamma \leq p} L(\vec{\data},\vec{\coef}(\gamma))$. However, in order to be
computationally efficient, and as with most sparse coding and model
selection algorithms, several simplifications and approximations are needed.
Let us first consider the problem of finding $\vec{\coef}(\gamma)$,
\begin{align}
\vec{\coef}(\gamma) :=& \arg\min_{\vec{\coef} \in \mathcal{M}(\gamma)}
L(\vec{\data},\vec{\coef}) \nonumber \\
=& \arg\min_{\vec{\coef} \in \reals^{\natoms}} -\log \dpdf{p}_\epsilon(\vec{\data}-\mat{\dict}\vec{\coef}) -\log\dpdf{p}(\vec{\supp}) -\log\dpdf{p}(\vec{\sign}|\vec{\supp})
- \log \dpdf{p}_\nu(\vec{\coef}|\vec{\sign},\vec{\supp}) \nonumber \\
=& \arg\min_{\vec{\coef} \in \reals^{\natoms}} -\log \dpdf{p}_\epsilon(\vec{\data}-\mat{\dict}\vec{\coef}) + \log {p \choose \norm{\vec{\coef}}_0} + \norm{\vec{\coef}}_0 -
\log \dpdf{p}_\nu(\vec{\coef}) \ensuremath{\quad\mathrm{s.t.}\quad} \norm{\vec{\coef}}_0 \leq \gamma.
\label{eq:fixed-support}
\end{align}
For quantized $\vec{\coef}$, this is an optimization problem over a discrete,
infinite domain, with a non-convex (in the continuous domain) constraint,
and a non-differentiable cost function in $\vec{\coef}$. Based on the literature
on sparse coding, at least two alternatives can be devised at this
point. One way is to use a pursuit technique,
e.g.,~\cite{mallat93}. Another option is to use a convex relaxation of the
codelength function, e.g.,~\cite{chen98}. For the sake of brevity, here we
will describe an algorithm loosely based on the first alternative. Details
on the convex relaxation method for MDL-based sparse coding will be
published elsewhere.
The pursuit-like algorithm, which we call COdelength-Minimizing Pursuit
Algorithm (COMPA), is summarized in Algorithm~\ref{alg:compa}. This is a
non-greedy cross-breed between Matching Pursuit (MP) and Forward Stepwise
Selection (FSS)~\cite{hastie09}. As with those methods, COMPA starts with
the empty solution $\vec{\coef}\iter{0}=\vec{0}$, and updates the value of one
single coefficient at each iteration. Then, given the current correlation
$\vec{\corr}\iter{t}=\mat{\dict}^T\vec{\err}\iter{t}$ between the dictionary
atoms and the current residual, each $\ai$-th coefficient in
$\vec{\coef}\iter{t}$ is tentatively incremented (or decremented) by
$\Delta_\ai=\quant{g\iter{t}_\ai}$, and a candidate codelength $
\hat L_\ai$ is computed in each case. The coefficient that
produces the smallest $\hat L(\vec{\data},\vec{\coef})$ is updated to produce
$\vec{\coef}\iter{t+1}$.
The logic behind this procedure is that the codelength cost of adding a new
coefficient to the support is usually very high, so that adding a new
coefficient only makes sense if its contribution is high enough to produce
some noticeable effect in the other parts of the codelength. A variant of
this algorithm was also implemented where, for each candidate $\ai$, the
value of the increment $\Delta_\ai$ was refined in order to minimize $
\hat L_\ai$. However, this variant turned out to be significantly
slower, and the compression gains were below $0.01$ bits per sample
(uncompressed codelength is $8$ bits per sample). Assuming that $L\iter{t}$
is unimodal, the algorithm stops if the codelength of a new iterate is
larger than the previous one. To assess the validity of this assumption, we
also implemented a variant which stops, as MP or FSS, when the
residual-coefficients correlation $\norm{\vec{\corr}\iter{t}}_\infty$ is no
longer significant, which typically requires many more iterations. With this
variant we obtained a negligible improvement of $0.004$ bits per sample,
while increasing the computational cost about three times due to the extra
iterations required.
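The following sketch captures the main loop of COMPA in simplified form
(ours; \texttt{codelength()} is a placeholder for $L(\vec{\data},\vec{\coef})$,
and the stopping rule is the unimodality-based one discussed above):
\begin{verbatim}
import numpy as np

def compa(y, D, delta, codelength):
    # Sketch of COMPA: start from a = 0; tentatively add the quantized
    # correlation to each coefficient, keep the single update that most
    # reduces the codelength, and stop when no update decreases it.
    p = D.shape[1]
    a = np.zeros(p)
    e = y.astype(float).copy()
    g = D.T @ e                              # correlation with the residual
    L_best = codelength(e, a)
    while True:
        best_k = -1
        for k in range(p):
            step = np.round(g[k] / delta) * delta    # quantized step
            ak = a.copy()
            ak[k] += step
            Lk = codelength(e - step * D[:, k], ak)
            if Lk < L_best:
                best_k, best_step, L_best = k, step, Lk
        if best_k < 0:                       # codelength no longer decreases
            return a, e
        a[best_k] += best_step
        e -= best_step * D[:, best_k]
        g -= best_step * (D.T @ D[:, best_k])  # keep correlations current
\end{verbatim}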
\begin{figure*}[p]
\begin{center}
\includegraphics[width=0.7\textwidth]{compa-evolution.png}
\caption{\label{fig:compa-evolution}Typical evolution of the COMPA
algorithm. (a) coefficients. (b) codelength. The best iterate (code) is
marked with a black circle. Also note that describing the support ($L(Z)$)
actually takes more bits than describing the non-zero values ($L(V)$). }
\end{center}
\end{figure*}
\begin{algorithm}[t]
\begin{scriptsize}
\caption{\label{alg:compa}COdelength Minimizing Pursuit Algorithm (COMPA)}
\SetKw{Init}{initialize}
\SetKw{Set}{set}
\SetKw{Choose}{choose}
\SetCommentSty{textit}
\KwIn{Data sample $\vec{\data}$, dictionary $\mat{\dict}$}
\KwOut{$\opt\vec{\coef}$, $\opt\vec{\err}$}
\Init $t \leftarrow 0; \vec{\coef}\iter{0} \leftarrow \mat{0}; \vec{\err} \leftarrow \vec{\data}; L\iter{0} \leftarrow L(\vec{\data},\mat{0});\; \vec{\corr}\iter{t} \leftarrow \mat{\dict}^T\vec{\err}\iter{t}$ \tcp*{$\vec{\corr}\iter{t}$ correlation of current residual with the dictionary}
\Repeat{$L\iter{t} \geq L\iter{t-1}$}{
\For{$\ai \leftarrow 1,2,\ldots,p $}{
$\Delta_{\ai} \leftarrow [g\iter{t}_\ai]_{\delta_a}$ \tcp*{step $\Delta_{\ai}$ is correlation, quantized to prec. $\delta_a$}
$\tilde{L}_{\ai} \leftarrow L([\vec{\err}-\Delta_\ai\vec{\dict}_\ai]_{\delta_e},\,\vec{\coef} + \Delta_\ai \omega_\ai)$ \tcp*{$\omega_\ai\!=$ $\ai$-th canonical vec. of $\ensuremath{\mathbb{R}}^{p}$}
}
$\opt{\ai} \leftarrow \arg\min_{\ai=1,\ldots,p} \tilde{L}_{\ai};\quad L\iter{t+1} \leftarrow \tilde{L}_{\opt{\ai}}$ \;
$\vec{\coef}\iter{t+1} \leftarrow \vec{\coef}\iter{t} + \Delta_{\opt{\ai}} \omega_{\opt{\ai}}$
\tcp*{update coefficients vector}
$\vec{\corr}\iter{t+1} \leftarrow \vec{\corr}\iter{t} - \Delta_{\opt{\ai}} \mat{\dict}^T\vec{\dict}_{\opt{\ai}}$
\tcp*{update correlation}
$t \leftarrow t + 1$ \;
}
$\opt\vec{\coef} \leftarrow \vec{\coef}\iter{t-1}$ \;
$\opt\vec{\err} \leftarrow \quant{\vec{\data}-\mat{\dict}\opt\vec{\coef}}$ \;
STOP \;
\end{scriptsize}
\end{algorithm}
\section{MDL based dictionary learning}
\label{sec:learning-algorithms}
Given that our sparse coding algorithm in
Section~\ref{sec:encoding-algorithms} can select the best support size
$\gamma$ for each sample in $\mat{\data}$, the definition of the model class
$\mathcal{M}(\gamma,p)$ given in
Section~\ref{sec:mdl-model-selection}, which assumes the same $\gamma$ for
all samples in $\mat{\data}$, is no longer appropriate (we could of course add
$0$-weight coefficients to make $\gamma$ equal for all data). Instead, for
dictionary learning, we consider the model class family
$\mathcal{M}(p) = \setdef{(\mat{\coef},\mat{\dict}),\mat{\dict} \in \reals^{\ndims{\times}\natoms},
\vec{\coef}_\si \in \mathcal{M}(\gamma;\mat{\dict}), \si=1,\ldots,n}$, where
$\mathcal{M}(\gamma;\mat{\dict})$ is the model class family of sparse codes based
on a fixed dictionary $\mat{\dict}$ defined in
Section~\ref{sec:encoding-algorithms}, with the dependency on $\mat{\dict}$ made
explicit. It is easy to see that the model classes $\mathcal{M}(p)$
are nested. We now need to solve
\begin{equation}
(\mat{\coef}(p),\mat{\dict}(p)) =
\arg\min_{(\mat{\coef},\mat{\dict}) \in \mathcal{M}(p)} L(\mat{\err},\mat{\coef},\mat{\dict}),
\label{eq:dictionary-learning-problem}
\end{equation}
for $p\!=\!0,1,\ldots$, and then choose
$(\opt\mat{\coef},\opt\mat{\dict})=(\mat{\coef}(\hat{p}),\mat{\dict}(\hat{p}))$
with the optimal dictionary size $$\hat{p} = \arg\min_p
\setdef{ L(\mat{\err},\mat{\coef}(p),\mat{\dict}(p)):
p\!=\!0,1,\ldots}. $$ As with sparse coding, here we exploit the
nested nature of the model classes to speed up the model selection. For
this, we propose a forward-selection algorithm, described in
Algorithm~\ref{alg:mdl-dl-forward}, which starts from $\mathcal{M}(0)$ (the
empty dictionary), and then approximates the best model in
$\mathcal{M}(p+1)$ by adding a new atom to the dictionary computed
for $\mathcal{M}(p)$ and then invoking
Algorithm~\ref{alg:mdl-dictionary-learning}, which is discussed in depth in the next
subsection.
A backward-selection algorithm was also developed which first learns the
model for $\mathcal{M}(p_{\mathrm{max}})$ via
Algorithm~\ref{alg:mdl-dictionary-learning}, where $p_{\mathrm{max}}$ is a
given maximum dictionary size, and then prunes the less frequently used
atoms until no further decrease in codelength is observed. This algorithm
allows us to provide especially-constructed initial dictionaries for
Algorithm~\ref{alg:mdl-dictionary-learning}, e.g., an (overcomplete) DCT
frame, which can be critical for finding good local minima of the non-convex
problem \refeq{eq:dictionary-learning-problem}. We do this for example to
learn a dictionary for the whole class of natural images (see
Section~\ref{sec:results}).
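A sketch of the forward-selection loop follows
(ours; \texttt{learn\_for\_size()} stands for
Algorithm~\ref{alg:mdl-dictionary-learning} and \texttt{codelength()} for
$L(\mat{\err},\mat{\coef},\mat{\dict})$):
\begin{verbatim}
import numpy as np

def forward_select(Y, learn_for_size, codelength):
    # Sketch: grow the dictionary one atom at a time, initializing each new
    # atom with the leading left singular vector of the current residual,
    # and stop as soon as the total codelength increases.
    m, n = Y.shape
    D, A, E = np.zeros((m, 0)), np.zeros((0, n)), Y.copy()
    L_prev = codelength(E, A, D)
    while True:
        u = np.linalg.svd(E)[0][:, 0]         # new atom from residual SVD
        A_new, D_new = learn_for_size(Y, np.column_stack([D, u]))
        E_new = Y - D_new @ A_new
        L_new = codelength(E_new, A_new, D_new)
        if L_new >= L_prev:
            return A, D                       # previous size was optimal
        A, D, E, L_prev = A_new, D_new, E_new, L_new
\end{verbatim}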
\begin{algorithm}[t]
\begin{scriptsize}
\caption{\label{alg:mdl-dl-forward}MDL-based dictionary learning via forward
selection.} \SetKw{Init}{initialize} \SetKw{Set}{set}
\SetKw{Choose}{choose} \SetCommentSty{textit} \KwIn{Data $\mat{\data}$}
\KwOut{$(\opt{\mat{\coef}},\opt{\mat{\dict}})$} \Init $p\assign0$;
$\mat{\coef}(0)\leftarrow\emptyset$; $\mat{\dict}(0)\leftarrow\emptyset$; $\mat{\err}(0) \leftarrow
\mat{\data}$; $L(0) \leftarrow L(\mat{\err}(0),\mat{\coef}(0),\mat{\dict}(0))$ \;
\Repeat{$L(p) \geq L(p-1)$ } { $\tilde\vec{\dict} \leftarrow \vec{u}_1,
\mat{U}\Sigma\mat{V}^\intercal = \mat{\err}(p)$ \tcp{Initial value of new
atom is the left singular vector associated to the largest singular value of
$\mat{\err}(p)$.} $\mat{\dict}^0 \leftarrow
[\,\mat{\dict}(p)\,|\,\tilde\vec{\dict}\,]$ \tcp{Initial dictionary for
optimization below.} $(\mat{\coef}(p+1),\mat{\dict}(p+1)) \leftarrow
\arg\min_{(\mat{\coef},\mat{\dict}) \in \mathcal{M}(p+1)}
L(\mat{\err},\mat{\coef},\mat{\dict})$ \tcp{Optimize dict. via
Algorithm~\ref{alg:mdl-dictionary-learning}} $ p \leftarrow
p + 1 $ \; $L(p) \leftarrow
L(\mat{\err}(p),\mat{\coef}(p),\mat{\dict}(p))$ \; } $\opt\mat{\coef}
\leftarrow \mat{\coef}(p-1)$; $\opt\mat{\dict} \leftarrow \mat{\dict}(p-1)$\;
\end{scriptsize}
\end{algorithm}
\subsection{Optimizing the dictionary for fixed $p$}
For fixed $p$, and given an initial $\mat{\dict}$,
Algorithm~\ref{alg:mdl-dictionary-learning} adapts the atoms of $\mat{\dict}$ to
fit the training data $\mat{\data}$.
\begin{algorithm}[t]
\begin{scriptsize}
\caption{MDL-based dictionary learning for a given size $p$}
\label{alg:mdl-dictionary-learning}
\SetKw{Init}{initialize}
\SetKw{Set}{set}
\SetKw{Choose}{choose}
\SetCommentSty{textit}
\KwIn{Data $\mat{\data}$, initial dictionary $\mat{\dict}^0$, convergence tolerance $\epsilon$}
\KwOut{Local-optimum $(\opt{\mat{\coef}},\opt{\mat{\dict}})$}
\Init $\mat{\dict}\iter{0}=\mat{\dict}^0$, $t=1$ \;
\Repeat{$ \frac{ \norm{ \mat{\dict}\iter{t} - \mat{\dict}\iter{t-1} }_2 }{ \norm{ \mat{\dict}\iter{t} }_2 } \leq \mathrm{\epsilon}$ }
{
\For{$j = 1,\ldots,n$ } {
$\vec{\coef}_j\iter{t} \leftarrow \arg\min_{\vec{\coef}} L(\vec{\err}_j,\vec{\coef},\mat{\dict}\iter{t-1})$ \;
}
Update plug-in parameters: $\theta_e, \setdef{(\theta_a^\ai,\rho_a^\ai),\ai=1,\ldots,p}, \theta_d$\;
$\mat{\dict}\iter{t} \leftarrow \arg\min_{\mat{\dict}} L(\mat{\err},\mat{\coef}\iter{t},\mat{\dict})$ \;
$ t \leftarrow t + 1 $ \;
}
\end{scriptsize}
\end{algorithm}
At a high level, our algorithm is very similar to the traditional approach
of alternate minimization over $(\mat{\coef},\mat{\dict})$. However, there are a
number of important differences, namely: 1) The cost function minimized is now
the cumulative codelength of describing $\mat{\data}$,
$L(\mat{\err},\mat{\coef},\mat{\dict})$; 2) Minimizing over $\mat{\coef}$ is done sample by sample
following Section~\ref{sec:encoding-algorithms}; 3) Since $\mat{\dict}$ needs to be
described as well, it has an associated codelength (see
Section~\ref{sec:encoding:dictionary}), resulting in regularized dictionary
update, described below; 4) in a cross-breed between Expectation-Maximization,
and plug-in estimation, we estimate the model parameters for the current
iterate $(\mat{\err}\iter{t},\mat{\coef}\iter{t},\mat{\dict}\iter{t})$, from the
accumulated statistics of previous iterates
$\setdef{(\mat{\err}\iter{t'}\mat{\coef}\iter{t'},\mat{\dict}\iter{t'}),t'=1,\ldots,t-1}$.
At the end of the learning process, these parameters are ``saved'' as part
of the learned model and can be used for modeling future data along with
$\mat{\dict}$.
At the $t$-th iteration of the alternate minimization between $\mat{\dict}$ and
$\mat{\coef}$, with $\mat{\coef}\iter{t}$ just computed and kept fixed, the dictionary
step consists of solving the sub-problem $$\mat{\dict}\iter{t} = \arg\min_{\mat{\dict}
\in \reals^{\ndims{\times}\natoms}} L(\mat{\data},\mat{\coef}\iter{t},\mat{\dict}) = \arg\min_{\mat{\dict}
\in \reals^{\ndims{\times}\natoms}} L(\mat{\data}|\mat{\coef}\iter{t},\mat{\dict})+L(\mat{\dict}).$$ According to
Section~\ref{sec:encoding:dictionary}, we have
$L(\mat{\dict})=\frac{1}{\theta_d\iter{t}}\sum_{\ai=1}^{p}
\norm{\mat{W}\vec{\dict}_\ai}_1$, where
$\theta_d\iter{t}=\frac{1}{\ndims p}\sum_{\ai=1}^{p}\sum_{\di=1}^{\ndims}|d_{\di\ai}\iter{t-1}|$
is the Laplacian MLE of $\theta_d$ based on $\mat{\dict}\iter{t-1}$.
Correspondingly, the data fitting term, via~\refeq{eq:approx-codelength} and
disregarding the constant terms, is given by
$L(\mat{\data}|\mat{\coef}\iter{t},\mat{\dict}) =
L(\mat{\data}-\mat{\dict}\mat{\coef}\iter{t}|\theta_e\iter{t},\sigma_e^2) =
\sum_{\si=1}^{n}\sum_{\di=1}^{\ndims} -\log
LG(y_{\di\si}-(\mat{\dict}\mat{\coef}\iter{t})_{\di\si};\theta_e\iter{t},\sigma_e^2)$,
where $\theta_e\iter{t}$ is the estimator of $\theta_e$ given
$\mat{\err}=\mat{\data}-\mat{\dict}\iter{t-1}\mat{\coef}\iter{t}$ (see Section
\ref{sec:encoding:collaborative}) and $\sigma_e^2$ is assumed known. The
problem can now be written as,
\begin{equation}
\mat{\dict}\iter{t} = \arg\min_{\mat{\dict}} L(\mat{\data}-\mat{\dict}\mat{\coef}\iter{t}|\theta_e\iter{t},\sigma_e^2) + \frac{1}{\theta_d\iter{t}}\sum_{\ai=1}^{p} \norm{\mat{W}\vec{\dict}_\ai}_1.
\label{eq:learning-cost-1}
\end{equation}
For general $\mat{W}$, the optimization of \refeq{eq:learning-cost-1} is
challenging since none of the above terms are separable, in particular, the
non-differentiable \cost{1} term. However, since $\mat{W}$ is easily
invertible (as described in
Section~\ref{sec:encoding:dictionary}, it is lower triangular with $1$'s in
the diagonal), we can perform a change of variables and solve the equivalent
problem in the \emph{prediction residual matrix} $\mat{U}=\mat{W}\mat{\dict}$ instead,
\begin{equation}
\opt{\mat{U}} = \arg\min_{\mat{U}} L(\mat{\data}-\mat{W}^{-1}\mat{U}\mat{\coef}\iter{t}|\theta_e\iter{t},\sigma_e^2) + \frac{1}{\theta_d\iter{t}}\sum_{k=1}^{p} \norm{\vec{u}_k}_1.
\label{eq:learning-cost-2}
\end{equation}
Since the regularization term in~\refeq{eq:learning-cost-2} is decoupled in
the elements of $\mat{U}$, and
$L(\mat{\data}-\mat{W}^{-1}\mat{U}\mat{\coef}|\theta_e\iter{t},\sigma_e^2)$ is
convex and differentiable in $\mat{U}$ (see
Figure~\ref{fig:error-model}(a)), \refeq{eq:learning-cost-2} can be
efficiently solved using the existing techniques for separable
non-differentiable regularization terms. In our case, we employ the
backtracking variant of FISTA~\cite{beck09siam}, focusing on an efficient
numerical evaluation of each step.
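To make the iteration concrete, a single proximal-gradient step for
\refeq{eq:learning-cost-2} amounts to a gradient step on the smooth fitting
term followed by entry-wise soft-thresholding. The sketch below assumes,
purely for illustration, a quadratic (Gaussian) surrogate
$\frac{1}{2\sigma_e^2}\|\mat{\data}-\mat{W}^{-1}\mat{U}\mat{\coef}\|_F^2$ for the
fitting term; the actual implementation uses the LG codelength together with
FISTA's momentum and backtracking:
\begin{verbatim}
import numpy as np

def soft(X, tau):
    # Entry-wise soft-thresholding: prox of tau * (l1 norm).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def prox_grad_step(U, Y, Winv, A, reg, step, sigma2):
    # One ISTA-type step on (learning-cost-2); reg is the multiplier
    # of the l1 term, sigma2 the noise variance of the surrogate.
    R = Y - Winv @ U @ A                  # fitting residual
    grad = -(Winv.T @ R @ A.T) / sigma2   # gradient of the smooth term
    return soft(U - step * grad, step * reg)
\end{verbatim}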
\section{Experimental results}
\label{sec:results}
\label{sec:experiments}
\subsection{Coding performance}
\label{sec:results:coding}
The first experiment in this section assesses the ability of our coding
scheme to actually produce a compressed description of the data, in this
case $8$-bit gray-scale images. To this end, a dictionary $\mat{\dict}$ was
learned using the backward-selection algorithm, for the training samples
from the Pascal'06 image
database,\footnote{\small\url{http://pascallin.ecs.soton.ac.uk/challenges/VOC/databases.html}}
converted to $8$-bit gray-scale images and decomposed into $8{\times}8$
patches. The initial dictionary was an overcomplete DCT frame with
$p=256$. The resulting global dictionary $\mat{\dict}$ has $p=250$
atoms. We then encoded the testing samples from the same database, obtaining
an average codelength of $4.1$ bits per pixel (bpp), confirming the ability of
our model to produce a compressed description of the data.
\subsection{Learning performance}
\label{sec:results:learning}
We compare the performance of the forward and backward dictionary learning
algorithms proposed in Section~\ref{sec:learning-algorithms} by applying
each method to learn a dictionary for the standard ``Boats'' image (taken
from the SIPI
database,\footnote{\small\url{http://sipi.usc.edu/database/database.php?volume=misc\&image=38\#top}}
along with ``Lena,'' ``Barbara'' and ``Peppers'' used in the following
experiments), and then measuring the final codelength and computation
time. For the backward case, the initial dictionary is the global dictionary
learned in the previous experiment. As for the forward method, we also
include a faster ``partial update'' variant which performs a few ($10$)
iterations of Algorithm~\ref{alg:mdl-dictionary-learning} after adding a new
atom, instead of allowing it to converge. The backward method produced a
dictionary of size $p=170$, yielding a compression level of $5.13$bpp
at a computational cost of $3900$s. The convergent forward method produced
a dictionary with $p=34$, yielding $5.19$bpp and requiring
$800$s. Finally, the partial-update forward variant resulted in a dictionary
of size $p=20$, yielding $5.22$bpp, and required $150$s. In all cases, the
running times were measured for a parallelized C++ implementation running
on an Athlon Phenom II X6 at 2.6GHz. In summary, all three methods reach
similar, significant, compression levels. Slightly better results are
obtained with the backward method, at the cost of a significant increase in
computational time. On the other hand, the partial forward variant is
significantly faster than the other two, yielding similar codelengths.
\subsection{Denoising of natural images}
\label{sec:results:denoising}
The task in this case is to estimate a clean image from an observed noisy
version whose pixels are corrupted by AWGN of known variance
$\sigma^2_e$. Here $\mat{\data}$ contains all (overlapping) $8{\times}8$
patches from the noisy image. The denoising algorithm proceeds in two
stages. In the first one, a dictionary $\mat{\dict}$ is learned from the noisy
image patches $\mat{\data}$. We use the backward selection algorithm since it
allows us to use the global dictionary as the starting point, a common
practice in this type of problem~\cite{aharon06,mairal08a}. Secondly, the
clean patches are estimated as sparse combinations of atoms from
$\mat{\dict}$. In our case, the second stage admits two variants. The first one
is a rate-distortion (RD) procedure akin to the traditional method used for
example in~\cite{aharon06}, where each clean sample $\opt\vec{\data}_j$ is
estimated using a distortion-constrained formulation. In our case, we
minimize the codelength (or ``rate'') of describing $\vec{\data}_j$ up to a
prescribed distortion proportional to the noise level, $ \opt \vec{\data}_j =
\mat{\dict}\opt\vec{\coef}_j, \opt\vec{\coef}_j = \arg\min_\vec{\aux} L(\vec{\aux}) \ensuremath{\quad\mathrm{s.t.}\quad}
\norm{\vec{\data}_j-\mat{\dict}\vec{\aux}}_2 \leq C\sigma^2_e. $ Here we use
$C=1.0$. The second variant, coined ``post-thresholding'' (PT), is more
consistent with the learning phase, and is truly parameter-free, since the
estimation derives from the same codelength minimization procedure used for
learning the dictionary $\mat{\dict}$. In this case we obtain an initial estimate
$ \tilde \vec{\data}_j = \mat{\dict}\tilde\vec{\coef}_j,\; \tilde\vec{\coef}_j = \arg\min_\vec{\aux}
L(\vec{\aux}) + L(\vec{\data}_j|\vec{\aux}). $ However, according to the model developed
in Section~\ref{sec:encoding:error}, the encoding residual
$\tilde\vec{\err}=\vec{\data}_j-\tilde\vec{\data}_j$ may contain a significant portion of
clean data due to modeling errors. We can then think of $\tilde\vec{\err}$ as
clean data corrupted by noise of variance $\sigma_e^2$. To extract the
clean portion, we solve another codelength-minimization sub-problem, this
time with a Gaussian prior for the error, and a Laplacian prior for the
clean part, $ \bar \vec{\err}_j = \arg\min_{\vec{\aux}}
\frac{1}{\sigma_e^2}\norm{\tilde\vec{\err}_j-\vec{\aux}}_2^2 + \frac{1}{\hat
\theta_e } \norm{\vec{\aux}}_1, $ where $\hat\theta_e = \sqrt{0.5 \max
\{0,\fun{var}(\tilde\vec{\err}_j)-\sigma_e^2 \} }$, following
Section~\ref{sec:encoding:collaborative}. We then compute the final
estimate as $\hat\vec{\data}_j = \tilde\vec{\data}_j + \bar\vec{\err}_j$. In either
variant, the model used for $L(\vec{\coef})$ includes the Markovian dependency
between the occurrence of each atom in a patch and its previously-encoded
neighbors, as described in Section~\ref{sec:encoding:collaborative}.
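With the squared $\ell_2$ fitting term above, the sub-problem for
$\bar\vec{\err}_j$ is solved in closed form by entry-wise soft-thresholding,
with the threshold following directly from the Gaussian--Laplacian
codelengths; a minimal illustrative sketch:
\begin{verbatim}
import numpy as np

def pt_refine(e_tilde, sigma2_e):
    # Extract the clean portion of the coding residual e_tilde,
    # treated as clean data corrupted by Gaussian noise of variance
    # sigma2_e (sketch of the PT second stage).
    theta = np.sqrt(0.5 * max(0.0, e_tilde.var() - sigma2_e))
    if theta == 0.0:
        return np.zeros_like(e_tilde)     # residual is pure noise
    tau = sigma2_e / (2.0 * theta)        # threshold of the MAP problem
    return np.sign(e_tilde) * np.maximum(np.abs(e_tilde) - tau, 0.0)
\end{verbatim}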
Denoising performance is summarized in Figure~\ref{fig:denoising}, which
also shows a detail of the result for $\sigma_e=10$ on the ``Boats''
image. In all cases, there is a $1$ to $5$ dB
improvement over the best MDL-based results in \cite{roos09}, thus showing
the relevance of overcoming the limitations in previous MDL applications to
sparse coding. Both the RD and PT methods yield results which are comparable
to those of \cite{aharon06}, which depend significantly on several carefully tuned
parameters.\footnote{To the best of our knowledge, these results, as well as
those in \cite{aharon06}, are among the best that can be obtained for
gray-scale images without using multi-scale and/or spatial aggregation of
patches as in \cite{dabov07b,mairal09iccv}.} While the RD variant performs
better than PT in terms of PSNR, PT is faster and tends to produce fewer
artifacts than RD, thus resulting in more visually pleasant images. This
effect, clearly visible in Figure~\ref{fig:denoising}, occurs in
all other cases as well. Including the Markov dependency in
$L(\vec{\coef})$ produced an average improvement of up to $0.2$dB.
\begin{figure*}[p]
\begin{center}
\begin{minipage}{0.293\textwidth}
\resizebox{\textwidth}{!}{%
\begin{tabular}[b]{|l|cccc|}\hline
$\sigma_e=10$ & PT & RD & \cite{roos09} & \cite{aharon06} \\\hline
lena & 34.9 & 35.2 & 32.4 & 35.5 \\
barbara & 33.0 & 33.8 & 29.4 & 34.4 \\
boat & 33.1 & 33.2 & 30.5 & 33.6 \\
peppers & 34.1 & 34.4 & 32.2 & 34.3 \\\hline
$\sigma_e=20$ & PT & RD & \cite{roos09} & \cite{aharon06} \\\hline
lena & 32.0 & 32.2 & 29.4 & 32.4 \\
barbara & 29.7 & 30.6 & 25.7 & 30.8 \\
boat & 29.5 & 30.3 & 27.5 & 30.3 \\
peppers & 31.7 & 31.6 & 29.4 & 30.8 \\ \hline
\end{tabular}%
\end{minipage} %
\begin{minipage}{0.697\textwidth}
\includegraphics[width=0.245\textwidth]{boat.png}\hspace{1pt}%
\includegraphics[width=0.245\textwidth]{boat_g10.png}\hspace{1pt}%
\includegraphics[width=0.245\textwidth]{boat_g10_d0_dict_l2.png}\hspace{1pt}%
\includegraphics[width=0.245\textwidth]{boat_g10_par0530_d010.png}\\[1pt]
\includegraphics[width=0.245\textwidth]{boat_g10_coded.png}\hspace{1pt}%
\includegraphics[width=0.245\textwidth]{boat_g10_coding_residual.png}\hspace{1pt}%
\includegraphics[width=0.245\textwidth]{boat_g10_added_back.png}\hspace{1pt}%
\includegraphics[width=0.245\textwidth]{boat_g10_thres_final.png}\hspace{1pt}
\end{minipage}
\vspace{-0ex}\caption{\label{fig:denoising}Denoising results.
Left table: denoising performance, in PSNR, of K-SVD ~\cite{aharon06}, MDL denoising~\cite{roos09}, and the
Post-Thresholding (PT) and Rate-Distortion (RD) denoising variants.
Images, top row: clean ``Boats'', noisy version, learned dictionary for this
image (final $p=248$), image recovered using RD. Images, bottom row: image
reconstructed from the initial estimation $\tilde\vec{\data}_j$ obtained in the
PT method, its residual, portion of residual that was added back, final
PT estimation.}
\end{center}
\end{figure*}
\subsection{Texture mosaic segmentation}
\label{sec:results:texture}
Here we are given $c$ images with sample textures, and a target
mosaic of textures,\footnote{Taken from
\small\url{http://www.ux.uis.no/~tranden/}.} and the task is to assign
each pixel in the mosaic to one of the textures. Again, all images are
decomposed into overlapping patches. This time a different dictionary is
learned for each texture using patches from corresponding training
images. In order to capture the texture patterns, a patch width $w=16$ was
used. Then, each patch in the mosaic is encoded using all available
dictionaries, and its center pixel is assigned to the class which produced
the shortest description length for that patch.
This seemingly natural procedure results in a success rate of $77\%$, shown
in the third panel of
Figure~\ref{fig:texture-segmentation}. The problem is that this procedure is
inconsistent with the learning formulation, because each dictionary is
adapted to minimize the \emph{average} codelength of describing each patch
in the respective texture. Therefore, good results can only be expected if
the decision is made for groups of patches simultaneously, that is, by
considering the cumulative codelength of a set of patches. We implement this
by deciding on each patch on the basis of comparing the average codelength
obtained with each dictionary for encoding that patch and all patches in a
circular neighborhood with a radius of $20$ pixels. The success rate in this
case is $95.3\%$, which is comparable to the state of the art for this type
of problems (see for example~\cite{mairal08c}, which learns sparse models
for explicitly maximizing the success rate). The Markovian model improved
our results by $1\%$.
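In code, the neighborhood-based decision rule is just a local average of
per-class codelength maps followed by an arg-min (a sketch;
\texttt{codelength\_maps} is a hypothetical $(c,H,W)$ array of per-patch
codelengths, and a square window stands in for the circular neighborhood):
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter

def classify(codelength_maps, radius=20):
    # codelength_maps[k, y, x]: codelength of the patch centered at
    # (y, x) under the dictionary of class k. Average the codelength
    # over a neighborhood, then pick the shortest description.
    size = 2 * radius + 1
    avg = np.stack([uniform_filter(L, size=size)
                    for L in codelength_maps])
    return np.argmin(avg, axis=0)         # class label per pixel
\end{verbatim}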
\begin{figure*}[p]
\begin{center}
\includegraphics[width=0.975\textwidth]{mosaic.png}
\vspace{-0ex}\caption{\label{fig:texture-segmentation} Left to right: Texture mosaic,
dictionaries learned for each class (note the automatically learned
different sizes), patch-wise codelength-based classification map --each
shade of gray corresponds to a texture class -- ($77.0\%$ success rate),
classification map obtained by averaging the codelength over a
neighborhood of patches ($95.4\%$ success rate). }
\end{center}
\end{figure*}
\subsection{Low-rank matrix approximation}
\label{sec:results:low-rank}
The low-rank matrix approximation family of problems (see~\cite{candes11acm}
for a review) can be seen as an extension to the problem of sparse coding
where sparsity is substituted by matrix rank. Concretely, the task is to
recover a matrix $\mat{\coef} \in\ensuremath{\mathbb{R}}^{\ndims{\times}n}$ from an
incomplete and/or corrupted observation $\mat{\data}$, under the assumption that
the rank of $\mat{\coef}$, $\mathrm{rank}(\mat{\coef})$, is small. As with sparse coding,
$\mathrm{rank}(\mat{\coef})$ is relaxed using the \cost{1} equivalent for matrix rank,
which is the nuclear norm, $\norm{\mat{\coef}}_* := \sum_i \sigma_i(\mat{\coef})$,
where $\sigma_i(\mat{\coef})$ is the $i$-th singular value of $\mat{\coef}$. It has
been shown in \cite{candes11acm} that, under certain assumptions on
$\mathrm{rank}(\mat{\coef})$, the following estimation function is able to recover
$\mat{\coef}$ from a noisy observation $\mat{\data}$, and with a significant fraction
of its coefficients arbitrarily corrupted,
\begin{equation}
\hat\mat{\coef} = \arg\min_{\mat{Z}} \norm{\mat{Z}}_* + \lambda \norm{\mat{\data}-\mat{Z}}_1,\quad \lambda=1/\sqrt{\max \{\ndims,n\}}.
\label{eq:rpca}
\end{equation}
A common proof of concept is to use this framework for robust background
estimation in camera surveillance video sequences~\cite{wright09nips}, and
we apply our proposed framework for the same application.
To perform our MDL-based model selection within this formulation, we solve
\refeq{eq:rpca} for increasing values of $\lambda$, obtaining a low-rank
approximation to $\mat{\coef}$,
$(\mat{\coef}(\lambda),\mat{\err}(\lambda)=\mat{\data}-\mat{\coef}(\lambda))$, which we encode
using the universal models described in
Section~\ref{sec:encoding-scheme}. We modified the algorithm described in
\cite{lin09arxiv} to allow for warm restarts, using the solution for the
previous $\lambda$ as a starting point for the next $\lambda$ for faster convergence.
Consistently with the $\cost{1}$ fitting term of \refeq{eq:rpca}, we encode
the non-zero values of $\mat{\err}(\lambda)$ as a Laplacian sequence of unknown
parameter. To exploit the potential sparsity in $\mat{\err}(\lambda)$, the
locations of the non-zero values are encoded, as
in Section~\ref{sec:encoding:coefficients}, using an enumerative two-part code for
Bernoulli sequences of unknown parameter. To exploit low-rank in the
encoding, the matrix $\mat{\coef}(\lambda)$ is encoded via its reduced SVD
decomposition $\mat{\coef}(\lambda) = \mat{U}(\lambda) \Sigma(\lambda)
\mat{V}(\lambda)^\intercal$. For $\mathrm{rank}(\mat{\coef}(\lambda))=r$, we have that
$\mat{U}(\lambda) \in \ensuremath{\mathbb{R}}^{\ndims{\times}r}$ are the left-eigenvectors, $\Sigma
\in \ensuremath{\mathbb{R}}^{r{\times}r}$ is the diagonal matrix whose diagonal are the
non-zero singular values of $\mat{\coef}(\lambda)$, and $\mat{V}(\lambda) \in
\ensuremath{\mathbb{R}}^{n{\times}r}$ are the right-eigenvectors of
$\mat{\coef}(\lambda)$. Each column of $\mat{U}$ is encoded (in this video
example) as a smooth image via a causal bilinear predictor identical to the
one used for predictive coding of $\mat{\dict}$ in Section~\ref{sec:encoding:dictionary},
using a Laplacian model for the prediction residuals. Each column of
$\mat{V}$ is encoded as a smooth one-dimensional sequence, using a zero
order predictor (the predicted value for the next coefficient is the
previous coefficient value), with a Laplacian prior on the prediction
residuals. Finally, the values of $\mat{\Sigma}$, which can be arbitrary,
are quantized and encoded using the universal code for
integers~\cite{rissanen83}.
The encoding method is very simple, with all unknown parameters encoded
using a two-part code, and codelengths for the discretized Laplacian
pre-computed in look-up tables. Quantization for this case is as follows:
the codelength associated with the $r$ non-zero singular values is
negligible, and we minimize unwanted distortion by encoding them with high
precision ($10^{-16}$). As for the columns of $\mat{U}$ and $\mat{V}$, they all
have unit norm, so that the average magnitude of their elements is close to
$\sqrt{1/\ndims}$ and $\sqrt{1/n}$, respectively. Based on this, our
algorithm encodes the data with $\delta_u=Q/\sqrt{\ndims}$ as the
precision for encoding $\mat{U}$, and $\delta_v=Q/\sqrt{n}$ for
$\mat{V}$, for several values of $Q$ in $(0,1)$, keeping the one producing
the smallest codelength.
The MDL-based estimation algorithm then chooses the model for which
the codelength $L(\mat{\data};\lambda) = L(\mat{U}(\lambda)) + L(\Sigma(\lambda))
+ L(\mat{V}(\lambda)) + L(\mat{\err}(\lambda))$ is minimized.
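In pseudo-code, the whole selection procedure is then simply (a sketch;
\texttt{rpca} stands for the warm-restarted solver of \refeq{eq:rpca} and
\texttt{codelength} for $L(\mat{\data};\lambda)$, both placeholder names):
\begin{verbatim}
import numpy as np

def select_model(Y, lambdas, rpca, codelength):
    # Sweep lambda with warm restarts, encode each low-rank/sparse
    # pair, and keep the shortest total description (sketch).
    best_L, best_X, X0 = np.inf, None, None
    for lam in lambdas:
        X = rpca(Y, lam, X0=X0)      # warm restart from previous lam
        L = codelength(X, Y - X)     # L(U)+L(Sigma)+L(V)+L(E)
        if L < best_L:
            best_L, best_X = L, X
        X0 = X
    return best_X, best_L
\end{verbatim}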
As in \cite{wright09nips}, here we show results for two sequences taken from
\cite{li04tip}: ``Lobby'' (Figure~\ref{fig:low-rank}(a)), and
``ShoppingMall'' (Figure~\ref{fig:low-rank}(b)). Full videos can be viewed at
{\small\url{http://www.tc.umn.edu/~nacho/lowrank/}}.
\begin{figure*}[p]
\begin{center}
\subfloat[\label{fig:low-rank-a}Results for ``Lobby'' sequence, featuring a
room with lights that are switched off and on. The rank of the
approximation for this case is $\mathrm{rank}=10$. The moment where the lights
are turned off is clearly seen here as the ``square pulse'' in the middle
of the first two right-eigenvectors (bottom-right). Also note how
$\vec{u}_2$ (top-right) compensates for changes in
shadows.]{\includegraphics[width=0.485\textwidth]{lobby.png}}\hspace{2ex}%
\subfloat[\label{fig:low-rank-b}Results for ``ShoppingMall'', a fixed camera
looking at a crowded hall. In this case, the rank of the approximation
decomposition is $\mathrm{rank}=7$. Here, the first left-eigenvector models the
background, whereas the rest tend to capture people that stood still for a
while. Here we see the ``phantom'' of two such persons in the second
left-eigenvector
(top-right).]{\includegraphics[width=0.485\textwidth]{shopping.png}}%
\vspace{-1ex}\caption{\label{fig:low-rank}Low-rank approximation results. Both figures
show the first two left-eigenvectors as 2D images at the top, two sample
frames from the approximation error sequences in the middle, which should
contain the people that were removed from the videos, and the curve
$L(\lambda)$ and the right-eigenvectors, scaled by $\Sigma$ (representing
the ``activity'' of each left-eigenvector along time), at the bottom. }
\end{center}
\end{figure*}
In both cases, the recovered backgrounds are very
accurate. In particular, for the Lobby sequence, the selected model captures
just the eigenvectors needed to recover the background along with its
lighting changes, including corrections for local shadows, leaving out only
the people passing by.
\section{Concluding remarks}
\label{sec:conclusion}
We have presented an MDL-based sparse modeling framework, which
automatically adapts to the inherent complexity of the data at hand using
codelength as a metric.
The framework features a sparse coding algorithm and
automatic tuning of the sparsity level on a per-sample basis, including a
sequential collaborative variant which adapts the model parameters as it
processes new samples, and two dictionary learning variants which learn the
size of the dictionaries from the data. In all cases, the
information-theoretic formulation led to robust coding and learning
formulations, including novel robust metrics for the fitting term (LG and
MOEG), and robust $\cost{1}$-based dictionary regularization term. This
formulation also allowed us to easily incorporate more prior information
into the coding/learning process, such as Markovian spatial dependencies, by
means of simple modifications to the probability models used.
As a result, the framework can be applied out-of-the-box to very different
applications, from image denoising to low-rank matrix approximation,
obtaining competitive results in all the cases presented, with minimal
interaction from the user.
\balance
\section{Introduction}
Shapley's Constellation~III in the Large Magellanic Cloud (LMC) is one
of the most enigmatic structures in the local universe: a coherent
semicircular arc spanning several hundred parsecs, composed of
thousands of bright young stars and tens of star clusters. Its
regularity across such a large scale defies the fractal-like
distributions that young stellar populations typically follow; in
fact, Constellation~III may be unique in this regard. In addition,
Constellation~III is embedded inside the supergiant shell LMC~4, a
circular hole in the LMC's HI disk that spans more than a kiloparsec
\citep{kim99}, and whose rim is dotted with HII regions \citep{mea80}.
The singular nature of Constellation~III invites speculation about its
formation mechanism, which must have been similarly unique given the
absence of anything resembling this structure in other nearby galaxies.
\cite{wm66} popularized the ``Constellation~III'' designation, and
speculated that its stars were formed from material swept up in the
shock of a ``super-supernova''. \cite{ee98} suggested that the
combined winds from a relatively small number of massive stars could
have swept LMC4 clean of gas, and subsequently triggered the formation
of Constellation~III. However, as they point out in \cite{ee99}, the
unique nature of Constellation~III belies such a mundane formation
mechanism. After all, the LMC is home to thousands of star clusters
which must have hosted similarly strong massive-star winds, yet it
contains no other structures like Constellation~III. Instead,
\cite{ee99} favor an updated ``super-supernova'' idea, suggesting the
LMC4 cavity was blown by a gamma-ray burst formed by the coalescence
of an X-ray binary (perhaps ejected from the nearby rich star cluster
NGC 1978), and the swept-up material subsequently formed
Constellation~III. \cite{dop85} presented the idea that
Constellation~III is the result of a stochastic self-propagating star
formation (SSPSF) process, directed in an outward radial propagation.
However, recent studies of the distribution of ages in
Constellation~III have not confirmed the radial age gradient reported
by \citeauthor{dop85} \citep{ols97, bra97, dh98}.
In this paper, we present a map of the past history of star formation
in the vicinity of Constellation~III, in order to constrain the
various formation scenarios for this unique structure. In
Section~\ref{sec:data}, we briefly review the photometric data and our
StarFISH analysis software. In Section~\ref{sec:map}, we present a
detailed map of the star formation history (SFH) throughout the
Constellation~III region. We discuss the implications of our SFH map
in Section~\ref{sec:discuss}, and summarize the results in
Section~\ref{sec:summary}.
\clearpage
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.4, angle=0]{figures/constellationIII.eps}
\caption{A stellar flux density map of a $2.5^\circ\times2.5^\circ$
region in the LMC, including Constellation~III. The map was derived
from our MCPS photometry: each pixel's value is proportional to the
total stellar flux in $B$, $V$ and $I$ (for blue, green and red,
respectively). Major structures and clusters are labeled, including
Constellation~III itself (dashed outline), and the approximate
position of the LMC~4 supergiant shell (large dotted circle). Note
that Constellation~III is actually one of a few large stellar arcs
in this region; our analysis will include all of these
arcs. }\label{fig:constellationIII}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.4, angle=0]{figures/s3grid.eps}
\caption{The same flux-density maps as in
Figure~\ref{fig:constellationIII}, with our conformal grid
overplotted. We determined the best-fit SFH solution independently
for each grid cell in order to generate a map of the SFH in this
region.}\label{fig:grid}
\end{center}
\end{figure}
\section{Overview of the Magellanic Clouds Photometric Survey and StarFISH}\label{sec:data}
The Magellanic Clouds Photometric Survey
\citep[MCPS,\ ][]{zar02,zar04} is a drift-scan survey of both the LMC
and the Small Magellanic Cloud (SMC), undertaken at the Las Campanas
Observatory 1-meter Swope telescope between 1995 and 2000. The MCPS
provided CCD imaging to $V=21$~mag in $U$, $B$, $V$, and $I$ filters,
covering $8.5^\circ\times7.5^\circ$ in the LMC, and
$4^\circ\times4.6^\circ$ in the SMC. Our catalogs contain astrometry
and photometry for 24~million LMC stars and more than 6~million SMC
stars.
In Figure~\ref{fig:constellationIII}, we show a stellar flux density
map derived from our MCPS photometry, for a $2.5^\circ\times2.5^\circ$
region including Constellation~III. The Figure shows that the arc
traditionally known as Constellation~III is actually one of at least
three large stellar arcs in this region; all of these arcs lie in the
interior of the LMC~4 HI supergiant shell.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.4, angle=0]{figures/shapley3.cmds.eps}
\caption{The six panels on the left show $(B-V)$ vs. $V$ CMDs for six
of our Constellation~III regions (as labeled at the top of each
panel; see Figure~\ref{fig:grid}). The termination of the main
sequence at $V$=12--13~mag indicates that the youngest stars in
these regions are 10--15~Myr old (the red isochrones overplotted in
each panel). In addition, the three populations in the top row show
an isolated clump of red supergiants near $V=13.5$~mag, indicative
of a burst of star formation activity around 30~Myr ago (the green
isochrones overplotted in the top panels). This is further
illustrated by the synthetic stellar populations in the rightmost
column. These two populations differ only in their SFH between 10
and 100 Myr: in the top panel, the stars in this age range all have
an age of 30~Myr, whereas in the bottom panel, the ages are
uniformly distributed between 10 and 100~Myr.}\label{fig:cmds}
\end{center}
\end{figure}
The stellar populations of a galaxy represent a fossil record of its
past star-formation activity. By statistically comparing multicolor
photometry of resolved stellar populations to synthetic populations
based on theoretical isochrones \citep[e.g., ][]{gir02}, we can
reconstruct the SFH of the target galaxy. We have developed a
software package \citep[StarFISH; ][]{hz01}, which performs robust SFH
analysis of resolved stellar photometry. StarFISH works by
constructing a library of synthetic color-magnitude diagrams (CMDs),
each of which is built from isochrones spanning a small range in age
and metallicity. The synthetic CMDs are designed to replicate the
characteristics of the observed data in every way (distance,
extinction, IMF, binarity, photometric errors, and incompleteness).
Each synthetic CMD therefore represents the contribution to the CMD of
stars of a particular age and metallicity. Through linear combination
of synthetic CMDs spanning all relevant combinations of age and
metallicity, we can generate composite model CMDs that represent any
arbitrary SFH. These composite model CMDs can then be quantitatively
compared to the real, observed CMD, and by minimizing the differences
between them, the best-fitting SFH solution can be obtained.
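Schematically, once the observed CMD is binned into a Hess diagram, the fit
reduces to a non-negative linear inversion. The following Python sketch
illustrates the idea with a simple least-squares surrogate; StarFISH itself
minimizes a Poisson-based fit statistic rather than this surrogate:
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def fit_sfh(observed_hess, synthetic_hess):
    # observed_hess: flattened observed CMD histogram, shape (n_bins,)
    # synthetic_hess: (n_bins, n_pops); column k is the Hess diagram
    #   of a unit-rate population in one (age, metallicity) bin.
    # Returns the non-negative amplitudes, i.e., the SFH.
    sfr, residual = nnls(synthetic_hess, observed_hess)
    return sfr
\end{verbatim}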
\section{Mapping the SFH of the Constellation~III Region}\label{sec:map}
In order to distinguish between the various competing theories for the
formation of Constellation~III, we constructed a spatially-resolved
map of the SFH of the entire $2.5^\circ\times2.5^\circ$ region. In
order to maximize our ability to detect any radial or azimuthal
population gradients present in these stellar arcs, we constructed a
conformal grid that follows their curvature (see
Figure~\ref{fig:grid}), and determined an independent SFH solution for
the stars in each grid cell. The synthetic CMDs employed for each
grid cell used extinction distributions and empirical photometric
error models derived directly from the grid cell's stellar population.
Previous analyses of the SFH of Constellation~III have largely sought
to determine a single characteristic age for different locations in
the structure, without considering the full distribution of stellar
ages that makes up the true SFH. In particular, the ages assigned
have been the {\em youngest} age present. As an illustration that a
more complete SFH is warranted for these regions, we examine the
$(B-V)$ CMDs of six of our Constellation~III regions in
Figure~\ref{fig:cmds}. Each of these regions shows a prominent main
sequence, and by simple isochrone fitting, one can determine the age
of the youngest stellar population in each region (as shown by the red
curves in each panel). However, these CMDs show clear evidence for a
more complex SFH. The red giant branch and red clump are obvious
tracers of old stellar populations, but there are also supergiants
present which trace star formation that occurred several tens of
millions of years ago. In particular, the CMDs in the top row of
Figure~\ref{fig:cmds} each show an isolated knot of red supergiants
with $V=13.5$~mag. An isolated knot of supergiants at a common
luminosity is strong evidence for an isolated burst of star formation
activity in the recent history of the region, because the luminosity
of a supergiant is directly and unambiguously correlated with its age
\citep{dp97}. The isochrone overplotted in green in
Figure~\ref{fig:cmds} indicates that the supergiants in these knots
are roughly 30~Myr old. Thus, by simple visual inspection of these
CMDs, we can already conclude that {\em some} of the Constellation~III
regions have experienced multiple, isolated bursts of star formation.
Variations like these will be recovered in our StarFISH analysis,
giving us a much more complete picture of the SFH of
Constellation~III.
The SFH map of Constellation~III resulting from our StarFISH analysis
is shown in Figure~\ref{fig:sfhmap}. In this Figure, each panel
represents a map of the past star formation rate for a different time
step, from 12~Gyr ago to 5~Myr ago. It is immediately apparent from
the SFH map that star formation occurred in Constellation~III over an
extended time interval, from about 30~Myr ago until 8~Myr ago. Star
formation was active in different parts of Constellation~III at
different times; however, there do not appear to be systematic,
large-scale age gradients revealed in these maps. We also note that
Constellation~III does not distinguish itself from the background
stellar population until the onset of recent star-formation 30~Myr
ago, so it is likely that 30~Myr ago marks the time of its initial
formation. This can also be seen in Figure~\ref{fig:totsfh}, the
summed total SFH for the entire Constellation~III region.
When examining Figure~\ref{fig:sfhmap}, it is important to understand
that SFH maps like this do not allow us to display information on the
uncertainties associated with the best-fit star formation rates (SFRs)
in each time step. However, StarFISH {\it does} estimate these
uncertainties, and they do include covariance between adjacent age
bins. So, for example, in the map there are many regions that have a
large SFR in the 10~Myr panel, but a very low SFR in the 8~Myr panel,
and vice versa. The uncertainties computed by StarFISH for these bins
indicate that these variations are not significant; in other words,
it is clear from the fit that there was a large amount of star
formation activity 8--10~Myr ago in many of these regions, but in most
cases, the data do not allow us to distinguish 8~Myr old stars from
10~Myr old stars, and this non-uniqueness is reflected in the computed
uncertainties.
\begin{figure*}[ht]
\begin{center}
\includegraphics[scale=0.8, angle=0]{figures/s3map_young.eps}
\caption{The SFH map for the Constellation~III region. Each panel
displays the star formation rates for a single time step in the
LMC's history, from 12~Gyr ago in the upper left to 5~Myr ago in the
lower right. Within a panel, the greyscale is proportional to the
relative star formation rate in each grid cell (with darker color
corresponding to a larger star formation rate). }\label{fig:sfhmap}
\end{center}
\end{figure*}
\section{Discussion and Implications}\label{sec:discuss}
Most previous analyses of Constellation~III's SFH have concluded that
the stellar arc is 10--15~Myr old. We have shown that the {\em
youngest} stars in these arc structures are 10--15~Myr old, but that
they also contain abundant populations as old as 30~Myr. This extended
epoch of star formation is difficult to reconcile with currently-proposed
ideas about the formation mechanism of Constellation~III. In the
``super-supernova'' or GRB shockwave scenario, star formation would be
triggered throughout the arc structure on a short timescale, resulting
in a small age spread among the stars in Constellation~III. The SSPSF
scenario predicts a more protracted epoch of star formation, but as it
is usually discussed in the literature \citep{dop85, ols97}, the
propagation wave is directed radially outward from the center of
Constellation~III. Our SFH map confirms that there are no large-scale
age gradients in Constellation~III that would be required by this model.
We propose a new scenario in which the pre-stellar material was swept
into large arcs (perhaps by dramatic forces such as a GRB-like
explosion, or the combined winds of massive stars), but that star
formation was not immediately triggered throughout the structure by
these forces. Rather, star formation proceeded stochastically
throughout the giant prestellar cloud complex, according to the local
physical conditions of the interstellar medium. Self-propagation may
well be a part of this process. We do not reject SSPSF {\em per se} in
the formation of Constellation~III, but the large-scale
radially-directed form in which it is usually discussed for this region.
It may seem unlikely that it would be possible to sweep material into
large, coherent structures without triggering massive, rapid star
formation in the material. However, the LMC currently contains an
example of just such a large, coherent prestellar cloud complex: there
is a ridge of molecular gas extending more than 1.5~kpc southward from
30~Doradus, with a typical width of 100~pc \citep{miz01}. This ridge
contains abundant molecular and atomic gas, and yet its specific star
formation rate is currently quite low. While the physical processes
that led to the formation of this giant molecular ridge may be quite
different from the forces that sculpted Constellation~III, its
existence is evidence that it is at least possible to gather
prestellar material into a large coherent structure, without
simultaneously triggering rapid star formation throughout it.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.3, angle=-90]{figures/totsfh.eps}
\caption{The summed total SFH of the entire Constellation~III region,
shown as the star formation rate as a function of time. The time
axis is displayed linearly, but the scale changes in the three
panels so that the very narrow time intervals at the young end can
be displayed. The left panel covers about 100~Myr, the middle panel
covers about 1~Gyr, and the right panel covers more than 10~Gyr.
The black histogram represents the SFH of the stellar arcs in the
Constellation~III region, while the grey histogram represents the
SFH of the background population in this region.}\label{fig:totsfh}
\end{center}
\end{figure}
\section{Summary}\label{sec:summary}
We present a reconstruction of the spatially-resolved SFH of the
enigmatic Constellation~III region in the northern LMC disk. We find
that stars in the giant stellar arcs in this region formed over an
extended period, from 30 to 10~Myr ago. While there are significant
spatial variations in the SFH, there do not appear to be the large-scale
age gradients that would be expected in an SSPSF formation scenario.
Since our detailed SFH reconstruction of Constellation~III fits
neither of the widely-discussed formation scenarios for this unique
structure, we propose a new scenario in which the prestellar material
was swept up into large arcs, but star formation was not immediately
triggered throughout the cloud, or at least not violently so. The
molecular ridge south of 30~Doradus provides evidence that it is
possible to organize kpc-scale coherent structures in prestellar
material without immediately triggering rapid star formation in the
gas.
\section*{Acknowledgments}
Much of this work was performed while JH was supported by NASA
through Hubble Fellowship grant HF-01160.01-A awarded by the Space
Telescope Science Institute, which is operated by the Association of
Universities for Research in Astronomy, Inc., under NASA contract NAS
5-26555. DZ acknowledges financial support from National Science
Foundation grant AST-0307482 and a fellowship from the David and
Lucille Packard Foundation.
\bibliographystyle{apj}
\section{Introduction}
Complexity of quantum evolution is of wide theoretical and practical interest. It can be captured in different ways; one common idea is to quantify what is colloquially called scrambling of quantum information~\cite{Hayden07,Susskind08}. Details of what scrambling means depend on the particular situation, but broadly speaking one can quantify it in two ways: either by properties of the evolved state, for instance its entanglement under random circuit evolution~\cite{emerson03}, or by the complexity of time-evolved operators as measured for instance by the out-of-time-ordered (OTOC) correlations~\cite{lashkari13,shenker14,maldacena16}. In the present paper we shall study the dynamics of OTOC $O^\beta(i,j,t)$, equal to the Hilbert-Schmidt norm of the commutator, $O^\beta(i,j,t)=\frac{1}{2}\langle AA^\dagger\rangle=\frac{1}{2^{n+1}}\tr{(AA^\dagger)}$, where $A=[\sigma_i^\alpha(t), \sigma_j^\beta]$ is the commutator between two local traceless operators, one of them being evolved in time. It therefore measures how fast correlations spread from the spatial position $i$ to $j$, and, more importantly for our discussion, also how fast $\sigma_i^\alpha(t)$ becomes ``random''.
Due to their relative simplicity and relevance for quantum information, OTOC have been studied in very many different contexts. Limiting ourselves to homogeneous systems, these include field theory~\cite{roberts15,swingle17}, Luttinger liquids~\cite{moessner17}, and (chaotic) many-body systems~\cite{knap17,ueda17,xu19,lin18,nakamura19,smith19}. In studies of quantum many-body systems one usually has to resort to numerics, and that is why any exact results are greatly appreciated. The simplification that allows for analytical results often comes from symmetries, either exact ones (e.g., integrable systems) or, for instance, a symmetry effectively increased by some averaging procedure. One such ``averaging'' case is that of random circuits, where it has been shown that the average dynamics can be described by a Markov chain~\cite{oliveira}, leading to exact entanglement generation speeds for specific circuits~\cite{PRA08}. Today many other theoretical approaches to random circuits are known. A prominent example is the generic hydrodynamic description of operator spreading and OTOC dynamics~\cite{Frank18,adam18}, explicitly verified by exact results for random U(4) circuits. For a slightly different Brownian Hamiltonian evolution see Ref.~\cite{zhou19}. Another powerful example of exactly solvable dynamics are the so-called dual-unitary circuits~\cite{DU-LC}, among them also random dual-unitary circuits~\cite{Bruno20}. One of the distinguished features of dual-unitary circuits is that 2-point spatio-temporal correlations are nonzero only on the light-cone boundary~\cite{DU-LC} and that the results can be expressed in terms of finite transfer matrices. Similar is the case in integrable circuits~\cite{lamacraft21}, in which the gates satisfy the Yang-Baxter equation. For dual-unitary circuits OTOC decay exponentially with time close to the light-cone boundary~\cite{maximum_velocity}. Some simplification is possible also for certain small perturbations to dual-unitarity~\cite{kos21}.
Many studies try to find some indication of chaoticity. Remembering that in classical systems the notion of chaos is defined as a property of the long-time limit, one might ask if such long-time complexity is somehow reflected also in the OTOC dynamics. The answer is not clear; what one can say, however, is that for lattice systems with finite local Hilbert space dimension it is not clear how to distinguish chaoticity from integrability via OTOC. A possible approach is to get some measure of instability, like ``quantum'' Lyapunov exponents, from OTOC dynamics. However, this is bound to fail for several reasons. One is that one might get an exponential behavior that is unrelated to chaos, for instance simply due to unstable fixed points~\cite{cao20,hashimoto20,santos20}. Hydrodynamic behavior of the operator front might also look the same in chaotic and integrable systems~\cite{sarang18} (for free systems see Ref.~\cite{riddell21}). On top of that, in lattice models with finite local dimension, like chains of qubits, there is no obvious small parameter, and so any possible exponential Lyapunov-like growth of OTOC can hold only up to finite (short) times~\cite{saso17,khemani18}.
We are going to study OTOC dynamics in random quantum circuits, mostly in one-dimensional geometry and for qubits. In random circuits there is no dichotomy between integrability and chaos -- random circuits can be thought of as models of chaotic systems -- and so we are not aiming at coming up with some chaoticity criterion. What we shall focus on is the long-time dynamics of OTOC, specifically on how fast OTOC relax to their asymptotic value reached at long times, which corresponds to a completely scrambled evolution. As we shall see, this will reveal interesting mathematical and physical properties. Deriving a Markovian description of the average OTOC dynamics in random circuits, we shall show that the relaxation rate typically exhibits a discontinuity at a specific time linear in the number of qubits. What is more, the relaxation time in this first phase, which is dominant in the thermodynamic limit, is not given by the gap of the Markovian matrix. Instead, it is given by a so-called phantom eigenvalue -- a fake ``eigenvalue'' that is not in the spectrum but which nevertheless determines the relaxation. The same phenomenon has been recently observed also in purity dynamics~\cite{prejsnji_clanek}. Perhaps also related are the observation that in nonequilibrium dynamics described by the Lindblad equation the gap does not necessarily give the correct relaxation time~\cite{Mori20,ueda21}, and the non-Hermiticity of the transfer matrix describing integrable circuits~\cite{lamacraft21}.
\section{Random quantum circuits}
In this paper we deal with random quantum circuits defined on a system of $n$ qubits. The unitary propagator $U$ is a product of local elementary gates $U_{i,j}$ acting on qubit pairs $(i,j)$, that is $U = \prod_{i,j} U_{i,j}$. Every elementary step is, in turn, defined as a product of two independent one-site random unitaries $V_i$ and $V_j$ and a two-site unitary gate $W_{i,j}$; namely $U_{i,j} = W_{i,j} V_{i} V_{j}$. Two examples of random quantum circuits, where the product of elementary gates is ordered in a brick-wall (BW) pattern and in a staircase (S) pattern, can be seen in Fig.~\ref{fig:BWandS}. As can be deduced from the name, the BW protocol is defined as a configuration where in each unit of time we first couple nearest-neighbor qubits $(i,i+1)$ with an odd $i$, then all pairs with even $i$. Apart from being widely studied for its simplicity, this protocol is our main focus because it turned out to be the fastest possible local scrambler of entanglement \cite{prejsnji_clanek}. Another configuration that we will encounter in this paper is the S configuration, which consists of operators $U_{i,i+1}$, where at each step we increase $i$ by $1$. In the main part we shall mostly focus on random quantum circuits acting on 1-dimensional (1D) chains of qubits with either open boundary conditions (OBC) or periodic boundary conditions (PBC); that is, qubits are distributed on a line (OBC) or on a circle (PBC).
One obtains various random circuits by different choices of a fixed two-site gate $W_{i,j}$ and the ordering of elementary steps. To distinguish various choices of $W_{i,j}$ we shall parametrize it in the following canonical form \cite{dekompozicija_1,dekompozicija_2,dekompozicija_recept}
\begin{align}
W_{j,k} &= V_{j}^{(1)} V_{k}^{(2)} w_{j,k} (\textbf{a})V_{j}^{(2)} V_{k}^{(3)} \nonumber \\
w_{j,k}(\textbf{a}) &= \exp \left[ {\rm i} \frac{\pi}{4} \left( a_{\rm x} \sigma^{\rm x}_j \sigma^{\rm x}_k + a_{\rm y} \sigma^{\rm y}_j \sigma^{\rm y}_k + a_{\rm z} \sigma^{\rm z}_j \sigma^{\rm z}_k \right) \right],
\label{eq:parametrizacija}
\end{align}
where $V_k^\alpha$ are one-site unitary operators, $\sigma^{\rm x,\rm y,\rm z}$ are Pauli matrices and $\textbf{a} = (a_{\rm x}, a_{\rm y} , a_{\rm z})$ are three real parameters, which can be constrained to $0 \leq a_{\rm z} \leq a_{\rm y} \leq a_{\rm x} \leq 1$. In this paper, we will be interested in the average dynamics of OTOC generated by random quantum circuits. Due to randomness on single qubits (at every elementary step we act with random unitaries $V_{i}$ and $V_{j}$) the choice of local operators $V_k^\alpha$ does not affect our averaged dynamics, so only the choice of the three real parameters $(a_{\rm x}, a_{\rm y} , a_{\rm z})$ is what matters. To conclude, without loss of generality we can take our fixed two-site unitary to be $w_{i,j}$, which is in turn parametrized by only three constrained real parameters $0 \leq a_{\rm z} \leq a_{\rm y} \leq a_{\rm x} \leq 1$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=85mm]{BWinS_PBC-clanek.pdf}
\caption{Illustration of a brick-wall (BW) protocol (a) and a staircase (S) protocol (b) on a qubit chain of size $n=8$. Blue boxes represent elementary steps $U_{i,j}$. Red dotted lines represent integer times, which are measured so that one unit corresponds to the action of one period of the random quantum circuit. One period of a BW protocol consist of local operators $U_{i,i+1}$ where we first act on qubits with odd $i$, then on qubits with even $i$. In a S protocol in one period we subsequently act with elementary steps $U_{i,i+1}$ starting from $i=1$ and increasing $i$ by one for each local operator.}
\label{fig:BWandS}
\end{center}
\end{figure}
\section{OTOC's Markov chain}
We shall study the out-of-time-order correlations (OTOC) defined as
\begin{align}
O^\beta(i,j,t) &= \frac{1}{2^{n+1}} \tr{\left| \left[ \sigma_i^\alpha(t), \sigma_j^\beta \right] \right|^2} \nonumber \\
&= 1-\frac{1}{2^n}\tr{\left( \sigma_i^\alpha(t) \sigma_j^\beta \sigma_i^\alpha(t) \sigma_j^\beta \right)},
\label{eq:OTOC_first_def}
\end{align}
with $\sigma_k^\gamma$ denoting the Pauli matrix at position $k$, and $\gamma \in \{\rm x,\rm y,\rm z\}$. The time-evolved Pauli matrix is obtained as $\sigma_i^\alpha(t)=U^\dagger \sigma_i^\alpha U$. OTOC thus measure how correlations between two initially localized operators spread in the system. Its minimal value is $0$ for operators that commute, i.e., until $\sigma_i^\alpha(t)$ begins to overlap with $\sigma_j^\beta$, whereas its maximal value is $2$, reached, e.g., for $\sigma_j^\beta=\sigma^{\rm x}_j$ and $\sigma_i^\alpha(t)=\sigma^{\rm y}_j$. If $\sigma_i^\alpha(t)$ at large times randomly spreads over all available operator space, the average OTOC will converge towards its thermal value $O_\infty \approx 1$ (see Appendix~\ref{app:O_inf}). We are going to study how OTOC converge to this long-time stationary value.
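For small $n$, Eq.~(\ref{eq:OTOC_first_def}) can be evaluated by brute
force, which is useful for checking the transfer-matrix results below; a
minimal Python sketch:
\begin{verbatim}
import numpy as np

def pauli_at(sigma, k, n):
    # Embed a single-qubit operator sigma at site k (1-based),
    # identity elsewhere, in the full 2^n-dimensional space.
    ops = [sigma if j == k else np.eye(2) for j in range(1, n + 1)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def otoc(U, A, B, n):
    # O = 1 - 2^{-n} tr(A(t) B A(t) B), with A(t) = U^dag A U.
    At = U.conj().T @ A @ U
    return 1.0 - np.trace(At @ B @ At @ B).real / 2**n
\end{verbatim}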
It has been shown that averaging over one-site random unitaries leads to a Markov chain description of the evolution of the average purity~\cite{oliveira07,metoda_redukcija}. Because OTOC are, similarly to purity, also quadratic in the time-evolved operator, their average evolution can also be written in terms of a Markov chain. This has been done for the special case of a random U(4) elementary step $U_{i,j}$ in Ref.~\cite{adam18}, whereas we derive the Markovian matrix description for a protocol consisting of an arbitrary two-qubit $W_{i,j}$ conjugated by independent single-qubit unitaries.
The derivation relies on the fact that it is possible to express OTOC as a linear combination of all possible purities of a system of $n$ qubits. Writing the operator $\sigma_i^\alpha(t)$ in the basis of Pauli strings with coefficients $a_{\boldsymbol{ \sigma }}(t)$
\begin{equation}
\sigma_i^\alpha(t) = \sum_{\boldsymbol{ \sigma }} a_{\boldsymbol{ \sigma }}(t) \boldsymbol{ \sigma },
\label{eq:salpha}
\end{equation}
where $\boldsymbol{ \sigma } = (\sigma_1,\sigma_2,\dots,\sigma_n)$, with $\sigma_i \in \{ \mathbbm{1},\sigma^{\rm x},\sigma^{\rm y},\sigma^{\rm z} \}$, we obtain
\begin{equation}
O^\beta(i,j,t) = 2 \sum_{\boldsymbol{ \sigma }; \sigma_j \in S_1 \setminus \sigma_j^\beta} a_{\boldsymbol{ \sigma }}^2(t),
\label{eq:Obeta}
\end{equation}
where for brevity we defined two sets
\begin{equation}
S_0=\{\mathbbm{1} \},\qquad S_1=\{ \sigma^x,\sigma^y,\sigma^z\}
\label{eq:S}
\end{equation}
that will be useful in specifying various summations. For instance the sum in Eq.~(\ref{eq:Obeta}) runs over all Pauli strings $\boldsymbol{ \sigma } = (\sigma_1,\sigma_2,\dots,\sigma_n)$ except for those having $\sigma_j^\beta$ or $\mathbbm{1}$ at the site $j$.
We wish to relate a vector containing all possible OTOC $O(i,j,t)$ for every position $j$ to a vector of purities through a linear transformation. To obtain purity we write the density operator in terms of Pauli strings coefficients $c_{\boldsymbol{ \sigma }}$, $\rho(t)=\frac{1}{\sqrt{2^n}}\sum_{\boldsymbol{ \sigma }}{c_{\boldsymbol{ \sigma }} \sigma}$. Purity $I_{\rm A}$, which measures pure-state entanglement between two complementary subsets of qubits denoted by $\rm A$ and $\rm B$ (consisting of ${n_{\rm A}}$ and ${n_{\rm B}}$ qubits, respectively), is then
\begin{equation}
I_{\rm A} = {\rm tr}_{\rm A} \left( {\rm tr}_{\rm B} \rho \right)^2 = 2^{{n_{\rm B}}} \sum_{\boldsymbol{ \sigma }; \forall i \in \rm B, \sigma_i=\mathbbm{1}} c_{\boldsymbol{ \sigma }}^2.
\label{eq:IA}
\end{equation}
Expression (\ref{eq:IA}) is invariant with respect to an arbitrary permutation of the three Pauli matrices at any site. In other words, it is only the totally symmetric sum of $c_{\boldsymbol{ \sigma }}^2$ for all three Pauli matrices that matters for purity. For instance, for a system of two qubits with subsystem $\rm A$ being the 1st qubit, we have $I_{\rm A}=2 c^2_{(\mathbbm{1},\mathbbm{1})}+2(c^2_{(\sigma^{\rm x},\mathbbm{1})}+c^2_{(\sigma^{\rm y},\mathbbm{1})}+c^2_{(\sigma^{\rm z},\mathbbm{1})})$. So instead of bookkeeping all $4^2$ coefficients $c^2_{(\sigma_1,\sigma_2)}$ it is enough to keep track of only $2^2$ of them, which we can neatly pack into a two-site vector (for the definition of $S_1$ see Eq.~(\ref{eq:S}))
\begin{equation}
\Phi = \begin{pmatrix}
c_{(\mathbbm{1},\mathbbm{1})}^2 \\
\sum_{\sigma_1 \in S_1} c_{(\sigma_1,\mathbbm{1})}^2 \\
\sum_{\sigma_2 \in S_1} c_{(\mathbbm{1},~\sigma_2)}^2 \\
\sum_{\sigma_1,\sigma_2 \in S_1} c_{(\sigma_1,\sigma_2)}^2
\end{pmatrix}.
\label{eq:In2}
\end{equation}
We can obtain purities for all possible bipartitions of two qubits from components of $\Phi$; specifically, if the 1st qubit is in $\rm A$ we have $I_{\rm A}=2\Phi_{0}+2\Phi_{1}$, whereas if the 2nd qubit is in $\rm A$ one has $I_{\rm A}=2\Phi_{0}+2\Phi_2$, where we labeled the 4 components in Eq.~(\ref{eq:In2}) by $\Phi_{0,1,2,3}$. Generalizing $\Phi$ to $n$ qubits, it will have $2^n$ components that we label by bit strings $\mathbf{s}=(s_1,\ldots,s_n)$, where $s_j\in \{0,1\}$, with the components being
\begin{equation}
\Phi_{\mathbf{s}}=\sum_{\boldsymbol{ \sigma };\sigma_j \in S_{s_j}} c^2_{\boldsymbol{ \sigma }}.
\label{eq:Phis}
\end{equation}
To shorten the notation we shall occasionally also use the integer value of the bit string $\mathbf{s}$ instead of specifying the full $\mathbf{s}=(s_1,\ldots,s_n)$, as $\mathbf{s} \equiv \sum_{j=1}^n 2^{j-1} s_j$. Purity for an arbitrary bipartition is now given by a particular component of vector $\Phi_I$ obtained as
\begin{equation}
\Phi_I := A_I \Phi, \quad
A_I = \begin{pmatrix}
1 & 1 \\
2 & 0
\end{pmatrix}^{\otimes n}.
\label{eq:I_sum}
\end{equation}
Specifically, the component $[\Phi_I]_{\mathbf{s}}$ is equal to the purity for a bipartition in which the subsystem A consists of qubits for which $s_j=0$, i.e., the bit $s_j$ encodes the subsystem in which the $j$-th qubit is.
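In code, $A_I$ and the purity readout amount to a few Kronecker products (a
sketch; since all factors are identical, the resulting matrix is independent
of the bit-ordering convention, which must only be applied consistently to
$\Phi$):
\begin{verbatim}
import numpy as np
from functools import reduce

def purities(phi, n):
    # Return A_I phi: component s is the purity of the bipartition
    # in which qubit j belongs to subsystem A iff bit s_j = 0.
    A1 = np.array([[1.0, 1.0],
                   [2.0, 0.0]])
    return reduce(np.kron, [A1] * n) @ phi
\end{verbatim}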
Ref.~\cite{metoda_redukcija} showed that it is possible to write the evolution of purities $\Phi_I$ averaged over one-site Haar random unitaries as a Markov chain. Abusing notation and from now on using $\Phi_I(t)$ to denote the average purity after $t$ steps of our random circuits (\ref{fig:BWandS}), one has
\begin{equation}
\Phi_I(t) = M' \Phi_I(t-1).
\label{eq:Phit}
\end{equation}
The transfer matrix $M'$ describing one period of our circuit is a product of matrices $M'_{i,j}$, one for each elementary step $U_{i,j}$~\cite{metoda_redukcija}. For example, a transfer matrix describing $t_2$ periods of a BW PBC circuit on $n=4$ qubits would be $(M')^{t_2} = (M'_{4,1} M'_{2,3} M'_{3,4} M'_{1,2})^{t_2}$. Note that because the two-site gates $W_{i,j}$ are the same for all steps all transfer matrices are independent of time.
Looking at the expressions for OTOC in Eq.~(\ref{eq:Obeta}) and $\Phi$ in Eq.~(\ref{eq:Phis}) we can see that they look rather similar. Because we know how average purities are evolved (\ref{eq:Phit}), we also know how to evolve $\Phi(t)$, namely defining $\Phi(t)=A_I^{-1} \Phi_I(t)$ gives us $\Phi(t)=A_I^{-1}M'A_I\Phi(t-1)$. This will in turn lead us to the evolution of OTOC.
To achieve that let us rather look at the OTOC averaged over three possible $\sigma_j^\beta$,
\begin{equation}
O(i,j,t) := \frac{1}{3} \sum_{\beta \in \{ \rm x,\rm y, \rm z \}} O^\beta(i,j,t) = \frac{4}{3} \sum_{\boldsymbol{ \sigma }; \sigma_j \in S_1} a_{\boldsymbol{ \sigma }}^2(t).
\label{eq:OTOC_final_def}
\end{equation}
Note that the dependence on site index $i$ is implicitly hidden in the expansion coefficients $a_{\boldsymbol{ \sigma }}(t)$ (\ref{eq:salpha}) of the initial $\sigma_i^\alpha$. Using $\Phi$ for a vector defined as in Eq.~(\ref{eq:Phis}) but for coefficients $a_{\boldsymbol{ \sigma }}$, and formally defining a vector $\Phi_O$ by
\begin{equation}
\Phi_O := A_O \Phi, \quad
A_O = \begin{pmatrix}
1 & 1 \\
0 & \frac{4}{3}
\end{pmatrix}^{\otimes n},
\label{eq:O_sum}
\end{equation}
one can verify that $O(i,j,t)$ is equal to the $2^{j-1}$-th component of the vector $\Phi_O$. That is, $O(i,j,t)=[\Phi_O]_{\mathbf{s}}$, where $s_j=1$ and $s_{k\neq j}=0$. Therefore, $n$ components of $\Phi_O$ are equal to OTOC while the other $2^n-n$ components are some other combinations of $a^2_{\boldsymbol{ \sigma }}$ not related to OTOC. Note that the choice of $A_O$ is not unique; the two $1$'s in the top row take care of summing over both sets $S_0$ and $S_1$ for sites $k\neq j$ in Eq.~(\ref{eq:OTOC_final_def}), while the $\frac{4}{3}$ in the 2nd row accounts for the overall prefactor arising from the single bit $s_j$ being $1$, i.e., summation only over $S_1$ at site $j$. The initial value of OTOC $O(i,j,0)$ is easily computed from the initial value of $a_{\boldsymbol{ \sigma }}=\delta_{\boldsymbol{ \sigma },(\mathbbm{1},\ldots,\mathbbm{1},\sigma_i^\alpha,\mathbbm{1},\ldots,\mathbbm{1})}$ ($\delta_{\boldsymbol{ \sigma },\boldsymbol{ \sigma }'}=\Pi_k \delta_{\sigma_k,\sigma'_k}$ is a Kronecker multi-delta), which in turn gives (\ref{eq:Phis}) $[\Phi(t=0)]_{\mathbf{s}}=\delta_{\mathbf{s},(0,\ldots,0,1_i,0,\ldots,0)}=\delta_{\mathbf{s},2^{i-1}}$, which then through (\ref{eq:O_sum}) results in the initial condition
\begin{equation}
\Phi_O(t=0) = \frac{4}{3} \textbf{e}_{2^{i-1}}+\textbf{e}_{0},
\label{eq:OTOC_initial}
\end{equation}
where the vector $\textbf{e}_{k}$ has components $[\textbf{e}_{k}]_{\mathbf{s}}=\delta_{\mathbf{s},k}$. The vector $\Phi$ containing coefficients of $\sigma_i^\alpha(t)$, instead of $\rho(t)$, is propagated in exactly the same way as for average purity~\cite{foot0}, that is, because $\Phi=A_I^{-1} \Phi_I$, we have $\Phi_O = A_O A_I^{-1} \Phi_I$. The OTOC vector $\Phi_O$ averaged over single-site random unitaries is therefore propagated as
\begin{equation}
\Phi_O(t) = M \Phi_O(t-1), \quad M = A_O A_I^{-1} M' A_I A_O^{-1},
\label{eq:OTOC_MC}
\end{equation}
where $M'$ is the transfer matrix propagating purities.
Using $M'$ calculated for random circuits and an arbitrary $W_{i,j}$ parameterized by $(a_{\rm x},a_{\rm y},a_{\rm z})$, Eq.~(13) from Ref.~\cite{prejsnji_clanek}, we immediately get the transfer matrix $M$ describing the evolution of the average OTOC under one elementary step,
\begin{equation}
M_{i,j} =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & c_+ & c_- & d \\
0 & c_- & c_+ & d \\
0 & -4d/3 & -4d/3 & (2d+v)/3
\end{pmatrix}.
\label{eq:M_cd}
\end{equation}
Here $c_\pm = \frac{1}{12} \left( 9\pm 2u-v\right)$ and $d=\frac{1}{6} (v-3)$, with $u = \cos\left( \pi a_{\rm x} \right)+\cos\left( \pi a_{\rm y} \right)+\cos\left( \pi a_{\rm z} \right)$ and $v = \cos\left( \pi a_{\rm x} \right)\cos\left( \pi a_{\rm y} \right)+\cos\left( \pi a_{\rm x} \right)\cos\left( \pi a_{\rm z} \right)+\cos\left( \pi a_{\rm y} \right)\cos\left( \pi a_{\rm z} \right)$. Note that $\textbf{e}_0$ is a trivial eigenvector of $M$ corresponding to the eigenvalue $1$, i.e., it is a stationary state. However, $M$ has another nontrivial eigenvector with $\lambda=1$ containing the asymptotic stationary values of OTOC $O_\infty$.
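The matrix of Eq.~(\ref{eq:M_cd}) and the properties just mentioned are easy to check numerically; the following short NumPy sketch (all names are ours) builds it from the gate parameters.

```python
import numpy as np

def local_M(ax, ay, az):
    """Elementary transfer matrix M_{i,j} of Eq. (M_cd) for gate parameters (ax, ay, az)."""
    cx, cy, cz = np.cos(np.pi * np.array([ax, ay, az]))
    u, v = cx + cy + cz, cx*cy + cx*cz + cy*cz
    cp, cm, d = (9 + 2*u - v) / 12, (9 - 2*u - v) / 12, (v - 3) / 6
    return np.array([[1,  0,      0,      0          ],
                     [0,  cp,     cm,     d          ],
                     [0,  cm,     cp,     d          ],
                     [0, -4*d/3, -4*d/3, (2*d + v)/3]])

M = local_M(0.5, 0.3, 0.1)
print(M[:, 0])                   # column (1,0,0,0): e_0 is a fixed point
print(np.linalg.eigvals(M))      # the spectrum contains the eigenvalue 1
print(local_M(1, 1, 0.2)[1, 2])  # c_- = 1 for dual-unitary gates a_x = a_y = 1
```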
The transfer matrix description of average OTOC dynamics (\ref{eq:OTOC_MC}) that we obtained offers several advantages. First, it gives a neat analytical description to which one can apply standard tools for analyzing Markov chains, for instance connecting the spectral properties of $M$ to the asymptotic relaxation of OTOC to its infinite-time values. Second, it also greatly simplifies numerical simulations of OTOC -- instead of, e.g., explicitly simulating the dynamics of operators and averaging over different realizations, one can directly simulate the average OTOC dynamics.
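As an illustration of the last point, a complete simulation of the average OTOC in a BW PBC circuit fits in a few lines. The sketch below (ours, not from any reference) reuses the local transfer matrix function defined above; sites are stored with site 1 as the least significant bit, and since $M_{i,j}$ is symmetric under exchanging its two sites the internal two-bit ordering is immaterial.

```python
import numpy as np  # uses local_M as defined in the sketch above

def apply_gate(phi, M4, i, j, n):
    """Act with the 4x4 matrix M4 on the bits of sites i and j of phi (length 2**n)."""
    t = np.moveaxis(phi.reshape((2,) * n, order='F'), (i - 1, j - 1), (0, 1))
    t = (M4 @ t.reshape(4, -1)).reshape((2, 2) + (2,) * (n - 2))
    return np.moveaxis(t, (0, 1), (i - 1, j - 1)).reshape(-1, order='F')

def bw_period(phi, M4, n):
    """One brick-wall PBC period: odd bonds first, then even bonds incl. (n, 1)."""
    for a in range(1, n, 2):
        phi = apply_gate(phi, M4, a, a + 1, n)
    for a in range(2, n + 1, 2):
        phi = apply_gate(phi, M4, a, a % n + 1, n)
    return phi

n, i, j = 8, 1, 4
M4 = local_M(0.5, 0.3, 0.1)
phi = np.zeros(2**n)
phi[0], phi[2**(i - 1)] = 1.0, 4.0 / 3.0   # initial condition, Eq. (OTOC_initial)
for t in range(1, 11):
    phi = bw_period(phi, M4, n)
    print(t, phi[2**(j - 1)])              # average O(i=1, j=4, t)
```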
\section{Exact dynamics on the light-cone}
\label{sec:LC}
In this section we will obtain the exact dynamics of OTOC on the light-cone, from which we will be able to determine in a very simple way the set of two-site $W$ that result in maximum velocity circuits. Maximum velocity quantum circuits were defined in Ref.~\cite{maximum_velocity} as circuits where the butterfly velocity $v_{\mathrm{B}}$ \cite{vB_1,vB_2,vB_3,ballistic} equals the Lieb-Robinson velocity $v_{\mathrm{LR}}$ \cite{LiebRobinson}. The Lieb-Robinson velocity determines the causality light-cone, whose boundaries are at positions $k = i \pm v_{\mathrm{LR}} t$ ($i$ is the location of $\sigma_i^\alpha(t=0)$), so that OTOC $O(i,j,t)$ with parameters $(j,t)$ outside the light-cone vanish. In a random circuit its value is determined solely by the circuit geometry and is, for instance, $v_{\mathrm{LR}}=2$ for the BW configuration. The butterfly velocity, on the other hand, is defined as $v_{\mathrm{B}}=|j-i|/t_{\rm min}$, where $t_{\rm min}$ is the minimal time when $O(i,j,t) \sim 1$ at fixed large $|j-i|$. Contrary to $v_{\mathrm{LR}}$, the butterfly velocity depends both on the geometry and on the choice of the gate $W$. For most random quantum circuits, $v_{\mathrm{B}} \neq v_{\mathrm{LR}}$. An illustration of these two velocities can be found in Fig.~\ref{fig:v_LR}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=85mm]{v_LB-density-clanek.pdf}
\caption{Operator spreading in a random quantum circuit with the brick-wall configuration and the two-qubit gate $W$ with $a_{\rm x}=0.5$, $a_{\rm y}=0.3$ and $a_{\rm z}=0.1$ (\ref{eq:parametrizacija}). Background colors represent values of $O(i,j,t)$ -- light gray denotes $O(i,j,t)\approx 0$ and red colors $O(i,j,t)\approx 1$. The operator $\sigma_i^\alpha$ is initially located on the 14th qubit marked by a circle and horizontal dashed red lines are at integer $t$.}
\label{fig:v_LR}
\end{center}
\end{figure}
Let us focus on a random quantum circuit with a brick-wall configuration of gates acting on an infinite system, $n\rightarrow \infty$. We would like to compute the quantities $O(i,i \pm v_{\mathrm{LR}} t,t)$. Due to symmetry it is enough to consider OTOC only at the right light-cone. We also limit ourselves to odd $i$ such that the light-cone edge is at $k(t)=i + 2 t$ (for even $i$ one would have $O(i,i+2t,t)=0$ while $O(i,i+2t-1,t)\neq 0$). Remember that in our Markov chain picture $O(i,k(t),t)$ is equal to the $2^{k(t)-1}\equiv(0,\ldots,0,1_k,0,\ldots,0)$-th component of $\Phi_O(t)$. We can get such $O(i,k(t),t)$ by acting with the relevant $M_{k(t)-1,k(t)}$ (red boxes in Fig.~\ref{fig:v_LR}) on a previous half-step $\Phi_O(t-1/2)$. Taking into account that $M_{k(t)-1,k(t)}$ can change only bits at sites $k(t)$ and $k(t)-1$, the 3rd row of $M$ (\ref{eq:M_cd}) gives us
\begin{align}
\label{eq:cp}
O&(i,k(t),t) = c_- [\Phi_O(t-1/2)]_{2^{k(t)-1}}\, + \\
& + c_+ [\Phi_O(t-1/2)]_{2^{k(t)}} + d [\Phi_O(t-1/2)]_{2^{k(t)-1}+2^{k(t)}}.\nonumber
\end{align}
It is important to note that due to causality all values $[\Phi_O(t)]_{p}$ with $p \ge 2^{k(t)}$ vanish, therefore only one term in Eq.~(\ref{eq:cp}) is nonzero, resulting in $O(i,k(t),t) = c_- O(i,k(t)-1,t-1/2)$. Iterating this by half-steps to smaller times until we reach $O(i,i,0) = 4/3$, one obtains the OTOC on the right light-cone. A similar procedure works also on the left light-cone for even $i$, resulting in
\begin{equation}
O(i,i\pm 2 t,t) = \frac{4}{3} (c_-)^{2t}.
\label{eq:OTOC_lcr}
\end{equation}
Looking at the left light-cone at odd $i$, or the right light-cone and even $i$, one instead gets
\begin{equation}
O(i,i\pm(2 t-1),t)= \frac{4}{3} c_+(c_-)^{2t-1}.
\label{eq:OTOC_lcl}
\end{equation}
The additional term $c_+$ in Eq.~(\ref{eq:OTOC_lcl}) comes from the interaction at time $t=1/2$, namely $O(i,i,1/2) = c_+ O(i,i,0)$.
OTOC on the light-cone therefore decay exponentially, as $(c_-)^{2t}$, hence one gets $v_{\mathrm{B}}=v_{\mathrm{LR}}=2$ iff $c_-=1$. Solving $c_-=1$ for $a_{\rm x},a_{\rm y},a_{\rm z}$ we obtain $a_{\rm x}=a_{\rm y}=1$ and an arbitrary $a_{\rm z}$, which corresponds to dual-unitary circuits~\cite{DU-LC}, for which one can explicitly calculate all 2-point correlations, which are nonzero only on the light-cone boundary~\cite{DU-LC}. This means that taking $W$ from the dual-unitary set of gates (i.e., so-called XXZ gates) is the only choice leading to the maximum velocity random circuits (of the type studied in this paper), i.e., circuits for which OTOC do not decay along the light-cone. The same set of maximum velocity gates was also obtained in Ref.~\cite{maximum_velocity} for circuits without one-site random unitaries. Besides identifying maximum velocity gates, our simple derivation also gives us the exact dynamics of OTOC on the light-cone for arbitrary gates $W$. Note that for circuits with a dual-unitary two-qubit gate $W$ one can also get a closed expression for the OTOC decay in the vicinity of the light-cone; for non-random circuits see Ref.~\cite{maximum_velocity}, for random ones Ref.~\cite{Bruno20}.
We also observe that the same set of gates, except at $a_{\rm z}=1$, results in the maximal possible entanglement scrambling speed~\cite{prejsnji_clanek}. The gate with $a_{\rm z}=1$ is the SWAP gate and is special. The OTOC dynamics for a random circuit with the SWAP gate is trivial because the transfer matrix $M_{i,j}$ itself (\ref{eq:M_cd}) is equal to a SWAP gate, resulting in $O(i,j,t)$ that is non-zero only on the light-cone, while at the same time such a $W$ produces no entanglement.
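As a hedged numerical check of Eq.~(\ref{eq:OTOC_lcr}), one can compare the light-cone formula against the Markov-chain sketch of the previous section; in our conventions $c_-$ sits at the entry in row 1, column 2 of the local transfer matrix.

```python
import numpy as np  # reuses local_M, apply_gate and bw_period from the earlier sketch

n, i = 12, 1                 # odd i, so the right light-cone edge is at k(t) = i + 2t
M4 = local_M(0.5, 0.3, 0.1)
cm = M4[1, 2]                # c_-

phi = np.zeros(2**n)
phi[0], phi[2**(i - 1)] = 1.0, 4.0 / 3.0
for t in range(1, 3):        # stay below the wrapping time t ~ n/4
    phi = bw_period(phi, M4, n)
    j = i + 2 * t
    print(phi[2**(j - 1)], 4.0 / 3.0 * cm**(2 * t))   # the two numbers should agree
```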
\section{Convergence rate}
Under the application of a random quantum circuit the initially localized operator will spread in space, causing OTOC to increase from being zero outside of a light-cone to a nonzero value inside it. Often one is interested in this ramp-up of OTOC, as for instance measured by the butterfly velocity. We shall instead investigate the late-time convergence rate of OTOC $O(i,j,t)$. That is, we are interested in how fast $O(i,j,t)$ at some fixed $i$ and $j$ relaxes towards its final value, see Fig.~\ref{fig:OTOC_convergence} for an illustration.
\begin{figure}[t]
\begin{center}
\includegraphics[width=85mm]{OTOC_convergence-clanek.pdf}
\caption{Average OTOC for the BW random circuit with PBC and the XY gate ($\textbf{a} = (1,1,0)$), $n = 28$. Operator relaxation towards long-time thermal asymptotics will be studied by observing how a fixed-position $O(i=1, j = 6, t)$ converges towards $O_{\infty}$ with time (inset).}
\label{fig:OTOC_convergence}
\end{center}
\end{figure}
The asymptotic value $O_\infty$ is reached at long times when the time evolved operator $\sigma_i^\alpha(t)$ becomes a uniform mixture of all possible Pauli strings (identity excluded) on $n$ qubits, i.e., when the propagator $U$ resembles a random unitary. A derivation of $O_\infty$ can be found in Appendix~\ref{app:O_inf} (see also Ref.~\cite{yoshida17}) and gives us
\begin{equation}
O_\infty = 1+\frac{1}{4^n-1}.
\label{eq:OTOC_inf}
\end{equation}
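This value is easy to verify against the Markov-chain sketch introduced earlier (a quick consistency check, reusing the functions defined there; the iteration converges here since the subleading eigenvalues are gapped away from 1):

```python
import numpy as np  # reuses local_M, apply_gate and bw_period from the earlier sketch

n, i, j = 8, 1, 4
M4 = local_M(0.5, 0.3, 0.1)
phi = np.zeros(2**n)
phi[0], phi[2**(i - 1)] = 1.0, 4.0 / 3.0
for _ in range(200):                          # essentially the t -> infinity limit
    phi = bw_period(phi, M4, n)
print(phi[2**(j - 1)], 1 + 1 / (4**n - 1))    # both ~ 1.0000153 for n = 8
```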
If the eigenvalues of the transfer matrix $M$ are gapped away from $1$, which indeed is the case, we expect that OTOC exponentially relax to their asymptotic value $O_\infty$ as
\begin{equation}
|O(i,j,t)-O_\infty| \asymp \mathrm{e}^{-r t}. \label{eq:OTOC_asymp}
\end{equation}
Our main object of study is the convergence rate $r$. Considering that OTOC are propagated by $M^t$ one might think that the convergence rate will be determined by the 2nd largest eigenvalue $\lambda_2$ of $M$. However, as we shall see, this is not always the case.
In section \ref{sec:random} we shall first discuss protocols in which at each step we randomly pick a pair of qubits on which we act, that is, protocols with a random ordering of gates. For those we will see that OTOC indeed decay exponentially, with the convergence rate given by the second largest eigenvalue of the transfer matrix, $r = -\ln|\lambda_2|$.
In sections \ref{sec:pbc} and \ref{sec:obc} we shall on the other hand study protocols with a nearest-neighbor deterministic order of gates, mostly the BW or the S configuration (Fig.~\ref{fig:BWandS}), and find, similarly as for purity~\cite{prejsnji_clanek}, that the convergence rate can be either greater, equal, or smaller than $-\ln|\lambda_2|$. In Appendix~\ref{app:randomness} we also numerically demonstrate that in the thermodynamic limit one does not need explicit averaging over single-qubit unitaries, i.e., dynamics is self-averaging and therefore one will obtain the same results also for a single circuit realization.
Relying on a map from OTOC to a partition function of an Ising-like model found in Ref.~\cite{adam18}, we analytically compute OTOC for the BW PBC and BW OBC in the case where every elementary step of the circuit is independently drawn from the Haar measure on U(4). Contrary to previous literature, where the analytic expression for OTOC was obtained only in the thermodynamic limit \cite{adam18,maximum_velocity,Bruno20}, in Appendix~\ref{app:U4_results} we present new results in finite systems with either OBC or PBC.
\subsection{Random protocols}
\label{sec:random}
Random protocols studied here are defined as random quantum circuits where at every elementary step we couple two randomly chosen qubits. There are two different possibilities: a) at each step we uniformly choose one of all possible $n$ qubits, for example the $i$-th one, and act with the gate $M_{i,i+1}$, and b) at each step we randomly choose two qubits $i$ and $j$ and act with $M_{i,j}$. We call the former case the random nearest neighbor protocol (r.n.n.) and the latter scenario the all-to-all coupling.
In the r.n.n. case, the average elementary step can be written as the average over all possible choices of $i$, namely
\begin{equation}
\bar{M} = \frac{1}{L} \sum_{i=1}^L M_{i,i+1},
\end{equation}
with $L=n-1$ or $L=n$, depending on the boundary conditions. Similarly, for the all-to-all case we obtain
\begin{equation}
\bar{M} = \frac{1}{L} \sum_{i<j} M_{i,j},
\end{equation}
with $L=n(n-1)/2$. The transfer matrix propagating OTOC for one unit of time is $M=\bar{M}^{L}$.
Because each $M_{i,j}$ is just a similarity transform of the purity matrix $M'_{i,j}$, Eq.~(\ref{eq:OTOC_MC}), the spectrum of $M$ is identical to the spectrum of the purity transfer matrix $M'$. Furthermore, as was shown in Ref.~\cite{prejsnji_clanek}, the average elementary steps $M'_{i,j}$ propagating purity can be linearly transformed to a real symmetric matrix. Therefore the spectrum of $M$ is equal to the spectrum of a symmetric purity matrix. This is important because the spectrum is real with orthogonal eigenvectors. The spectral decomposition of a Hermitian $M$ takes the form $M = \sum_k \lambda_k \ket{v_k} \bra{v_k}$, and we can expand $\ket{\Phi}$ as $\ket{\Phi} = \sum_k c_k \ket{v_k}$ with $c_k = \braket{v_k}{\Phi}$ being bounded by $|c_k|^2 \le \braket{\Phi}{\Phi}$. The time iteration is $\Phi(t)=M^t \Phi$, from which it follows that $|\Phi(t)-\Phi(t\rightarrow \infty)| \asymp |\lambda_2|^t$ with $\Phi(t\rightarrow \infty) = v_1$.
For a Hermitian $M$ there are therefore no surprises; if the 2nd largest eigenvalue $\lambda_2$ is gapped away from other eigenvalues, the asymptotic decay rate will be given by $\lambda_2$ and will kick in at a system-size independent time. For the r.n.n. protocol the 2nd largest eigenvalue has been computed numerically for arbitrary gates \cite{Znidaric_2007} and analytically for a few Clifford gates \cite{PRA08}. For $\lambda_2$ in the all-to-all case and Clifford $W$ see Ref.~\cite{PRA08}; for arbitrary $W$ see Ref.~\cite{prejsnji_clanek}.
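For small systems $\bar{M}$ can also be constructed explicitly and its spectrum inspected directly. The sketch below (ours, again reusing the functions defined earlier, and choosing PBC so that $L=n$) is only meant as an illustration.

```python
import numpy as np  # reuses local_M and apply_gate from the earlier sketch

def rnn_Mbar(n, M4):
    """Average elementary step of the r.n.n. protocol with PBC (L = n bonds)."""
    dim = 2**n
    Mbar = np.zeros((dim, dim))
    for col in range(dim):
        e = np.zeros(dim)
        e[col] = 1.0
        acc = np.zeros(dim)
        for a in range(1, n + 1):
            acc += apply_gate(e, M4, a, a % n + 1, n)
        Mbar[:, col] = acc / n
    return Mbar

lam = np.sort(np.abs(np.linalg.eigvals(rnn_Mbar(8, local_M(0.5, 0.3, 0.1)))))[::-1]
# the two leading (unit) eigenvalues should correspond to the stationary states;
# the next one is |lambda_2| of Mbar, and for the full step M = Mbar^L one has
# |lambda_2(M)| = |lambda_2(Mbar)|^L
print(lam[:4])
```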
\subsection{Brick-wall protocol with PBC}
\label{sec:pbc}
For protocols with a deterministic order of gates things can and will be completely different. The decay rate will not necessarily be given by $\lambda_2$. Remember that for a deterministic order of gates the transfer matrix is just a product of the corresponding two-site $M_{i,j}$; for instance, for a 4 qubit BW protocol it is $M=M_{4,1} M_{2,3} M_{3,4} M_{1,2}$. The difference compared to random protocols is that a product of symmetric matrices need not be symmetric. As a consequence, the eigenvectors of such an $M$ are not orthogonal, the $c_k$ are not upper bounded, and, as has been seen in purity evolution \cite{prejsnji_clanek}, the relevant decay can differ from $|\lambda_2|^t$.
In the following we shall plot how the value $O(1,j,t)$ behaves for a fixed position $j$. We will always fix $i=1$, because OTOC in PBC circuits depend only on $|j-i|$. We will plot values of $|O(1,j,t)-O_\infty|$ and the time derivative
\begin{equation}
r(t) := -\frac{\mathrm{d}}{\mathrm{d}t} \ln|O(1,j,t)-O_\infty|
\end{equation}
in order to investigate the OTOC convergence rate (note that $r(t\rightarrow \infty) = r$ from Eq.~(\ref{eq:OTOC_asymp})).
Let us start with a generic two-qubit gate
\begin{equation}
W_g = W(\textbf{a}), \quad \textbf{a}=(0.5,0.3,0.1).
\label{eq:Wg}
\end{equation}
Data for $O(1,j=7,t)$ is shown in Fig.~\ref{fig:PBC_BW_nondual} and demonstrates that OTOC converge to their final value with a rate different from $-\ln|\lambda_2|$ (for $W_g$ one has $|\lambda_2|\approx0.72$ for $n=20$). The rate is (initially) smaller, as if there were an eigenvalue larger than $\lambda_2$ -- a phantom eigenvalue. Such slower decay persists up to times that are proportional to the system size. The value of the phantom eigenvalue is equal to the second largest eigenvalue of the transfer matrix for the BW OBC circuit. Remember that we are looking at a circuit with PBC, not OBC; nevertheless, it is perhaps expected that for initially localized quantities, and until the boundary conditions (PBC) influence OTOC dynamics, the convergence rate is given by $\lambda_2$ of BW OBC (see also next section). Namely, choosing the initial vector localized roughly equally far from the left and right boundary ($i\approx n/2$), the dynamics generated by the BW PBC or OBC circuit is identical up to times $t\approx n/4$. Therefore, what might be surprising is that $\lambda_2$ of $M$ for BW with PBC and with OBC are different. Looking at OTOC on a different site, $j\neq 7$, one might observe a slightly different graph from Fig.~\ref{fig:PBC_BW_nondual}; however, the behavior remains qualitatively the same: at early times that scale as $\sim n$ the dynamics is always determined by a phantom eigenvalue, which is the same for every $j$, whereas at late times the dynamics is given by the second largest eigenvalue of $M$.
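The two eigenvalues at play here can be obtained directly from the one-period transfer matrices at small $n$, e.g., by building them column by column (a sketch reusing the earlier functions; at such small $n$ the eigenvalues have not yet converged to the $n=20$ and $n=30$ values quoted above).

```python
import numpy as np  # reuses local_M and apply_gate from the earlier sketch

def period_matrix(n, M4, pbc=True):
    """One-period BW transfer matrix; pbc=False simply drops the bond (n, 1)."""
    dim = 2**n
    M = np.zeros((dim, dim))
    for col in range(dim):
        e = np.zeros(dim)
        e[col] = 1.0
        for a in range(1, n, 2):                          # odd bonds
            e = apply_gate(e, M4, a, a + 1, n)
        for a in range(2, n + 1 if pbc else n - 1, 2):    # even bonds
            e = apply_gate(e, M4, a, a % n + 1, n)
        M[:, col] = e
    return M

M4 = local_M(0.5, 0.3, 0.1)   # the generic gate W_g
for pbc in (True, False):
    lam = np.sort(np.abs(np.linalg.eigvals(period_matrix(10, M4, pbc))))[::-1]
    print('PBC' if pbc else 'OBC', lam[:3])   # after the stationary eigenvalues,
                                              # |lambda_2| differs between PBC and OBC
```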
\begin{figure}[t]
\begin{center}
\includegraphics[width=85mm]{non_dual_PBC-clanek.pdf}
\caption{Convergence rate of $O(1,j=7,t)$ for a BW PBC circuit with the gate $W_g$ (Eq.~\ref{eq:Wg}). There is a phantom eigenvalue: initially, the rate is given by $\lambda_2$ for a BW OBC circuit (red dashed line). At late times the rate is instead equal to $-\ln|\lambda_2|$ for a BW PBC circuit (green dashed line). Dashed lines are obtained from numerically calculated $\lambda_2$ for $n=30$ (red lines) and $n=20$ (green lines). The inline plot shows a transition in the exponential decay of the same data, including red and green dashed exponential functions corresponding to red and green rates in the main plot.}
\label{fig:PBC_BW_nondual}
\end{center}
\end{figure}
Of special interest are gates with canonical parameters $\textbf{a} = (1,1,a_{\rm z})$, $a_{\rm z} < 1$ (dual unitary $W$). We purposely skip $a_{\rm z}=1$ because of its trivial dynamics. Contrary to the generic gates $W_g$, for dual unitary gates we will see that the early-time dynamics is always determined by $\lambda_2$ of an S PBC circuit (shown in Fig.~\ref{fig:BWandS}(b)), which is, however, always larger than $\lambda_2$ for BW PBC. One will therefore again have a situation where the relevant relaxation rate is not given by $\lambda_2$ of the BW PBC circuit.
We will first take a look at a circuit with the dual-unitary gate with $a_{\rm z}=0.2$. Because there are some differences between even and odd $j$ at later times, essentially due to even/odd effects of the light-cone boundary position (see Sec.~\ref{sec:LC}), we show in Fig.~\ref{fig:PBC_BW_az02} how $O(1,j,t)$ converges for $j=7$ in (a) and (b), as well as for $j=8$ in (c) and (d). Looking at Fig.~\ref{fig:PBC_BW_az02}(c), which focuses on short times, we can see that $r$ is zero until the right light-cone boundary hits the site $j=8$. We assume that $j-i$ is odd and $j-i<n-(j-i)$, i.e., the first information that hits the site $j$ comes from the right light-cone boundary, not from the wrapped-around (PBC) left light-cone boundary. OTOC and the rate are therefore zero until $t \approx (j-i)/2$. After that $r$ stays at a value that is not given by $|\lambda_2|$ of the BW PBC transfer matrix, but rather by $\lambda_2$ of the transfer matrix for the S PBC (red dashed line), for which we have a conjectured analytical form (see Ref.~\cite{foot1}). At the time $t_c = (n+1)/2-(j-i)/2$, determined by the time when the left light-cone boundary hits the site $j$, the rate suddenly transitions to its ultimate asymptotic value given by $\lambda_2$ of the BW PBC (green line). In Fig.~\ref{fig:PBC_BW_az02}(d) we can see that this rate stays roughly constant, up to small modulations at times larger than $t_c$, e.g., at $t\approx 20$ for $n=34$. They happen at times of successive light-cone boundary wrappings (for more details see the next paragraph). There are some interesting differences for odd $j$ (Fig.~\ref{fig:PBC_BW_az02}(a,b)). Specifically, because for odd $i=1$ the left light-cone boundary is at even sites and therefore never overlaps with an odd $j$, the rate has a transition to its asymptotic form only when the right light-cone boundary hits the odd site $j=7$ for the 2nd time (due to PBC). This happens at $t_c =(j-i)/2+n/2$, e.g., $t_c=20$ for the shown $n=34$ and $j=7$. As one can see from $O$ in Fig.~\ref{fig:PBC_BW_az02}(b), the rate itself does not change; rather, the OTOC exhibits a jump. As we shall see in the next paragraph, the ultimate asymptotic decay is nevertheless still determined by $\lambda_2$ of the BW PBC circuit.
\begin{figure}[t]
\begin{center}
\includegraphics[width=80mm]{az02_PBC-clanek.pdf}
\caption{Convergence rate $r$ for the BW PBC circuit with $\textbf{a}=(1,1,0.2)$. Dynamics up to times $t\sim n$ is determined by the second largest eigenvalue of the transfer matrix for the S PBC configuration (red dashed line), whereas the late time dynamic is determined by $|\lambda_2|$ of the BW PBC (green dashed line).}
\label{fig:PBC_BW_az02}
\end{center}
\end{figure}
In order to better explore those spikes we shall next look at a dual unitary gate with $a_{\rm z}=0.6$, because the OTOC decay more slowly and we are able to simulate longer times (rounding errors of double precision floating point numbers ultimately limit the smallest $O$ we can calculate). Results are shown in Fig.~\ref{fig:PBC_BW_az06}. From the figure we learn that the convergence of the initial rate to the one given by the eigenvalue of the S PBC protocol is rather slow with $n$; smaller system sizes have rates that do not yet converge to $-\ln|\lambda_2|_{\mathrm{S-PBC}}$. There are also small kinks in the decay of $|O(1,j,t)-O_\infty|$ that are due to light-cone boundary wrappings. Overall, though, the rate changes only once, from the initial one to the asymptotic $-\ln|\lambda_2|_{\mathrm{BW-PBC}}$, at the already discussed time that is proportional to $n$ (see frames (b) and (d)). We can now also clearly see several spikes at times when the light-cone boundary wraps around the system multiple times. Specifically, starting with an odd $i=1$, the right light-cone boundary will hit a site at odd $j$ at times $t=(j-i)/2+k n/2$, where $k$ is an integer (blue vertical lines in the Figure), whereas the left light-cone boundary will hit it at times $t=k n/2-(j-i-1)/2$ (black vertical lines). There is a slight asymmetry between the effects of the left and right light-cone boundary: spikes due to the left one are prominent only for even $j$, which comes from an asymmetry in the behavior of OTOC on the light-cone boundary, Eqs.~(\ref{eq:OTOC_lcr}) and (\ref{eq:OTOC_lcl}) -- for even $j$ the left light-cone has an additional factor $c_+=\frac{1}{3}(1+\cos{(\pi a_{\rm z})})=1-|\lambda_2|_{\rm S-PBC}$.
We also observe that $r$ decreases with increasing $a_{\rm z}$~\cite{foot1}. This means that the fastest relaxation of OTOC among dual-unitary gates is obtained for the circuit with the XY gate, i.e., $\textbf{a}=(1,1,0)$, when one has $r=\ln{3}$ at $t \lesssim n$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=80mm]{az06_PBC-clanek.pdf}
\caption{Time evolution of OTOC convergence rate for BW PBC circuits with $\textbf{a}=(1,1,0.6)$. Red and green dotted lines represent the convergence rate determined by $\lambda_2$ of S PBC and BW PBC circuits respectively. When $j$ is odd ((a) and (b)) spikes in $r$ are found at times when the right light-cone boundary hits site $j$, $i+2t=j \pmod{n}$ (blue vertical dashed lines plotted for $n=34$). For even $j$ ((c) and (d)) these spikes can be found also at $i-2t+1=j \pmod{n}$ when the left light-cone boundary hits site $j$ (black vertical dashed lines for $n=34$).}
\label{fig:PBC_BW_az06}
\end{center}
\end{figure}
\bigskip
We have seen that in all BW circuits with PBC, for generic gates as well as dual unitary gates, the relevant relaxation rate of OTOC that holds up to times of order $\sim n$, i.e., until OTOC become exponentially small in system size, is not given by the 2nd largest eigenvalue of the BW PBC transfer matrix. For the generic gate $W_g$ the rate was given by $|\lambda_2|_{\mathrm{BW-OBC}}$, which is larger than $|\lambda_2|_{\mathrm{BW-PBC}}$ -- a phantom eigenvalue phenomenon. One might be inclined to justify this result based on the trivial fact that the choice of boundary conditions does not matter up to times that are proportional to $n$. Until boundary effects kick in, OTOC evolve as they would in a BW OBC system (if the Pauli matrix at time $t=0$ is positioned ``far enough" from the boundaries). This, however, is not really a full explanation; remember also that for dual unitary gates the rate (phantom eigenvalue) was given by $|\lambda_2|_{\mathrm{S-PBC}}$ and not $|\lambda_2|_{\mathrm{BW-OBC}}$, despite the evolution still being the same as it would be in the BW OBC circuit.
In the next section we shall study circuits with OBC. Based on the results presented so far we can predict that for BW OBC with generic gates one will have no phantoms, whereas we expect to see a phantom rate given by $|\lambda_2|_{\mathrm{S-PBC}}$ for BW OBC circuits with dual unitary gates.
\subsection{OBC protocols}
\label{sec:obc}
Let us first stress one important property of the family of OBC protocols that comes about due to the locality of the initial vector $\Phi_O(t=0)$ (Eq.~(\ref{eq:OTOC_initial})). We conjecture that the OTOC dynamics is not influenced by permutations of elementary gates in one period of the BW OBC circuit. For example, starting from a BW OBC protocol one could permute the order of elementary steps in one period and obtain an S OBC circuit without affecting the average OTOC dynamics, e.g., its decay rate.
\begin{figure}[t]
\begin{center}
\includegraphics[width=50mm]{BWisS-clanek.pdf}
\caption{Comparison between $O(6,8,t)$ obtained using 5 iterations of a S OBC circuit and using a BW OBC circuit. Operators in the same period of the S OBC circuit are represented with the same color, meanwhile operators in the same period of BW OBC are labeled by the same parameter $t$ (see Fig.~\ref{fig:BWandS}). Due to causality, all gates outside the future light-cone starting from $i=6$, and the past light-cone originating from $j=8$, vanish (crossed out gates). By stacking together S OBC protocols one obtains the same set of gates as for BW OBC.}
\label{fig:BW_is_S}
\end{center}
\end{figure}
To support this claim, we will show that for a fixed $W$ the BW OBC protocol generates the same OTOC dynamics as the S OBC protocol up to a constant time-shift. We will rely on Fig.~\ref{fig:BW_is_S} to explain the equivalence. One can easily see that by stacking together S protocols one obtains a circuit of the form shown in Fig.~\ref{fig:BW_is_S}, i.e., a brick wall protocol in the middle (between $t=2$ and $t=3$ in the Figure), and two ``triangles", one at the top right (gates after time $t=3$ in Fig.~\ref{fig:BW_is_S}) and one at the bottom left (gates before time $t=2$ in Fig.~\ref{fig:BW_is_S}). Let us focus on the calculation of $O(i,j,t)$. Due to causality the evolved local operator vanishes outside the light-cone starting from the $i$-th qubit, so gates outside this future light-cone act trivially. We are interested in the component of $\Phi_O(t)$ representing $O(i,j,t)$, i.e., the $2^{j-1}$-th one, which means that also the gates outside the past light-cone originating from qubit $j$ are irrelevant. The relevant gates are therefore those inside both light-cones, i.e., in Fig.~\ref{fig:BW_is_S} the gates that are not crossed out. The same set of relevant gates would be obtained acting with a BW OBC protocol. The only difference between S OBC and BW OBC circuits is a time-shift that comes from the difference between $v_{\mathrm{LR}}$ of the two circuits. This is reflected in the fact that $O(i,j,1) \neq 0$ for arbitrary $j$ in S OBC circuits, whereas using a BW OBC protocol we have $O(i,j,t) = 0$ at all times smaller than $\Delta t \approx |j-i|/2$, which is equal to the time-shift between the two protocols. For instance, by counting the number of BW layers of relevant gates in Fig.~\ref{fig:BW_is_S} one can see that $O_{\rm S}(6,8,5)=O_{\rm BW}(6,8,6)$. The time shift is constant and depends only on the value $j-i$ (and can be a half-integer). This can be seen also in the explicit numerical data in Fig.~\ref{fig:diff}, where $O(1,8,t)$ for the BW OBC circuit (triangles) is the same as $O(1,8,t-3)$ (squares) obtained for the S OBC.
Using similar arguments one can see that if one iterates an arbitrary OBC configuration, that is, a protocol in which each nearest-neighbor gate is applied exactly once per unit of time, one always gets a brickwall pattern of gates. Therefore one can show that the OTOC of local operators for any OBC protocol is, up to a time-shift, equal to the one in, say, the BW OBC circuit. We have also checked numerically on a few examples of random gate permutations that this is indeed the case. We remark that in Ref.~\cite{prejsnji_clanek} it has been shown that the spectra of the transfer matrices $M$ for a single iteration are the same for all OBC protocols.
This equivalence, though, holds only for OBC; in particular, the PBC S and BW protocols can behave rather differently, see circles and stars in Fig.~\ref{fig:diff}. Also, the BW PBC circuit exhibits a phantom eigenvalue while the S PBC does not ($r=-\ln|\lambda_2|_{\rm S-PBC}$ regardless of the choice of the gate $W$).
\begin{figure}[t]
\begin{center}
\includegraphics[width=80mm]{diffs_az05_p8-clanek.pdf}
\caption{Comparison of OTOC relaxation for different protocols and $\textbf{a}=(1,1,0.5)$, $i=1$, $j=8$ and $n=26$. BW OBC and S OBC are equivalent up to a time shift which is equal to $3$. For PBC on the other hand S and BW, while having the same initial decay, exhibit different relaxation rate at long times. The asymptotic decay of BW PBC is given by $|\lambda_2|$ of the BW PBC (green dashed line), while that of S PBC it is given by $|\lambda_2|$ of the S PBC (red dashed line).}
\label{fig:diff}
\end{center}
\end{figure}
Regarding possible phantoms in the OBC setting, we can see in Fig.~\ref{fig:diff} that for dual unitary gates BW OBC does exhibit a phantom (the initial rate is given by $|\lambda_2|$ of the S PBC), while for generic gates, expectedly (we have seen in Fig.~\ref{fig:PBC_BW_nondual} that the rate for BW PBC was given by $|\lambda_2|$ of BW OBC, and the two $O(i,j,t)$ should agree until $t \sim n$), it does not (data not shown). Let us have a closer look at the dual unitary gate $a_{\rm z}=0.5$ and the BW OBC protocol. From the data in Fig.~\ref{fig:OBC_BW_az05} we indeed see that there is a phantom -- the initial rate is smaller -- and that there are, similarly as in the PBC case (Fig.~\ref{fig:PBC_BW_az06}), again spikes in the rate. Those spikes are associated with jumps in the relaxation of OTOC (frame (b)) that happen every time the reflected right light-cone returns to site $j$, i.e., at times $kn-(j-i-1)/2$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=80mm]{az05_OBC-clanek.pdf}
\caption{Convergence rate for BW OBC circuits with $a_{\rm z}=0.5$, $j=8$. Red and green dashed lines denote the rate predicted by $|\lambda_2|$ for S PBC and BW OBC circuits respectively. Spikes in the rate are again found at times when the light-cone boundary is reflected from boundaries back to site $j$.}
\label{fig:OBC_BW_az05}
\end{center}
\end{figure}
\section{Conclusion}
We have derived a Markovian propagator for the average out-of-time-ordered correlations of local operators in random quantum circuits in which each two-qubit transformation is composed of a fixed two-qubit gate $W$ and two random single-qubit unitaries. This allows us to get an exact expression for OTOC on the light-cone for any $W$. We then focus on the asymptotic relaxation rate with which OTOC relaxes at long times to its long-time average corresponding to a completely scrambled evolution. Similarly as in the case of purity relaxation~\cite{prejsnji_clanek}, we find that this OTOC relaxation rate is also not always given by the second largest eigenvalue of the Markovian matrix. In fact, compared to purity we find more instances where the relaxation rate is given by a so-called phantom eigenvalue -- a nonexistent eigenvalue larger than any true nontrivial eigenvalue, which nevertheless dictates the relaxation rate that is relevant in the thermodynamic limit. Relaxation in such cases proceeds in two steps: in the first step, which lasts up to times that are linear in system size, the rate is given by the phantom eigenvalue, while in the second it is eventually given by the second largest eigenvalue. Because the transition time between the two regimes diverges in the thermodynamic limit, one has a situation where at a fixed system size and $t \to \infty$ one gets the naively expected (but thermodynamically incorrect) relaxation rate given by $\lambda_2$, while in the correct thermodynamic limit of first taking system size to infinity and only then time to infinity one will observe the relaxation rate given by the phantom eigenvalue (the phantom ``eigenvalue'' is not an eigenvalue of any finite Markovian transition matrix). In short, the limits $t \to \infty$ and $n \to \infty$ do not commute. Mathematically the phenomenon comes about because the Markovian matrix is not symmetric, resulting in spectral expansion coefficients that blow up with system size~\cite{prejsnji_clanek}, see also Refs.~\cite{Mori20,sarang21,ueda21,mori21,lamacraft21}.
We find such phantom-eigenvalue relaxation for brick-wall circuits with dual unitary as well as with generic gates $W$, and for periodic or open boundary conditions. Phantoms are also found for the staircases configuration with open boundary conditions. For circuits with open boundary conditions we demonstrate that up to a time-shift all different circuit geometries, i.e., brick-wall, staircases, etc., have the same OTOC dynamics. We also numerically verify that the dynamics is self-averaging, that is, one will get a phantom relaxation even for a single random circuit realization, and even without spatial or temporal independence of the single-qubit random unitaries. Explicit randomness therefore seems not to be essential. This leaves an interesting possibility that a similar phenomenon could be observed also in other systems, for instance in Floquet models.
The important message therefore is that: (i) when one deals with finite non-Hermitian matrices the leading eigenvalue might not give the correct asymptotic dynamics, and (ii) that this leads to a two-step relaxation process with a sudden discontinuous transition in the relaxation rate at a time when the light-cone hits the site in question for the second time (either due to a reflection from a boundary for open boundaries, or due to a wrapping around for periodic boundary conditions). On the mathematical level it is therefore due to the fact that boundary conditions apparently can affect the leading relevant eigenvalue in a nontrivial way. While we do obtain some exact properties of the Markovian matrix, like a conjectured exact expression for $\lambda_2$ in the case of periodic boundary conditions, much remains to be understood, in particular under which physical conditions one gets such a two-step relaxation.
Support from Grants No.~J1-1698 and No.~P1-0402 from the Slovenian Research Agency is acknowledged.
|
2,869,038,154,549 | arxiv |
\section{Experimental setups}
\renewcommand\thefigure{\thesection.\arabic{figure}}
\setcounter{figure}{0}
\noindent\textbf {Experimental setup to characterize resonance tuning}:
The experimental setup to characterize the resonance frequency tuning versus voltage applied on the AlN actuator is shown in Fig. \ref{Fig:SI1}.
A tunable laser (Toptica CTL) is locked to a Si$_3$N$_4$ microresonator resonance, via a PDH lock loop using an EOM.
When the resonance is tuned by varying the applied voltage, the laser frequency follows the resonance shift.
The beat signal between the laser locked to the resonance and a reference laser (another Toptica CTL) is measured using a fast photodiode and an electrical spectrum analyzer (ESA).
A programmable DC power supply (Keithley 2400) is used to apply the voltage on the AlN actuator.
A ramp signal is applied on the power supply in order to output the voltage between $\pm140$ V with a voltage increment / decrement of 2.8 V.
The interval time between two subsequent measurements is 200 ms.
The change in the two lasers' beatnote signal recorded by the ESA corresponds to the resonance frequency shift, as one laser is locked to the resonance and the other is frequency-fixed.
These measurements are repeated continuously for multiple (3 to 5) scans between $\pm140$ V in order to confirm the hysteresis.
\begin{figure*}[t!]
\centering
\includegraphics{SI_SetupResTunning.pdf}
\caption{
\footnotesize
Experimental setup to characterize the resonance tuning versus voltage applied.
EOM: electro-optic modulator.
OSA: optical spectrum analyzer.
ESA: electrical spectral analyzer.
PD: photodiode.
}
\label{Fig:SI1}
\end{figure*}
\noindent\textbf{Microresonator Q characterization results}:
Figure~\ref{Fig:SI1b} compares the measured loaded linewidths with different applied voltages.
The resonances remain critically coupled, and no linewidth change is observed.
The estimated intrinsic quality factor, $Q_0>15\times10^6$, with integrated AlN actuators is identical to that of bare microresonators without AlN\cite{Liu:18a}, demonstrating that the monolithically integrated AlN actuators are compatible with the ultralow-loss Si$_3$N$_4$ waveguide platform (linear optical loss of $\sim1$ dB/m).
\begin{figure*}[t!]
\centering
\includegraphics{SI_Q.pdf}
\caption{
\footnotesize
Comparison of loaded linewidths with different applied voltages.
No voltage-dependent linewidth change is observed.
}
\label{Fig:SI1b}
\end{figure*}
\noindent\textbf{Experimental setup and result for Long-term stabilization of the soliton microcomb}:
The experimental setup to stabilize the soliton microcomb over 5 hours is shown in Fig.~\ref{Fig:SI1a}(a).
A feedback loop is applied in order to fix the soliton detuning at 317 MHz and eliminate the detuning fluctuation over the long term.
The VNA is used only to monitor the soliton detuning over the long term.
Figure~\ref{Fig:SI1a}(b) shows the evolution of three soliton comb lines over 5 hours.
The eventual loss of the soliton after 5 hours is caused by the drift of the fiber-chip coupling using suspended lensed fibers, and can be mitigated by gluing the fibers to the chip~\cite{Raja:20}.
\begin{figure*}[t!]
\centering
\includegraphics{SI_LongTermStabilization.pdf}
\caption{
\footnotesize
(a)~Experimental setup for long-term stabilization of the soliton microcomb.
OSC: oscilloscope.
VNA: vector network analyzer.
HVA: high voltage amplifier.
BPF: bandpass filter.
FBG: fiber Bragg grating.
(b)~Soliton stabilization over 5 hours, realized by locking the resonance to the laser and maintaining the soliton detuning.
}
\label{Fig:SI1a}
\end{figure*}
\noindent \textbf{Experimental setup to generate PDH error signals using HBAR modes}:
\indent Figure \ref{Fig:SI2}(a) shows the experimental setup to generate PDH error signals using HBAR modes induced by the AlN actuation.
The measured $S_{21}(\omega)$ response of the AlN actuation, up to 400~MHz, is plotted on a linear frequency scale in Fig. \ref{Fig:SI2}(b), showing both cases when the laser is on- and off-resonance.
Different modulation frequencies corresponding to different HBAR modes are investigated, which are marked with stars in Fig. \ref{Fig:SI2}(b).
The PDH error signals modulated at these HBAR frequencies are shown in Fig. \ref{Fig:SI2}(c), as well as the studied microresonator resonance.
A microwave source providing $\sim8$~dBm RF power is used to modulate the Si$_3$N$_4$ microresonator via AlN actuation.
The same RF power is used for modulation at all the HBAR frequencies.
The decrease in error signal contrast at higher HBAR frequency is likely caused by the lower acousto-optic transduction $S_{21}$.
\begin{figure*}[t!]
\centering
\includegraphics{SI_PDHsignal.pdf}
\caption{
\footnotesize
On-chip generation of PDH error signals using the HBAR modes induced by the AlN actuation.
(a) Experimental setup.
LPF: low-pass filter.
Amp.: RF power amplifier.
(b) The measured $S_{21}(\omega)$ response of the AlN actuator on a linear frequency scale.
Both cases, when the laser is on- and off-resonance, are measured.
(c) The PDH error signals modulated at the HBAR frequencies marked with stars in (b).
}
\label{Fig:SI2}
\end{figure*}
\section{Soliton microcomb source for parallel FMCW LiDAR}
Figure \ref{Fig:SI4} shows the experimental setup to synchronously scan the microresonator and the pump laser (i.e. the feed-forward scheme).
A single-sideband modulator driven by a voltage-controlled oscillator (VCO) is used to fast scan the laser frequency, instead of directly scanning the laser piezo due to the limited piezo scan speed of our laser ($\sim200$ Hz).
A voltage ramp signal from the same dual-channel arbitrary waveform generator (AWG) is applied on the VCO and on the AlN actuator.
The ramp signal sent to the AlN actuator is further amplified by a high-voltage amplifier (HVA) with $\times50$ voltage amplification and 3-dB bandwidth of $\sim5$ MHz.
The synchronous scan of the laser frequency and the microresonator resonance is performed by adjusting the amplitude and the phase of the ramp signal applied on the VCO.
A PDH lock can further improve the synchronization by locking the resonance to the laser with a constant frequency difference~\cite{Stone:18}.
Initially, a ramp signal from the AWG with a peak-to-peak voltage V$_\text{pp}$ of 3 V (HVA amplifies to 150 V) and 10 kHz scanning rate is applied on the AlN.
The amplitude V$_\text{pp}$ and the phase of the ramp signal driving the VCO are adjusted until a stable $\mathcal{C}$-resonance is observed on the VNA.
The tuning into soliton states is realized either by changing the laser frequency via laser piezo tuning, or by turning on and off the VCO which allows fast tuning of the laser to the effectively red-detuned side of the resonance.
A reference laser is used to probe the chirp of different comb lines (the pump line, the $\pm 10^{th}$ comb lines, etc.).
A fast oscilloscope of 2~GHz bandwidth and 5 GSamples/s is used to capture the heterodyne beatnote detected on the fast photodiode, for further off-line data processing such as fast Fourier transform and fitting of the triangular chirp signal.
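For illustration, this off-line processing can be sketched as follows (our own minimal example; the file name and FFT parameters are hypothetical):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 5e9                                   # 5 GSamples/s, as in the text
x = np.load('beatnote_trace.npy')          # hypothetical recorded beatnote trace
f, t, S = spectrogram(x, fs=fs, nperseg=4096, noverlap=2048)
f_inst = f[np.argmax(S, axis=0)]           # spectral ridge = instantaneous beat frequency
# f_inst(t) traces the triangular FMCW chirp, which can then be fitted
```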
\begin{figure*}[t!]
\centering
\includegraphics{SI_SetupLIDAR.pdf}
\caption{
\footnotesize
Experimental setup for synchronous scan of the laser frequency and the microresonator resonance, using the feed-forward scheme.
VCO: voltage-controlled oscillator.
AFG: arbitrary waveform generator.
QPSK: quadrature phase shift keying.
DSO: digital storage oscilloscope.
}
\label{Fig:SI4}
\end{figure*}
\section{Soliton repetition rate stabilization using an EO comb}
Figure \ref{Fig:SI3a}(a) shows the experimental setup to stabilize the soliton repetition rate referenced to an electro-optic frequency comb (``EO comb'').
The EO comb is generated using a scheme described in Refs. \cite{Obrzud:17, Anderson:19}, and has a comb line spacing of 14.6974 GHz.
The EO comb and soliton microcomb are pumped by the same laser (Toptica CTL).
The measured beat signal between the $-1^{st}$ line of the microcomb and the $-13^{th}$ line of the EO comb is further compared to a reference signal of 60.0~MHz.
The error signal is applied directly on the AlN actuator, such that the actuation on the microresonator stabilizes the soliton repetition rate to the EO comb line spacing.
The measured \emph{in-loop} phase noise of the beat signal between the $-1^{st}$ line of the microcomb and the $-13^{th}$ line of the EO comb, is shown in Fig.~3(g) in the main manuscript.
\begin{figure*}[t!]
\centering
\includegraphics{SI_EOcombLocking.pdf}
\caption{
\footnotesize
Soliton repetition rate stabilization using an EO comb and AlN actuation.
(a) Experimental setup to characterize the \emph{in-loop} phase noise of the beat signal between the $-1^{st}$ line of the microcomb and the $-13^{th}$ line of the EO comb.
The beat signal and the phase noise of the beat signal are shown in Fig.~3(e, f) in the main manuscript.
(b) Modified experimental setup to characterize the \emph{out-of-loop} phase noise of the beat signal between the $-1^{st}$ line of the microcomb and the $+13^{th}$ line of the EO comb.
(c) Schematic of referencing the microcomb to the EO comb.
The beatnote (in-loop) between the $+1^{st}$ line of the microcomb and $+13^{th}$ line of the EO comb is detected and referenced to a 20.0~MHz microwave signal.
The beatnote (out-of-loop) between the $-1^{st}$ line of the microcomb and $-13^{th}$ line of the EO comb is characterized.
(d) Comparison of SSB phase noises measured in different cases.
MZM: Mach-Zehnder modulator.
BPF: bandpass filter.
PNA: phase noise analyzer.
}
\label{Fig:SI3a}
\end{figure*}
To measure the \emph{out-of-loop} beat signal and its phase noise, we used a modified setup as shown in Fig.~\ref{Fig:SI3a}(b).
The pump laser's frequency to generate the soliton microcomb is shifted by 77.0~MHz via a fiber-coupled acousto-optic modulator (AOM).
The reason to shift the microcomb's pump frequency is to cancel out the drift and noise caused by the imbalanced paths in delayed self-homodyne measurement, by detecting the beatnote (77.0~MHz shift) between the pump lines of the microcomb and the EO comb.
By down-mixing the 77.0~MHz heterodyne beatnote signal using the same microwave source that drives the AOM, the feedback signal is applied to the laser current such that the pump laser's frequency is stabilized and the noise in the delayed self-homodyne measurement is removed.
Then, the beatnote between the $+1^{st}$ line of the microcomb and $+13^{th}$ line of the EO comb is detected and referenced to a 20.0~MHz microwave signal, in order to stabilize the microcomb repetition rate.
The entire schematic of referencing the microcomb to the EO comb is shown in Fig. \ref{Fig:SI3a}(c).
Figure \ref{Fig:SI3a}(d) compares the single-sideband (SSB) phase noise of the beat signals, for:
\begin{itemize}
\item Dashed red: The free-running phase noise of the beat signal between the $+1^{st}$ line of the microcomb and the $+13^{th}$ line of the EO comb while the pump laser’s frequency is locked.
\item Dashed blue: The free-running phase noise of the beat signal between the $-1^{st}$ line of the microcomb and the $-13^{th}$ line of the EO comb while the pump laser’s frequency is locked.
\item Solid green: When the pump laser's frequency is locked, the phase noise between the EO comb's pump and the microcomb's pump (shifted by 77~MHz).
\item Solid red: The \emph{in-loop}, locked phase noise of the beat signal between the $+1^{st}$ line of the microcomb and the $+13^{th}$ line of the EO comb.
\item Solid blue: The \emph{out-of-loop} phase noise of the beat signal between the $-1^{st}$ line of the microcomb and the $-13^{th}$ line of the EO comb, when the $+1^{st}$ line of the microcomb and the $+13^{th}$ line of the EO comb are locked.
\end{itemize}
In the out-of-loop phase noise of the beat signal between the $-1^{st}$ line of the microcomb and the $-13^{th}$ line of the EO comb, a reduction in phase noise is observed with the AlN actuation.
The locking bandwidth in this case is $>300$ kHz.
To further evaluate the long-term stability of the locked system, frequency counting measurements of the relative Allan deviations are performed, as shown in Fig.~\ref{Fig:SI3b}.
The 77~MHz microwave source (used to lock the pump laser's frequency) is referenced to the 20 MHz microwave oscillator (used to down-mix the in-loop beat signal to derive the error signal).
Similarly, the microwave source driving the EOMs (14.6974 GHz) for EO comb generation is also referenced to the same 20~MHz microwave oscillator.
The relative Allan deviations of the beat signals between the free-running microcomb and the EO comb do not converge, while the beat signal between the locked pump lines of the microcomb and the EO comb shows 10$^{-2}$ at 1~s averaging time.
After locking the soliton repetition rate to the EO comb by actuating on AlN, the in-loop and the out-of-loop beat signals show similar frequency stability.
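For reference, the analysis behind such Allan-deviation curves can be sketched as follows (our own minimal implementation of the overlapping Allan deviation; the input data file, gate time, and normalization are hypothetical):

```python
import numpy as np

def allan_dev(y, tau0, m_list):
    """Overlapping Allan deviation of fractional-frequency samples y (gate time tau0)."""
    taus, adevs = [], []
    for m in m_list:
        ybar = np.convolve(y, np.ones(m) / m, mode='valid')  # m-sample averages
        d = ybar[m:] - ybar[:-m]                             # overlapping differences
        taus.append(m * tau0)
        adevs.append(np.sqrt(0.5 * np.mean(d**2)))
    return np.array(taus), np.array(adevs)

f_beat = np.load('beatnote_counter.npy')   # hypothetical counter record in Hz
y = f_beat / 20.0e6                        # relative to the 20 MHz reference (assumed)
taus, adevs = allan_dev(y, 0.1, [1, 2, 5, 10, 20, 50])
```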
\begin{figure*}[t!]
\centering
\includegraphics{SI_ADEV.pdf}
\caption{
\footnotesize
Comparison of measured relative Allan deviation for the microcomb and EO comb beat signals in different cases, to evaluate the long-term stability of the locked system.
}
\label{Fig:SI3b}
\end{figure*}
\bibliographystyle{apsrev4-1}
|
2,869,038,154,550 | arxiv |
\subsection{Experimental estimation of activation volume of C14 CaMg$_2$}
The available data, specifically on basal slip in the C14 CaMg$_2$ Laves phase studied here, allows at least an approximate comparison of the activation volumes \cite{freund2021plastic}. Using the scarce, slip-system-specific data with a linear fit through the data points of the critical resolved shear stress from single-crystal micropillar compression at room temperature, 150 and 250~\textdegree C \cite{freund2021plastic} (giving a change in stress of 0.09~GPa), and assuming motion of $1/3[10\bar10]$ dislocations with $|b|=0.365$~nm, a dislocation density $\rho_m$ of 10$^{12}$ m$^{-2}$, an attempt frequency $\nu_{A}$ of 10$^{11}$~s$^{-1}$, and an average shear strain rate $\dot \gamma$ of 0.001~s$^{-1}$ in
\begin{equation}
\Omega=\frac{T_1 - T_2}{\tau_1 - \tau_2}\, k_{\rm B}\, \ln \left[ \frac{\dot \gamma }{\rho_{\rm m} b^2 \nu_{\rm A}} \right]
,
\end{equation}
\noindent we find an activation volume of $\Omega \approx 13 b^3$. This volume is higher than expected for a purely lattice-resistance controlled mechanism; however, given the large uncertainty of the underlying experimental data and a possible underestimation of the critical resolved shear stress versus temperature slope due to a lower strain rate used at the lowest temperature \cite{freund2021plastic, zehnder2019plastic}, the calculations in this study and the experiments appear at least consistent.
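For concreteness, the arithmetic can be reproduced as follows (a hedged numerical check using the parameter values quoted above; the exact temperature span of the linear fit is our assumption):

```python
import numpy as np

kB    = 1.380649e-23   # J/K
b     = 0.365e-9       # m, magnitude of the Burgers vector
rho_m = 1.0e12         # m^-2, mobile dislocation density
nu_A  = 1.0e11         # s^-1, attempt frequency
gdot  = 1.0e-3         # s^-1, average shear strain rate
dT    = 225.0          # K, room temperature to 250 C (assumed span of the fit)
dtau  = -0.09e9        # Pa, change in critical resolved shear stress over dT

Omega = (dT / dtau) * kB * np.log(gdot / (rho_m * b**2 * nu_A))
print(Omega / b**3)    # ~ 12, consistent with the ~13 b^3 quoted above
                       # given the rounding of the fit inputs
```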
\clearpage
\section{\label{Intro}Introduction}
Laves phases are topologically close-packed structures that form in many alloys and have a large impact on their mechanical properties due to the high strength compared to the matrix phases \cite{sinha1972topologically,paufler2011early,stein2021laves}. Laves phase alloys often exhibit excellent mechanical properties at high temperatures, however, their extreme brittleness at ambient temperatures limits their applications as structural materials \cite{livingston1992laves,pollock2010weight,stein2021laves}. The understanding of the underlying deformation mechanisms of Laves phases is thus crucial for tailoring material properties of the composites.
Laves phases with the ideal chemical composition AB\textsubscript{2} have three common polytypes: cubic MgCu\textsubscript{2} (C15), hexagonal MgZn\textsubscript{2} (C14) and MgNi\textsubscript{2} (C36). Laves crystals have a layered structure along the basal or $\{ 1 1 1 \}$ planes, which consists of quadruple atomic layers.
The quadruple atomic layers in turn consist of a single layer of B-type atoms forming a Kagom\'e net and a triple-layer with an A–B–A structure. The same layers also form part of related structures as an intergrowth with other structural elements, such as CaCu$_5$ and Zr$_4$Al$_3$, the latter forming the $\mu$-phases \cite{PartheGmelin1993Handbook,schroders2019structure}.
Synchroshear, as a dominant mechanism for dislocation-mediated plasticity on the basal plane in Laves phases, was already predicted in the 1960s \cite{kramer1968gittergeometrische}.
It was later confirmed by experimental observations of synchro-Shockley dislocations and synchroshear-induced stacking faults in the C14 HfCr\textsubscript{2} Laves phase \cite{chisholm2005dislocations}. Recently, $ab$ $initio$ calculations \cite{vedmedenko2008first} and atomistic simulations using semi-empirical potentials \cite{guenole2019basal} confirmed synchroshear as the energetically favorable mechanism compared to other crystallographic slip mechanisms for basal slip in Laves phases.
A synchro-Shockley dislocation, as a typical zonal dislocation \cite{kronberg1957plastic, anderson2017theory}, is formed by the cooperative motion of two coupled Shockley partial dislocations on the adjacent planes of a triple-layer \cite{hazzledine1992synchroshear}. After the glide of a synchro-Shockley dislocation in a C14 (or C15) Laves phase, the alternate triple-layer transforms into a slab of C15 (or C14) structure as a stacking fault.
That is, synchroshear in Laves phases is always associated with the creation and extension of a stacking fault.
In Laves phases, point defects such as anti-site atoms and vacancies widely exist in off-stoichiometric compositions and at high temperatures \cite{zhu1999point,stein2021laves}. The presence of these point defects has significant effects on the deformation behavior, such as hardness \cite{voss2008composition,takata2016nanoindentation,luo2020composition} and phase transformation kinetics \cite{kumar2004polytypic}. A progressive decrease in hardness at B-type-rich off-stoichiometric compositions was reported in nanoindentation experiments on NbCo\textsubscript{2} \cite{voss2008composition,luo2020composition} and NbFe\textsubscript{2} \cite{voss2008composition,takata2016nanoindentation} Laves phases. Single-phase NbCr\textsubscript{2} exhibits more rapid synchroshear-induced phase transformation than TiCr\textsubscript{2} and TaCr\textsubscript{2} counterparts, and the transformation is rendered sluggish by the addition of substitutional defects \cite{kumar2004polytypic}. These experimental observations were attributed to the interactions between synchro-Shockley dislocations with constitutional and thermal point defects affecting the dislocation mobility \cite{kumar2004polytypic,takata2016nanoindentation}.
Although the geometry of slip by synchroshear is well established, the atomic-scale mechanisms of motion of synchro-Shockley dislocations on the basal plane in Laves phases are still not well understood. Kink propagation and short-range diffusion were proposed as possible mechanisms of dislocation motion in Laves phases in the 1990s \cite{hazzledine1992synchroshear}, however, there has been so far no evidence from experiments and modelling. In addition, the effects of point defects on dislocation motion in Laves phases are yet to be explored.
In this study, the core structures and energies of synchro-Shockley dislocations in C14 and C15 Laves phases were investigated using atomistic simulations. The mechanisms of motion of synchro-Shockley dislocations with and without point defects and the corresponding activation energies were determined by identifying transition states on the potential energy surfaces. As stacking faults, rather than perfect dislocations, have been confirmed as the dominant defect structure on the basal slip plane in Laves phases \cite{hazzledine1992synchroshear,chisholm2005dislocations}, the motion of synchro-Shockley dislocations was aligned with the direction of expansion of the stacking fault. The stress-dependent activation energy and volume of dislocation motion were investigated and correlated to thermal activation.
\section{\label{Methods}Simulation methods}
The atomistic simulations presented in this study were performed using the MD software package LAMMPS \cite{LAMMPS}.
The interatomic interactions were modeled by the modified embedded atom method (MEAM) potential by Kim et al. \cite{kim2015modified} for the Mg-Ca system and the MEAM potential by Jang et al. \cite{jang2021modified} for the Al-Ca system.
Both potentials describe the mechanical properties of the C14 CaMg\textsubscript{2} and C15 CaAl\textsubscript{2} Laves phases in reasonable agreement with experiments and \textit{ab initio} calculations (see TABLE S I). Additionally, both potentials successfully predicted synchroshear as the energetically favorable mechanism for propagating dislocations within the basal and $\{ 1 1 1 \}$ planes in the C14 CaMg\textsubscript{2} \cite{guenole2019basal} and C15 CaAl\textsubscript{2} Laves phases, respectively.
The C14 CaMg\textsubscript{2} and C15 CaAl\textsubscript{2} Laves structures were constructed using Atomsk \cite{hirel2015atomsk} with the lattice constant $a_{0}$ of the respective Laves phase at 0 K
\cite{kim2015modified,jang2021modified} and the following crystallographic orientations: for C14 $\mathbf{x}=[ 1 1 \bar{2} 0 ]$, $\mathbf{y}=[ \bar{1} 1 0 0 ]$ and $\mathbf{z}=[ 0 0 0 1 ]$; for C15 $\mathbf{x}=[ 1 \bar{1} 0 ]$, $\mathbf{y}=[ 1 1 \bar{2} ]$ and $\mathbf{z}=[ 1 1 1 ]$.
To obtain the structures for the further study of synchro-Shockley partial dislocations, perfect screw dislocations with Burgers vectors \textbf{b}\textsubscript{C14} = $a_{0}^\text{C14}$/3 $[ \bar{1} \bar{1} 2 0 ]$ and \textbf{b}\textsubscript{C15} = $a_{0}^\text{C15}$/2 $[ \bar{1} 1 0 ]$ were introduced following the methods detailed in \cite{rodney2000dislocation} and \cite{bitzek2005dynamic}. After relaxation using the conjugate gradient (CG) algorithm with box relaxation and the FIRE \cite{bitzek2006structural,guenole2020assessment} algorithm (force tolerance: $10^{-8}$ eV/$\text{\AA}$), the inserted full screw dislocation dissociated into two widely separated 30\textdegree{} synchro-Shockley dislocations with Burgers vectors \textbf{b}\textsubscript{C14} = $a_{0}^\text{C14}$/3 $[ \bar{1} 0 1 0 ]$ and $a_{0}^\text{C14}$/3 $[ 0 \bar{1} 1 0 ]$ (for C15 CaAl\textsubscript{2}: \textbf{b}\textsubscript{C15} = $a_{0}^\text{C15}$/6 $[ \bar{1} 2 \bar{1} ]$ and $a_{0}^\text{C15}$/6 $[ \bar{2} 1 1 ]$) bounded by stacking faults, see the sketch in FIG. \ref{fig0}(a).
In the following, the partial dislocation to the right with the edge Burgers vector component along $[ \bar{1} 1 0 0]$ (or $[1 1 \bar{2}]$ for C15 CaAl\textsubscript{2}) direction is termed partial I and the partial dislocation to the left with the edge Burgers vector component along $[ 1 \bar{1} 0 0]$ (or $[\bar{1} \bar{1} 2]$ for C15 CaAl\textsubscript{2}) direction is called partial II.
To investigate partial I and II dislocations separately, the atomic displacement field corresponding to the partial Burgers vector was imposed on the upper half of the crystal ($> l_z/2$) from the edge to the center of the simulation box, with box dimensions $l_y$, $l_z\approx$ 2000 or 300 \text{\AA}. After relaxation with the above-mentioned algorithms, configurations with partial I or II located at the center of the simulation box and a stacking fault extending to the box edge were obtained.
To calculate the core energies of the 30\textdegree{} synchro-Shockley dislocations, cylindrical samples were cut out from the initial simulation box ($l_y$, $l_z\approx$ 2000 \text{\AA}) with each of the 30\textdegree{} partial dislocations located in the center of the simulation setup, and the stacking fault bounded at the surface of the cylinder as shown in FIG. \ref{fig0}(b).
Atoms in the outermost layers of the setups with a thickness of 14 \text{\AA} (2 times the interatomic potential cutoff) were fixed in $y$ and $z$ directions. The radius of the cylindrical setup is around 1000 \text{\AA} to reduce the effect of the boundary conditions. Periodic boundary conditions (PBC) were applied in $x$ direction (the direction of the dislocation line) and the dimension along the $x$ direction is more than 2 times the interatomic potential cutoff ($l_x\approx 18.4$ \text{\AA} for C14 CaMg\textsubscript{2} and $l_x\approx 23.6$ \text{\AA} for C15 CaAl\textsubscript{2}).
After relaxation with the aforementioned boundary conditions, the core energies were calculated by measuring the total dislocation energy as a function of radius $R$ and then extrapolating the far-field elastic energy back to the chosen cutoff radius ($r_{c}=b$):
\begin{equation}
\label{e1}
E_\text{tot}(R)-N(R)E_\text{0}=K\text{ln}(R/r_{c})+E_\text{SF}(R)+E_\text{core}|r_{c},
\end{equation}
where $E_\text{tot}$($R$) is the energy of the $N$ atoms contained within a cylinder of radius $R$, $E_\text{0}$ is the atomic cohesive energy, $K$ is an elasticity coefficient containing anisotropic elastic constants and the Burgers vector $b$, $E_\text{SF}(R)$ is the energy contribution of the stacking fault as a function of radius $R$, $E_\text{core}|r_{c}$ is the core energy defined at the chosen cutoff radius $r_{c}=b$.
Note that as the stacking fault extends all the way to the edge of the simulation setup, $E_\text{SF}(R)$ is continuously increasing with $R$.
Finally, the excess strain energy $E_\text{ESE}$ due to the elastic distortion of the lattice induced by the dislocation, normalized by the dislocation line length \textit{L}, is given by EQU.~\ref{e2}:
\begin{equation}
\label{e2}
E_\text{ESE} = \frac{E_\text{tot}(R)-N(R)E_\text{0}}{L}.
\end{equation}
$E_\text{ESE}$ was calculated at 100 different $R$ values from 1$b$ to 250$b$.
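In practice, the extraction of $K$ and $E_\text{core}$ from the tabulated $E_\text{ESE}(R)$ values reduces to a linear least-squares problem. The following Python sketch illustrates this procedure under stated assumptions (the input file name is hypothetical, and the stacking-fault contribution is approximated as growing linearly with $R$):
\begin{verbatim}
import numpy as np

# Hypothetical input: E_ESE per unit line length (eV/A), cf. EQU. (2),
# restricted to the fit window from 5b to 100b used in the text.
R_over_b = np.linspace(5.0, 100.0, 50)
E_ese = np.loadtxt("E_ese.dat")     # assumed data file, one value per radius

# Model: E_ESE(R) = K*ln(R/b) + g_sf*R + E_core   (cf. EQU. (1), r_c = b);
# the stacking-fault term is approximated as linear in R for a planar fault.
A = np.column_stack([np.log(R_over_b), R_over_b, np.ones_like(R_over_b)])
(K, g_sf, E_core), *_ = np.linalg.lstsq(A, E_ese, rcond=None)

print(f"K = {K:.4f} eV/A,  E_core(r_c = b) = {E_core:.3f} eV/A")
\end{verbatim}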
\begin{figure*}[htbp!]
\centering
\includegraphics[width=\textwidth]{Fig_sim_setup.pdf}
\caption{(a) Schematic representation of the dissociated basal screw $\langle a \rangle$ dislocation in hexagonal Laves phases into two 30{\textdegree} synchro-Shockley dislocations: partial I and II. (b) Cylindrical setup for the structural optimization of dislocation cores and the calculation of dislocation core energies. The radius of the cylindrical setup ($R_\text{0}$) is around 1000 \AA. (c) Slab setup for the nudged elastic band (NEB) calculation of minimum energy path (MEP) of dislocation motion.
The dimensions of the slab ($D$) in non-periodic directions are around 300 \AA.
Periodic boundary conditions are applied along the dislocation line direction ($\xi$) in both setups.
Semi-fixed outer layers are marked in light red, and the thickness is more than 2 times the interatomic potential cutoff ($>$ 14 \AA).
Please note that the shown setup corresponds to partial I; for partial II, the stacking fault is to the right of the dislocation.
}
\label{fig0}
\end{figure*}
Climbing image nudged elastic band (NEB) \cite{henkelman2000climbing,henkelman2000improved} calculations were performed on initial (before dislocation motion) and final (after dislocation motion) atomistic configurations to find saddle points and minimum energy paths (MEPs) of dislocation motion. The initial configurations were built with the 30\textdegree{} partial dislocations located in the center of the slab setups, as illustrated in FIG. \ref{fig0}(c).
By altering the width of the displacement field of the inserted partial Burgers vector, the final configuration contains the same partial dislocation sitting at the next Peierls valley adjacent to the initial one, corresponding to dislocation motion and expansion of stacking fault by one Burgers vector.
Atoms in the outermost layers of the setups with a thickness of 14 \text{\AA} were fixed in the $y$ and $z$ directions. The dimensions of the slab in the $y$ and $z$ directions are $l_y$, $l_z\approx$ 300 \text{\AA}. Slab setups with different dislocation line lengths (in PBC) from 3$\times$ to 30$\times$ unit cells ($l_x$ from 18.4 to 184.2 \text{\AA} for C14 CaMg\textsubscript{2}) were simulated.
The spring constants for parallel and perpendicular nudging forces are both 1.0 eV/$\text{\AA}^{2}$ \cite{MARAS201613}. Quickmin \cite{sheppard2008optimization} was used as the damped dynamics minimizer to minimize the energies across all replicas with the force tolerance of 0.01 eV/$\text{\AA}$. Different numbers of intermediate replicas from 48 to 144 were simulated and all intermediate replicas were equally spaced along the reaction coordinate (RC). The first (RC:0) and last (RC:1) reaction coordinates were determined as the configurations located at the local energy minima along the MEP close to the initial and final configurations, respectively.
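Because the MEP resolves into a sequence of similar activation events, the per-event barriers discussed below can be extracted from the replica energies by locating the local maxima (transition states) and the intermediate minima. A minimal Python sketch (the input file of replica energies is hypothetical):
\begin{verbatim}
import numpy as np

E = np.loadtxt("neb_profile.dat")   # assumed: excess energy per replica (eV)

# Interior local minima and maxima along the reaction coordinate.
minima = [0] + [i for i in range(1, len(E) - 1)
                if E[i] < E[i - 1] and E[i] <= E[i + 1]]
maxima = [i for i in range(1, len(E) - 1)
          if E[i] > E[i - 1] and E[i] >= E[i + 1]]

# Barrier of each individual event: saddle-point energy minus the
# preceding intermediate minimum.
for n, s in enumerate(maxima):
    start = max(m for m in minima if m < s)
    print(f"event {n}: dE = {E[s] - E[start]:.2f} eV")
\end{verbatim}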
The stress-dependent activation energies in C14 CaMg\textsubscript{2} were obtained by pre-shearing the initial and final configurations along the $[ \bar{1} 0 1 0 ]$ direction for partial I and the $[ 0 1 \bar{1} 0 ]$ direction for partial II, and performing NEB calculations with the above-mentioned boundary conditions. The magnitude of the stress was calculated from the pre-strain value and the shear modulus on the $\{ 2 \bar{1} \bar{1} 0 \}$ plane ($\mu_{\{ 2 \bar{1} \bar{1} 0 \}}$=25.6 GPa).
Laves phase crystal analysis (LaCA) \cite{xie2021laves} was used to analyze the dislocation and stacking fault structures in the C14 and C15 Laves phases. The Open Visualization Tool (OVITO) \cite{stukowski2009visualization} was used to visualize the atomistic configurations and to calculate the atomic displacement vectors associated with the dislocation motion.
\section{\label{Results}Results}
\subsection{\label{Results1}Dislocation core energies and structures}
\begin{figure*}[htbp!]
\centering
\includegraphics[width=\textwidth]{Fig_core_energy.pdf}
\caption{(a) Excess in elastic strain energy $E_{ESE}$ for 30{\textdegree} synchro-Shockley partial I (\textbf{\textit{b}}=$\frac{1}{3}$$[ \bar{1} 0 1 0]$) and II (\textbf{\textit{b}}=$\frac{1}{3}$$[ 0 \bar{1} 1 0 ]$) dislocations in C14 CaMg\textsubscript{2}. The core energy is obtained by extrapolating the far-field elastic energy back to the chosen cutoff radius $r_{c}=b$ ($R/b = 1$, solid lines). Dislocation core structures of 30{\textdegree} synchro-Shockley (b) partial I and (c) II dislocations. Left: colored by deviation of potential energy to bulk ($\Delta E\textsubscript{p}$). Right: colored by Laves phase Crystal Analysis (LaCA). Large and small atoms are Ca and Mg atoms, respectively.}
\label{fig1}
\end{figure*}
Core structures of 30\textdegree{} synchro-Shockley dislocations were analyzed and the corresponding core energies were calculated according to EQU. \eqref{e1}. Two types of 30\textdegree{} synchro-Shockley dislocations with Burgers vectors of $\frac{1}{3}$$[ \bar{1} 0 1 0]$ (partial I) and $\frac{1}{3}$$[ 0 \bar{1} 1 0 ]$ (partial II) were obtained after the energy minimization of the perfect screw dislocations. Both core structures, partial I (FIG. \ref{fig1}(b)) and partial II (FIG. \ref{fig1}(c)), have been observed experimentally in Laves crystal structures \cite{chisholm2005dislocations,zhang2011undulating,cheng2021atomic}. Dislocation core energies of partial I and partial II were calculated by extrapolating the far-field elastic energy back to the chosen cutoff radius $b$, see FIG. \ref{fig1}(a). The excess strain energy $E_\text{ESE}$ is plotted against the logarithm of $R/b$ in FIG. \ref{fig1}(a). The deviation of the potential energy within the radius $R<2b$ is significant, as shown in FIG. \ref{fig1}(b-c) and FIG. S1(b-c). In contrast, the atoms belonging to the stacking faults show less energy deviation due to the low stacking fault energies of the Laves phases ($E_\text{SF}^\text{CaMg\textsubscript{2}}$=14 mJ/$\text{m}^\text{2}$ and $E_\text{SF}^\text{CaAl\textsubscript{2}}$=52 mJ/$\text{m}^\text{2}$). The dependence of $E_\text{ESE}$ on ln($R/b$) is close to linear for radii $R$ significantly larger than the core region, except when $R$ approaches the semi-fixed boundary, in agreement with elasticity theory. A linear model was fitted to the data of $E_\text{ESE}$ vs. $\ln(R/b)$ from 5$b$ to 100$b$. An elasticity coefficient ($K$/($b^\text{2}/4\pi$)=27.8 GPa) was obtained, which deviates by less than 2\% from the theoretical $K$ value ($K_\text{elast}^\text{30\textdegree}$/($b^\text{2}/4\pi$)=27.3 GPa) calculated using the elastic constants of the interatomic potential:
\begin{equation}
\label{e3}
K_\text{elast}^\theta = \frac{\mu b^2}{4\pi}\left[\text{cos}^2\theta+\frac{\text{sin}^2\theta}{(1-\nu)}\right].
\end{equation}
For the simulated C14 CaMg\textsubscript{2}, the isotropic shear modulus is $\mu=25.6$ GPa and the Poisson's ratio is $\nu=0.217$.
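A quick numerical check of EQU. (3) with these values reproduces the quoted coefficient (a trivial verification, not part of the simulation workflow):
\begin{verbatim}
import math

mu, nu, theta = 25.6, 0.217, math.radians(30.0)   # GPa, -, rad

# K_elast^theta normalized by b^2/(4 pi), cf. EQU. (3)
K_norm = mu * (math.cos(theta)**2 + math.sin(theta)**2 / (1.0 - nu))
print(f"{K_norm:.1f} GPa")   # ~27.4 GPa, matching 27.3 GPa within rounding
\end{verbatim}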
The core energies of partial I and II dislocations at the chosen cutoff radius $r_{c}=b$ are 0.16 and 0.28 eV/$\text{\AA}$, respectively. The core energy of partial II is 75\% higher than that of partial I, which indicates that partial I is energetically more favorable than partial II in the simulated C14 CaMg\textsubscript{2} phase. Similar results were obtained for the simulated C15 CaAl\textsubscript{2} phase, as shown in FIG. S1(a). In addition, LaCA successfully identified the Laves phase matrix, dislocation core and stacking fault structures, see FIG. \ref{fig1}(b-c).
\subsection{\label{Results2}Mechanisms of dislocation motion}
\begin{figure*}[htbp!]
\centering
\includegraphics[width=\textwidth]{Fig_kink.pdf}
\caption{Transition mechanism of 30{\textdegree} synchro-Shockley partial I dislocation motion along the $[ 1 \bar{1} 0 0 ]$ direction via kink propagation in C14 CaMg\textsubscript{2} ($l_x$=6.1 nm, $l_y$=$l_z$=30 nm). (a) Atomic configurations of kink nucleation and propagation. Only atoms belonging to the stacking fault (green) and the dislocation core (white) according to LaCA are shown. (b) Excess energy versus reaction coordinate (RC) calculated using NEB. The energy profile is separated into different stages based on individual activation events. (c) Mechanisms of kink nucleation and propagation. Only atoms in the triple-layer where the dislocation glided are shown. Large and small atoms are Ca and Mg atoms, respectively. Dark and light red atoms indicate Ca atoms in different layers of the triple-layer. Black arrows indicate displacement vectors (RC: 0 configuration as the reference). Orange circles indicate the locations of vacancies. The green arrow indicates the direction of dislocation motion.}
\label{fig2}
\end{figure*}
The mechanisms of synchro-Shockley dislocation motion were investigated by exploring the transition states between two dislocation structures at adjacent Peierls valleys along the MEP using the NEB method. Overall, we investigated the mechanisms of motion for both partial I and II and also the effect of point defects on the motion of partial I, where point defects were found to form as part of the dislocation motion.
\subsubsection{Motion of partial I}
Partial I in C14 CaMg\textsubscript{2} exhibits a transition mechanism of dislocation motion via kink-pair nucleation and kink propagation, as shown in FIG. \ref{fig2}(a). The bow-out of a kink-pair occurs around the reaction coordinate (RC) 0.15 with a height of one partial Burgers vector, which corresponds to the motion of the dislocation from one Peierls valley to the next. The two kinks then propagate in opposite directions and finally merge with each other due to the PBC along the dislocation line direction. The motion of partial I is along the $[ 1 \bar{1} 0 0 ]$ direction. The energy profile of the transition processes of kink-pair nucleation and kink propagation is shown in FIG. \ref{fig2}(b). Transition-state peaks and intermediates with similar shapes and energies appear repeatedly in the energy profile, indicating that similar events are repeatedly activated along the MEP. The transition mechanism of partial I motion can be divided into several stages: nucleation (Nucl.), I, II, III, and merge (Merg.). The detailed mechanism of each stage is illustrated in FIG. \ref{fig2}(c) via the atomic displacements relative to the initial configuration (RC 0). To better visualize the individual events between the transition states, only atoms in the triple-layer where the dislocation glided are shown, since most atomic movements occur within this triple-layer. The Ca atoms in different layers within the triple-layer are colored differently.
\begin{table*}[!htbp]
\centering
\caption[]{\label{tab.1}Activation energies of overall and individual events of the motion of synchro-Shockley dislocations in C14 CaMg\textsubscript{2} ($l_x$=6.1 nm, $l_y$=$l_z$=30 nm). V\textsubscript{X}$\bot$: vacancy at X site at the dislocation; X(Y)$\bot$: anti-site defect X at Y site at the dislocation.}
\centering
\scriptsize
\begin{tabular}{p{0.2\textwidth}p{0.1\textwidth}p{0.1\textwidth}p{0.1\textwidth}p{0.1\textwidth}p{0.1\textwidth}p{0.1\textwidth}}
\hline\hline
\addlinespace[0.1cm]
\multicolumn{1}{l}{} & \multicolumn{6}{c}{Activation energy (eV)}\\
\cmidrule(lr){2-7}
Sample & Overall & Nucleation & I & II & III & Merge \\
\addlinespace[0.1cm]
\hline
\addlinespace[0.1cm]
Partial I & 3.28 & 2.33 & 1.03 & 0.29 & 0.46 & 0.35 \\
Partial I (V\textsubscript{Mg}$\bot$) & 1.53 & - & 1.01 & 0.24 & - & - \\
Partial I (V\textsubscript{Ca}$\bot$) & 1.40 & - & 0.99 & 0.26 & - & - \\
Partial I (Mg(Ca)$\bot$) & 2.59 & 1.75 & 1.03 & 0.21 & - & 0.39 \\
Partial I (Ca(Mg)$\bot$) & 3.15 & 2.29 & 1.08 & - & 0.46 & 0.32 \\
\addlinespace[0.1cm]
\end{tabular}
\begin{tabular}{p{0.2\textwidth}p{0.1\textwidth}p{0.22\textwidth}p{0.15\textwidth}p{0.15\textwidth}}
\hline\hline
\addlinespace[0.1cm]
\multicolumn{1}{l}{} & \multicolumn{4}{c}{Activation energy (eV)}\\
\cmidrule(lr){2-5}
Sample & Overall & Non-sequential shuffling & Shear straining & Rearrangement \\
\addlinespace[0.1cm]
\hline
\addlinespace[0.1cm]
Partial II & 1.79 & 0.31 & 1.10 & 0.11 \\
\hline\hline
\end{tabular}
\end{table*}
In the nucleation stage (from RC 0 to 0.13), a Ca atom (colored dark red) shuffles from top to bottom (the direction along $x$-$[ 1 1 \bar{2} 0 ]$ is defined as 'up' for ease of reference here) into the lattice and creates a vacancy (marked by an orange circle) and an interstitial in the triple-layer. Meanwhile, the atoms downwards from this Ca atom along the dislocation line shuffle together. The nucleation of the kink-pair can thus be treated as the formation of two kinks: the formation of the upper and lower kinks is associated with the formation of the vacancy and interstitial defects, respectively. The energy barrier of the kink-pair nucleation is 2.33 eV, which is the highest activation energy among all individual events along the MEP of the motion of partial I. Therefore, the kink-pair nucleation is the rate-limiting step on the reaction coordinate diagram. The energy barriers of the overall and individual events along the MEP of dislocation motion in this work are summarized in TABLE \ref{tab.1}. In stage I (from RC 0.13 to 0.20), the Mg atom above the Ca vacancy shuffles to the vacancy and creates a Mg vacancy. This process has an activation energy of around 1.03 eV. Following stage I, stage II (from RC 0.20 to 0.25) corresponds to the shuffling of a Ca atom (colored dark red) to the Mg vacancy and the creation of a new Ca vacancy. The activation energy of this event is around 0.29 eV, the lowest among the individual events along the MEP. The combination of stages I and II corresponds to the propagation of the upper kink from bottom to top via the repeated formation and occupation of vacancies. Stage III corresponds to the propagation of the lower kink from top to bottom via an interstitial-like mechanism with an energy barrier of around 0.46 eV: the Mg and Ca atoms (colored dark red) below the lower kink shuffle simultaneously and create an interstitial defect at the dislocation core region. In the subsequent kink propagation, stages I, III and II occur repeatedly until the kinks merge, with an activation energy of 0.35 eV. The final state has a higher energy than the initial one because of the expansion of the stacking fault after the motion of the partial dislocation. The overall energy barrier to the motion of partial I is 3.28 eV. In general, for partial I, the mechanism of dislocation motion is kink-pair formation and propagation, in which the kinks propagate via two mechanisms, namely vacancy hopping and interstitial shuffling, depending on the character of the kinks.
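To illustrate why the nucleation step controls the overall mobility, the barriers in TABLE \ref{tab.1} can be converted into harmonic transition-state-theory rates. The sketch below assumes a typical attempt frequency of $10^{13}$ s$^{-1}$, which is an illustrative assumption and was not computed in this work:
\begin{verbatim}
import math

nu0 = 1.0e13                     # assumed attempt frequency (1/s)
kB, T = 8.617333e-5, 300.0       # Boltzmann constant (eV/K), temperature (K)

barriers = {"Nucl.": 2.33, "I": 1.03, "II": 0.29, "III": 0.46, "Merg.": 0.35}
for name, dE in barriers.items():
    rate = nu0 * math.exp(-dE / (kB * T))
    print(f"{name:6s} dE = {dE:.2f} eV   rate = {rate:.2e} 1/s")
# At 300 K the nucleation rate (~1e-26 1/s) lies more than 20 orders of
# magnitude below that of stage I (~1e-5 1/s): nucleation is rate-limiting.
\end{verbatim}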
Similar mechanisms were identified in partial I dislocations with a longer dislocation length $l_x$=18.4 nm (FIG. S2) and different numbers of intermediate replicas (FIG. S3) as well as in C15 CaAl\textsubscript{2} (FIG. S4). In contrast, the two-dimensional (2D) setup with $l_x$=1.8 nm is too small to resolve the complex reactions of kink-pair nucleation and propagation. Instead, the synchronous movement of the Ca (colored dark red) and Mg atoms with one saddle point along the MEP is obtained (see FIG. S5).
\begin{figure*}[htbp!]
\centering
\includegraphics[width=\textwidth]{Fig_partialII.pdf}
\caption{Motion of 30{\textdegree} synchro-Shockley partial II dislocation along the $[ \bar{1} 1 0 0 ]$ direction via a diffusion-like mechanism in C14 CaMg\textsubscript{2} ($l_x$=6.1 nm, $l_y$=$l_z$=30 nm). (a) Excess energy versus reaction coordinate calculated using NEB. (b) Diffusion-like mechanism consisting of non-sequential shuffling (NSS), shear straining ($\epsilon_\text{xz}$) and short-range rearrangement (rearr.). Only atoms in the triple-layer where the dislocation glided are shown. Large and small atoms are Ca and Mg atoms, respectively. Dark and light red atoms indicate Ca atoms in different layers of the triple-layer. Black arrows indicate displacement vectors (RC: 0 configuration as the reference). The green arrow indicates the direction of dislocation motion.}
\label{fig3}
\end{figure*}
\subsubsection{Motion of partial II}
The partial II dislocation shows a different mechanism of motion compared to partial I. Instead of the sequential atomic shuffling of kink propagation in partial I, partial II in C14 CaMg\textsubscript{2} exhibits non-sequential atomic shuffling during its motion. In addition, the two coupled Shockley partial dislocations of partial II move separately. To investigate the mechanism of motion of the leading partial dislocation (extension of the stacking fault), the motion of partial II is considered along the $y$-$[ \bar{1} 1 0 0 ]$ direction.
The energy profile and detailed mechanism of partial II dislocation motion are shown in FIG. \ref{fig3}. Similar to the energy profile of partial I, the reaction path of the motion of partial II can be divided into individual events and separate stages, see FIG. \ref{fig3}(a). Between RC 0 and 0.17, three non-adjacent Mg atoms shuffle separately to adjacent free volume sites, as shown in FIG. \ref{fig3}(b). Three similar peaks with an average energy barrier of 0.31 eV are obtained for these non-sequential shufflings (NSS). After that, a shear straining ($\epsilon_\text{xz}$) along the $x$-$[ 1 1 \bar{2} 0 ]$ direction takes place with an activation energy of 1.1 eV and is followed by five non-sequential shufflings of Mg atoms (from RC 0.17 to 0.48). The shear straining step has the highest energy barrier among all individual events along the MEP of partial II motion. After RC 0.48, an energy drop of the same magnitude as the increase due to the shear straining occurs because of the release of stored elastic energy. After the non-sequential shuffling of two further Mg atoms, the motion of the first of the two coupled Shockley partial dislocations is completed. The motion of the second Shockley partial is carried out by the shuffling of Ca atoms (colored light red) with an average activation energy of 0.11 eV, which corresponds to an atomic rearrangement (Rearr.) of the dislocation core. The overall energy barrier of partial II dislocation motion is 1.79 eV.
The 2D setup with $l_x$=1.8 nm exhibits a similar mechanism that also consists of three stages: non-sequential shuffling, shear straining, and atomic rearrangement (see FIG. S6). The activation energy of the shear straining in the 2D setup is 0.33 eV, which is proportional to the dislocation length ($l_x$), with the same value per unit length (0.018 eV/\AA) as in the three-dimensional (3D) setup ($l_x$=6.1 nm). This indicates that the shear straining process is not a localized activation event.
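This proportionality can be checked directly from the reported numbers:
\begin{verbatim}
# Shear-straining barriers (eV) and dislocation line lengths (A)
dE_2d, lx_2d = 0.33, 18.0    # 2D setup, l_x = 1.8 nm
dE_3d, lx_3d = 1.10, 61.0    # 3D setup, l_x = 6.1 nm

print(dE_2d / lx_2d, dE_3d / lx_3d)   # both ~0.018 eV/A
\end{verbatim}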
\begin{figure*}[htbp!]
\centering
\includegraphics[width=\textwidth]{Fig_pointdefect.pdf}
\caption{Kink propagation of 30{\textdegree} synchro-Shockley partial I dislocation with point defects in C14 CaMg\textsubscript{2} ($l_x$=6.1 nm, $l_y$=$l_z$=30 nm): (a,b) Mg vacancy, (c,d) Mg\textsubscript{Ca} and (e,f) Ca\textsubscript{Mg} antisites. (a,c,e) Excess energy versus reaction coordinate calculated using NEB. (b,d) Vacancy hopping is the governing mechanism of kink propagation for the dislocations with vacancies and the Mg\textsubscript{Ca} antisite. (f) An interstitial-like mechanism governs the kink propagation of the dislocation with the Ca\textsubscript{Mg} antisite. Only atoms in the triple-layer where the dislocation glided are shown. Large and small atoms are Ca and Mg atoms, respectively. Dark and light red atoms indicate Ca atoms in different layers of the triple-layer. Green atoms indicate antisite defects. Black arrows indicate displacement vectors (RC: 0 configuration as the reference). Orange circles indicate the locations of vacancies. Orange and green arrows mark the directions of kink propagation and dislocation motion, respectively.}
\label{fig4}
\end{figure*}
\subsubsection{Effect of point defects on the motion of partial I}
As the motion of partial I is associated with the formation and motion of point defects along the dislocation line, we further investigated the effect of pre-existing point defects on the partial dislocation motion. Taking the ordered structure of Laves phases into account, we considered vacancies and anti-site defects on both the Mg and Ca sublattices in the dislocation core region within the triple-layer. The formation energies of these point defects in the simulated Laves phases at the dislocation core region and in the bulk are listed in TABLE S II.
The observed mechanisms are presented in FIG. \ref{fig4} and FIG. S7. The energy profile and atomistic mechanism of motion of partial I with one Mg vacancy are shown in FIG. \ref{fig4}(a,b). To construct atomistic samples with a Mg vacancy, the same Mg atom within the triple-layer along the dislocation line was removed in both the initial and final configurations. The energy profile of the motion of partial I with the Mg vacancy can be separated into two individual activation events. In contrast to the mechanism of the pristine partial I dislocation (see FIG. \ref{fig2}), no kink nucleation stage is present, as the kink nucleus was introduced by the pre-existing Mg vacancy. Instead, the shuffling of a Ca atom (colored dark red) to the pre-existing Mg vacancy triggers the propagation of the kink (from RC 0 to 0.06). This event is similar to stage II of the pristine partial I dislocation motion, with a comparable activation energy of around 0.24 eV, and is therefore also labeled stage II here. From RC 0.06 to 0.13, the Mg atom above the Ca atom along the dislocation line shuffles to the Ca vacancy created after the first stage II. The activation energy of this event is around 1.01 eV, which is again close to stage I of the pristine partial I dislocation motion and corresponds to a similar atomic shuffling; this event is therefore labeled stage I. Along the following reaction path, stages II and I occur repeatedly and the kink propagates only from bottom to top, dominated by the vacancy-hopping mechanism. Similarly, in the motion of partial I with a Ca vacancy, stage I (with an average activation energy of 0.99 eV) and stage II (with an average activation energy of 0.26 eV) also occur repeatedly, see FIG. S7. The vacancy-hopping mechanism dominates the propagation of the kink from bottom to top.
The overall energy barriers of the partial I dislocation motion with the Mg and Ca vacancies are 1.53 and 1.40 eV, respectively. These values are much lower than the energy barrier of the pristine counterpart.
NEB calculations were also performed on partial I dislocations with pre-existing anti-site defects at the dislocation core region. For this, a Ca or Mg atom was replaced by a Mg or Ca atom, respectively, in both the initial and final atomistic configurations. FIG. \ref{fig4}(c,d) and FIG. \ref{fig4}(e,f) show the energy profiles and mechanisms of dislocation motion with Mg\textsubscript{Ca} and Ca\textsubscript{Mg} anti-site defects, respectively, where the subscript indicates the original elemental species at the lattice position. In the case of the Mg\textsubscript{Ca} anti-site, a kink-pair nucleation event occurs at the beginning of the reaction path (from RC 0 to 0.11), which corresponds to the shuffling of the Mg\textsubscript{Ca} anti-site atom (colored green) together with a Ca atom (colored dark red) and a Mg atom above the anti-site defect, as shown in FIG. \ref{fig4}(d). A Ca vacancy is generated at the upper kink, and a Mg interstitial defect of the anti-site atom is formed at the lower kink. The energy barrier of the kink-pair nucleation in partial I with the Mg\textsubscript{Ca} anti-site is 1.75 eV, which is lower than the value of the pristine counterpart. Along the following reaction path, the upper kink propagates upwards via the vacancy-hopping mechanism, and the lower kink is pinned at the interstitial defect. Stages I and II of the anti-site-decorated partial I have similar activation energies (1.03 and 0.21 eV, respectively) and similar atomic shuffling mechanisms to the pristine partial I dislocation. The two stages take place repeatedly in the motion of partial I with the Mg\textsubscript{Ca} anti-site. The overall activation energy of the dislocation motion with the Mg\textsubscript{Ca} anti-site is 2.59 eV, which is again lower than for the pristine counterpart.
In contrast to the vacancy-hopping-dominated mechanism of partial I with pre-existing vacancies or a Mg\textsubscript{Ca} anti-site defect, partial I with a Ca\textsubscript{Mg} anti-site shows an interstitial-shuffling mechanism of dislocation motion (see FIG. \ref{fig4}(e,f)). The kink-pair nucleation is associated with the shuffling of Ca atoms (colored dark red) and the Mg atom below the Ca\textsubscript{Mg} anti-site (colored green), and with the generation of a Ca vacancy. The energy barrier of kink-pair nucleation is 2.29 eV, which is close to the value of the pristine counterpart. In the rest of the reaction path, the Ca\textsubscript{Mg} anti-site atom shuffles to the Ca vacancy with an activation energy of 1.08 eV, close to the energy barrier of stage I. The upper kink is then pinned at the vacancy. The presence of five Ca atoms surrounding the vacancy leads to a high packing density and therefore a small free volume; as a result, the vacancy-hopping mechanism is no longer energetically favorable. Instead, the lower kink propagates downwards via the interstitial-like mechanism with an average activation energy of 0.46 eV, similar to stage III of the pristine counterpart.
\subsection{\label{Results3}Stress-dependent activation energy and volume}
\begin{figure*}[htbp!]
\centering
\includegraphics[width=\textwidth]{Fig_stress_energy_rls_partialI.pdf}
\caption{(a) Stress-dependent activation energy of rate-limiting step of motion of 30{\textdegree} synchro-Shockley partial I dislocation in C14 CaMg\textsubscript{2} ($l_x$=6.1 nm, $l_y$=$l_z$=30 nm). (b) The evolution of the activation volume $\Omega$ for the rate-limiting step of partial I dislocation motion as a function of applied shear stress. }
\label{fig5}
\end{figure*}
In SECTION \ref{Results2} we identified kink-pair nucleation and propagation as well as non-sequential atomic shuffling as mechanisms associated with partial dislocation motion. The NEB calculations suggest that the motion of 30\textdegree{} synchro-Shockley dislocations is a multi-step reaction which resolves into multiple transition and intermediate states. As shown in the reaction coordinate diagram FIG. \ref{fig2}(b), the rate-limiting step of the motion of partial I is kink-pair nucleation with an activation energy of 2.33 eV. We calculated the stress-dependent activation energy of the rate-limiting step of dislocation motion by varying the applied shear strain on the initial and final NEB configurations (see FIG. \ref{fig5}(a) and FIG. S8). A power-law fit to the dependence of the activation energy of the rate-limiting step $\Delta E_\text{RLS}$ on the applied shear stress was applied based on the Kocks–Argon–Ashby form \cite{kocks1975thermodynamics}:
\begin{equation}
\label{e4}
\Delta E_\text{RLS} = \Delta E_\text{RLS}^\text{0} [1-(\tau\textsubscript{eff}/\tau\textsubscript{0})^{p}]^{q}
\end{equation}
where $\Delta E_\text{RLS}^\text{0}$ is the activation energy of the rate-limiting step of dislocation motion at zero effective shear stress ($\tau\textsubscript{eff}$), $p$ and $q$ are profiling parameters, and $\tau\textsubscript{0}$ is the applied shear stress at which the energy barrier vanishes without the assistance of thermal activation. Note that the latter is also a profiling parameter, as it cannot be computed directly from our simulations.
The estimated critical stress $\tau\textsubscript{0}$ for the partial I dislocation is 2.32 GPa, with $p$=0.39 and $q$=0.28.
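The fit of EQU. (4) can be reproduced with a standard nonlinear least-squares routine; the sketch below is illustrative (the data files and the initial guesses are assumptions):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def kaa(tau, dE0, tau0, p, q):
    """Kocks-Argon-Ashby form, EQU. (4)."""
    return dE0 * (1.0 - (tau / tau0) ** p) ** q

# Hypothetical NEB results: applied shear stress (GPa) vs. barrier (eV)
tau = np.loadtxt("stress.dat")
dE = np.loadtxt("barrier.dat")

popt, _ = curve_fit(kaa, tau, dE, p0=[2.33, 2.5, 0.5, 0.5])
dE0, tau0, p, q = popt
print(f"dE0 = {dE0:.2f} eV, tau0 = {tau0:.2f} GPa, p = {p:.2f}, q = {q:.2f}")
\end{verbatim}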
The stress-dependent activation volume of the rate-limiting step of the motion of partial I was computed by taking the first derivative of the fitted activation energy versus applied shear stress curve (see FIG. \ref{fig5}(b)), using the definition of the activation volume $\Omega$,
\begin{equation}
\label{eOmega}
\Omega=-\frac{\partial \Delta E(\tau\textsubscript{eff})}{\partial \tau\textsubscript{eff}}
~.
\end{equation}
The rate-limiting step of partial I dislocation motion exhibits a small activation volume ($\Omega < 15 b^3$), which correlates well with the identified mechanism of kink-pair nucleation as a highly localized displacement event at the dislocation core.
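With the fitted parameters, EQU. (5) can be evaluated in closed form, $\Omega(\tau) = \Delta E_\text{RLS}^\text{0}\, p\, q\, (\tau/\tau_0)^p\, [1-(\tau/\tau_0)^p]^{q-1}/\tau$. A sketch of the conversion to units of $b^3$ (the magnitude of the partial Burgers vector is an assumed input here):
\begin{verbatim}
import numpy as np

dE0, tau0, p, q = 2.33, 2.32, 0.39, 0.28    # fitted values from the text
b = 3.6e-10                                  # assumed partial |b| (m)
eV_per_GPa = 1.602176634e-19 / 1.0e9         # 1 eV/GPa in m^3

tau = np.linspace(0.1, 2.0, 50)              # applied shear stress (GPa)
x = (tau / tau0) ** p
omega = dE0 * p * q * x * (1.0 - x) ** (q - 1.0) / tau   # eV/GPa
omega_b3 = omega * eV_per_GPa / b**3
print(omega_b3.min(), omega_b3.max())        # a few b^3, cf. FIG. 5(b)
\end{verbatim}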
As introduced in SECTION \ref{Results2}, the activation energy of the shear straining process of partial II is proportional to the dislocation length and is therefore not a localized, thermally activated event. The stress-dependent NEB calculations on the partial II dislocation show that the energy barrier of the shear straining process decreases rapidly with increasing applied shear strain along the $[ 0 1 \bar{1} 0]$ direction (see FIG. S9). The non-sequential shuffling process becomes the rate-limiting step, with the highest activation energy among all individual events, when the applied shear stress exceeds 471 MPa, see FIG. S9. In addition, the slow decrease of the activation energy of the non-sequential shuffling process with increasing applied shear strain along the $[ 0 1 \bar{1} 0]$ direction implies a small activation volume. Therefore, the motion of the partial II dislocation is dominated by thermally activated short-range shuffling only at sufficiently high applied shear stress levels.
The small activation volumes of kink-pair nucleation and short-range shuffling (a few $b^3$) in the simulated C14 CaMg\textsubscript{2} correlate well with the experimental estimates for C14 CaMg\textsubscript{2} ($\Omega = 13 b^3$, based on the experimental data obtained from micropillar compression tests \cite{freund2021plastic, zehnder2019plastic}; the calculation details are described in the Supplementary information) and for other Laves phases \cite{ohba1989high,saka1993plasticity,kazantzis2007mechanical,kazantzis2008self}.
\section{\label{Discuss}Discussion}
\subsection{\label{Discuss1}Direction-dependence of plasticity}
Two 30\textdegree{} synchro-Shockley dislocations with edge components of their Burgers vectors in opposite directions and with different core structures were identified, as shown in FIG. \ref{fig0}(a) and FIG. \ref{fig1}(b,c). These partial dislocations, named partial I and II here, possess different core energies, indicating that the two dislocations also have different critical nucleation stresses. Depending on which partial dislocation is more geometrically (higher Schmid factor) and/or energetically (lower activation energy) favorable, nucleation-controlled plasticity could exhibit directional dependencies.
Additionally, the two synchro-Shockley dislocations are expected to exhibit different Peierls stresses due to different mechanisms of motion and corresponding activation energies.
Specifically, partial I and II exhibit different self-pinning characters during the dislocation motion.
The motion of a synchro-Shockley dislocation, also known as a zonal dislocation, can be regarded as the cooperative motion of two coupled Shockley partial dislocations on adjacent planes of the triple-layer.
The motion of partial I dislocations is dominated by kink-pair propagation and requires the consecutive motion of the two coupled Shockley partial dislocations. That is, in the vacancy-hopping-dominated dislocation motion, as given in TABLE \ref{tab.1}, the energy barrier at 0 K for the motion of one of the coupled Shockley dislocations (corresponding to stage II, with an average energy barrier $\overline{\Delta E_\text{II}}\approx$0.25 eV) is much lower than that of the other (corresponding to stage I, with an average energy barrier $\overline{\Delta E_\text{I}}\approx$1.03 eV). Thus, thermally activated kink propagation requires the activation of two consecutive thermally activated mechanisms. If one of these is unsuccessful, especially the one with the higher activation energy, the synchro-Shockley dislocation as a whole becomes temporarily immobile, which is referred to as the self-pinning nature of synchro-Shockley dislocations in Laves phases \cite{kazantzis2007mechanical,kazantzis2008self}.
For partial II, the shear straining process has a higher energy barrier than the two thermally activated events (non-sequential shuffling and short-range rearrangement) under zero applied shear strain. Thermal activation could only control the motion of partial II if the applied shear strain/stress reaches a certain level (see FIG. S9). In addition, the motion of the two coupled Shockley partial dislocations of partial II occurs separately, namely, the motion of one of the Shockley partial dislocations (corresponding to the series of non-sequential shuffling processes) takes place first and is then followed by the other Shockley partial dislocation (which moves by the series of short-range rearrangement processes). Unlike the self-pinning character of partial I, the thermally activated motion of partial II does not require consecutive thermal activation of events with different energy barriers.
These observations and hypotheses could be investigated experimentally using micro/nanopillar compression. The nucleation-controlled plasticity may be expected to manifest in micro/nanopillar compression tests on defect-free single crystal Laves phases oriented to maximize resolved shear stresses on either partial I or II dislocations. In contrast, variations in the Peierls barrier and thermal activation of the partial dislocation motion may be accessible by compression of similarly oriented single crystalline pillars but with a pre-existing dislocation density at different rates and temperatures. However, the controlled preparation of such samples is challenging and whether the expected effects will be possible to resolve given the experimental uncertainties and the need to suppress fracture remains to be explored.
\subsection{\label{Discuss2}Point defect assisted dislocation motion}
Point defects, including vacancies and anti-site atoms, are very common in Laves phases and have significant effects on mechanical properties \cite{zhu1999point,stein2021laves}. For the Mg-Al-Ca alloying system, first-principles calculations on stoichiometric C14 CaMg\textsubscript{2} \cite{shao2015native} and C15 CaAl\textsubscript{2} \cite{tian2017first} suggest the predominance of constitutional anti-site and vacancy defects, respectively. Although deviations exist between the point-defect formation energies from the first-principles calculations \cite{shao2015native,tian2017first} and from the semi-empirical potentials used in this work (see TABLE S II), the effects of vacancy and anti-site defects on the mechanisms of dislocation motion are expected not to be limited to specific Laves compounds.
The dislocation core region is energetically more favorable for the formation of point defects than the perfect, unstrained Laves crystal lattice, as shown in TABLE S II. Therefore, constitutional point defects are likely to be trapped at pre-existing dislocations and thus affect the mechanisms of dislocation motion and the corresponding Peierls barriers. Vacancy-assisted kink propagation was previously proposed as a mechanism of motion of synchro-Shockley dislocations in Laves phases \cite{kumar2004polytypic}; however, so far there has been no direct evidence from either experimental observation or atomistic modelling. In this study, the key mechanism of vacancy-assisted kink propagation, namely vacancy hopping, is demonstrated by NEB calculations.
The presence of a vacancy at the dislocation core region dramatically reduces the energy barriers of kink-pair nucleation and propagation. In the simulated C14 CaMg\textsubscript{2}, the presence of V\textsubscript{Mg} and V\textsubscript{Ca} lowers the overall energy barrier of dislocation motion by 53\% and 57\%, respectively.
Anti-site defects have also been proposed to affect the hardness of Laves phases with off-stoichiometric compositions \cite{zhu1999point,voss2008composition,takata2016nanoindentation,luo2020composition}, and progressive softening with increasing deviation from stoichiometry has indeed been observed in previous experiments \cite{voss2008composition,takata2016nanoindentation,luo2020composition}. In this study, a possible origin of this behavior is unveiled by considering the influence of segregated anti-site defects at the dislocation core on the mechanism and activation energy of kink-pair nucleation and propagation. In the simulated C14 CaMg\textsubscript{2} phase, anti-site defects reduce the activation barrier of dislocation motion by 4\% to 21\%, depending on the anti-site type. The effect of Mg\textsubscript{Ca} anti-site defects is more pronounced than that of Ca\textsubscript{Mg} anti-site defects. The reason is that a small Mg atom on a large Ca site generates excess free volume, making it easier for the lattice to accommodate the formation of a vacancy during kink nucleation, thus facilitating the vacancy-hopping mechanism of kink propagation. This finding agrees well with the experimental observation that softening occurs in off-stoichiometric compounds rich in the smaller B atoms \cite{voss2008composition,takata2016nanoindentation}.
In Laves phases, the formation of vacancies shows a strong temperature dependence \cite{zhu1999point,tian2017first}. Thermal fluctuations can not only lower the critical stress of dislocation motion but also speed up atomic diffusion, which results in the formation of thermal vacancies. At finite temperatures below those at which diffusion-based mechanisms of motion become active, a significant number of vacancies will favour vacancy-hopping mechanisms and thus have a prominent effect on the mechanisms of dislocation motion.
The contribution of thermal fluctuations to lowering the nucleation and migration barriers, and the formation and concentration of constitutional and thermal point defects, could have joint effects on the mobility of dislocations and eventually on the mechanical properties of Laves phases. With classical molecular dynamics simulations, it is difficult to disentangle these effects owing to time- and size-scale limitations. Kinetic Monte Carlo (kMC) is a suitable approach to study dislocation dynamics \cite{cai2001kinetic,stukowski2015thermally,shinzato2019atomistically}, including kink-pair nucleation and propagation, self-pinning behavior and defect-kink interactions. The activation events identified and the correlated activation energy barriers determined in this work could serve as input parameters for developing a kMC model of dislocation motion in Laves phases, as sketched below. Such a kMC model would allow investigations of dislocation dynamics comprising various kinds of events with atomistically informed activation rates.
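As a minimal illustration, the following sketch implements one rejection-free (residence-time) kMC step over a catalogue of the thermally activated events quantified here; the attempt frequency, temperature and the restriction to a fixed event catalogue are simplifying assumptions:
\begin{verbatim}
import math, random

kB, T, nu0 = 8.617333e-5, 500.0, 1.0e13  # eV/K, K (assumed), 1/s (assumed)

# Event catalogue: barriers (eV) for the pristine partial I, TABLE I
events = {"stage_I": 1.03, "stage_II": 0.29, "stage_III": 0.46}

def kmc_step(available):
    """One rejection-free kMC step over the currently available events."""
    rates = [(n, nu0 * math.exp(-events[n] / (kB * T))) for n in available]
    total = sum(r for _, r in rates)
    x, acc = random.random() * total, 0.0
    for name, r in rates:                # pick event ~ proportional to rate
        acc += r
        if acc >= x:
            break
    dt = -math.log(1.0 - random.random()) / total   # residence time (s)
    return name, dt

print(kmc_step(list(events)))
\end{verbatim}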
\section{\label{Conclude}Conclusions}
In this study, we investigated the mechanisms of motion of 30\textdegree{} synchro-Shockley dislocations in C14 CaMg\textsubscript{2} Laves phase using atomistic simulations. The MEP of dislocation motion and corresponding activation energies were determined using the NEB method.
Our aim was to reveal the mechanisms of motion of synchro-Shockley partial dislocations and to begin to understand the physical origins of the changing mechanical properties of Laves phases containing point defects as a result of temperature and stoichiometry changes. From this work, we conclude that:
\begin{itemize}
\item Two types of 30\textdegree{} synchro-Shockley dislocations (referred to as partial I and partial II) were identified in the simulated Laves phases C14 CaMg\textsubscript{2} and C15 CaAl\textsubscript{2}; both have been observed experimentally. Partial I exhibits a lower core energy than partial II and is therefore expected to have a lower critical nucleation stress.
\item Partial I and II dislocations propagate via kink-pair propagation and non-sequential shuffling mechanisms, respectively. The motions of partial I and II dislocations are both thermally activated (with small activation volumes of a few $b^{3}$) but exhibit different mechanisms and activation energies, and thus different Peierls stresses.
\item Kink-pair nucleation on partial I dislocations is accomplished by creating a vacancy and an interstitial. The two kinks then propagate in opposite directions along the dislocation line via vacancy-hopping and interstitial-shuffling mechanisms, respectively.
\item The motion of partial II dislocations consists of three stages including non-sequential atomic shuffling, shear straining and atomic rearrangement.
\item The presence of point defects at the dislocation core significantly lowers the energy barrier for the motion of the partial I dislocation. In the cases of vacancies and the B\textsubscript{A} anti-site defect, the activation energy of kink-pair nucleation is dramatically reduced and the kink-pair propagation is dominated by the vacancy-hopping mechanism.
\end{itemize}
\begin{acknowledgments}
The authors acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG) through the projects A02, A05 and C02 of the SFB1394 Structural and Chemical Atomic Complexity – From Defect Phase Diagrams to Material Properties, project ID 409476157. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 852096 FunBlocks). E.B. gratefully acknowledges support from the German Research Foundation (DFG) through projects C3 of the collaborative research centre SFB/TR 103. Simulations were performed with computing resources granted by RWTH Aachen University under project (rwth0591), by the Erlangen Regional Computing Center (RRZE) and by the EXPLOR center of the Université de Lorraine and by the GENCI-TGCC (Grant 2020-A0080911390). Z.X. would like to thank Dr.-Ing. Wei Luo (RWTH Aachen University) for fruitful discussions.
\end{acknowledgments}
\section{INTRODUCTION}
\label{sec0}
The only known way to obtain direct experimental information on the
space-time structure of the particle emitting source created in a
relativistic nuclear collision is through two-particle intensity
(Hanbury-Brown--Twiss (HBT)) interferometry \cite{BGJ90}. The
goal of this method is to extract the {\em space-time} structure of
the source from {\em momentum spectra} which are the only measurable
quantities, making use of the quantum statistical correlations between
pairs of identical particles. This information is crucial for an
assessment of theoretical models which try to extract the energy
density of the source from the measured single particle spectra and
particle multiplicity densities in momentum space. Reliable estimates
of the source volume and the energy density are, on the other hand,
indispensable for an experimental proof that high energy collisions can
successfully generate large volumes of matter with extreme energy
density, where a transition into deconfined quark matter might be
possible.
For many years HBT interferometry of hadron-hadron and nucleus-nucleus
collisions was motivated by the naive expectation, based on the
experience with photon interferometry for stars, that the width of the
two-particle correlation function in the relative momentum directly
measures the geometric size of the source. This expectation is {\em
wrong}. Unlike stars in the universe, the sources created in hadronic
or heavy-ion collisions may feature inhomogeneous temperature profiles
and strong collective dynamical expansion. We now know
\cite{MS88,MSH92,AS95,CL95,CSH95} that for such sources the HBT
radius parameters generally don't measure the full source size, but
only so-called ``space-time regions of homogeneity'' \cite{MS88,AS95}
inside which the momentum distribution varies sufficiently little so
that the particles can actually show the quantum statistical
correlations. The size of these homogeneity regions varies with the
momentum of the emitted particles, causing a dependence of the HBT
parameters on the pair momentum
\cite{MS88,MSH92,AS95,CL95,CSH95,P84,B89,PCZ90,Marb,CNH95,WSH96,HTWW96,WHTW96}.
The detailed momentum dependence is, however, model-dependent, and in
general it is not simple \cite{WSH96}. The extraction of the strength
of the collective flow from HBT data is further complicated by
significant resonance decay contributions which also induce a
momentum dependence of the HBT radius parameters and of the so-called
``incoherence parameter'' \cite{Marb,CLZ94}.
The finite lifetime of the sources created in nuclear collisions leads
to another complication: the HBT radius parameters generally mix the
spatial and temporal aspects of the source extension in a non-trivial
and reference frame dependent way, in particular if at the same time
the source undergoes collective expansion. The origin and pattern of
this mixing was clarified in a recent series of publications from the
Regensburg group \cite{CSH95,CNH95,WSH96,HTWW96,WHTW96}. Two major new
results have resulted from this work: (i) the discovery \cite{CSH95}
of a new cross term in the two-particle correlation function mixing
the outward and longitudinal components of the relative momentum
between the two particles, and (ii) the new YKP
(Yano-Koonin-Podgoretski\u\i) parametrisation of the correlation
function \cite{CNH95,HTWW96,WHTW96}. The latter permits the
experimental determination of the longitudinal velocity of the source
volume element where most of the particle pairs originate (as a
function of the pair momentum) and achieves a nearly perfect
factorization of the longitudinal, transverse and temporal homogeneity
regions of the source (again as functions of the pair momentum) in the
source rest frame. Furthermore it provides for a clean separation of the
longitudinal and transverse dynamics of the source.
In this talk I review these new theoretical developments and
exemplify them for a class of simple model emission functions for
thermalized sources with collective transverse and longitudinal
expansion and finite space-time geometry.
\section{EMISSION FUNCTION AND PARTICLE SPECTRA}
\label{sec1}
The single and two particle spectra are defined as
\begin{eqnarray}
\label{spectra1}
P_1({\bf p}) &=& E_p {dN \over d^3p}
= E_p \langle \hat a^+_p \hat a_p \rangle\, ,
\\
\label{spectra2}
P_2({\bf p_1},{\bf p_2}) &=& E_1 E_2 {dN \over d^3p_1 d^3p_2}
= E_1 E_2 \langle
\hat a^+_{p_1} \hat a^+_{p_2}
\hat a_{p_2} \hat a_{p_1}
\rangle
\end{eqnarray}
in terms of creation and destruction operators for on-shell particles
with momenta ${\bf p}_i$, where $\langle \dots \rangle$ denotes an
average over the source ensemble. $P_1$ and $P_2$ are normalized to
the average number of particles $\langle N \rangle$ and pairs
$\langle N(N-1)\rangle$ per event, respectively. The two-particle
correlation function is defined as
\begin{equation}
\label{correl}
C({\bf p_1},{\bf p_2}) =
{\langle N\rangle^2 \over \langle N(N-1) \rangle}\,
{P_2({\bf p_1},{\bf p_2}) \over P_1({\bf p_1}) P_1({\bf p_2})}\, .
\end{equation}
For uncorrelated emission and in the absence of final state
interactions \cite{BB96} one can prove \cite{GKW79,CH94} a generalized
Wick theorem for the factorization of the 2-particle spectrum
(\ref{spectra2}) and obtains
\begin{equation}
\label{correl1}
C({\bf q},{\bf K}) = 1 \pm
{\left\vert \langle \hat a^+_{p_1} \hat a_{p_2} \rangle
\right\vert^2
\over \langle \hat a^+_{p_1} \hat a_{p_1} \rangle
\langle \hat a^+_{p_2} \hat a_{p_2} \rangle}
\end{equation}
where ${\bf q} = {\bf p_1} - {\bf p_2}$ and
${\bf K} = ({\bf p_1} + {\bf p_2})/2$ denote the relative and
total momentum of the particle pair, and the positive (negative) sign
applies for bosons (fermions). Note that the second term is positive
definite.
These expressions can be further simplified and turned into a
practical starting point for computations by introducing the emission
function $S(x,K)$. It is defined in terms of the classical source
amplitude $J(x)$ for creating a free pion state \cite{GKW79} via the
Wigner transform of its associated density matrix
\begin{equation}
\label{wigner}
S(x,K) = \int {d^4y \over 2(2\pi)^3} e^{-iK{\cdot}y}\,
\left\langle J^*\left(x+{\textstyle{y\over 2}}\right)
J\left(x-{\textstyle{y\over 2}}\right)
\right\rangle
\end{equation}
and is the quantum mechanical analogue of the classical phase-space
density which gives the probability for creating a free particle with
four-momentum $K$ at space-time point $x$. In terms of this emission
function the single particle spectrum is given by
\begin{equation}
\label{single}
E_K {dN\over d^3K} = \int d^4x\, S(x,K)
\end{equation}
where the r.h.s. is to be evaluated on-shell, i.e. at $K^0 = E_K =
\sqrt{m^2 + {\bf K}^2}$. The two-particle correlation function
is obtained from \cite{S73,GKW79,P84,CH94}
\begin{equation}
\label{double}
C({\bf q},{\bf K}) \approx 1 \pm
{\left\vert \int d^4x\, S(x,K)\, e^{iq{\cdot}x}
\right\vert^2
\over
\left\vert \int d^4x\, S(x,K) \right\vert^2}
= 1 \pm \left\vert\left\langle e^{iq{\cdot}x} \right\rangle
\right\vert^2
\end{equation}
where the r.h.s. must be evaluated at $q=p_1-p_2$, $K=(p_1+p_2)/2$
with $p_i$ on-shell. (This implies $K{\cdot}q=0$.) The approximation
consists of replacing the single particle spectra at $p_1$ and $p_2$
in the denominator by the spectrum at $K=(p_1+p_2)/2$; it is exact for
exponential momentum spectra and a good approximation in practice
\cite{CSH95}. The second equality in (\ref{double}) defines a
($K$-dependent) average over the emission function of which we will
make abundant use below. A useful feature is that in (\ref{double})
the emission function can, to very good approximation
\cite{CSH95,PCZ90}, be evaluated at $K^0=E_K$, i.e. on the classical
energy shell, since the typical source radii are larger than the
Compton wavelengths of the observed hadrons. This warrants the
replacement of the Wigner density $S(x,K)$ by a classical phase-space
distribution in practical calculations.
Due to the on-shell constraint $q{\cdot}K=0$ the four components of
$q$ are not independent, but related by
\begin{equation}
\label{massshell}
q^0 = \bbox{\beta}\cdot {\bf q} \qquad {\rm with} \qquad
\bbox{\beta} = {{\bf K}\over K^0} \approx {{\bf K}\over E_K}\, .
\end{equation}
The Fourier transform in (\ref{double}) is therefore not invertible,
and the reconstruction of the space-time structure of the source from
HBT measurements will thus always require additional model
assumptions. Furthermore, inserting (\ref{massshell}) into (\ref{double}),
$iq{\cdot}x = i {\bf q}\cdot ({\bf x} - \bbox{\beta}t)$, we see that the
correlator $C({\bf q},{\bf K})$ mixes the spatial and temporal information
in a non-trivial way which depends on the pair velocity $\bbox{\beta}$.
Only for time-independent sources things become simple: the correlator
then measures the Fourier transform of the spatial source distribution,
however only in the directions perpendicular to $\bbox{\beta}$ since
the time integration leads to a $\delta$-function $\delta(
\bbox{\beta}{\cdot}{\bf q})$.
From Eq.~(\ref{double}) it is clear that, unless the emission function
factorizes in $x$ and $K$, $S(x,K) = F(x)G(K)$ (in which case $G(K)$
cancels between numerator and denominator), the correlator is a function
of {\em both} ${\bf q}$ and ${\bf K}$. If one parametrises it by a
Gaussian in $q$ (see Sec.~\ref{sec3}) this results in ${\bf K}$-dependent
width parameters (``HBT radii''). In thermal sources $x-K$ correlations
which spoil such a factorization can be induced by temperature gradients
and/or collective expansion (with a 4-velocity $u^\mu(x)$): in both cases
the momentum spectrum $\sim \exp[-p{\cdot}u(x)/T(x)]$ of the
emitted particles depends on the emission point.
\section{MODEL-INDEPENDENT EXPRESSIONS FOR THE HBT RADII}
\label{sec2}
One of the crucial questions is, of course, to what extent a measured
${\bf K}$-dependence of the HBT radii allows for a quantitative
reconstruction of the collective source dynamics. To answer it we must
first learn more about the physical meaning of these ``radii''.
To this end it is useful to write the emission function in the following
form \cite{CNH95,WSH96,HTWW96}:
\begin{equation}
\label{7}
S(x,K) = N({\bf K})\, S(\bar x({\bf K}),K)\,
\exp\left[ - {1\over 2} \tilde x^\mu({\bf K})\,
B_{\mu\nu}({\bf K})\,\tilde x^\nu({\bf K})\right]
+ \delta S(x,K) \, ,
\end{equation}
where (with expectation values defined as in (\ref{double}))
\begin{equation}
\label{8}
\bar x_\mu({\bf K}) = \langle x_\mu \rangle , \ \
\tilde x_\mu ({\bf K}) = x_\mu - \bar x_\mu({\bf K}) , \ \
(B^{-1})_{\mu\nu}({\bf K})
= \langle \tilde x_\mu \tilde x_\nu \rangle .
\end{equation}
This construction ensures that the term $\delta S$ has vanishing zeroth,
first and second order moments and thus contains only higher order
information on sharp edges, wiggles, secondary peaks, etc. in the source.
It was shown numerically \cite{WSH96} to have negligible influence on
the half width of the correlation function and to contribute only weak,
essentially unmeasurable structures in $C({\bf K},{\bf q})$ at large
values of ${\bf q}$. Neglecting $\delta S$, the two-particle correlation
function (\ref{double}) can be calculated analytically:
\begin{equation}
\label{11}
C({\bf K},{\bf q}) = 1 + \exp\left[ - q^\mu q^\nu
\langle \tilde x_\mu \tilde x_\nu \rangle ({\bf K})
\right] \, .
\end{equation}
Please note that the point $\bar x^\mu({\bf K})$ of maximum emissivity
at momentum ${\bf K}$ is unmeasurable \cite{HTWW96,WH96}. Only the
${\bf K}$-dependent effective widths (``lengths of homogeneity'')
$\langle \tilde x_\mu \tilde x_\nu \rangle ({\bf K})$ of the source
of particles with momentum ${\bf K}$ are accessible by HBT interferometry.
Actually, due to the on-shell constraint (\ref{massshell}), only 6 linear
combinations of the 10 variances $\langle \tilde{x}_\mu \tilde{x}_\nu
\rangle({\bf K})$ are measurable \cite{CNH95}; in the case of
azimuthal symmetry of the source around the beam axis, this number
reduces to 4 out of 7. Which linear combinations occur in practice
depends on the way the correlation function is parametrised. The general
form (\ref{11}) together with (\ref{massshell}) still provide some
freedom as to which components of $q$ to keep as independent variables
(see Sec.~\ref{sec3}). But whichever choice one makes, all the
${\bf K}$-dependent parameters (``HBT radii'') in the resulting
Gaussian function of $q$ can be easily calculated from the variances
$\langle \tilde x^\mu \tilde x^\nu \rangle$, i.e. by simple quadrature
formulae, for arbitrary emission functions $S(x,K)$. The relation
between the HBT parameters and the variances is {\em model-independent},
i.e. it does not depend on the form of the emission function $S(x,K)$.
\section{STANDARD AND YKP FITS TO THE CORRELATION FUNCTION}
\label{sec3}
For the following discussion we employ the conventional \cite{P84,B89}
Cartesian coordinate system with $z$ along the beam axis and ${\bf K}$
lying in the $x$-$z$-plane. The $z$-component of a 3-vector is labelled
by $l$ (for {\em longitudinal}), the $x$-component by $o$ (for
{\em outward}) and the $y$-component by $s$ (for {\em sideward}).
Then $\beta_s=0$ such that $q^0 = \beta_\perp q_o + \beta_l q_l$,
with $\beta_\perp = \vert {\bf K}_\perp \vert / K^0$ being
(approximately) the velocity of the particle pair transverse to the
beam direction while $\beta_l$ is its longitudinal component.
The standard Cartesian parametrization \cite{CSH95} of the
correlation function is obtained by using this condition
to eliminate $q^0$ from Eq.~(\ref{11}). One obtains
\begin{equation}
C({\bf K},{\bf q})
= 1 + \exp\left[ -\sum_{i,j=s,o,l} R_{ij}^2({\bf K})\, q_i\, q_j
\right]
\label{13}
\end{equation}
where the 6 HBT ``radius parameters'' $R_{ij}$ are given as \cite{CSH95,HB95}
\begin{equation}
R_{ij}^2({\bf K}) =
\langle (\tilde{x}_i-{\beta}_i\tilde{t})
(\tilde{x}_j-{\beta}_j\tilde{t})\rangle \, ,
\quad i,j = s,o,l \, ,
\label{14}
\end{equation}
through the space-time variances of the source. For an azimuthally symmetric
sample of collision events, $C({\bf q}, {\bf K})$ is symmetric with respect
to $q_s \to -q_s$
\cite{CNH95}. Then $R_{os}^2 = R_{sl}^2 = 0$ and
\begin{eqnarray}
C({\bf K},{\bf q})
&=& 1 + \exp\left[ - R_s^2({\bf K}) q_s^2 - R_o^2({\bf K}) q_o^2
- R_l^2({\bf K}) q_l^2 - 2 R_{ol}^2({\bf K}) q_o q_l
\right] \, ,
\quad \text{with}
\label{15}\\
R_s^2({\bf K}) &=& \langle \tilde{y}^2 \rangle \, ,
\label{16a}\\
R_o^2({\bf K}) &=&
\langle (\tilde{x} - \beta_\perp \tilde t)^2 \rangle \, ,
\label{16b}\\
R_l^2({\bf K}) &=&
\langle (\tilde{z} - \beta_l \tilde t)^2 \rangle \, ,
\label{16c}\\
R_{ol}^2({\bf K}) &=&
\langle (\tilde{x} - \beta_\perp \tilde t)
(\tilde{z} - \beta_l \tilde t) \rangle \, .
\label{16d}
\end{eqnarray}
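On a sampled source these variances reduce to simple averages. A minimal
Python sketch of this quadrature (the point cloud \texttt{pts}, with
columns $(t,x,y,z)$, and the function name are purely illustrative
stand-ins for any model or event-generator output):
\begin{verbatim}
import numpy as np

def cartesian_radii(pts, beta_perp, beta_l):
    # Eqs. (16a-d): variances of the centered coordinates, with the
    # outward and longitudinal directions shifted by the pair velocity.
    t, x, y, z = (pts[:, i] - pts[:, i].mean() for i in range(4))
    xo = x - beta_perp*t
    zl = z - beta_l*t
    return dict(Rs2=np.mean(y*y), Ro2=np.mean(xo*xo),
                Rl2=np.mean(zl*zl), Rol2=np.mean(xo*zl))
\end{verbatim}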
The cross-term (\ref{16d}) was only recently discovered \cite{CSH95}.
Clearly these HBT radius parameters mix spatial and temporal
information on the source in a non-trivial way. Their interpretation
in various reference systems, in particular the meaning of the
generally non-vanishing cross-term $R_{ol}^2$, was extensively
discussed in Refs.~\cite{CSH95,CNH95,WSH96}, by analysing
these expressions analytically for a large class of (azimuthally
symmetric) model source functions and comparing with the numerically
calculated correlation function (\ref{double}). An important observation
resulting from these studies is that the difference
\begin{equation}
\label{17}
R_{\rm diff}^2 \equiv R_o^2 - R_s^2 =
\beta_\perp^2 \langle \tilde t^2 \rangle - 2 \beta_\perp \langle
\tilde{x} \tilde t\rangle + (\langle \tilde x^2 \rangle -
\langle \tilde y^2 \rangle)
\end{equation}
is generally dominated by the first term on the r.h.s. \cite{WHTW96} and thus
provides access to the lifetime $\Delta t = \sqrt{\langle t^2 \rangle
- \langle t \rangle^2}$ of the source \cite{CP91} (more exactly: the
duration of the particle emission process). In heavy-ion
collisions, due to rapid expansion of the source, one would generally
not expect $\langle \tilde t^2 \rangle$ to be much larger than
either $\langle \tilde x^2 \rangle$ or $\langle \tilde y^2 \rangle$
(see however \cite{RG96} for possible exceptions near a phase transition to
QGP). In the standard fit one is not sensitive to small values of $\Delta t$
since Eq.~(\ref{17}) then involves a small difference of two large
numbers, each associated with standard experimental errors. The
factor $\beta_\perp^2 \leq 1$ in front of $\langle \tilde t^2 \rangle$
further complicates its extraction, in particular at low $K_\perp$
where $\Delta t({\bf K})$ is usually largest (see below).
This problem is avoided in the Yano-Koonin-Podgoretski\u\i\ parametrisation
\cite{YK78,P83,CNH95,HTWW96,WHTW96} of the correlation function for
azimuthally symmetric systems. It is based on an elimination
of $q_o$ and $q_s$ in terms of $q_\perp^2 = q_o^2 + q_s^2$,
$q^0$, and $q_l$ in (\ref{11}):
\begin{equation}
\label{18}
C({\bf q},{\bf K}) =
1 + \exp\left[ - R_\perp^2\, q_{\perp}^2
- R_\parallel^2 \left( q_l^2 - (q^0)^2 \right)
- \left( R_0^2 + R_\parallel^2 \right)
\left(q{\cdot}U\right)^2
\right] ,
\end{equation}
with four ${\bf K}$-dependent parameters $R_\perp$, $R_\parallel$,
$R_0$, and $U^\mu$ where the latter is a 4-velocity with only a
longitudinal spatial component:
\begin{equation}
\label{19}
U({\bf K}) = \gamma({\bf K}) \left(1, 0, 0, v({\bf K}) \right) ,
\ \ \text{with} \ \
\gamma = (1 - v^2)^{-1/2}\, .
\end{equation}
This parametrisation has the advantage that the fitted YKP parameters
$R_\perp^2({\bf K})$, $R_\parallel^2({\bf K})$, and $R_0^2({\bf K})$
do not depend on the longitudinal velocity of the observer system.
They (as well as $v({\bf K})$) can be calculated from the variances
$\langle \tilde x^\mu \tilde x^\nu \rangle$ in any reference frame
(see \cite{HTWW96} for explicit expressions), but their physical
interpretation is easiest in terms of coordinates measured in the
frame where $v({\bf K})$ vanishes. There they are given by
\cite{CNH95}
\begin{eqnarray}
R_\perp^2({\bf K}) &=& R_s^2({\bf K}) = \langle \tilde{y}^2 \rangle \, ,
\label{20a} \\
R_\parallel^2({\bf K}) &=&
\left\langle \left( \tilde z - \beta_l \tilde x/\beta_\perp \right)^2
\right\rangle
- \beta_l^2 \langle \tilde y^2 \rangle / \beta_\perp^2
\approx \langle \tilde z^2 \rangle \, ,
\label{20b} \\
R_0^2({\bf K}) &=&
\left\langle \left( \tilde t - \tilde x/\beta_\perp \right)^2
\right\rangle
- \langle \tilde y^2 \rangle/\beta_\perp^2
\approx \langle \tilde t^2 \rangle \, ,
\label{20c}
\end{eqnarray}
where in the last two expressions the approximation consists of
dropping generically small \cite{CNH95} terms (for a quantitative
discussion see \cite{WHTW96}). The first expression (\ref{20a})
remains true in any longitudinally boosted frame.
Eq.~(\ref{20c}) shows that the YKP parameter $R_0({\bf K})$
measures directly (up to the neglected small terms) the time
duration $\Delta t({\bf K})$ during which particles of momentum
${\bf K}$ are emitted, in the frame where the YKP velocity $v({\bf K})=0$.
The advantage compared to the standard Cartesian fit is
that here it is fitted directly, and no problems of differences of large
numbers occur in its extraction.
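In the frame where $v({\bf K})=0$ the exact forms
(\ref{20a}-\ref{20c}) are again simple quadratures; a sketch analogous
to the one for the Cartesian radii (with a hypothetical point cloud
\texttt{pts} with columns $(t,x,y,z)$ as before):
\begin{verbatim}
import numpy as np

def ykp_radii(pts, beta_perp, beta_l):
    # Eqs. (20a-c), exact forms, coordinates measured in the frame v=0.
    t, x, y, z = (pts[:, i] - pts[:, i].mean() for i in range(4))
    Rperp2 = np.mean(y*y)
    Rpar2 = (np.mean((z - beta_l*x/beta_perp)**2)
             - (beta_l/beta_perp)**2 * Rperp2)
    R02 = np.mean((t - x/beta_perp)**2) - Rperp2/beta_perp**2
    return Rperp2, Rpar2, R02
\end{verbatim}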
Since the standard Cartesian and YKP parametrizations (\ref{15}) and
(\ref{18}) of the correlator differ only by the choice of independent
components of $q$, the two sets of HBT parameters must be related. One finds
\cite{HTWW96}
\begin{eqnarray}
\label{24z}
R_s^2 &=& R_\perp^2\, ,
\\
\label{24a}
R_{\rm diff}^2 &=& R_o^2 - R_s^2 = \beta_\perp^2 \gamma^2
\left( R_0^2 + v^2 R_\parallel^2 \right) \, ,
\\
\label{24b}
R_l^2 &=& \left( 1 - \beta_l^2 \right) R_\parallel^2
+ \gamma^2 \left( \beta_l-v \right)^2
\left( R_0^2 + R_\parallel^2 \right)\, ,
\\
\label{24c}
R_{ol}^2 &=& \beta_\perp \left( -\beta_l R_\parallel^2
+ \gamma^2 \left( \beta_l-v \right)
\left( R_0^2 + R_\parallel^2 \right) \right)\, .
\end{eqnarray}
These relations provide a powerful consistency check on the experimental
fitting procedure of the correlation function, of similar value as the
relation \cite{CNH95,WSH96} $\lim_{K_\perp \to 0} (R_o({\bf K}) -
R_s({\bf K})) = 0$ which results from azimuthal symmetry.
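These relations are elementary to evaluate; a small Python helper for
such consistency checks (all input values hypothetical, radii squared
in fm$^2$, velocities in units of $c$) might read:
\begin{verbatim}
def ykp_to_cartesian(Rperp2, Rpar2, R02, v, beta_perp, beta_l):
    # Eqs. (24z)-(24c); returns the standard Cartesian radii squared.
    g2 = 1.0/(1.0 - v*v)
    Rs2 = Rperp2
    Ro2 = Rs2 + beta_perp**2 * g2 * (R02 + v*v*Rpar2)
    Rl2 = (1 - beta_l**2)*Rpar2 + g2*(beta_l - v)**2*(R02 + Rpar2)
    Rol2 = beta_perp*(-beta_l*Rpar2 + g2*(beta_l - v)*(R02 + Rpar2))
    return Rs2, Ro2, Rl2, Rol2

print(ykp_to_cartesian(9.0, 4.0, 6.25, v=0.4, beta_perp=0.6, beta_l=0.5))
\end{verbatim}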
\section{A SIMPLE SOURCE MODEL}
\label{sec4}
For a quantitative discussion of the physical behaviour of the
HBT radius parameters, in particular of their ${\bf K}$-dependence,
we use a simple model for the emission function of a finite expanding
thermalized source \cite{CNH95}:
\begin{equation}
\label{3.15}
S(x,K) = {M_\perp \cosh(\eta-Y) \over
(2\pi)^3 \sqrt{2\pi(\Delta \tau)^2}}
\exp \left[- {K \cdot u(x) \over T}
- {(\tau-\tau_0)^2 \over 2(\Delta \tau)^2}
- {r^2 \over 2 R^2}
- {(\eta- \eta_0)^2 \over 2 (\Delta \eta)^2}
\right] .
\end{equation}
Here $r = \sqrt{x^2+y^2}$, the spacetime rapidity $\eta = {1 \over 2}
\ln[(t+z)/(t-z)]$ and the longitudinal proper time $\tau= \sqrt{t^2-
z^2}$ parametrize the spacetime coordinates $x^\mu$, with measure
$d^4x = \tau\, d\tau\, d\eta\, r\, dr\, d\phi$. $Y = {1\over 2}
\ln[(1+\beta_l)/(1-\beta_l)]$ and $M_\perp = \sqrt{m^2 + K_\perp^2}$
parametrise the longitudinal and transverse components of the pair
momentum ${\bf K}$.
\vspace*{9cm}
\special{psfile=qm96f1.ps hoffset=20 voffset=-185 hscale=65 vscale=65 angle=0}
\begin{center}
\begin{minipage}[t]{13cm}
\noindent \bf Fig.1. \rm
The standard Cartesian parameters $R_s$ (a), $R_o$ (b), $R_l$ (c),
and $R_{ol}^2$ (d) in the CMS for pion pairs with c.m. rapidity $Y=1.5$,
as functions of $M_\perp$ for 3 different values for the transverse
flow $\eta_f$. The thick lines are exact numerical results from
Eqs.~(\protect\ref{16a}-\protect\ref{16d}), the thin lines are obtained
from the analytical approximations given in Ref.~\protect\cite{CL95}.
(Figure taken from Ref.~\protect\cite{TWH96}.)
\end{minipage}
\end{center}
\noindent $T$ is the freeze-out temperature, $R$ is the
transverse geometric (Gaussian) radius of the source, $\tau_0$ its
average freeze-out proper time, $\Delta \tau$ the mean proper time
duration of particle emission, and $\Delta \eta$ parametrises
the finite longitudinal extension of the source. The
expansion flow velocity $u^\mu(x)$ is parametrised as
\begin{equation}
\label{26}
u^\mu(x){=}\left( \cosh \eta_l \cosh \eta_t(r),
\sinh \eta_t(r)\, {\bf e}_r,
\sinh \eta_l \cosh \eta_t(r) \right) ,
\ \eta_l{=}\eta ,
\ \eta_t(r){=}\eta_f (r/R) ,
\end{equation}
with a boost-invariant longitudinal flow rapidity and a linear
transverse flow rapidity profile. $\eta_f$ scales the strength of
the transverse flow. The scalar product in the exponent of the
Boltzmann factor can then be written as
\begin{equation}
\label{2.5}
K\cdot u(x) = M_\perp \cosh(\eta - Y) \cosh\eta_t(r)
- K_\perp {x\over r} \sinh\eta_t(r) \, .
\end{equation}
Please note that for non-zero transverse momentum $K_\perp$, a finite
transverse flow breaks the azimuthal symmetry of the emission function
via the second term in (\ref{2.5}). For $\eta_f=0$ the source has no
explicit $K_\perp$-dependence, and $M_\perp$ is the only relevant scale.
As will be discussed in Sec.~\ref{sec5c} this gives rise to perfect
$M_\perp$-scaling of the YKP radius parameters in the absence of
transverse flow, which is again broken for non-zero transverse flow
\cite{HTWW96a}.
For the numerical calculations below we have selected one fixed set of
source parameters: $R=3$ fm, $\tau_0 = 3$ fm/$c$,
$\Delta \tau = 1$ fm/$c$, $\Delta \eta = 1.2$, $T=140$ MeV.
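For completeness, the emission function (\ref{3.15}) with the flow
profile (\ref{26}) and the scalar product (\ref{2.5}) is easily coded.
A sketch with the above parameter set, setting $\eta_0=0$ and dropping
constant prefactors (units: fm, fm/$c$ and MeV; the function name is
arbitrary):
\begin{verbatim}
import numpy as np

T, R, tau0, dtau, deta, etaf = 140.0, 3.0, 3.0, 1.0, 1.2, 0.6

def S(tau, eta, r, phi, Mperp, Kperp, Y):
    # Eq. (3.15) up to constant factors; K.u(x) from Eq. (2.5), x/r = cos(phi)
    eta_t = etaf*r/R
    Ku = (Mperp*np.cosh(eta - Y)*np.cosh(eta_t)
          - Kperp*np.cos(phi)*np.sinh(eta_t))
    return Mperp*np.cosh(eta - Y)*np.exp(
        -Ku/T - (tau - tau0)**2/(2*dtau**2)
        - r**2/(2*R**2) - eta**2/(2*deta**2))
\end{verbatim}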
\vspace*{9cm}
\special{psfile=qm96f2.ps hoffset=20 voffset=-185 hscale=65 vscale=65 angle=0}
\begin{center}
\begin{minipage}[t]{13cm}
\noindent \bf Fig.2. \rm
Same as Fig.1, but now evaluated in the LCMS. Please note the change of sign
and magnitude of the cross-term.
(Figure taken from Ref.~\protect\cite{TWH96}.)
\end{minipage}
\end{center}
\section{MOMENTUM DEPENDENCE OF HBT PARAMETERS}
\label{sec5}
\subsection{Standard Cartesian fit}
\label{sec5a}
In Fig.~1 I show the HBT radius parameters from the standard Cartesian fit
(\ref{15}) for pion pairs with c.m. rapidity $Y=1.5$ where the fit of the
correlator is done in the CMS \cite{TWH96}. The different thick curves
correspond to different strengths $\eta_f$ of the transverse flow. Without
transverse flow $R_s$ is $M_\perp$-independent because the source
(\ref{3.15}) has no transverse temperature
gradients. As transverse flow increases, $R_s$ develops an increasing
dependence on $M_\perp$. It can be approximated by an inverse power law,
with the power increasing monotonically with $\eta_f$ \cite{WSH96,WHTW96}.
$R_l$ features a very strong $M_\perp$-dependence even without transverse
flow, due to the strong longitudinal expansion of the source. It can also
be described by an inverse power law, with a larger power $\simeq 0.55$,
in rough agreement with the approximate $\sqrt{T/M_\perp}$-scaling law
suggested in \cite{MS88} (see, however, \cite{WSH96,HB95} for a more
quantitative discussion). The increase of $R_o$ at small $M_\perp$ is due
to the contribution (\ref{17}) from the effective lifetime.
As seen in Fig.~4 below, in the YK frame (source rest frame) the latter
is of order 2.5 fm/$c$ at small $M_\perp$; Fig.~1b shows that its effect
on $R_o$ compared to $R_s$ in the CMS is much smaller (and thus more
difficult to measure). Fig.~1d shows that the cross-term is small in the CMS
but non-zero. It vanishes at $K_\perp=0$ by symmetry and also becomes
small again at large $K_\perp$.
The thin lines in Fig.~1 show for
comparison the HBT radii calculated from the approximate analytical
expressions given in Ref.~\cite{CL95}, which were
derived by evaluating Eqs.~(\ref{16a}-\ref{16d}) by saddle point
integration. It is clear that this method fails here (see Ref.~\cite{WSH96}
for a quantitative discussion of this approximation), and that the
analytical expressions should not be used for a quantitative analysis of
HBT data.
\vspace*{6cm}
\special{psfile=qm96f3.ps hoffset=-25 voffset=-365 hscale=80 vscale=80 angle=0}
\begin{center}
\begin{minipage}[t]{13cm}
\noindent \bf Fig.3. \rm
(a) The Yano-Koonin rapidity for pion pairs, as a function of the pair
c.m. rapidity $Y$, for various values of $K_\perp$ and two values for
the transverse flow $\eta_f$. (b) The same, but plotted against $K_\perp$
for various values of $Y$ and $\eta_f$.
(Figure taken from Ref.~\protect\cite{HTWW96}.)
\end{minipage}
\end{center}
Fig.~2 shows the same situation as Fig.~1, but now all HBT radii are
evaluated in the LCMS (longitudinally comoving system \cite{CP91}) which
moves with the pair rapidity $Y=1.5$ relative to the CMS. A comparison with
Fig.~1 shows the strong reference frame dependence of the standard HBT
radii. In particular, the cross-term changes sign and is now much larger.
The analytical approximations from Ref.~\cite{CL95} work much better in
the LCMS, but for $R_o$ and $R_{ol}^2$ they are still not
accurate enough (in particular in view of the delicate nature of the
lifetime effects on $R_o$).
\subsection{The Yano-Koonin velocity}
\label{sec5b}
Fig.~3 shows (for pion pairs) the dependence of the YK velocity on the
pair momentum ${\bf K}$. In Fig.~3a we show the YK rapidity $Y_{_{\rm YK}} =
\frac 12 \ln[(1+v)/(1-v)]$ as a function of the pair rapidity $Y$
(both relative to the CMS) for different values of $K_\perp$,
in Fig.~3b the same quantity as a function of $K_\perp$ for different $Y$.
Solid lines are without transverse flow, dashed lines are for $\eta_f=0.6$.
For large $K_\perp$ pairs, the YK rest frame approaches the LCMS (which
moves with the pair rapidity $Y$); in this limit all pairs are thus
emitted from a small region in the source which moves with the same
longitudinal velocity as the pair. For small $K_\perp$ the YK frame
is considerably slower than the LCMS; this is due to the thermal
smearing of the particle velocities in our source around the local
fluid velocity $u^\mu(x)$ \cite{WHTW96}. The linear relationship between
the rapidity $Y_{_{\rm YK}}$ of the Yano-Koonin frame and the pion pair
rapidity $Y$ is a direct reflection of the boost-invariant longitudinal
expansion flow \cite{HTWW96}. For a non-expanding source $Y_{_{\rm YK}}$
would be independent of $Y$. Additional transverse flow is seen to have
nearly no effect. The dependence of the YK velocity on the pair rapidity thus
measures directly the longitudinal expansion of the source and cleanly
separates it from its transverse dynamics. A detailed discussion of these
features is given in Ref.~\cite{WHTW96}.
\subsection{$M_\perp$-scaling of YKP radii and transverse flow}
\label{sec5c}
In the absence of transverse flow, a thermal source like (\ref{3.15})
depends on the particle rest mass and on the transverse momentum
$K_\perp$ only through the combination $M_\perp^2 = m^2 +K_\perp^2$ (see
Eq.~(\ref{2.5})). Furthermore, the source
is then azimuthally and $x\to -x$ reflection symmetric. Hence $\langle
\tilde x \tilde t \rangle$, $\langle \tilde x \tilde z \rangle$, and
$\langle \tilde x^2 - \tilde y^2\rangle$ all vanish and the approximations
in Eqs.~(\ref{20b},\ref{20c}) become exact. As a result, all three YKP
radii (\ref{20a})-(\ref{20c}) are only functions of $M_\perp$, too
(as well as of $Y$, of course), i.e. they do not depend explicitly on the
particle rest mass.
\vspace*{12cm}
\special{psfile=qm96f4.ps hoffset=20 voffset=-90 hscale=65 vscale=65 angle=0}
\begin{center}
\begin{minipage}[t]{13cm}
\noindent \bf Fig.4. \rm
The YKP radii $R_\perp$, $R_\parallel$, and $R_0$ (from top to bottom)
for vanishing transverse flow (left column) and for $\eta_f=0.6$ (right
column), as functions of $M_\perp$ for pairs at $Y_{\rm cm}=0$.
Solid (dashed) lines are for pions (kaons). The breaking of the
$M_\perp$-scaling by transverse flow is obvious in the right column.
Also, as shown in the lower right panel, for nonzero transverse flow
$R_0$ does not agree exactly with the effective source lifetime
$\protect\sqrt{\langle \tilde t^2\rangle}$.
(Figure taken from Ref.~\protect\cite{WHTW96}.)
\end{minipage}
\end{center}
This is seen in the left column of Fig.~4 where the three
YKP radii are plotted for $Y_{\rm cm}=0$ pion and kaon pairs as functions
of $M_\perp$; they agree perfectly.
The transverse radius here shows no $M_\perp$-dependence due to the
absence of transverse temperature gradients, but even with temperature
gradients it would only depend on $M_\perp$.
(Of course, this discussion neglects resonance decays which will
be studied in Sec.~\ref{sec6}.) Note that $M_\perp$-scaling
in the absence of transverse flow applies only to the YKP radius parameters:
since the expressions (\ref{16b})-(\ref{16d}) involve nonvanishing
variances with $\beta_\perp$- or $\beta_l$-prefactors (which depend
explicitly on the rest mass), the HBT radii from the standard Cartesian
fit do not exhibit $M_\perp$-scaling.
For non-zero transverse flow $\eta_f\ne 0$ this $M_\perp$-scaling is
broken by two effects: first, the second term in (\ref{2.5}) destroys
the $M_\perp$-scaling of the emission function itself, and second
the $\bbox{\beta}$-dependent correction terms in (\ref{20b},\ref{20c})
are now non-zero because the same term also breaks, for $K_\perp\ne 0$,
the $x \to -x$ and $x \to y$ symmetries. The magnitude of the associated
scale breaking due to the pion-kaon mass difference is seen in the right
column of Fig.~4 for $\eta_f=0.6$. The effects are small and require very
accurate experiments for their detection. However, the sign of the effect
is opposite for $R_\parallel$ and for $R_\perp,\, R_0$ which may help
to distinguish flow-induced effects from resonance decay contributions.
Since for $Y_{\rm cm}=0$ the YK and CMS frames coincide, $\beta_l=0$ in
the YK frame and the approximation in (\ref{20b}) remains exact even
for non-zero transverse flow. The same is not true for the approximation
in (\ref{20c}), and therefore we show in the lower right panel of Fig.~4
also the effective source lifetime $\sqrt{\langle \tilde t^2 \rangle}$
for comparison. The apparently rather large discrepancy between
the YKP parameter $R_0$ and the effective source lifetime is due to
a rather extreme choice of parameters: a large transverse flow and
a small intrinsic source lifetime of $\Delta\tau = 1$ fm/$c$ in (\ref{3.15}).
Since $\sqrt{\langle \tilde t^2 \rangle}$ approaches $\Delta\tau$ in the limit
of large $M_\perp$ while the dominant \cite{WHTW96} correction term
$\langle \tilde x^2 - \tilde y^2 \rangle$ does not depend on $\Delta\tau$,
the YKP parameter $R_0$ will track the effective source lifetime more
accurately for larger values of $\Delta\tau$ (and for smaller values
of $\eta_f$).
Why do $\sqrt{\langle \tilde t^2 \rangle}$ and $R_0$ increase at small
$M_\perp$? Due to the rapid longitudinal expansion, the longitudinal region
of homogeneity $R_\parallel$ is a decreasing function of $M_\perp$.
Since for different pair momenta $R_0$ measures the source lifetime
in different YK reference frames, the freeze-out ``hypersurface'' will
in general appear to have different shapes for pairs with different momenta.
Only in our model, where freeze-out occurs at fixed proper time $\tau_0$
(up to a Gaussian smearing with width $\Delta\tau$), is its shape frame-independent.
It is thus generally unavoidable (and here, of course, true in any frame)
that freeze-out at different points $z$ in the source will occur at different
times $t$ in the YK frame. Since a $z$-region of size $R_\parallel$
contributes to the correlation function, $R_\parallel$ determines how large
a domain of this freeze-out surface (and thus how large an interval of
freeze-out times in the YK frame) is sampled by the correlator. This
interval of freeze-out times combines with the intrinsic Gaussian width
$\Delta\tau$ to yield the total effective duration of particle emission.
It will be largest at small pair momenta where the homogeneity region
$R_\parallel$ is biggest, and will reduce to just the variance of the
Gaussian proper time distribution at large pair momenta where the
longitudinal (and transverse) homogeneity regions shrink to zero. The rise
of $\Delta t({\bf K})$ at small ${\bf K}$ is thus generic.
\section{RESONANCE DECAYS}
\label{sec6}
The proportionality of the $M_\perp$-dependence of $R_\perp$ to the
transverse flow $\eta_f$ and the particular pattern of $M_\perp$
scale-breaking by the latter open an avenue for the quantitative
extraction of transverse flow from HBT data \cite{HTWW96a}. This
requires, however, that the $M_\perp$-dependence of $R_\perp$ is not affected
by resonance decays. Since they contribute more to pions than
to kaons they may also affect the $M_\perp$-scaling arguments.
The work by the Marburg group \cite{Marb} on resonance decay effects on HBT
in the context of hydrodynamical simulations indicates, within the
standard Cartesian framework and without accounting for the cross-term,
a possible additional $M_\perp$-dependence of the transverse radius.
However, a systematic analysis of resonance contributions to HBT as a
function of various characteristic source parameters is only now
becoming available \cite{WH96a}.
\vspace*{10cm}
\special{psfile=qm96f5.ps hoffset=-10 voffset=-220 hscale=70 vscale=70 angle=0}
\begin{center}
\begin{minipage}[t]{13cm}
\noindent \bf Fig.5. \rm
The influence of resonance decays on the $M_\perp$-dependence of
$R_s$ (a,b) and $R_o$ (c,d) for $Y_{\rm cm}=0$ pion pairs. a,c: no
transverse flow; b,d: transverse flow rapidity $\eta_f=0.3$.
The Gaussian transverse radius is here $R=5$ fm, and $T=150$ MeV.
(Figure taken from Ref.~\protect\cite{WH96a}.)
\end{minipage}
\end{center}
In Fig.~5 I show some results from Ref.~\cite{WH96a} for the same
emission function (\ref{3.15}). The only change for resonances
is an additional spin degeneracy factor and the different rest mass.
The complete spectrum of relevant resonances is included, and in the
decays the 2- and 3-body decay kinematics is fully taken into
account. The HBT radii are extracted from a Gaussian fit to the
numerically calculated correlation function. A detailed technical
discussion is given in Ref.~\cite{WH96a}.
Fig.~5 shows that the effects of the short-lived resonances with lifetimes
of order 1 fm/$c$ on $R_s$ are essentially negligible, both at vanishing
and at nonzero transverse flow. Only the $\omega$ with its intermediate
lifetime of 20 fm/$c$ affects $R_s$, but only for vanishing transverse flow.
There it induces a weak $M_\perp$-dependence at small $M_\perp$ even in
the absence of transverse flow; at $M_\perp>500$ MeV the contribution
of the $\omega$ dies out, and $R_s$ again becomes $M_\perp$-independent
(which would not be the case if it were affected by flow). At $\eta_f=0.3$
and 0.6 \cite{WH96a} not even the $\omega$ generates any additional
$M_\perp$-dependence!
$R_o$ shows some effects from the additional lifetime of the resonances,
in particular from the long-lived $\omega$. Resonances with much longer
lifetimes than the $\omega$ (in particular all weak decays) have
no effect on the radii, because their contribution to the correlator is
only at very small values of $q$ which cannot be resolved experimentally.
They lead to a reduced ``incoherence parameter'' $\lambda$
\cite{Marb,CLZ94}. Since for increasing $M_\perp$ the resonance
contributions decrease, the $\lambda$-parameter increases with $M_\perp$,
approaching 1 as $M_\perp\to\infty$ \cite{Marb,CLZ94}. A detailed study
will follow \cite{WH96a}.
The weak effect of resonances on $R_s=R_\perp$ seems surprising: due
to their non-zero lifetime they should be able to propagate outside the
original source before decay and form a pion ``halo'' \cite{Marb,CLZ94}.
This effect is, however, much weaker than naively expected: most
of the resonances are not very fast, and the halo thickness is thus only
a fraction of $c$ times the resonance lifetime. At finite transverse flow an
additional effect comes into play: it turns out that then the effective
size of the emission function for directly emitted resonances is
{\em smaller} than that for direct pions \cite{WH96a}! At $\eta_f{=}0.3$
and 0.6 this even slightly overcompensates the halo effect, and altogether
the resonances change neither the size nor the $M_\perp$-dependence
of $R_s$.
\section{CONCLUSIONS}
\label{sec7}
The model-independent expressions of Secs.~\ref{sec2} and \ref{sec4}
for the HBT width
parameters in terms of second order variances of the emission function
provide the basis of a detailed physical interpretation of the measured
HBT radii. They show that the HBT radius parameters do not necessarily
measure the full geometric extension of the source, but regions of
homogeneity in the effective emission function for particles with
certain fixed momenta. For expanding systems these are usually smaller
than the naive geometric source size and decreasing functions of the
pair momentum. For systems with finite lifetime the HBT parameters
usually mix the spatial and temporal structure of the source, and their
unfolding requires model studies.
With the new YKP parametrization we have found a method which, for systems
with dominant longitudinal expansion, cleanly separates the longitudinal
and transverse spatial homogeneity lengths from the temporal one. The effective
source lifetime is directly fitted by the parameter $R_0$; it is generically
a function of the pair momentum and largest for pairs which are slow
in the CMS. Another fit parameter, the YK velocity, measures directly
the longitudinal velocity of the emitting fluid element, and its
dependence on the pair rapidity allows for a direct determination
of the longitudinal expansion of the source. Without transverse expansion,
the YKP radius parameters show exact $M_\perp$-scaling. The breaking of this
scaling and the $M_\perp$-dependence of the transverse radius parameter
$R_\perp$ allow for a determination of the transverse expansion velocity
of the source. Resonance decays were shown to mostly affect the lifetime
parameter and leave the $M_\perp$-dependence of $R_\perp$ nearly unchanged.
They thus do not endanger the extraction of the transverse flow via
HBT.
\vskip 0.2cm
With this new and detailed understanding of the method, I believe
that HBT interferometry has begun a new and vigorous life as a
powerful tool for reconstructing the geometric and dynamic space-time
characteristics of the collision zone from the measured momentum spectra.
\vskip 0.4cm
\noindent {\bf Acknowledgements:}
I thank my collaborators on this project, S. Chapman, J.R. Nix, B.
Tom\'a\v sik, U.A. Wiedemann, and Wu Yuanfang, who each contributed
valuable pieces to the puzzle. Without their help until the very last
minutes before the conference this review would have been impossible.
I would also like to acknowledge fruitful discussions with H.
Appelsh\"auser, T. Cs\"org\H o, D. Ferenc, M. Ga\'zdzicki, and P.
Seyboth. This work was supported by grants from BMBF, DFG, and GSI.
\section{Introduction}
The Hartogs triangle $$ \mathbb{H} = \{(z_1, z_2)\in \mathbb{C}^2: |z_1|< |z_2| < 1\} $$
is a pseudoconvex domain with non-Lipschitz boundary. It serves as a model counterexample for many questions in several complex variables. For instance, it does not admit a Stein neighborhood basis or a bounded plurisubharmonic exhaustion function.
Meanwhile, Chaumat and Chollet showed in \cite{CC} that the corresponding $\bar\partial$ problem on $\mathbb{H}$ is not globally regular in the sense that there is a smooth $\bar\partial$-closed $(0, 1)$-form $\mathbf f$ on $\overline{\mathbb{H} }$, such that $\bar\partial u =\mathbf f$ has no smooth solution on $\overline{\mathbb{H} }$. Interestingly, at each H\"older level the $\bar\partial$ equation does admit H\"older solutions with the same H\"older regularity as that of the data. For more properties of $\mathbb H$ we refer to the survey \cite{Sh} of Shaw. On the other hand, the study of Sobolev regularity was initiated by Chakrabarti and Shaw in \cite{CS}, where they carried out a weighted $L^2$-Sobolev estimate for the canonical solution on $\mathbb H$. See also the recent work \cite{YZ} of Yuan and the second author on weighted $L^p$-Sobolev estimates of $\bar\partial$ on general quotient domains.
The goal of this paper is to study the optimal $\bar\partial $ regularity on $\mathbb H$ at each (unweighted) Sobolev level. Recently, the optimal $L^p$ regularity of $\bar\partial$ on $\mathbb H$ was obtained by the second author in \cite{Zhang2}. The following is our main theorem concerning the $W^{k, p}$ regularity, $ k\ge 1$. As demonstrated by a Kerzman-type Example \ref{ex} (in Section 4), it gives the optimal $W^{k,p}$ regularity in the sense that for any $\epsilon>0$, there exists a $W^{k, p}$ datum which has no $W^{k, p+\epsilon}$ solution to $\bar\partial$ on $\mathbb H$.
\begin{theorem}\label{main}
For each $k\in \mathbb Z^+, 4<p<\infty$, there exists a solution operator $\mathcal T_k$ such that for any $\bar\partial$-closed $(0, 1)$ form $\mathbf f\in W^{k,p}(\mathbb H)$, $\mathcal T_k\mathbf f\in W^{k,p}(\mathbb H)$ and solves $\bar\partial u =\mathbf f$ on $\mathbb H$. Moreover, there exists a constant $C$ dependent only on $k$ and $p$ such that \begin{equation*}
\|\mathcal T_k\mathbf f\|_{W^{k,p}(\mathbb H)}\le C\|\mathbf f\|_{W^{k,p}(\mathbb H)}.
\end{equation*}
\end{theorem}
\medskip
The general idea of the proof is as follows. According to a heuristic procedure to treat the $\bar\partial$ problem on the Hartogs triangle $\mathbb H$, one first uses the biholomorphism between the punctured bidisc and $\mathbb H$ to pull back the data and solve $\bar\partial$ on the punctured bidisc, and then pushes the solutions forward onto the Hartogs triangle.
As a consequence of this, the corresponding Sobolev regularity of the $\bar\partial$ problem requires a weighted Sobolev regularity on product domains due to the presence of the nontrivial Jacobian of the biholomorphism.
Based upon our recent weighted Sobolev result \cite{PZ2} about Cauchy-type integrals, we first obtain the following Sobolev regularity for $\bar\partial$ on product domains with respect to weights in some refined Muckenhoupt space $A_p^*$ (see Definition \ref{aps}).
\begin{theorem}\label{mainp}
Let $\Omega = D_1\times \cdots \times D_n, n\ge 2$, where each $D_j$ is a bounded domain in $\mathbb C$ with $C^{k, 1}$ boundary. There exists a solution operator $T$ such that for any $\bar\partial$-closed $(0, q)$ form $\mathbf f\in W^{k+n-2, p}(\Omega, \mu), k\in \mathbb Z^+, 1<p<\infty, \mu\in A_p^* $, $T\mathbf f\in W^{k, p}(\Omega, \mu)$ and solves $\bar\partial u = \mathbf f$ on $\Omega$. Moreover, there exists a constant $C$ dependent only on $\Omega$, $k, p$ and the $A_p^*$ constant of $\mu$ such that
\begin{equation*}
\|T\mathbf f\|_{W^{k, p}(\Omega, \mu) } \le C\|\mathbf f\|_{W^{k+n-2, p}(\Omega, \mu) }.
\end{equation*}
\end{theorem}
\medskip
As shown by Example \ref{ex2} (in Section 3), Theorem \ref{mainp} gives the optimal Sobolev regularity of solutions on product domains with dimension $n=2$. Jin and Yuan obtained in \cite{JY} a similar Sobolev estimate for polydiscs in the case when $\mu \equiv 1$ and $q=1$. It is also worth pointing out that the operator $T$ considered in Theorem \ref{mainp} fails to maintain the $L^p$ (where $k=0$) regularity in general. See \cite{CM} of Chen and McNeal for a $\bar\partial$-closed (0,1) form $\mathbf f$ in $L^p(\triangle^2)$ such that $T\mathbf f$ fails to lie in $L^p(\triangle^2), p<2$. Instead, \cite{Zhang2} made use of the canonical solution operator to provide an optimal weighted $L^p$ regularity for $\bar\partial$ on product domains in $\mathbb C^n$.
Theorem \ref{mainp} readily gives a semi-weighted $L^p$-Sobolev estimate below for a (fixed) solution operator to $\bar\partial$ on $\mathbb H, p>2$.
\begin{cor}\label{main4}
There exists a solution operator $\mathcal T$ such that for any $\bar\partial$-closed $(0, 1)$ form $\mathbf f\in W^{k,p}(\mathbb H), k\in \mathbb Z^+, 2< p<\infty$, $\mathcal T\mathbf f\in W^{k,p}(\mathbb H, |z_2|^{kp})$ and solves $\bar\partial u =\mathbf f$ on $\mathbb H$. Moreover, there exists a constant $C$ dependent only on $k$ and $p$ such that \begin{equation*}
\|\mathcal T\mathbf f\|_{W^{k,p}(\mathbb H, |z_2|^{kp})}\le C\|\mathbf f\|_{W^{k,p}(\mathbb H)}.
\end{equation*}
\end{cor}
\medskip
The estimate in Corollary \ref{main4} maintains the Sobolev index $(k, p)$, and in particular improves a result in \cite{YZ}. We note that the $p>2$ assumption in the corollary is due to the fact that the weight after pulling the data on $\mathbb H$ back to the bidisc
lies in $A^*_p$ only when $p>2$, where Theorem \ref{mainp} can be applied. Unfortunately, the solution operator $\mathcal T$ here is subject to some quantified loss in the exponent of the weight at each Sobolev level. Although this weight loss is not unexpected due to the global irregularity of $\bar\partial$ on $\mathbb H$, $ \mathcal T$ does not provide an optimal Sobolev regularity.
In order to obtain the optimal Sobolev regularity for $\bar\partial$ on $\mathbb H$, one needs to further adjust the solution operator $ \mathcal T$ in Corollary \ref{main4} accordingly at different Sobolev levels.
In fact, we apply to $ \mathcal T$ a surgical procedure -- truncation by Taylor polynomials: one on the data, and another on the $\bar\partial$ solution on the punctured bidisc. The idea was initially introduced by Ma and Michel in \cite{MM} to treat the H\"older regularity. In the Sobolev category when $p>4$, this procedure at order $k-1$ is meaningful and in the strong (continuous) sense due to the Sobolev embedding theorem. Note that the top $k$-th order derivatives are still in the weak (distributional) sense and need to be handled with care. After a careful inspection of the post-surgical regularity on the pull-back of the data and push-forward of the solutions on the punctured bidisc, we utilize a weighted Hardy-type inequality to obtain a sequence of refined Sobolev estimates. These estimates eventually allow the weight loss from the singularity at $(0,0)$ to be precisely (and fortunately) compensated by the weight gain from the truncation, so that the truncated solution enjoys the (unweighted) Sobolev regularity in Theorem \ref{main}. Throughout our proof, the assumptions $k\ge 1, p>4$ are crucial and repeatedly used. It is not clear whether the theorem still holds if $p\le 4$.
\medskip
The organization of the paper is as follows. In Section 2, we give notations and preliminaries that are needed in the paper. In Section 3, we prove Theorem \ref{mainp} for the weighted Sobolev estimate on product domains, from which Corollary \ref{main4} follows. Section 4 is devoted to the proof of the main Theorem \ref{main} for the Sobolev estimate on the Hartogs triangle.
\section{Notations and preliminaries}
\subsection{Weighted Sobolev spaces}
Denote by $|S|$ the Lebesgue measure of a subset $S$ in $\mathbb C^n$, and $dV_{z_j}$ the volume integral element in the complex $z_j$ variable. For $z=(z_1, \cdots, z_n)\in \mathbb C^n$, let $\hat z_j =(z_1, \cdots, z_{j-1}, z_{j+1}, \cdots, z_n)\in \mathbb C^{n-1}$, where the $j$-th component of $z$ is skipped. Our weight space under consideration is as follows.
\begin{definition}\label{aps}
Given $1<p<\infty$, a weight $\mu: \mathbb C^n\rightarrow [0, \infty)$ is said to be in $ A^*_p$ if the $A_p^*$ constant
$$ A_p^*(\mu): = \sup \left(\frac{1}{|D|}\int_{D}\mu(z)dV_{z_j}\right)\left(\frac{1}{|D|}\int_{D} \mu(z)^{\frac{1}{1-p}}dV_{z_j}\right)^{p-1}<\infty, $$
where the supremum is taken over a.e. $\hat z_j\in \mathbb C^{n-1}, j=1, \ldots, n $, and all discs $D\subset \mathbb C$.
\end{definition}
When $n=1$, the $A_p^*$ space coincides with the standard Muckenhoupt's class $A_p$, the collection of all weights $\mu: \mathbb C^n\rightarrow [0, \infty)$ satisfying
\begin{equation*}
A_p(\mu): = \sup \left(\frac{1}{|B|}\int_{B}\mu(z)dV_z\right)\left(\frac{1}{|B|}\int_{B} \mu(z)^{\frac{1}{1-p}}dV_z\right)^{p-1}<\infty, \end{equation*}
where the supremum is taken over all balls $B\subset \mathbb C^n$. Clearly, $A_q\subset A_p$ if $1< q<p<\infty$. $A_p$ spaces also satisfy an open-end property: if $\mu\in A_p$ for some $p>1$, then $\mu\in A_{\tilde p} $ for some ${\tilde p}<p$. See \cite[Chapter V]{Stein} for more details of the $A_p$ class.
When $n\ge 2$, Definition \ref{aps} essentially says that $\mu \in A_p^*$ if and only if the restriction of $\mu$ on any complex one-dimensional slice $ \hat z_j$ belongs to $A_p$, with a uniform $A_p$ bound independent of $\hat z_j$. On the other hand, $\mu\in A^*_p$ if and only if the $\delta$-dilation $\mu_\delta(z): =\mu(\delta_1z_1, \ldots, \delta_n z_n)\in A_p$ with a uniform $A_p$ constant for all $\delta =(\delta_1, \ldots, \delta_n)\in (\mathbb R^+)^n$ (see \cite[pp. 454]{GR}). This in particular implies $A^*_p\subset A_p$. As will be seen in the rest of the paper, the setting of $A^*_p$ weights allows us to apply the slicing property of product domains rather effectively.
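A basic example is the power weight $\mu(z) = |z_n|^\alpha$ on $\mathbb C^n$: it is constant on each slice in the $z_j$ variables, $j<n$, while on slices in the $z_n$ variable it reduces to the planar power weight, which belongs to $A_p(\mathbb C)$ precisely when $-2<\alpha<2(p-1)$, with an $A_p$ constant independent of $\hat z_n$. Hence $|z_n|^\alpha\in A_p^*$ if and only if $-2<\alpha<2(p-1)$. In particular, the weight $|w_2|^2$ coming from the Jacobian of the biholomorphism \eqref{psi} below lies in $A_p^*$ exactly when $p>2$, which is the source of the restriction $p>2$ in Corollary \ref{main4}.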
\medskip
Let $\Omega$ be a bounded domain in $\mathbb C^n$. Denote by $\mathbb Z^+$ the set of all positive integers. Given $k\in \mathbb Z^+\cup\{0\}, p\ge 1$, the weighted Sobolev space $W^{k, p}(\Omega, \mu)$ with respect to a weight $\mu\ge 0$ is the set of functions on $\Omega$ whose weak derivatives up to order $k$ exist and belong to $L^p(\Omega, \mu)$. The corresponding weighted $W^{k, p}$ norm of a function $h\in W^{k,p}(\Omega, \mu)$ is $$ \|h\|_{W^{k,p}(\Omega, \mu)}: = \left(\sum_{l=0}^k\int_\Omega |\nabla_z^lh(z)|^p\mu(z)dV_z\right)^\frac{1}{p}<\infty. $$
Here $\nabla_z^l h$ represents all $l$-th order weak derivatives of $h$. When $\mu\equiv 1$, $W^{k, p}(\Omega, \mu)$ is reduced to the (unweighted) Sobolev space $W^{k,p}(\Omega)$. As a direct consequence of the open-end property for $A_p$ and H\"older inequality, if $\mu\in A_p, p>1$, there exists some $q>1$ such that $ W^{k,p}(\Omega, \mu)\subset W^{k,q}(\Omega)$.
In the rest of the paper, for each $j = 1, \ldots, n$, we use $\nabla^{\alpha_j}_{z_j} h$ to specify all $\alpha_j$-th order weak derivatives of $h$ in the complex $z_j$-th direction. For a multi-index $\alpha =(\alpha_1, \ldots, \alpha_n)$, denote $ \nabla_{z_1}^{\alpha_1}\cdots \nabla_{z_n}^{\alpha_n}$ by $\nabla^\alpha_{z}$. Then for $l\in \mathbb Z^+$, $\nabla_z^{l} = \sum_{|\alpha|=l} \nabla^\alpha_{z} $. We also represent the $\alpha_j$-th order derivative of $h$ with respect to the holomorphic $z_j$ and anti-holomorphic $\bar z_j$ variable by $\partial^{\alpha_j}_{z_j} h$ and $\bar\partial^{\alpha_j}_{z_j} h$, respectively. When the context is clear, the letter $z$ may be dropped from those differential operators and we write instead $\nabla^{l}, \nabla^{\alpha_j}_j, \nabla^\alpha, \partial^{\alpha_j}_{ j}$ and $ \bar\partial^{\alpha_j}_{j}$ etc.
\subsection{Weighted Sobolev estimates on planar domains}
Let $D$ be a bounded domain in $\mathbb C$ with Lipschitz boundary. For $p>1, z\in D$, define
\begin{equation*}
\begin{split}
Th(z)&: =\frac{-1}{2\pi i}\int_D \frac{h(\zeta)}{\zeta- z}d\bar{\zeta}\wedge d\zeta, \ \ \text{for}\ \ h\in L^p(D);\\
Sh(z)&: =\frac{1}{2\pi i}\int_{bD}\frac{h(\zeta)}{\zeta- z}d\zeta, \ \ \text{for}\ \ h\in L^p(bD). \end{split}
\end{equation*}
Clearly, $ d\bar{\zeta}\wedge d\zeta = 2idV_\zeta$ in the above. $T$ and $S$ satisfy the Cauchy-Green formula below: for any $h\in W^{1, p}(D), p>1$,
$$ h = Sh + T\bar\partial h\ \ \text{on} \ \ D $$
in the sense of distributions.
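As a quick numerical illustration of $T$ (not needed for the proofs), recall that for $h\equiv 1$ on the unit disc $\triangle$ one has $Th(z) = \bar z$ and $Sh(z) = 1$. A crude midpoint quadrature in Python (grid size, function name and test point chosen arbitrarily) reproduces the former:
\begin{verbatim}
import numpy as np

def T_disc(h, z, n=800):
    # Th(z) = -(1/pi) * integral over the unit disc of h(w)/(w - z) dV(w),
    # since dbar(w)^dw = 2i dV; crude midpoint rule on a square grid.
    s = (np.arange(n) + 0.5)/n*2 - 1
    X, Y = np.meshgrid(s, s)
    w = (X + 1j*Y).ravel()
    w = w[np.abs(w) < 1]
    return -np.sum(h(w)/(w - z))*(2.0/n)**2/np.pi

print(T_disc(lambda w: np.ones_like(w), 0.3+0.2j))  # ~ 0.3-0.2j = conj(z)
\end{verbatim}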
The following weighted Sobolev regularity of $T$ and $S$ is essential in order to carry out the weighted Sobolev regularity of $\bar\partial$ on product domains. It is worthwhile to note that \eqref{So} below fails if $k=0$, where $S$ is not even well-defined.
\begin{theorem}\cite{PZ2}\label{mainT}
Let $D\subset \mathbb C$ be a bounded domain with $C^{k, 1}$ boundary and $\mu\in A_p, 1<p<\infty$. For $k\in \mathbb Z^+\cup\{0\}$, there exists a constant $C$ dependent only on $D, k$, $p$ and $ A_p(\mu)$, such that for all $h\in W^{k, p}(D, \mu)$,
\begin{equation}\label{To}
\|T h\|_{W^{k+1, p}(D, \mu)}\le C \|h\|_{W^{k, p}(D, \mu)}.
\end{equation}
If in addition $k\in \mathbb Z^+ $, then
\begin{equation}\label{So}
\|S h\|_{W^{k, p}(D, \mu)}\le C \|h\|_{W^{k, p}(D, \mu)}.
\end{equation}
\end{theorem}
\subsection{Product domains and the Hartogs triangle}
A subset $\Omega\subset \mathbb C^n$ is said to be a product domain, if $\Omega = D_1\times\cdots\times D_n$, where each $D_j\subset \mathbb C, j=1, \ldots, n, $ is a bounded domain in $\mathbb C$ such that its boundary $bD_j$ consists of a finite number of rectifiable Jordan curves which do not intersect one another. A product domain $\Omega$ is always pseudoconvex, and has Lipschitz boundary if in addition each $bD_j$ is Lipschitz, $j=1, \ldots, n$.
Denote by $\triangle$ the unit disc in $\mathbb C$, and by $\triangle^*: =\triangle\setminus \{0\}$ the punctured disc on $\mathbb C$. Then the punctured bidisc $ \triangle\times \triangle^*$ is biholomorphic to the Hartogs triangle $\mathbb H$ through the map $\psi: \triangle\times \triangle^* \rightarrow \mathbb H$, where \begin{equation}\label{psi}
(w_1, w_2)\in \triangle\times \triangle^* \mapsto (z_1, z_2)= \psi(w)= (w_1w_2, w_2)\in \mathbb H.
\end{equation} The inverse $\phi: \mathbb H \rightarrow \triangle\times \triangle^*$ is given by \begin{equation}\label{phi}
(z_1, z_2)\in \mathbb H \mapsto (w_1, w_2) = \phi(z) = \left(\frac{z_1}{z_2}, z_2\right)\in \triangle\times \triangle^*.
\end{equation}
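Under $\psi$ one has $\bar z_1 = \bar w_1\bar w_2$ and $\bar z_2 = \bar w_2$, so that $d\bar z_1 = \bar w_2\, d\bar w_1 + \bar w_1\, d\bar w_2$ and $d\bar z_2 = d\bar w_2$. A $(0,1)$ form $\mathbf f = f_1\, d\bar z_1 + f_2\, d\bar z_2$ on $\mathbb H$ therefore pulls back to
\begin{equation*}
\psi^*\mathbf f = \bar w_2\, (f_1\circ \psi)\, d\bar w_1 + \left(\bar w_1\, (f_1\circ \psi) + f_2\circ \psi\right) d\bar w_2
\end{equation*}
on $\triangle\times\triangle^*$; the factors of $w_2$ produced by this change of variables are the source of the weights encountered below.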
Note that $\mathbb H$ is not Lipschitz near $(0,0)$.
It is well-known that any domain with Lipschitz boundary is a uniform domain (see \cite{GO} for the definition). Recently, it was shown in \cite[Theorem 2.12]{BFLS} that the Hartogs triangle is also a uniform domain. Thus according to \cite{Jo} and \cite[Theorem 1.1]{Ch}, both Lipschitz product domains and the Hartogs triangle satisfy a weighted Sobolev extension property. Namely, let $\Omega$ be either a Lipschitz product domain or the Hartogs triangle. Then for any weight $\mu\in A_p, 1< p<\infty, k\in \mathbb Z^+ $, any $h\in W^{k, p}(\Omega, \mu) $ can be extended as an element $\tilde h$ in $W^{k, p}(\mathbb C^n, \mu) $ such that
$$ \|\tilde h\|_{W^{k, p}(\mathbb C^n, \mu)}\le C\|h\|_{W^{k, p}(\Omega, \mu)} $$
for some constant $C$ dependent only on $k, p$ and the $A_p$ constant of $\mu$.
For simplicity of notations, throughout the rest of the paper, we shall say the two quantities $a$ and $b$ satisfy $a\lesssim b$, if $a\le Cb$ for some constant $C>0$ dependent only possibly on $\Omega, k, p$ and the $A_p^*$ constant $ A_p^*(\mu)$ (or $A_p(\mu)$).
\section{Weighted Sobolev estimates on product domains }
Let $D_j\subset\mathbb C$, $j= 1, \ldots, n,$ be bounded domains with $C^{k, 1}$ boundary, $n\ge 2, k\in \mathbb Z^+\cup \{0\}$, and let $\Omega: = D_1\times\cdots\times D_n$. Denote by $T_j$ and $S_j$ the solid and boundary Cauchy integral operators $T$ and $S$ acting on functions along the $j$-th slice of $\Omega$, respectively. Namely, for $p>1, z\in \Omega$,
\begin{equation}\label{tj}
\begin{split}
&T_j h (z): = \frac{-1}{2\pi i}\int_{D_j} \frac{h(z_1, \ldots, z_{j-1}, \zeta, z_{j+1}, \ldots, z_n)}{\zeta- z_j}d\bar{\zeta}\wedge d\zeta, \ \ \text{for}\ \ h\in L^p(\Omega);\\
& S_j h (z): = \frac{ 1}{2\pi i}\int_{bD_j} \frac{h(z_1, \ldots, z_{j-1}, \zeta, z_{j+1}, \ldots, z_n)}{\zeta- z_j}d\zeta, \ \ \text{for}\ \ h\in L^p(b\Omega).
\end{split}
\end{equation}
\begin{pro}\label{Tj}
Let $\Omega = D_1\times\cdots\times D_n$, where each $D_j $ is a bounded domain in $\mathbb C$ with $C^{k, 1}$ boundary, $ k\in\mathbb Z^+\cup\{0\}$. Assume $\mu\in A_p^*, 1<p<\infty$. Then for any $h\in W^{k, p}(\Omega, \mu)$,
\begin{equation}\label{T_j}
\|T_jh\|_{W^{k, p}(\Omega, \mu) }\lesssim \|h\|_{W^{k, p}(\Omega, \mu)}.\end{equation}
If in addition $k\in \mathbb Z^+$, then \begin{equation}\label{S_j}
\|S_jh\|_{W^{k-1, p}(\Omega, \mu) }\lesssim \|h\|_{W^{k, p}(\Omega, \mu)}.
\end{equation}
\end{pro}
\begin{proof}
Without loss of generality, assume $j=1$ and $n=2$.
For any multi-index $\alpha= (\alpha_1, \alpha_2)$ with $|\alpha|\le k$, since $\bar\partial_1 T_1 = id$, we can further assume $\nabla^\alpha T_1h = \partial_1^{\alpha_1} T_1 \left(\nabla_2^{\alpha_2}h\right)$. For a.e. fixed $z_2\in D_2$, $\mu(\cdot, z_2)\in A_p$ and $ \nabla_2^{
\alpha_2}h(\cdot, z_2)\in W^{\alpha_1, p}(D_1, \mu(\cdot, z_2)) $. Making use of \eqref{To}, we have
$$\int_{D_1}|\partial_1^{\alpha_1} T_1 \left(\nabla_2^{\alpha_2}h\right)(z_1, z_2) |^p\mu(z_1, z_2)dV_{z_1}\lesssim \sum_{l=0}^{\alpha_1} \int_{D_1}|\nabla_1^{l}\nabla_2^{\alpha_2}h(z_1, z_2) |^p\mu(z_1, z_2)dV_{z_1}. $$
Thus \begin{equation*}
\begin{split}
\| \nabla^\alpha T_1h\|^p_{ L^{p}(\Omega, \mu)} = &\int_{D_2} \int_{D_1}|\partial_1^{\alpha_1} T_1 \left(\nabla_2^{\alpha_2}h\right)(z_1, z_2) |^p\mu(z_1, z_2)dV_{z_1}dV_{z_2} \lesssim \| h\|^p_{ W^{k, p}(\Omega, \mu)}.
\end{split}
\end{equation*}
The boundedness of $S_1$ is proved similarly. Since $S_1h$ is holomorphic with respect to the $z_1$ variable, we only consider $ \nabla^{\alpha} S_1h(z) = \partial_1^{ \alpha_1} S_1 (\nabla_2^{ \alpha_2} h)$ with $|\alpha|\le k-1$. Then $\nabla_2^{ \alpha_2} h(\cdot, z_2)\in W^{k- \alpha_2, p}(D_1, \mu(\cdot, z_2))$ for a.e. $z_2\in D_2$. Noting that $k-\alpha_2\ge 1$, by \eqref{So},
\begin{equation*}
\int_{D_1}|\partial_1^{\alpha_1} S_1 \left(\nabla_2^{\alpha_2}h\right)(z_1, z_2) |^p\mu(z_1, z_2)dV_{z_1}\lesssim \sum_{l=0}^{\alpha_1+1}\int_{D_1}|\nabla_1^{l}\nabla_2^{\alpha_2}h(z_1, z_2) |^p\mu(z_1, z_2)dV_{z_1}.
\end{equation*}
Here the sum for $l$ up to $\alpha_1+1$ is necessary in the case when $ \alpha =(0, k-1)$, due to the absence of \eqref{So} at $k=0$ there. Hence $\|\nabla^{\alpha} S_1 h\|_{L^p(\Omega, \mu)}\lesssim \|h\|_{W^{k, p}(\Omega, \mu)}$.
\end{proof}
\medskip
\begin{remark}\label{re}
a). The estimate \eqref{T_j} is optimal. Indeed, consider $h(z_1, z_2)= |z_2|^{k-\frac{2}{p}}$ on ${\triangle\times \triangle}$. Then $h\in W^{k, s }({\triangle\times \triangle})$ for all $s<p$. However, $T_1h(z_1, z_2) = \bar z_1|z_2|^{k-\frac{2}{p}} \notin W^{k, p}({\triangle\times \triangle})$. \\
b). As a consequence of Theorem \ref{mainT}, one also has when $k\in \mathbb Z^+, 1<p<\infty, j=1, \ldots, n$,
\begin{equation}\label{T11}
\sum_{l=0}^k\|\nabla_j^l T_jh\|_{L^{p}(\Omega, \mu) }\lesssim \sum_{l=0}^{k-1}\|\nabla_j^lh\|_{L^{ p}(\Omega, \mu)} \lesssim \|h\|_{W^{k-1, p}(\Omega, \mu)},
\end{equation}
\begin{equation}\label{T112}
\sum_{l=0}^k \|\nabla_j^l T_jh\|_{W^{1, p}(\Omega, \mu) }\lesssim \|h\|_{W^{k, p}(\Omega, \mu)},
\end{equation}
and
\begin{equation}\label{S11}
\sum_{l=0}^k\|\nabla_j^l S_jh\|_{L^{p}(\Omega, \mu) }\lesssim \sum_{l=0}^{k}\|\nabla_j^lh\|_{L^{ p}(\Omega, \mu)} \lesssim \|h\|_{W^{k, p}(\Omega, \mu)}.
\end{equation}
In the case when $\mu\equiv 1$ and $k=0$, an application of the classical complex analysis theory (see \cite{V} etc.) and Fubini theorem gives for $1\le p< \infty$, \begin{equation}\label{T12}
\|T_jh \|_{L^{p}(\Omega ) } \lesssim \|h\|_{L^{p}(\Omega)}. \ \ \
\end{equation}
These inequalities will be used later.
\end{remark}
\medskip
Given a $(0, q)$ form \begin{equation*}
\mathbf f = \sum_{\substack{ j_1<\cdots<j_q}}f_{\bar j_1\cdots\bar j_q} d\bar z_{j_1}\wedge\cdots \wedge d\bar z_{j_q}\in C^1(\bar{\Omega}),
\end{equation*}
define $T_j \mathbf f$ and $S_j\mathbf f$ to be the action on the corresponding component functions. Namely,
\begin{equation*}
\begin{split}
& T_j\mathbf f: = \sum_{ \substack{ 1\le j_1<\cdots<j_q\le n}}T_jf_{ \bar j_1\cdots\bar j_q} d\bar z_{j_1}\wedge\cdots \wedge d\bar z_{j_q};\\
& S_j\mathbf f: = \sum_{\substack{ \\1\le j_1<\cdots<j_q\le n}}S_jf_{ \bar j_1\cdots\bar j_q} d\bar z_{j_1}\wedge\cdots \wedge d\bar z_{j_q}.
\end{split}
\end{equation*}
Furthermore, define a projection $\pi_k\mathbf f$ to be a $(0, q-1)$ form with
\begin{equation*}
\pi_k\mathbf f: = \sum_{\substack{ 1\le k<j_2<\cdots<j_q\le n}}f_{ \bar k\bar j_2\cdots\bar j_q} d\bar z_{j_2}\wedge\cdots \wedge d\bar z_{j_q}.
\end{equation*}
In their celebrated work \cite[pp. 430]{NW}, Nijenhuis and Woolf constructed a solution operator of the $\bar\partial$ equation for $(0,q)$ forms on product domains.
\begin{theorem}\cite{NW}\label{nw1}
Let $\Omega = D_1\times\cdots\times D_n$, where each $D_j $ is a bounded domain in $\mathbb C$ with $C^{k, 1}$ boundary, $ k\in\mathbb Z^+$. If $\mathbf f\in C^{1}(\bar{\Omega})$ is a $\bar\partial$-closed $(0, q)$ form on $\Omega$, then
\begin{equation}\label{key}
T\mathbf f: =
T_1\pi_1 \mathbf f +T_2S_1\pi_2 \mathbf f+\cdots+ T_nS_1\cdots S_{n-1}\pi_n\mathbf f
\end{equation}
is a solution to $\bar\partial u = \mathbf f$ on $\Omega$.
\end{theorem}
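For $n=2$ and a $\bar\partial$-closed $(0,1)$ form $\mathbf f = f_1\, d\bar z_1 + f_2\, d\bar z_2\in C^1(\bar\Omega)$, formula (\ref{key}) reads $T\mathbf f = T_1 f_1 + T_2 S_1 f_2$, and the verification is short: since $\bar\partial_1 T_1 = id$ and $S_1 f_2$ is holomorphic in $z_1$, one has $\bar\partial_1 T\mathbf f = f_1$, while differentiating under the integral sign gives
\begin{equation*}
\bar\partial_2 T\mathbf f = T_1 \bar\partial_2 f_1 + S_1 f_2 = T_1 \bar\partial_1 f_2 + S_1 f_2 = f_2,
\end{equation*}
where the $\bar\partial$-closedness $\bar\partial_2 f_1 = \bar\partial_1 f_2$ and the Cauchy-Green formula $f_2 = S_1 f_2 + T_1\bar\partial_1 f_2$ were used.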
\begin{proof}[Proof of Theorem \ref{mainp}: ] Given a $\bar\partial$-closed $(0, q)$ form $\mathbf f\in W^{n-1,p}(\Omega, \mu), p>1$ (the $k=1$ case in the theorem), we first verify that $T\mathbf f$ in \eqref{key} is a weak solution to $\bar\partial u = \mathbf f$ on $\Omega$. Since $W^{n-1,p}(\Omega, \mu)\subset W^{n-1,q}(\Omega) $ for some $q>1$, for simplicity we directly assume $\mathbf f\in W^{n-1,p}(\Omega), p>1$. Following an idea in \cite{PZ}, for each $j=1, \ldots, n$, let $ \{D^{(m)}_j\}_{m=1}^\infty$ be a family of strictly increasing open subsets of $D_j$ such that\\
a). for $m\ge N_0\in \mathbb N$, $bD^{(m)}_j$ is $C^{k, 1}$, $\frac{1}{m+1}< dist(D^{(m)}_j, D_j^c)<\frac{1}{m}$;\\
b). $H_j^{(m)}: \bar D_j\rightarrow \bar D_j^{(m)}$ is a $C^1$ diffeomorphism with $\lim_{m\rightarrow \infty} \|H_j^{(m)}-id\|_{C^1(D_j)}=0$.
Let $\Omega^{(m)}= D^{(m)}_1\times\cdots\times D^{(m)}_n$ be the product of those approximating planar domains. Denote by $T^{(m)}_j, S^{(m)}_j$ and $T^{(m)}$ the operators defined in (\ref{tj}) and (\ref{key}) accordingly, with $\Omega$ replaced by $\Omega^{(m)}$. Then $T^{(m)} \mathbf f\in W^{1, p}(\Omega^{(m)})$. Adopting the mollifier argument to $\mathbf f\in W^{n-1, p}(\Omega)$, we obtain $\mathbf f^\epsilon\in C^1(\overline{\Omega^{(m)}})\cap W^{n-1, p}( \Omega^{(m)})$ such that $$\|\mathbf f^\epsilon - \mathbf f\|_{W^{n-1, p}(\Omega^{(m)})}\rightarrow 0$$ as $\epsilon\rightarrow 0$ and $\bar\partial \mathbf f^\epsilon =0$ on $\Omega^{(m)}$.
For each fixed $m$, $ T^{(m)} \mathbf f^\epsilon\in W^{n-1, p}(\Omega^{(m)})$ when $\epsilon$ is small and $$\bar\partial T^{(m)} \mathbf f^\epsilon =\mathbf f^\epsilon \quad \text{in}\quad \Omega^{(m)}$$
by Theorem \ref{nw1}. Furthermore, $$\|T^{(m)} \mathbf f^\epsilon - T^{(m)} \mathbf f\|_{W^{1,p}(\Omega^{(m)})} \lesssim \|\mathbf f^\epsilon - \mathbf f\|_{W^{n-1, p}(\Omega^{(m)})}\rightarrow 0$$ as $\epsilon\rightarrow 0$. In particular, $\lim_{\epsilon\rightarrow 0}T^{(m)} \mathbf f^\epsilon$ exists a.e. in $\Omega^{(m)}$
and is equal to $T^{(m)} \mathbf f \in W^{1, p}(\Omega^{(m)})$ pointwise.
Given a testing form $\phi$ with compact support $K$, let $m_0\ge N_0$ be such that $K \subset \Omega^{(m_0-2)}$.
Denote by $\langle\cdot, \cdot\rangle_{\Omega}$ and $\langle\cdot, \cdot\rangle_{\Omega^{(m_0)}}$ the inner products in $L^2(\Omega)$ and in $L^2(\Omega^{(m_0)})$, respectively, and by $\bar\partial^*$ the formal adjoint of $\bar\partial$. For all $m\ge m_0$, one has
\begin{equation}\label{88}
\langle T^{(m)}\mathbf f, \bar\partial^*\phi\rangle_{\Omega^{(m_0)}} =\lim_{\epsilon \rightarrow 0}\langle T^{(m)}\mathbf f^\epsilon, \bar\partial^*\phi\rangle_{\Omega^{(m_0)}}= \lim_{\epsilon \rightarrow 0}\langle \bar\partial T^{(m)}\mathbf f^\epsilon, \phi\rangle_{\Omega^{(m_0)}} = \lim_{\epsilon \rightarrow 0} \langle\mathbf f^\epsilon, \phi\rangle_{\Omega^{(m_0)}} = \langle\mathbf f, \phi\rangle_{\Omega}.
\end{equation}
We further show that \begin{equation}\label{99}
\langle T\mathbf f, \bar\partial^*\phi\rangle_{\Omega}=\lim_{m\rightarrow \infty}\langle T^{(m)}\mathbf f, \bar\partial^*\phi\rangle_{\Omega^{(m_0)}}.
\end{equation}
For simplicity, assume $\pi_j \mathbf f $ contains only one component function $f_j$, and likewise for $\phi$. We will also drop various integral measures, which should be clear from the context.
For each $j=1, \ldots, n$,
\begin{equation*}
\begin{split}
&\langle T_j^{(m)}S_1^{(m)}\cdots S_{j-1}^{(m)}\pi_j \mathbf f, \bar\partial^*\phi\rangle_{\Omega^{(m_0)}} \\
= &\frac{1}{(2\pi i)^{j-1}}\int_{z\in K }T_j\left(\int_{\zeta_1\in b D_1^{(m)}}\cdots \int_{\zeta_{j-1}\in b D_{j-1}^{(m)}}\frac{f_j(\zeta_1, \cdots, \zeta_j, z_{j+1}, \cdots, z_n)\chi_{D_j^{(m)}}(\zeta_j)}{(\zeta_1-z_1)\cdots(\zeta_{j-1}-z_{j-1})}\right)\overline{\bar\partial^*\phi(z)}.
\end{split}
\end{equation*}
Here $\chi_{D_j^{(m)}}$ is the characteristic function of $D_j^{(m)}\subset \mathbb C$.
For each $(z, \zeta_j)\in K\times D_j\setminus \{z_j=\zeta_j\}$, after a change of variables, there exists some function $h^{(m)}\in C (\bar D_1\times\cdots\times \bar D_{j-1})$, such that $\|h^{(m)}-1\|_{C( D_1\times\cdots\times D_{j-1} )}\rightarrow 0$ as $m\rightarrow \infty$ and
\begin{equation*}
\begin{split}
& \int_{\zeta_1\in b D_1^{(m)}}\cdots \int_{\zeta_{j-1}\in b D_{j-1}^{(m)}}\frac{f_j(\zeta_1, \cdots, \zeta_j, z_{j+1}, \cdots, z_n)\chi_{D_j^{(m)}}(\zeta_j) }{(\zeta_1-z_1)\cdots(\zeta_{j-1}-z_{j-1})} \\
= & \int_{\zeta_1\in b D_1 }\cdots \int_{\zeta_{j-1}\in b D_{j-1} } \frac{f_j(\zeta_1, \cdots, \zeta_j, z_{j+1}, \cdots, z_n)h^{(m)}(\zeta_1, \cdots, \zeta_{j-1})\chi_{D_j^{(m)}}(\zeta_j) }{(\zeta_1-z_1)\cdots(\zeta_{j-1}-z_{j-1})}.
\end{split}
\end{equation*}
Notice that when $z\in K(\subset \Omega^{(m_0-2)})$ and $\zeta_l\in b D_l^{(m)}, m\ge m_0, l=1, \ldots, j-1$, $$\frac{1}{|\zeta_l-z_l|}\le \frac{1}{dist((\Omega^{(m)})^c, \Omega^{(m_0-2)})}\le\frac{1}{ dist((\Omega^{(m_0)})^c, \Omega^{(m_0-2)})}< m_0^2 .$$
Hence
\begin{equation}\label{00}
\begin{split}
&\left|\langle T_j^{(m)}S_1^{(m)}\cdots S_{j-1}^{(m)}\pi_j \mathbf f, \bar\partial^*\phi\rangle_{\Omega^{(m_0)}} - \langle T_j S_1 \cdots S_{j-1} \pi_j \mathbf f, \bar\partial^*\phi\rangle_{\Omega^{(m_0)}} \right| \\
\lesssim &\left\| T_j\left(\int_{ bD_1\times \cdots \times bD_{j-1}} \left|f_jh^{(m)} \chi_{ D_j^{(m)}}-f_j\right|\right) \right\|_{L^1(\Omega)}.
\end{split}
\end{equation}
On the other hand,
\begin{equation}\label{9}
\begin{split}
& \left\|\int_{ bD_1\times \cdots \times bD_{j-1}} \left|f_jh^{(m)} \chi_{ D_j^{(m)}}-f_j\right|\right\|_{L^1(\Omega)}\\
\lesssim &\left\| |f_j|\left|h^{(m)}\chi_{ D_j^{(m)}}-1\right|\right\|_{L^1(bD_1\times \cdots \times bD_{j-1}\times D_j\times\cdots\times D_n) } \\
\lesssim &\|f_j\|_{L^1(bD_1\times \cdots \times bD_{j-1}\times D_j\times\cdots\times D_n)} \|h^{(m)} -1\|_{C( D_1\times \cdots \times D_{j-1} ) }\\
&+\|f_j\|_{L^p(bD_1\times \cdots \times bD_{j-1}\times D_j\times\cdots\times D_n)}\text{vol}^{1-\frac{1}{p}}(D_j\setminus D_j^{(m)}) \\
\lesssim &\|f_j\|_{W^{n-1, p}(\Omega)}\left(\|h^{(m)} -1\|_{C( D_1\times \cdots \times D_{j-1} ) } +\text{vol}^{1-\frac{1}{p}}(D_j\setminus D_j^{(m)})\right) \rightarrow 0
\end{split}
\end{equation}
as $m\rightarrow \infty$. Here we used the trace theorem in the third inequality. Combining \eqref{T12}, \eqref{00} and \eqref{9}, we finally get
$$ \left|\left\langle T_j^{(m)}S_1^{(m)}\cdots S_{j-1}^{(m)}\pi_j \mathbf f - T_j S_1 \cdots S_{j-1} \pi_j \mathbf f, \bar\partial^*\phi\right\rangle_{\Omega^{(m_0)}} \right| \rightarrow 0 $$
as $m\rightarrow \infty$. \eqref{99} is thus proved. Combining \eqref{88} with \eqref{99}, we deduce that
\begin{equation*}
\langle T\mathbf f, \bar\partial^*\phi\rangle_{\Omega}=\lim_{m\rightarrow \infty}\langle T^{(m)}\mathbf f, \bar\partial^*\phi\rangle_{\Omega^{(m_0)}} =\langle\mathbf f, \phi\rangle_\Omega,
\end{equation*}
which verifies that $T\mathbf f$ is a weak solution to $\bar\partial u = \mathbf f$ on $\Omega$.
We next prove the weighted Sobolev estimate for the operator $T$ defined in \eqref{key}. Since $\bar\partial T\mathbf f = \mathbf f$, it suffices to estimate $\partial^{\alpha}T\mathbf f$ for multi-indices $\alpha= (\alpha_1, \ldots, \alpha_n)$ with $|\alpha|\le k$. In view of \eqref{key} and the fact that $\pi_j$, being a projection, is automatically bounded in $W^{k, p}(\Omega, \mu)$, we only need to estimate $\|\partial^\alpha T_n S_1\cdots S_{n-1} h\|_{L^{p}(\Omega, \mu)}$ in terms of $\|h\|_{W^{k+n-2, p}(\Omega, \mu)}$. Write $ \partial^\alpha T_n S_1\cdots S_{n-1} h = (\partial_n^{\alpha_n} T_n) (\partial_1^{\alpha_1} S_1)\cdots (\partial_{n-1}^{\alpha_{n-1}} S_{n-1}) h $. If $\alpha_n\ge 1$, we apply \eqref{T11} and \eqref{S_j} inductively to have \begin{equation*}
\begin{split}
\| \partial^\alpha T_n S_1\cdots S_{n-1} h\|_{L^{p}(\Omega, \mu)} \lesssim & \| (\partial_1^{\alpha_1} S_1)\cdots (\partial_{n-1}^{\alpha_{n-1}} S_{n-1}) h\|_{W^{\alpha_n-1, p}(\Omega, \mu)} \\
\lesssim & \|(\partial_2^{\alpha_2} S_2) \cdots (\partial_{n-1}^{\alpha_{n-1}} S_{n-1}) h\|_{W^{\alpha_n-1+\alpha_1+1, p}(\Omega, \mu)} \\
\lesssim &\cdots\\
\lesssim & \|h\|_{W^{\sum_{j=1}^n\alpha_j+n-2, p}(\Omega, \mu) } \le\|h\|_{W^{k+n-2, p}(\Omega, \mu) }.
\end{split}
\end{equation*}
If $\alpha_n=0$ and $\alpha\ne \mathbf 0$ (the case $\alpha=\mathbf 0$ being immediate from the $L^p(\Omega, \mu)$ boundedness of the operators), then there exists some $1\le j\le n-1$ such that $\alpha_j\ge 1$. Without loss of generality, assume $\alpha_1\ge 1$. Then by \eqref{T11}, \eqref{S11} and \eqref{S_j} inductively,
\begin{equation*}
\begin{split}
\| \partial^\alpha T_n S_1\cdots S_{n-1} h\|_{L^{p}(\Omega, \mu)} \lesssim & \| (\partial_1^{\alpha_1} S_1)\cdots (\partial_{n-1}^{\alpha_{n-1}} S_{n-1}) h\|_{L^{ p}(\Omega, \mu)} \\
\lesssim & \|(\partial_2^{\alpha_2} S_2) \cdots (\partial_{n-1}^{\alpha_{n-1}} S_{n-1}) h\|_{W^{ \alpha_1, p}(\Omega, \mu)} \\
\lesssim & \|(\partial_3^{\alpha_3} S_3) \cdots (\partial_{n-1}^{\alpha_{n-1}} S_{n-1}) h\|_{W^{ \alpha_1 +\alpha_2+1, p}(\Omega, \mu)} \\
\lesssim &\cdots\\
\lesssim & \|h\|_{W^{k+n-2, p}(\Omega, \mu) }.
\end{split}
\end{equation*}
The theorem is thus proved.
\end{proof}
Similar to an example in \cite{Zhang2}, the following example shows that the $\bar\partial$ problem does not improve regularity in weighted Sobolev spaces on product domains. As such, the weighted Sobolev regularity obtained in Theorem \ref{mainp} is optimal when $n=2$.
\begin{example}\label{ex2}
For each $k\in \mathbb Z^+, 1<p<\infty, \epsilon>0$ and any $s\in \left(\frac{2}{1+\epsilon}, 2\right)\setminus\{1\}$, consider $\mathbf f= (z_2-1)^{k-s}d\bar z_1 $ on ${\triangle\times \triangle}$, where the branch is determined by $\frac{\pi}{2} <\arg (z_2-1)<\frac{3\pi}{2}$, and let $\mu =|z_2-1|^{s(p-1)}$. Then $\mu\in A_p^*$, $\mathbf f\in W^{k, p}({\triangle\times \triangle}, \mu )$ and is $\bar\partial$-closed on ${\triangle\times \triangle}$. However, there does not exist a solution $u\in W^{k, p+\epsilon}({\triangle\times \triangle}, \mu)$ to $\bar\partial u =\mathbf f$ on ${\triangle\times \triangle}$. \end{example}
\begin{proof}One can directly verify that $\mathbf f\in W^{k, p}({\triangle\times \triangle}, \mu) $ is $\bar\partial$-closed on ${\triangle\times \triangle}$ and $\mu\in A_p^*$.
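To make the first assertion concrete, one can argue as follows: $\mathrm{Re}(z_2-1)<0$ for $z_2\in \triangle$, so the chosen branch of $(z_2-1)^{k-s}$ is single-valued and holomorphic in $z_2$ there, which gives the $\bar\partial$-closedness. Moreover, the derivatives of $\mathbf f$ of order $j\le k$ are comparable to $|z_2-1|^{k-s-j}$, the worst case being $j=k$, for which
$$\int_{{\triangle\times \triangle}} |z_2-1|^{-sp}\mu(z_2)\, dV = \pi\int_{\triangle} |z_2-1|^{-s}\, dV_{z_2}<\infty$$
since $s<2$; hence $\mathbf f\in W^{k, p}({\triangle\times \triangle}, \mu)$.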
Suppose there exists some $u\in W^{k, p+\epsilon}({\triangle\times \triangle}, \mu)$ satisfying $\bar\partial u =\mathbf f $ on ${\triangle\times \triangle}$. Then there exists some holomorphic function $h$ on ${\triangle\times \triangle} $, such that $u = (z_2-1)^{k-s}\bar z_1+h \in W^{k, p+\epsilon}({\triangle\times \triangle}, \mu)$.
For each $(r, z_2) \in U: = (0,1) \times \triangle\subset \mathbb R^3$, consider
$$v(r, z_2): =\int_{|z_1|= r} {u}(z_1, z_2) dz_1. $$
By H\"older inequality, Fubini theorem and the fact that $p>1$, \begin{equation*}
\begin{split}
\|\partial_{z_2}^k v\|^{p+\epsilon}_{L^{p+\epsilon}(U, \mu)} =&\int_{U} \left|\int_{|z_1|= r} \partial_{z_2}^k{u}(z_1, z_2) dz_1\right|^{p+\epsilon}\mu(z_2)dV_{z_2} dr\\
\le & \int_{|z_2|<1}\int_{0}^1\left|r \int_{0}^{2\pi} |\partial_{z_2}^k{u}(re^{i\theta}, z_2 )| d\theta \right|^{p+\epsilon} dr\,\mu(z_2)dV_{z_2} \\
\lesssim & \int_{|z_2|<1}\int_0^1 \int_{0}^{2\pi} |{\partial_{z_2}^k u}(re^{i\theta}, z_2 )|^{p+\epsilon}d\theta r dr \mu(z_2) dV_{z_2} \\
= & \int_{|z_2|<1, |z_1|<1 } |\partial_{z_2}^k{u}(z )|^{p+\epsilon}\mu(z_2)dV_{z}\le \|{u}\|^{p+\epsilon}_{W^{k, p+\epsilon}({\triangle\times \triangle}, \mu)}<\infty.
\end{split}
\end{equation*}
Thus $\partial_{z_2}^k v\in L^{ p+\epsilon}(U, \mu)$.
On the other hand, by Cauchy's theorem, for each $(r, z_2)\in U$,
\begin{equation*}
\begin{split}
\partial_{z_2}^k v(r, z_2) =&(k-s)\cdots (1-s)\int_{|z_1|=r} (z_2-1)^{-s}\bar z_1dz_1\\
=& (k-s)\cdots (1-s)(z_2-1)^{-s}\int_{|z_1|=r} \frac{r^2}{ z_1}dz_1 = 2(k-s)\cdots (1-s)\pi r^2i (z_2-1)^{-s},
\end{split}
\end{equation*}
which is not in $L^{ p+\epsilon}(U, \mu)$ by the choice of $s>\frac{2}{1+\epsilon}$. This is a contradiction!
\end{proof}
Making use of Theorem \ref{mainp}, one can immediately prove the weighted Sobolev estimate for the
$\bar\partial$ problem on $\mathbb H$ in Corollary \ref{main4}. In comparison to the statement of Theorem \ref{main}, the solution operator in Corollary \ref{main4} is the same for all Sobolev levels.
\medskip
\begin{proof}[Proof of Corollary \ref{main4}:] For any $\mathbf f = \sum_{j=1}^2 f_j(z)d\bar z_j\in W^{k,p}(\mathbb H) $, making use of the change of variables formula, we have the pull-back \begin{equation}\label{55}
\psi^* \mathbf f = \bar w_2f_1\circ \psi d\bar w_1 + \left(\bar w_1 f_1\circ \psi +f_2\circ \psi \right) d\bar w_2. \end{equation} Moreover, noting that by the chain rule
$$ \partial_{ w_1} = w_2\partial_{ z_1}, \ \ \ \partial_{ w_2} = w_1 \partial_{ z_1}+ \partial_{ z_2}, $$
we have $\psi^* \mathbf f\in W^{k, p}({\triangle\times \triangle}, |w_2|^2)$ with
\begin{equation}\label{pull}
\begin{split}
\|\psi^*\mathbf f\|^p_{W^{k, p}({\triangle\times \triangle}, |w_2|^{2}) }\lesssim& \sum_{j=1}^2\sum_{l=0}^k \int_{{\triangle\times \triangle}} |\nabla_w^l (f_j\circ \psi)(w)|^p|w_2|^{2} dV_w\\
\lesssim& \sum_{j=1}^2\sum_{l=0}^k \int_{\mathbb H} |\nabla_z^l f_j(z)|^pdV_z = \|\mathbf f\|^p_{W^{k,p}(\mathbb H) }. \end{split}\end{equation}
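For the reader's convenience, here is a sketch of where \eqref{55} and the above chain rule come from, under the assumption (consistent with both formulas) that the map in \eqref{psi} is $\psi(w_1, w_2) = (w_1w_2, w_2)$: then $\bar z_1 = \bar w_1\bar w_2$ and $\bar z_2 = \bar w_2$, so
$$\psi^*(d\bar z_1) = \bar w_2\, d\bar w_1 + \bar w_1\, d\bar w_2, \qquad \psi^*(d\bar z_2) = d\bar w_2,$$
and substituting into $\mathbf f = f_1\, d\bar z_1 + f_2\, d\bar z_2$ yields \eqref{55}.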
Since $k\in \mathbb Z^+, p>2$, by \eqref{pull} $\psi^*\mathbf f$ is $\bar\partial$-closed on ${\triangle\times \triangle}$ (see, for instance, \cite[p. 28]{Ma}). Making use of Theorem \ref{mainp}, there exists $\tilde u \in W^{k, p }({\triangle\times \triangle}, |w_2|^2)$ solving $\bar\partial \tilde u =\psi^*\mathbf f $. Arguing in the same way as in the proof of \cite[Theorem 1.2]{Zhang2}, we know that $ \mathcal T\mathbf f: = \tilde u\circ \phi $ solves $ \bar\partial u = \mathbf f$.
Moreover,
\begin{equation}\label{push}
\begin{split}
\| \mathcal T\mathbf f \|^p_{ W^{k,p}(\mathbb H, |z_2|^{kp})}=& \sum_{l=0}^k\int_{\mathbb H} |\nabla_z^l (\tilde u\circ \phi)(z) z_2^k|^{p}dV_z \\
\lesssim &\sum_{l=0}^k \int_{{\triangle\times \triangle}} |\nabla_w^l\tilde u(w)|^p|w_2|^2dV_w =\| \tilde u\|^p_{W^{k, p}({\triangle\times \triangle}, |w_2|^2) }.
\end{split}
\end{equation}
Here we used the chain rule
$$ \partial_{ z_1} = \frac{1}{z_2}\partial_{ w_1}, \ \ \ \partial_{ z_2} = -\frac{z_1}{z^2_2}\partial_{ w_1}+ \partial_{w_2}$$
and the fact that $|z_1|<|z_2|$ on $\mathbb H$.
Finally, combining \eqref{pull}-\eqref{push} and Theorem \ref{mainp},
$$\|\mathcal T\mathbf f\|_{W^{k,p}(\mathbb H, |z_2|^{kp})}\lesssim \|\tilde u\|_{W^{k, p}({\triangle\times \triangle}, |w_2|^2) }\lesssim \|\psi^*\mathbf f\|_{W^{k, p}({\triangle\times \triangle}, |w_2|^2) }\lesssim \|\mathbf f\|_{W^{k,p}(\mathbb H) }. $$
\end{proof}
\section{Optimal Sobolev regularity on the Hartogs triangle}
In this section, following an idea of Ma and Michel in \cite{MM}, we shall adjust the solution operator provided by Corollary \ref{main4}, so that the new operator cancels the loss in the exponent of the weight. In detail, given a $W^{k, p}$ datum on the Hartogs triangle $\mathbb H$, we subtract from it its $(k-1)$-th order Taylor polynomial at $(0,0)$ and then pull the truncated datum back to the punctured bidisc $\triangle\times \triangle^*$. Upon extension and solving the $\bar\partial$ problem on the bidisc $\triangle\times \triangle$ using Theorem \ref{mainp}, we once again subtract from the solution its $(k-1)$-th order holomorphic Taylor polynomial in the $w_2$ variable at $w_2=0$. Both Taylor polynomials are meaningful when $p>4$ due to the Sobolev embedding theorem. Moreover, we can obtain a refined weighted Sobolev regularity at each operation (Proposition \ref{tf} and Proposition \ref{soe}) as a consequence of the truncation. Finally, pushing this truncated solution forward to $\mathbb H$, we show that it solves $\bar\partial$ on $\mathbb H$ and maintains the same Sobolev regularity as that of the datum.
Throughout the rest of the paper, $z =(z_1, z_2)$ will serve as the variable on $\mathbb H$, and $w =(w_1, w_2)$ as the variable on $\triangle\times \triangle$.
\subsection{Truncating data on the Hartogs triangle}
Given a $\bar\partial$-closed $(0,1)$ form ${\mathbf f}\in W^{k, p}(\mathbb H), k\in \mathbb Z^+, p>4$, recalling that $\mathbb H$ satisfies the Sobolev extension property, it extends to an element, still denoted by $\mathbf f$, in $ W^{k, p}(\mathbb C^2)$. In particular, by the Sobolev embedding theorem, $\mathbf f\in C^{k-1, \alpha}(\mathbb H)$ for some $\alpha>0$. Denote by $\mathcal P_k $ the $(k-1)$-th order Taylor polynomial operator at $(0, 0)$. Namely, if $h\in C^{k-1}$ near $(0, 0)$, then
$$ \mathcal P_k h(z): = \sum_{l_1+l_2+s_1+s_2=0}^{k-1} \frac{\partial_{z_1}^{l_1}\bar\partial_{z_1}^{l_2}\partial_{z_2}^{s_1}\bar\partial_{z_2}^{s_2} h(0)}{l_1!l_2! s_1!s_2!}z_1^{l_1}\bar z_1^{l_2}z_2^{s_1}\bar z_2^{s_2}. $$
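For instance, when $k=2$, $\mathcal P_2$ is the full first-order Taylor polynomial at the origin:
$$ \mathcal P_2 h(z) = h(0) + \partial_{z_1}h(0)\,z_1 + \bar\partial_{z_1}h(0)\,\bar z_1 + \partial_{z_2}h(0)\,z_2 + \bar\partial_{z_2}h(0)\,\bar z_2. $$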
Then $\mathcal P_k\mathbf f $ is $\bar\partial $-closed on $ \mathbb H$ and thus on ${\triangle\times \triangle}$ (see \cite[Lemma 3]{MM}). Applying the $W^{k, p}$ estimate of $\bar\partial$ on ${\triangle\times \triangle}$ (i.e., Theorem \ref{mainp} with $n=2$ and $ \mu \equiv 1$), one obtains some $u_{k}\in W^{k,p}({\triangle\times \triangle})$ satisfying
\begin{equation}\label{pu}
\begin{split}
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \bar\partial u_k = \mathcal P_{k}\mathbf f\ \ \text{on}\ \ {\triangle\times \triangle}; \\
& \|u_{k}\|_{W^{k, p}({\triangle\times \triangle})}\lesssim \|\mathcal P_k\mathbf f\|_{W^{k, p}(\triangle\times \triangle)}\lesssim \|\mathcal P_k\mathbf f\|_{C^{k-1}(\mathbb H)}\lesssim \|\mathbf f\|_{W^{k, p}(\mathbb H)}.
\end{split}
\end{equation}
Let $\psi$ be defined in \eqref{psi}. We truncate $ {\mathbf f}$ by subtracting $\mathcal P_k {\mathbf f}$, and then pull the truncated datum back by $\psi$ to obtain $\psi^*(\mathbf f -\mathcal P_{k }\mathbf f)$ on the punctured bidisc.
\medskip
Denote by $\mathcal P_{2,k }$ the $(k -1)$-th order Taylor polynomial operator in the complex $w_2$ variable at $w_2=0$ of $C^{k-1}$ functions on $\triangle\times \triangle$. Then for any $h\in W^{k, p}(\mathbb H), k\in \mathbb Z^+, p>4$,
\begin{equation*}
\psi^* \left(\mathcal P_k h\right) = \mathcal P_{2, k} \left(\psi^* h\right).
\end{equation*}
In particular,
\begin{equation}\label{ex1}
\mathcal P_{2, k} \left(\psi^* (h-\mathcal P_k h )\right) =0.
\end{equation}
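Both identities can be seen by inspecting monomials, assuming as before that $\psi(w_1, w_2) = (w_1w_2, w_2)$: each monomial $z_1^{l_1}\bar z_1^{l_2}z_2^{s_1}\bar z_2^{s_2}$ with $l_1+l_2+s_1+s_2\le k-1$ pulls back to $w_1^{l_1}\bar w_1^{l_2} w_2^{l_1+s_1}\bar w_2^{l_2+s_2}$, whose total degree in $(w_2, \bar w_2)$ is again at most $k-1$, so $\psi^*(\mathcal P_k h)$ equals its own $\mathcal P_{2,k}$; on the other hand, $h-\mathcal P_k h$ vanishes to order $k$ at $(0,0)$ and $\psi$ maps $\{w_2=0\}$ to $(0,0)$, so all $w_2$-derivatives of $\psi^*(h-\mathcal P_k h)$ of order at most $k-1$ vanish at $w_2=0$.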
The following proposition states that the pull-back $\psi^*(\mathbf f -\mathcal P_{k }\mathbf f)$ of the truncated datum satisfies a more refined Sobolev estimate than \eqref{pull}.
\begin{pro}\label{tf}
Let $\mathbf f\in W^{k, p}(\mathbb H)$ be a $\bar\partial$-closed $(0,1)$ form on $\mathbb H, k\in \mathbb Z^+, p>4$ and $\psi$ be defined in \eqref{psi}. Let $$\tilde {\mathbf f}= \tilde f_1d\bar w_1 +\tilde f_2 d\bar w_2: = \psi^*(\mathbf f -\mathcal P_{k }\mathbf f) \ \ \text{on}\ \ {\triangle\times \triangle^*}.$$
Then $\tilde {\mathbf f}$ extends as a $\bar\partial$-closed $(0,1)$ form on ${\triangle\times \triangle}$, with $\tilde {\mathbf f}\in W^{k,p}(\triangle\times \triangle, |w_2|^2) $ and \begin{equation}\label{p2f}
\mathcal P_{2, k} \tilde {\mathbf f} = 0.
\end{equation}
Moreover, for $t, s\in \mathbb Z^+\cup\{0\}, t+s\le k$,
\begin{equation}\label{hh}
\left\| |w_2|^{-k+s} \nabla^{t}_{w_1}\nabla^s_{w_2}\tilde {\mathbf f} \right\|_{L^p({\triangle\times \triangle}, |w_2|^2) }\lesssim \|\mathbf f\|_{W^{k, p}(\mathbb H) }.
\end{equation}
\end{pro}
\medskip
In order to prove Proposition \ref{tf}, we need to establish a crucial weighted Hardy-type inequality on $\mathbb C$. We shall adopt the same notation $\mathcal P_k $ for the $(k-1)$-th order Taylor polynomial operator at $0$ on $C^{k-1}$ functions near $0\in \mathbb C$.
\begin{lem}\label{tr1}
For any $h\in W^{k, p}(\mathbb C, |w|^2), k\in \mathbb Z^+, p>4$ with $\mathcal P_k h =0$, and $j =0, \ldots, k$,
\begin{equation*}
\int_{\mathbb C}|\nabla_w^j h(w) |^p|w|^{2 -(k-j)p}dV_w\lesssim \int_{\mathbb C}|\nabla^k_w h(w)|^p|w|^{2}dV_w.
\end{equation*}
\end{lem}
\begin{proof}
Since the $j=k$ case is trivial, we assume $j\le k-1$. We shall show that for $h\in W^{l, p}(\mathbb C, |w|^2) $ with $\mathcal P_l h =0$, $l =1, \ldots, k$,
\begin{equation}\label{bb3}
\int_{\mathbb C}|h(w) |^p|w|^{2 -lp}dV_w\lesssim \int_{\mathbb C}|\nabla_w h(w)|^p|w|^{2-(l-1)p}dV_w.
\end{equation}
If so, replacing $l$ and $h$ by $k-j$ and $\nabla_w^j h$ in \eqref{bb3}, respectively, we obtain
$$ \int_{\mathbb C}|\nabla_w^j h(w) |^p|w|^{2 -(k-j)p}dV_w\lesssim \int_{\mathbb C}|\nabla^{j+1}_w h(w)|^p|w|^{2-(k-j-1)p}dV_w. $$
A standard induction on $j$ will complete the proof of the lemma.
To show \eqref{bb3}, first apply Stokes' theorem to $ |h(w)|^p|w|^{2-lp}\bar wd w$ on $\triangle_R\setminus \overline{\triangle_\epsilon}, 0<\epsilon<R<\infty$, to get
\begin{equation*}\label{st1}
\begin{split}
& \frac{1}{2i}\int_{b\triangle_R } |h(w)|^p|w|^{2-lp}\bar wd w - \frac{1}{2i}\int_{b\triangle_\epsilon } |h(w)|^p|w|^{2-lp}\bar wd w\\
=& \int_{ \triangle_R\setminus \overline{\triangle_\epsilon} } \bar\partial_{ w} \left( |h(w)|^p|w|^{2-lp}\bar w\right)dV_{w} \\
=& \left(2-\frac{lp}{2}\right) \int_{\triangle_R\setminus \overline{\triangle_\epsilon} } |h(w)|^p|w|^{2-lp} dV_{w}+ \int_{\triangle_R\setminus \overline{\triangle_\epsilon }} \bar\partial_w \left( |h(w)|^p\right)|w|^{2-lp}\bar wdV_{w}.
\end{split}
\end{equation*}
Since
$$\frac{1}{2i}\int_{b\triangle_R } |h(w)|^p|w|^{2-lp}\bar wd w = \frac{1}{2}\int_0^{2\pi}|h(Re^{i\theta})|^pR^{4-lp} d\theta \ge 0,$$
one further has
\begin{equation}\label{st}
\begin{split}
\left(\frac{lp}{2}-2\right) \int_{\triangle_R\setminus \overline{ \triangle_\epsilon } } |h(w)|^p|w|^{2-lp} dV_{w} \le \frac{1}{2i}\int_{b\triangle_\epsilon } |h(w)|^p|w|^{2-lp}\bar wd w + \int_{\triangle_R\setminus \overline{\triangle_\epsilon }} \bar\partial_w \left( |h(w)|^p\right)|w|^{2-lp}\bar wdV_{w}.
\end{split}
\end{equation}
We claim that
\begin{equation}\label{bg}
\lim_{\epsilon\rightarrow 0} \epsilon^{3-lp} \int_{b\triangle_\epsilon } |h(w)|^pd\sigma_w = 0,
\end{equation}
which is equivalent to
$$ \lim_{\epsilon\rightarrow 0}\left| \int_{b\triangle_\epsilon } |h(w)|^p|w|^{2-lp}\bar w d w\right| = 0. $$
Indeed, let $q$ be the dual exponent of $p$, i.e., $\frac{1}{p}+\frac{1}{q}=1$. For a.e. $w\in b\triangle$ and $0<\delta<\epsilon$, applying the Fubini theorem in polar coordinates, one can see that $h(tw)\in W^{k, p}((\delta, \epsilon))$ as a function of $t$. By the fundamental theorem of calculus,
we have
$$ h(\epsilon w) = h(\delta w) + \int_\delta^\epsilon \frac{d}{dt} h(tw)dt.$$
Letting $\delta\rightarrow 0$ in the above, we have
$$ |h(\epsilon w)|\le \int_0^\epsilon |\nabla h(tw)|dt.$$
An induction process further gives
\begin{equation}\label{poi}
\begin{split}
|h(\epsilon w) |^p\le &\left|\int_0^{\epsilon}\int_0^{t_1}\cdots \int_0^{t_{l-1}}|\nabla^l h(t_{l}w)|dt_{l}\cdots dt_2dt_1\right|^p\\
\le &\left|\int_0^{\epsilon}\int_0^{{\epsilon}}\cdots \int_0^{{\epsilon}}|\nabla^l h(t_{l}w)|dt_{l}\cdots dt_2dt_1\right|^p\\
\le& {\epsilon}^{(l-1)p}\left|\int_0^{\epsilon}|\nabla^l h(tw)|t^{\frac{3}{p}}\cdot t^{-\frac{3}{p}}dt\right|^p\\
\le& {\epsilon}^{(l-1)p} \int_0^{\epsilon}| \nabla^l h(tw)|^pt^{3}dt \left(\int_0^{\epsilon} t^{-\frac{3q}{p}}dt\right)^{\frac{p}{q}}\\
\lesssim & {\epsilon}^{lp-4}\int_0^{\epsilon}| \nabla^l h(tw)|^pt^{3}dt.
\end{split}
\end{equation}
Here we used the fact that $-\frac{3q}{p}> -1$ when $p>4$ in the last inequality. Note that $$\epsilon\int_{b\triangle}|h({\epsilon}w) |^p d\sigma_w = \int_{b\triangle_{\epsilon}}|h(w) |^p d\sigma_w. $$ Multiplying both sides of \eqref{poi} by $ {\epsilon}^{4-lp}$ and integrating over $b\triangle$, one has
\begin{equation*}
\begin{split}
\epsilon^{3-lp} \int_{b\triangle_{\epsilon}}|h(w) |^p d\sigma_w\lesssim& \int_0^{\epsilon}\int_{b\triangle}| \nabla^l h(tw)|^pt^{3}d\sigma_wdt\\
=& \int_0^{\epsilon}\int_{b\triangle_t}| \nabla^l h( w)|^p|w|^{2}d\sigma_w dt\\
\le & \int_{\triangle_\epsilon}| \nabla^l h(w)|^p|w|^{2}dV_w \rightarrow 0
\end{split}
\end{equation*}
as $\epsilon\rightarrow 0$. The claim \eqref{bg} is thus proved.
Letting $\epsilon\rightarrow 0$ and $R\rightarrow \infty$ in \eqref{st} and making use of \eqref{bg}, since $ \frac{lp}{2}-2>0$ we further infer
\begin{equation*}
\begin{split}
\int_{\mathbb C } |h(w)|^p|w|^{2-lp} dV_{w}\lesssim & \int_{\mathbb C } |\nabla_w h(w)||h(w)|^{p-1}|w|^{3-lp} dV_{w}\\
=& \int_{\mathbb C } |\nabla_w h(w)||w|^{\frac{2}{p} -(l-1)}\cdot |h(w)|^{p-1}|w|^{2-lp+l-\frac{2}{p}} dV_{w}\\
\le& \left(\int_{\mathbb C } |\nabla_w h(w)|^p|w|^{2-(l-1)p}dV_w\right)^{\frac{1}{p}}\left(\int_{\mathbb C} |h(w)|^{p}|w|^{2-lp} dV_{w}\right)^{1-\frac{1}{p}}.
\end{split}
\end{equation*}
\eqref{bb3} follows by dividing both sides by $\left(\int_{\mathbb C} |h(w)|^{p}|w|^{2-lp} dV_{w}\right)^{1-\frac{1}{p}} $ and then taking the $p$-th power.
\end{proof}
\medskip
\begin{cor}\label{tr}
Let $D$ be a uniform domain in $\mathbb C$ with $0\in D$.
Then for any $h\in W^{k, p}(D, |w|^2), k\in \mathbb Z^+, p>4$ with $\mathcal P_k h =0$, and $j =0, \ldots, k$,
\begin{equation*}
\int_{D}|\nabla_w^j h(w) |^p|w|^{2 -(k-j)p}dV_w\lesssim \int_{D}|\nabla^k_w h(w)|^p|w|^{2}dV_w. \end{equation*}
\end{cor}
\begin{proof}
Given $h$ satisfying the assumption of the corollary, according to \cite[Theorem 1.2]{Ch}, one can extend $h$ to be an element $\tilde h\in W^{k, p}(\mathbb C, |w|^2) $, such that
$$ \int_{\mathbb C}|\nabla^k_w \tilde h(w)|^p|w|^{2}dV_w \lesssim \int_{D}|\nabla^k_w h(w)|^p|w|^{2}dV_w.$$
Obviously $\mathcal P_k\tilde h =0$. Hence, applying Lemma \ref{tr1} to $\tilde h$, we have
\begin{equation*}
\begin{split}
\int_{D}|\nabla_w^j h(w) |^p|w|^{2 -(k-j)p}dV_w\le & \int_{\mathbb C}|\nabla_w^j \tilde h(w) |^p|w|^{2 -(k-j)p}dV_w\\
\lesssim& \int_{\mathbb C}|\nabla^k_w \tilde h(w)|^p|w|^{2}dV_w \lesssim \int_{D}|\nabla^k_w h(w)|^p|w|^{2}dV_w.
\end{split}
\end{equation*}
\end{proof}
\begin{remark}\label{re2}
Recall that any domain with Lipschitz boundary is a uniform domain. As a direct consequence of Corollary \ref{tr}, if $h\in W^{k, p}(\triangle, |w|^2), p>4$, with $\mathcal P_k h =0$, then $w^{-k}h\in L^p(\triangle, |w|^2).$
\end{remark}
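A simple function exhibiting this is $h(w) = \bar w^k$ on $\triangle$: all derivatives of $h$ of order at most $k-1$ vanish at $0$, so $\mathcal P_k h =0$, while $|w^{-k}h(w)| = 1$ on $\triangle\setminus\{0\}$, so $w^{-k}h\in L^p(\triangle, |w|^2)$ indeed.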
As shown in the proof of Lemma \ref{tr1} (and thus Corollary \ref{tr}), the assumption $p>4$ is essential and cannot be dropped. We are now ready to prove Proposition \ref{tf}, making use of Corollary \ref{tr}.
\begin{proof}[Proof of Proposition \ref{tf}: ]
The $\bar\partial$-closedness of $\psi^*\mathbf f$ on ${\triangle\times \triangle}$ was checked in the proof of Corollary \ref{main4}. Thus $ \tilde{\mathbf f}$ is $\bar\partial$-closed on ${\triangle\times \triangle}$, and by \eqref{55}, \begin{equation}\label{ho}
\tilde f_1= \bar w_2\psi^*(f_1 - \mathcal P_k f_1), \ \ \tilde f_2= \bar w_1\psi^*(f_1 -\mathcal P_k f_1) +\psi^*(f_2-\mathcal P_kf_2).
\end{equation}
\eqref{p2f} follows from the above \eqref{ho} and \eqref{ex1}.
Next we prove \eqref{hh}. For $l_1, l_2\in \mathbb Z^+\cup\{0\}$ with $l_1+l_2 = t$,
$$ \bar\partial^{l_1}_{w_1}\partial^{l_2}_{w_1} \left(\psi^* f_j\right) = \bar w_2^{l_1} w_2^{l_2} \psi^* \left( \bar\partial^{l_1}_{z_1}\partial^{l_2}_{z_1} f_j\right), \ \ \ j= 1, 2.$$
Observing that $$ \nabla_{z_1}^t( \mathcal P_kf_j) = \mathcal P_{k-t}\left( \nabla_{z_1}^t f_j\right), $$ we get from \eqref{ho} that
\begin{equation*}
\begin{split}
&\left\| |w_2|^{-k+s} \nabla^{t}_{w_1}\nabla^s_{w_2}\tilde {\mathbf f} \right\|_{L^p({\triangle\times \triangle}, |w_2|^2) }\\
\lesssim& \sum_{j=1}^2 \left\| |w_2|^{-k+s} \nabla_{ w_2}^{s} \nabla_{ w_1}^{t}\left( \psi^*(f_j-\mathcal P_kf_j) \right)\right\|_{L^p({\triangle\times \triangle}, |w_2|^2) }\\
\lesssim&\sum_{j=1}^2 \sum_{l_1+l_2=t} \left\| |w_2|^{-k+s} \nabla_{ w_2}^{s} \left( \bar w_2^{l_1}w_2^{l_2} \psi^*\left(\nabla_{ z_1}^t f_j-\mathcal P_{k-t}\left( \nabla_{z_1}^t f_j\right)\right)\right)\right\|_{L^p({\triangle\times \triangle}, |w_2|^2) }\\
\lesssim &\sum_{1\le j\le 2}\sum_{ 0\le l\le s} \left\| |w_2|^{-k +t+l} \nabla_{ w_2}^{l} \left({ \psi^*}\left( \nabla_{ z_1}^t f_j-\mathcal P_{k-t}\left( \nabla_{ z_1}^t f_j \right)\right)\right)\right\|_{L^p({\triangle\times \triangle}, |w_2|^2) }.
\end{split}
\end{equation*}
Thus we only need to estimate $ \left\| |w_2|^{-k +t+l} \nabla_{ w_2}^{l} \left({ \psi^*}\left( \nabla_{ z_1}^t f_j-\mathcal P_{k-t}\left( \nabla_{ z_1}^t f_j \right)\right)\right)\right\|_{L^p({\triangle\times \triangle}, |w_2|^2) }, 0\le l\le s.$
For each fixed $w_1\in \triangle$, let $$h_{w_1} : = \psi^*\left( \nabla_{ z_1}^t f_j -\mathcal P_{k-t}\left(\nabla_{ z_1}^t f_j\right) \right)(w_1, \cdot) .$$
Then $\mathcal P_{k-t}h_{w_1} =0$ by \eqref{ex1}. Applying Corollary \ref{tr} to $h_{w_1}$ on $\triangle$, we have for $ 0\le l(\le s)\le k-t$,
\begin{equation}\label{cc}
\begin{split}
& \left\| |w_2|^{-k +t+l} \nabla_{ w_2}^{l} \left({ \psi^*}\left(\nabla_{ z_1}^t f_j-\mathcal P_{k-t}\left(\nabla_{ z_1}^t f_j \right)\right)\right)\right\|^p_{L^p({\triangle\times \triangle}, |w_2|^2) }\\
\le & \int_{\triangle }\int_{\triangle}|w_2|^{2 -(k-t-l)p}\left| \nabla_{ w_2}^{l} \left(\psi^*\left( \nabla_{ z_1}^t f_j -\mathcal P_{ k-t} \left(\nabla_{ z_1}^t f_j \right) \right)\right)(w_1, w_2)\right|^pdV_{w_2}dV_{w_1}\\
\lesssim & \int_{\triangle}\int_{\triangle}|w_2|^{ 2}\left| \nabla_{ w_2}^{k-t }\left(\psi^*\left( \nabla_{ z_1}^t f_j -\mathcal P_{k-t}\left( \nabla_{ z_1}^t f_j\right) \right)\right)(w_1, w_2)\right|^pdV_{w_2}dV_{w_1}.
\end{split}
\end{equation}
On the other hand, note that for any function $h \in W^{k-t, p}(\mathbb H)$, $l_1+l_2 = k-t$,
$$ \bar\partial_{ w_2}^{l_1} \partial_{ w_2}^{l_2} \psi^* h = \sum_{m_1=0}^{l_1}\sum_{m_2=0}^{l_2} C_{m_1, m_2, l_1, l_2}\bar w_1^{m_1} w_1^{m_2}\psi^*\left( \bar\partial_{ z_1}^{m_1}\bar\partial_{ z_2}^{l_1-m_1}\partial_{ z_1}^{m_2} \partial_{ z_2}^{l_2-m_2} h\right)$$
for some constants $C_{m_1, m_2, l_1, l_2}$ dependent only on $m_1, m_2, l_1$ and $l_2$. Thus
\begin{equation*}
\begin{split}
& \left|\nabla_{ w_2}^{k-t }\left(\psi^*\left( \nabla_{ z_1}^t f_j -\mathcal P_{k-t}\left( \nabla_{ z_1}^t f_j\right) \right) \right) \right|\\\lesssim & \sum_{m=0}^{k-t} |w_1|^{m}\left|\psi^*\left( \nabla_{ z_1}^{t+m} \nabla_{z_2}^{k-t-m} f_j\right) - \psi^*\left( \nabla_{ z_1}^{m}\nabla_{ z_2}^{k-t-m} \left(\mathcal P_{k-t}\left( \nabla_{ z_1}^t f_j\right) \right)\right) \right|\\
\le& \sum_{m=0}^{k-t} \left|\psi^*\left( \nabla_{ z_1}^{t+m} \nabla_{z_2}^{k-t-m} f_j\right)\right|.
\end{split}
\end{equation*}
Here we used in the last inequality the fact that
$\nabla_z^{ k-t} \left(\mathcal P_{k-t}\left( \nabla_{ z_1}^t f_j\right) \right) =0.$
Hence, by a change of variables, \eqref{cc} can be further estimated as follows.
\begin{equation*}
\begin{split}
& \left\| |w_2|^{-k +t+l} \nabla_{ w_2}^{l} \left({ \psi^*}\left( \nabla_{ z_1}^t f_j-\mathcal P_{k-t}\left(\nabla_{z_1}^t f_j\right)\right)\right)\right\|^p_{L^p({\triangle\times \triangle}, |w_2|^2) }\\
\lesssim& \sum_{m=0}^{k-t}\int_{\triangle}\int_{\triangle}|w_2|^{ 2}\left| \psi^*\left( \nabla_{z_1}^{t+m}\nabla_{ z_2}^{k-t-m} f_j \right) (w_1, w_2)\right|^pdV_{w_2}dV_{w_1}\\
\lesssim & \|\psi^*(\nabla_z^k f_j) \|^p_{L^p(\triangle\times \triangle, |w_2|^2)} \lesssim \|\nabla_z^k f_j\|^p_{L^{p}(\mathbb H)} \le \|\mathbf f\|^p_{W^{k, p}(\mathbb H)}.
\end{split}
\end{equation*}
The proof of \eqref{hh} is complete. That $\tilde {\mathbf f}\in W^{k,p}(\triangle\times \triangle, |w_2|^2) $
is a direct consequence of \eqref{hh}.
\end{proof}
\subsection{Truncating solutions on the bidisc}
Given $\tilde {\mathbf f}$ in Proposition \ref{tf}, let $u^*$ be the solution to $\bar\partial u^* = \tilde {\mathbf f}$ on ${\triangle\times \triangle}$ obtained in Theorem \ref{mainp} with
\begin{equation}\label{t2}
\|u^*\|_{W^{k, p}({\triangle\times \triangle}, |w_2|^2) }\lesssim \|\tilde {\mathbf f}\|_{W^{k, p}({\triangle\times \triangle}, |w_2|^2) }.
\end{equation} Consider
\begin{equation}\label{tud}
\begin{split}
\tilde u(w_1, w_2): =&u^*(w_1, w_2) -\tilde{\mathcal P}_{2, k} u^*(w_1, w_2)\\
=& u^*(w_1, w_2) -\sum_{l=0}^{k-1}\frac{1}{l!}w_2^l\partial_{w_2}^l u^*(w_1, 0),\ \ (w_1, w_2)\in {\triangle\times \triangle},
\end{split}
\end{equation}
where $\tilde{\mathcal P}_{2, k}$ is the $(k-1)$-th order holomorphic Taylor polynomial operator in the $w_2$ variable at $w_2=0$. The function $\tilde u$ is well defined, due to the facts that for each fixed $w_1\in \triangle$ and $l\le k-1$, $\partial_{w_2}^l u^*(w_1, \cdot) \in W^{1, p}(\triangle, |w_2|^2)$, and that when $p>4$, \begin{equation}\label{em}
W^{1, p}(\triangle, |w_2|^2) \subset W^{1, q}(\triangle)\subset C^\alpha(\triangle) \end{equation} for some $q>2$, and $\alpha = 1-\frac{2}{q}$. Here the last inclusion $W^{1, q}(\triangle)\subset C^\alpha(\triangle)$ is the Sobolev embedding theorem; the inclusion $W^{1, p}(\triangle, |w_2|^2) \subset W^{1, q}(\triangle)$ can be seen as follows. Choose some $r\in (\frac{2}{p}, \frac{1}{2})$ and let $q = pr$. Then $q>2$ and $ \frac{r}{1-r}<1$. For any $h\in W^{1, p}(\triangle, |w_2|^2)$,
$$\int_\triangle |h(w_2)|^q dV_{w_2} = \int_\triangle |h(w_2)|^q |w_2|^{2r} |w_2|^{-2r}dV_{w_2} \le \left(\int_\triangle |h(w_2)|^p |w_2|^2dV_{w_2}\right)^{r}\left(\int_\triangle |w_2|^{-\frac{2r}{1-r}} dV_{w_2}\right)^{1-r} <\infty,$$
and similarly $|\nabla h| \in L^{q}(\triangle) $.
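As a concrete instance of this choice of exponents: for $p=8$ one may take $r=\frac{3}{8}\in\left(\frac{2}{8}, \frac{1}{2}\right)$, so that $q = pr = 3>2$ and $\frac{2r}{1-r} = \frac{6}{5}<2$, which makes $|w_2|^{-\frac{2r}{1-r}}$ integrable on $\triangle$.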
The goal of this subsection is to show that $\tilde u$ satisfies the following refined weighted estimate.
\medskip
\begin{pro}\label{soe}Let $\tilde u$ be defined in \eqref{tud}. Then $\tilde u \in W^{k, p}({\triangle\times \triangle}, |w_2|^2)$. Moreover, for each $s, t\in \mathbb Z^+\cup\{0\}$ with $s+t\le k,$ we have
\begin{equation*}
\left\||w_2|^{-k+s} \partial_{ w_1}^{t}\partial_{ w_2}^{s}\tilde u \right\|_{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim \left\| \mathbf f \right\|_{ W^{k,p}(\mathbb H)}.
\end{equation*}
\end{pro}
\medskip
We begin by proving that $\tilde u \in W^{k, p}({\triangle\times \triangle}, |w_2|^2)$. It is worth pointing out that,
arguing similarly as in \eqref{em}, one has for $k\in \mathbb Z^+, p>4$,
$$ W^{k, p}(\triangle, |w_2|^2) \subset W^{k, q}(\triangle)\subset C^{k-1, \alpha}(\triangle) $$
for some $q>2$ and $\alpha>0$. In particular, for any $h\in W^{k, p}(\triangle\times \triangle, |w_2|^2), k\in \mathbb Z^+, p>4$, we have $h(w_1, \cdot)\in C^{k-1, \alpha}(\triangle)$ for a.e. fixed $w_1\in \triangle$.
\medskip
\begin{lem}\label{tu} Let $\tilde u$ be defined in \eqref{tud}. For each $l=0, \ldots, k-1$, $\partial_{w_2}^l u^*(w_1, 0)\in W^{k, p}({\triangle\times \triangle}, |w_2|^2)$ with
\begin{equation}\label{tuu}
\left\|\partial_{w_2}^l u^*(w_1, 0)\right\|_{W^{k,p}({\triangle\times \triangle}, |w_2|^2)}\lesssim \left\| \mathbf f \right\|_{ W^{k,p}(\mathbb H)}.
\end{equation}
Consequently, $\tilde u\in W^{k, p}({\triangle\times \triangle}, |w_2|^2)$ satisfying
\begin{equation}\label{p2}
\mathcal P_{2, k} \tilde u = 0,
\end{equation}
\begin{equation}\label{hot}
\bar\partial \tilde u = \tilde {\mathbf f} \ \ \text{on}\ \ {\triangle\times \triangle}
\end{equation} and
\begin{equation}\label{tuf}
\|\tilde u\|_{W^{k, p}({\triangle\times \triangle}, |w_2|^2) }\lesssim \| {\mathbf f}\|_{W^{k, p}(\mathbb H) }.
\end{equation}
\end{lem}
\begin{proof}
We first show that $ \sum_{l=0}^{k-1}w_2^l\partial_{w_2}^l u^*(w_1, 0) $ is holomorphic on ${\triangle\times \triangle}$, from which \eqref{hot} follows. Clearly, it is holomorphic in the $w_2$ variable. For the holomorphy in the $w_1$ variable, note that $$ \bar\partial_{ w_1}\partial_{w_2}^l u^* = \partial_{w_2}^l \tilde f_1$$ in the weak sense. On the other hand, for fixed $w_1\in \triangle$, $\partial_{w_2}^l\tilde f_1(w_1, \cdot)\in C^\alpha(\triangle)$ for some $\alpha>0$ by \eqref{em}, and $ \partial_{w_2}^l\tilde f_1(w_1, 0) =0$ by \eqref{p2f}. Thus $\bar\partial_{ w_1}\partial_{w_2}^l u^*(w_1, 0) = \partial_{w_2}^l\tilde f_1(w_1, 0) =0$ for every $w_1\in \triangle$, which gives the holomorphy in the $w_1$ variable.
Next we prove \eqref{tuu}. By the holomorphy of $\partial_{w_2}^l u^*(w_1, 0)$ above, it suffices to estimate $ \left\| \partial_{w_1}^{t}\partial_{w_2}^{l} u^*(w_1, 0)\right\|_{L^p({\triangle\times \triangle}, |w_2|^2)}$ for $t =0,\ldots, k$ and $ l=0, \ldots, k-1$.
Let $\chi$ be a smooth function compactly supported in $\triangle$ such that $\chi=1$ in $\triangle_{\frac{1}{2}}$. By \eqref{key} (or by directly verifying $u^* = T_1\tilde f_1 + T_2S_1 \tilde f_2 = T_2\tilde f_2 +T_1S_2\tilde f_1 $), we have
\begin{equation*}
\begin{split}
\partial_{w_1}^{t}\partial_{ w_2}^{l} u^* =& \partial_{w_2}^{l}T_2\left((1-\chi(w_2)) \partial_{ w_1}^{t}\tilde f_2\right) + \partial_{ w_1}^{t}\partial_{w_2}^{l}T_2\left( \chi(w_2) \tilde f_2\right) + \partial_{ w_2}^{l}S_2 \left(\partial_{w_1}^{t} T_1 \tilde f_1\right)\\
=&: A_1+A_2+A_3.
\end{split}
\end{equation*}
For $A_3$, let $h: =\partial_{w_1}^{t} T_1 \tilde f_1$. Since $ t\le k$, by \eqref{T112} $h \in W^{1, p}({\triangle\times \triangle}, |w_2|^2)$, with $\|h\|_{W^{1, p}({\triangle\times \triangle}, |w_2|^2)}\lesssim \|\tilde f_1\|_{W^{k, p}({\triangle\times \triangle}, |w_2|^2)}$. Note that for $w_1\in \triangle$, $$ A_3(w_1, 0) = \frac{ l!}{2\pi i} \int_{b\triangle}\frac{ h (w_1, \zeta)}{\zeta^{l+1}}d\zeta.$$
Hence
\begin{equation}\label{dd}
\begin{split}
\|A_3(w_1, 0)\|^p_{L^{p}({\triangle\times \triangle}, |w_2|^2)}\lesssim &\int_\triangle\left|\int_{b\triangle}|h(w_1, \zeta)|d\sigma_{\zeta} \right|^p dV_{w_1}\int_\triangle|w_2|^2dV_{w_2}\\
\lesssim &\int_\triangle \left|\int_{\triangle} |h(w_1, w_2)| +|\nabla_{w_2} h(w_1, w_2)| dV_{w_2}\right|^p dV_{w_1}\\
\lesssim& \|h\|^p_{W^{1, p}({\triangle\times \triangle}, |w_2|^2)}
\lesssim \|\tilde f_1\|^p_{W^{k, p}({\triangle\times \triangle}, |w_2|^2)}\\ \lesssim & \| {\mathbf f}\|^p_{W^{k, p}(\mathbb H) }.
\end{split}
\end{equation}
Here in the second line we used the trace theorem $W^{1,1}(\triangle)\subset L^1(b\triangle)$; in the third line we used the H\"older inequality and the fact that $|w_2|^2\in A_p$ (or, directly, that $|w_2|^{-\frac{2}{p-1}}\in L^1(\triangle)$); in the fourth line we used Proposition \ref{tf}.
For $A_1$, by the choice of $\chi$, we have
$$ A_1(w_1, 0) =-\frac{l!}{2\pi i} \int_{\triangle}\frac{ (1-\chi(\zeta)) \partial_{w_1}^{t} \tilde f_2 (w_1, \zeta) }{\zeta^{l+1} }d\bar\zeta\wedge d\zeta,$$
with $\left| \frac{1-\chi(\zeta)}{\zeta^{l+1}}\right|\lesssim 1$ on $\triangle$. Thus by Proposition \ref{tf} and the fact that $|w_2|^2\in A_p$ similarly,
\begin{equation}\label{aa}
\begin{split}
\left\| A_1(w_1, 0)\right\|^p_{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim & \int_\triangle\left|\int_\triangle \left| \partial_{w_1}^{t} \tilde f_2 (w_1, \zeta) \right|dV_{\zeta}\right|^p dV_{w_1}\int_\triangle|w_2|^2dV_{w_2}\\
\lesssim& \int_\triangle\left|\int_\triangle \left| \partial_{w_1}^{t} \tilde f_2 (w_1, w_2) \right|dV_{w_2}\right|^p dV_{w_1}\\
\lesssim & \int_\triangle\int_\triangle \left| \partial_{w_1}^{t} \tilde f_2 (w_1, w_2) \right|^p|w_2|^2dV_{w_2} dV_{w_1}\\
\le & \|\tilde {\mathbf f}\|^p_{W^{k, p}({\triangle\times \triangle}, |w_2|^2) }\lesssim \| {\mathbf f}\|^p_{W^{k, p}(\mathbb H) }.
\end{split}
\end{equation}
Now we treat $A_2$. With a change of variables, rewrite it as\begin{equation*}
\begin{split}
A_2(w_1, 0) =& \left.-\frac{1}{2\pi i}\partial_{ w_1}^{t} \partial_{w_2}^{l} \int_{\mathbb C}\frac{ \chi(\zeta+w_2) \tilde f_2 (w_1, \zeta+w_2)}{\zeta}d\bar\zeta\wedge d\zeta\right|_{w_2=0}\\
=&\left.-\frac{1}{2\pi i}\partial_{ w_1}^{t}\int_{\mathbb C}\frac{\partial_{\zeta}^{l}\left( \chi(\zeta+w_2) \tilde f_2 (w_1, \zeta+w_2)\right)}{\zeta}d\bar\zeta\wedge d\zeta\right|_{w_2=0}\\
=& -\frac{1}{2\pi i}\partial_{ w_1}^{t}\int_{\mathbb C}\frac{\partial_{\zeta}^{l}\left( \chi(\zeta ) \tilde f_2 (w_1, \zeta )\right)}{\zeta}d\bar\zeta\wedge d\zeta .
\end{split}
\end{equation*}
Note that $\chi(\cdot)\tilde f_2(w_1, \cdot)\in C_c^{k-1, \alpha}(\triangle) $ for some $\alpha>0$ with $\mathcal P_k\left(\chi(\cdot)\tilde f_2(w_1, \cdot)\right) =0$. In particular, for $j=0, \ldots, l$, $ \left|\partial_{\zeta}^{j}\left( \chi(\zeta ) \tilde f_2 (w_1, \zeta )\right)\right|\lesssim |\zeta|^{k-1-j+\alpha}$ near $0$. With a repeated application of Stokes' theorem, we have
\begin{equation*}
\begin{split}
A_2(w_1, 0) =&-\frac{l!}{2\pi i} \partial_{ w_1}^{t} \int_{\mathbb C}\frac{ \chi(\zeta )\tilde f_2 (w_1, \zeta ) }{\zeta^{l+1}}d\bar\zeta\wedge d\zeta\\
=&-\frac{l!}{2\pi i} \int_{\triangle}\frac{ \chi(\zeta ) \partial_{ w_1}^{t} \tilde f_2 (w_1, \zeta ) }{\zeta^{l+1}}d\bar\zeta\wedge d\zeta.
\end{split}
\end{equation*}
Since $l\le k-1$, making use of Proposition \ref{tf} with $s=0$ and the fact that $|w_2|^2\in A_p$ again, we get
\begin{equation}\label{ee}
\begin{split}
\|A_2(w_1, 0)\|^p_{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim & \int_\triangle \left|\int_{\triangle} |\zeta|^{-(l+1)} \left|\partial_{w_1}^t\tilde f_2(w_1, \zeta)\right|dV_{\zeta}\right|^p dV_{w_1}\int_\triangle|w_2|^2dV_{w_2}\\
\lesssim& \int_\triangle \left|\int_{\triangle} |w_2|^{-(l+1)} \left|\partial_{w_1}^t\tilde f_2(w_1, w_2)\right|dV_{w_2}\right|^p dV_{w_1}\\
\lesssim& \int_\triangle \int_{\triangle} |w_2|^{-(l+1)p} \left|\partial_{w_1}^t\tilde f_2(w_1, w_2)\right|^p |w_2|^2dV_{w_2} dV_{w_1}\\
\lesssim& \left\| |w_2|^{-k}\partial_{w_1}^t\tilde f_2\right\|^p_{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim \| {\mathbf f}\|^p_{W^{k, p}(\mathbb H)}.
\end{split}
\end{equation}
Combining \eqref{dd}-\eqref{ee}, we have the desired inequality \eqref{tuu}.
\eqref{tuf} follows from \eqref{tuu} and \eqref{t2}.
To see \eqref{p2}, we shall verify that $\bar\partial^m_{w_2}\partial^l_{w_2}\tilde u(w_1, 0) =0$ for all $l, m\in \mathbb Z^+\cup\{0\}, l+m\le k-1$. Note that $\bar\partial^m_{w_2}\partial^l_{w_2}\tilde u(w_1, \cdot) \in C^\alpha(\triangle) $ for some $\alpha>0$ by \eqref{tuf}. If $m=0$, then $\partial^l_{w_2}\tilde u(w_1, 0) =0 $ by its definition. If $m\ge 1$, since $\bar\partial_{w_2} \tilde u = \tilde f_2$ by \eqref{hot},
$$ \bar\partial^m_{w_2}\partial^l_{w_2}\tilde u(w_1, 0) = \bar\partial^{m-1}_{w_2}\partial^l_{w_2}\tilde f_2(w_1, 0) =0, $$
where we used \eqref{p2f} in the last equality. Thus \eqref{p2} is proved, and the proof of the lemma is complete.
\end{proof}
In order to derive the refined weighted estimate of $\tilde u$ in Proposition \ref{soe}, we also need the following modified identities for $W^{k,p}$ functions on $\triangle$ with vanishing $(k-1)$-th order Taylor polynomials.
\begin{lem}\label{el} Let $h\in W^{k, p}(\triangle, |w|^2), k\in \mathbb Z^+, p>4 $ with $\mathcal P_k h =0$. Then for a.e. $w\in \triangle$, \\
i). $$2\pi i w^{-k} h(w ) = \int_{b\triangle}\frac{ h ( \zeta)} {\zeta^{k}(\zeta-w )}d\zeta - \int_{ \triangle}\frac{ \bar \partial h(\zeta)} {\zeta^{k}(\zeta-w )}d\bar\zeta\wedge d\zeta; $$
ii). $$ Th(w) - \tilde{\mathcal P}_k (Th)(w) = w^kT\left(w^{-k}h\right)(w), $$
where $\tilde {\mathcal P}_k$ is the $(k-1)$-th order holomorphic Taylor polynomial operator at $0$.
\end{lem}
\begin{proof}
For part i), applying the Cauchy-Green formula to $w^{-k} h $ on $\triangle \setminus \overline{\triangle_\epsilon}$, we have for each fixed $w\ne 0$ and $\epsilon<|w|$,
\begin{equation}\label{cg}
2\pi i w^{-k} h(w ) = \int_{b\triangle}\frac{ h ( \zeta)} {\zeta^{k}(\zeta-w )}d\zeta - \int_{b\triangle_\epsilon}\frac{ h ( \zeta)} {\zeta^{k}(\zeta-w )}d\zeta- \int_{ \triangle\setminus \overline{\triangle_\epsilon}}\frac{ \bar \partial h(\zeta)} {\zeta^{k}(\zeta-w )}d\bar\zeta\wedge d\zeta.
\end{equation}
We claim that $$ \lim_{\epsilon\rightarrow 0} \int_{b\triangle_\epsilon}\frac{ h ( \zeta)} {\zeta^{k}(\zeta-w )}d\zeta = 0. $$
Indeed, let $g_w(\zeta): =(\zeta-w )^{-1} h(\zeta) $. Since $w\ne 0$, $g_w \in W^{k,p}(\triangle_\epsilon, |\zeta|^2), p>4$ with $\epsilon$ sufficiently small and $ \mathcal P_k g_w =0$. In particular, $g_w\in C^{k-1, \alpha}(\triangle_\epsilon)$ for some $\alpha>0$, with $ |g_w(\zeta)|\lesssim |\zeta|^{k-1+\alpha}$ near $0$. Thus
\begin{equation}\label{st2}
\begin{split} \lim_{\epsilon\rightarrow 0}\left| \int_{b\triangle_\epsilon}\frac{ h ( \zeta)} {\zeta^{k}(\zeta-w )}d\zeta\right| \le \lim_{\epsilon\rightarrow 0} \epsilon^{-k} \int_{b\triangle_\epsilon} |g_w( \zeta)| d\sigma_\zeta \lesssim \lim_{\epsilon\rightarrow 0} \epsilon^{\alpha} =0.
\end{split}
\end{equation}
The claim is proved. Part i) follows from the claim by letting $\epsilon \rightarrow 0$ in \eqref{cg}.
For ii), let $\chi$ be a smooth function which is 1 near $0$, and vanishes outside $\triangle_{\frac{1}{2}} $. A direct computation gives that
\begin{equation*}
\begin{split}
-2\pi i \partial Th(0) = &\left.\partial \int_\triangle \frac{\chi(\zeta) h(\zeta)}{\zeta-w}d\bar\zeta\wedge d\zeta \right|_{w=0}+\left.\partial \int_\triangle \frac{(1-\chi(\zeta)) h(\zeta)}{ \zeta-w }d\bar\zeta\wedge d\zeta\right|_{w=0} \\
=& \left. \int_{\mathbb C} \frac{\partial_w\left( \chi(\zeta+w) h(\zeta+w)\right)}{\zeta}d\bar\zeta\wedge d\zeta\right|_{w=0} +\int_\triangle \frac{(1-\chi(\zeta)) h(\zeta)}{\zeta^{2}}d\bar\zeta\wedge d\zeta\\
=&\int_{\mathbb C} \frac{\partial_\zeta\left( \chi(\zeta ) h(\zeta )\right)}{\zeta}d\bar\zeta\wedge d\zeta +\int_\triangle \frac{(1-\chi(\zeta)) h(\zeta)}{\zeta^{2}}d\bar\zeta\wedge d\zeta\\
=&\int_{\mathbb C} \frac{ \chi(\zeta ) h(\zeta ) }{\zeta^2}d\bar\zeta\wedge d\zeta +\int_\triangle \frac{(1-\chi(\zeta)) h(\zeta)}{\zeta^{2}}d\bar\zeta\wedge d\zeta = \int_\triangle \frac{ h(\zeta)}{\zeta^{2}}d\bar\zeta\wedge d\zeta.
\end{split}
\end{equation*}
Here in the fourth line above we used Stokes' theorem and an argument similar to \eqref{st2} (with $k=1$ there). Consequently, by induction,
$$\tilde{\mathcal P}_k Th = -\sum_{l=0}^{k-1}\frac{w^l}{2\pi i} \int_\triangle \frac{h(\zeta)}{\zeta^{l+1}}d\bar\zeta\wedge d\zeta. $$
Note that each term in the right hand side of the above is well defined due to Remark \ref{re2}.
Making use of the following elementary identity for the Cauchy kernel:
\begin{equation*}
\frac{1}{ \zeta-w} -\sum_{l=0}^{k-1}\frac{w^{l}}{\zeta^{l+1}} = \frac{w^k}{\zeta^k(\zeta-w)},\ \ \text{for all}\ \ \zeta\notin\{0, w\},
\end{equation*}
we immediately get
\begin{equation*}
\begin{split}
Th(w) - \tilde{\mathcal P}_k Th(w) = - \frac{w^k}{2\pi i} \int_\triangle \frac{h(\zeta)}{\zeta^{k}(\zeta-w)}d\bar\zeta\wedge d\zeta = w^k T\left( w^{-k}h\right),\ \ \ \ w\in \triangle.
\end{split}
\end{equation*}
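The elementary identity above is, in turn, a restatement of the finite geometric sum: for $\zeta\notin\{0, w\}$,
$$ \sum_{l=0}^{k-1}\frac{w^{l}}{\zeta^{l+1}} = \frac{1}{\zeta}\cdot\frac{1-(w/\zeta)^k}{1-w/\zeta} = \frac{1}{\zeta-w}\left(1-\frac{w^k}{\zeta^k}\right). $$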
\end{proof}
\begin{lem}\label{2s}
If $h\in W^{2, p}(\triangle, |w|^2), p>4$, then $$ S\partial h = \partial S h +S(\bar w^2\bar\partial h)\ \ \text{on} \ \ \triangle. $$
\end{lem}
\begin{proof}
Note that $h\in W^{2, p}(\triangle, |w|^2)\subset C^{1,\alpha}(\triangle)$ for some $\alpha>0$, so both sides of the above equality make sense in the strong (pointwise) sense. The lemma follows from the direct computation below. For $w\in \triangle$,
\begin{equation*}
\begin{split}
S \partial h(w) &= \frac{1}{2\pi i } \int_{0}^{{2\pi}} \frac{\partial_\zeta h(e^{i\theta}) ie^{i\theta}}{ e^{i\theta} - w } d\theta = \frac{1}{2\pi i } \int_{0}^{{2\pi}} \frac{\partial_\theta\left( h(e^{i\theta})\right) +i\bar\partial_\zeta h(e^{i\theta})e^{-i\theta}}{ e^{i\theta} - w } d\theta\\
&= - \frac{1}{2\pi i } \int_{0}^{{2\pi}} \partial_\theta \left(\frac{1}{ e^{i\theta} - w }\right) h(e^{i\theta})d\theta +\frac{1}{2\pi i } \int_{0}^{{2\pi}} \frac{ \bar\partial_\zeta h(e^{i\theta})e^{-2i\theta}}{ e^{i\theta} - w } ie^{i\theta} d\theta\\
& = \frac{1}{2\pi i } \int_{0}^{{2\pi}} \partial_w \left(\frac{1}{ e^{i\theta} - w }\right) h(e^{i\theta})ie^{ i\theta}d\theta + \frac{1}{2\pi i } \int_{b\triangle} \frac{ \bar\partial_\zeta h(\zeta)\bar\zeta^2}{ \zeta - w } d \zeta\\
& = \frac{1}{2\pi i } \int_{b\triangle} \partial_w \left(\frac{1}{ \zeta- w }\right) h(\zeta)d\zeta +S\left(\bar w^2\bar\partial h\right) = \partial S h(w) +S\left(\bar w^2\bar\partial h\right)(w).
\end{split}
\end{equation*}
\end{proof}
\medskip
\begin{proof}[Proof of Proposition \ref{soe}: ] In view of Lemma \ref{tu}, we only need to prove the estimate in the proposition when $s\le k-1$.
First consider the case when $0\le t\le k-1$. For fixed $w_1\in \triangle$,
$h_{w_1} : = \partial_{ w_2}^{s} \tilde u(w_1, \cdot)\in W^{k-s, p}(\triangle, |w_2|^2)$, $\mathcal P_{k-s}h_{w_1} =0$ by \eqref{p2}, and $\bar\partial_{w_2} h_{w_1} = \partial_{ w_2}^{s} \tilde f_2 $. We apply Lemma \ref{el}, part i)
to $h_{w_1} $ and obtain
$$2\pi i w_2^{-k+s} \partial_{ w_2}^{s} \tilde u(w_1, w_2) = \int_{b\triangle}\frac{ \partial_{\zeta}^{s} \tilde u (w_1, \zeta)} {\zeta^{k-s}(\zeta-w_2)}d\zeta - \int_{ \triangle}\frac{ \partial_{\zeta}^{s} \tilde f_2(w_1, \zeta)} {\zeta^{k-s}(\zeta-w_2)}d\bar\zeta\wedge d\zeta. $$
Consequently,
\begin{equation*}
\begin{split}
w_2^{-k+s} \partial_{ w_1}^t \partial_{ w_2}^{s} \tilde u (w_1, w_2) = &\frac{1}{2\pi i}\left(\partial_{w_1}^{t}\int_{b\triangle}\frac{ \partial_{ \zeta}^{s}\left(\bar\zeta^{k-s}\tilde u (w_1, \zeta)\right)} {\zeta-w_2}d\zeta - \int_{ \triangle}\frac{ \zeta^{-k+s} \partial_{ w_1}^t\partial_{ \zeta}^{s} \tilde f_2(w_1, \zeta)} {\zeta-w_2}d\bar\zeta\wedge d\zeta\right)\\
=& \partial_{w_1}^{t}S_2\left( \partial_{ w_2}^{s} \left(\bar w_2^{k-s} \tilde u\right)\right) + T_2\left(w_2^{-k+s} \partial_{ w_1}^t\partial_{ w_2}^{s}\tilde f_2 \right) \\
=&: B_1+B_2.
\end{split}
\end{equation*}
By \eqref{T_j} and Proposition \ref{tf},
\begin{equation*}
\begin{split}
\left\|B_2\right\| _{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim
&\left\|T_2\left(w_2^{-k+s} \partial_{ w_1}^t\partial_{ w_2}^{s}\tilde f_2 \right)\right\|_{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim \left\| w_2^{-k+s} \partial_{ w_1}^t\partial_{ w_2}^{s}\tilde f_2 \right\|_{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim \left\| \mathbf f \right\|_{ W^{k,p}(\mathbb H)}.
\end{split}
\end{equation*}
For $B_1$, if $s=0$, then $ B_1 = S_2\left( \bar w_2^{k} \partial_{ w_1}^{t } \tilde u\right) $, where
$ \bar w_2^{k} \partial_{w_1}^{t }\tilde u \in W^{1, p}({\triangle\times \triangle}, |w_2|^2)$ as $t\le k-1$. Then \eqref{S_j} and Lemma \ref{tu} give
\begin{equation*}
\begin{split}
\left\|B_1\right\|_{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim& \left\| S_2\left( \bar w_2^{k} \partial_{ w_1}^{t } \tilde u\right) \right\|_{L^{p}({\triangle\times \triangle}, |w_2|^2)}\lesssim \left\| \bar w_2^{k} \partial_{ w_1}^{t } \tilde u \right\|_{W^{1, p}({\triangle\times \triangle}, |w_2|^2)} \\
\lesssim& \|\tilde u\|_{W^{k, p}({\triangle\times \triangle}, |w_2|^2)}\lesssim \left\| \mathbf f \right\|_{ W^{k,p}(\mathbb H)}. \end{split}
\end{equation*}
For the case $s \ge 1$, since $s\le k-1$, $ \partial_{ w_2}^{s-1}\left(\bar w_2^{k-s}\tilde u\right)(w_1, \cdot) \in W^{2, p}(\triangle, |w_2|^2) $ for fixed $w_1\in \triangle$. Applying Lemma \ref{2s} to $ \partial_{ w_2}^{s-1}\left(\bar w_2^{k-s}\tilde u\right)(w_1, \cdot) $ and using the fact that $\bar\partial_{w_2} \tilde u = \tilde f_2$, we further write
\begin{equation*}
\begin{split}
B_1 = & \partial_{w_1}^{t}\partial_{w_2}S_2\left( \partial_{ w_2}^{s-1}\left(\bar w_2^{k-s}\tilde u\right)\right) + \partial_{w_1}^{t} S_2\left( \bar w_2^2\partial_{ w_2}^{s-1}\left((k-s)\bar w_2^{k-s-1}\tilde u + \bar w_2^{k-s}\tilde f_2\right) \right)\\
=& \partial_{w_2}S_2\left(\partial_{w_1}^{t} \partial_{ w_2}^{s-1}\left(\bar w_2^{k-s}\tilde u\right)\right) + (k-s) S_2\left( \partial_{w_1}^{t}\partial_{ w_2}^{s-1}\left(\bar w_2^{k-s+1}\tilde u\right) \right) +S_2\left( \partial_{w_1}^{t}\partial_{ w_2}^{s-1}\left( \bar w_2^{k-s+2}\tilde f_2\right) \right).
\end{split}
\end{equation*}
Note that $\partial_{w_1}^{t} \partial_{ w_2}^{s-1}\left(\bar w_2^{l}\tilde u\right)\in W^{1, p}({\triangle\times \triangle}, |w_2|^2)$ for $ l = k-s, k-s+1, k-s+2$. By \eqref{S11}, Proposition \ref{tf} and \eqref{tuf},
\begin{equation*}
\begin{split}
\left\|B_1\right\|_{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim& \left\| \partial_{ w_1}^{t } \partial_{w_2}^{s -1} \left(\bar w_2^{k-s}\tilde u\right) \right\|_{W^{1, p}({\triangle\times \triangle}, |w_2|^2)} +\left\| \partial_{ w_1}^{t } \partial_{w_2}^{s -1} \left(\bar w_2^{k-s+1}\tilde u\right) \right\|_{W^{1, p}({\triangle\times \triangle}, |w_2|^2)} \\
&+\left\| \partial_{ w_1}^{t } \partial_{w_2}^{s -1} \left(\bar w_2^{k-s+2}\tilde f_2\right) \right\|_{W^{1, p}({\triangle\times \triangle}, |w_2|^2)}\\
\lesssim& \|\tilde u\|_{W^{k, p}({\triangle\times \triangle}, |w_2|^2)}+ \left\| \tilde{ f}_2 \right\|_{ W^{k,p}({\triangle\times \triangle}, |w_2|^2)}\lesssim \left\| \mathbf f \right\|_{ W^{k,p}(\mathbb H)}. \end{split}
\end{equation*}
Finally, we treat the case when $t=k$ (and so $s=0$). According to the definition of $\tilde u$,
\begin{equation*}
\begin{split}
\tilde u = &T_1\tilde f_1 + S_1T_2 \tilde f_2 - T_1\tilde{\mathcal P}_{2, k}\tilde f_1 - S_1\tilde{\mathcal P}_{2, k} T_2 \tilde f_2\\
=& T_1\tilde f_1 + S_1\left(T_2 - \tilde{\mathcal P}_{2, k} T_2 \right) \tilde f_2\\
= & T_1\tilde f_1 + S_1\left(w_2^kT_2\left(w_2^{-k} \tilde f_2\right)\right).
\end{split}
\end{equation*}
Here we used the fact that $ \mathcal P_{2, k}\tilde f_1 =0$ by \eqref{p2f} in the second equality, and Lemma \ref{el} part ii) in the third equality for each fixed $w_1\in \triangle$. Consequently,
\begin{equation*}
\begin{split}
w_2^{-k } \partial_{ w_1}^k \tilde u = & \partial_{ w_1}^kT_1 \left( w_2^{-k } \tilde f_1\right) + T_2 \left (\partial_{ w_1}^k S_1 \left(w_2^{-k} \tilde f_2\right)\right) =: C_1+C_2.
\end{split}
\end{equation*}
For $C_1$, by \eqref{T11} and Proposition \ref{tf} (with $s=0$ there),
\begin{equation*}
\begin{split}
\left\|C_1\right\|_{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim & \sum_{j=0}^{k-1}\left\| w_2^{-k} \nabla_{w_1}^j\tilde f_1 \right\|_{L^{p}({\triangle\times \triangle}, |w_2|^2)} \lesssim \|\mathbf f\|_{W^{k, p}(\mathbb H)}.
\end{split}
\end{equation*}
For $C_2$, by \eqref{T11} (with $k=1$ there), \eqref{S11} and Proposition \ref{tf} (with $s=0$ there),
\begin{equation*}
\begin{split}
\left\|C_2\right\|_{L^p({\triangle\times \triangle}, |w_2|^2)}\lesssim & \left\|\partial_{ w_1}^k S_1 \left(w_2^{-k} \tilde f_2\right) \right\|_{L^p({\triangle\times \triangle}, |w_2|^2)} \lesssim \sum_{j=0}^k \left\| w_2^{-k} \nabla^j_{w_1}\tilde f_2 \right\|_{L^{p}({\triangle\times \triangle}, |w_2|^2)}\lesssim \|\mathbf f\|_{W^{k, p}(\mathbb H)}.
\end{split}
\end{equation*}
The proof of the proposition is thus complete.
\end{proof}
\subsection{Proof of the main theorem}
\begin{proof}[Proof of Theorem \ref{main}: ]
Let $\mathcal T_k \mathbf f: = \phi^*\tilde u + u_k$ on $\mathbb H$, where $\tilde u$ is defined in \eqref{tud}, and $u_k$ satisfies \eqref{pu}. Then $\bar\partial \mathcal T_k \mathbf f = \mathbf f$ on $\mathbb H$. To show the desired estimate for $\|\mathcal T_k \mathbf f\|_{ W^{k, p}(\mathbb H)}$, since the anti-holomorphic derivatives of $\mathcal T_k \mathbf f$ are shifted to those of ${\mathbf f}$, we only need to estimate $ \left\|\partial_{ z_1}^{l_1}\partial_{ z_2}^{l_2} \left(\phi^*\tilde u\right)\right\|_{L^p(\mathbb H)}$, $l_1, l_2\in \mathbb Z^+\cup\{0\}, l_1+l_2\le k$. Note that
$$ \partial_{ z_1}^{l_1}\partial_{ z_2}^{l_2} \left(\phi^*\tilde u\right) = \sum_{s+t \le l_1+l_2, t\ge l_1} C_{l_1, l_2, t, s}z_1^{t-l_1}z_2^{-t-l_2+s} \left( \partial_{ w_1}^{t}\partial_{ w_2}^{s}\tilde u\right) \left(\frac{z_1}{z_2}, z_2\right)$$
for some constants $C_{l_1, l_2, t, s} $ dependent on $l_1, l_2, t, s$, and $|z_1|\le |z_2|$ on $\mathbb H$. Then by a change of variables,
\begin{equation*}
\begin{split}
\left\|\partial_{ z_1}^{l_1}\partial_{ z_2}^{l_2} \left(\phi^*\tilde u\right)\right\|_{L^p(\mathbb H)}\lesssim& \sum_{s+t\le l_1+l_2, t\ge l_1}\left\||w_2|^{-l_1-l_2+s} \partial_{ w_1}^{t}\partial_{ w_2}^{s}\tilde u \left(w_1, w_2\right) \right\|_{L^p({\triangle\times \triangle}, |w_2|^2)}\\
\le&\sum_{s+t\le k} \left\||w_2|^{-k +s} \partial_{ w_1}^{t}\partial_{ w_2}^{s}\tilde u \left(w_1, w_2\right) \right\|_{L^p({\triangle\times \triangle}, |w_2|^2)}.
\end{split}
\end{equation*}
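To illustrate the derivative expansion used above in the simplest nontrivial case $l_1=1, l_2=0$: with $\phi(z_1, z_2) = (z_1/z_2, z_2)$ (the inverse of $\psi$, consistent with the chain rule in the proof of Corollary \ref{main4}),
$$ \partial_{z_1}\left(\phi^*\tilde u\right)(z) = \frac{1}{z_2}\left(\partial_{w_1}\tilde u\right)\left(\frac{z_1}{z_2}, z_2\right), $$
which is exactly the single term $t=1, s=0$ (with $C_{1,0,1,0}=1$) in the general formula.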
The rest of the proof follows from Proposition \ref{soe}.
\end{proof}
\medskip
The following Kerzman-type example demonstrates that the $\bar\partial$ problem on $\mathbb H$ with $W^{k, p}$ data in general does not admit solutions in $W^{k, p+\epsilon}$, $\epsilon>0$, which verifies the optimality of Theorem \ref{main}.
\begin{example}\label{ex}
For each $k\in \mathbb Z^+$ and $ 2<p<\infty$, let $\mathbf f= (z_2-1)^{k-\frac{2}{{p}}}d\bar z_1 $ on $\mathbb H$, where the branch is determined by $\frac{\pi}{2} <\arg (z_2-1)<\frac{3\pi}{2}$. Then $\mathbf f\in W^{k, \tilde p}(\mathbb H)$ for all $2<\tilde p< p$ and is $\bar\partial$-closed on $\mathbb H$. However, there does not exist a solution $u\in W^{k,p}(\mathbb H)$ to $\bar\partial u =\mathbf f$ on $\mathbb H$.
\end{example}
\begin{proof}
Clearly $\mathbf f\in W^{k, \tilde p}(\mathbb H) $ for all $2<\tilde p< p$ and is $\bar\partial$-closed on $\mathbb H$.
Arguing by contradiction, suppose there exists some $u\in W^{k,p}(\mathbb H )$ satisfying $\bar\partial u =\mathbf f $ on $\mathbb H$. In particular, since $\triangle_{\frac{1}{2}}\times(\triangle \setminus \overline{\triangle_{\frac{1}{2}}}) \subset \mathbb H $, there exists some holomorphic function $h$ on $\triangle_{\frac{1}{2}}\times(\triangle \setminus \overline{\triangle_{\frac{1}{2}}})$ such that $ u |_{ \triangle_{\frac{1}{2}}\times(\triangle \setminus \overline{\triangle_{\frac{1}{2}}})}= (z_2-1)^{k-\frac{2}{{p}}}\bar z_1+h \in W^{k,p}(\triangle_{\frac{1}{2}}\times(\triangle \setminus \overline{\triangle_{\frac{1}{2}}}))$.
For each fixed $(r, z_2) \in U: = \left(0,\frac{1}{2}\right)\times \left( \triangle\setminus \overline{ \triangle_{\frac{1}{2}}}\right)\subset \mathbb R\times \mathbb C$, consider
$$v(r, z_2): =\int_{|z_1|= r} {u}(z_1, z_2) dz_1. $$
Then with a similar argument as in the proof of Example \ref{ex2}, one can see that $v\in W^{k,p}(U)$.
Note that $h(\cdot, z_2)$ is holomorphic on $\triangle_{\frac{1}{2}}$ for each fixed $z_2\in \triangle\setminus \overline{\triangle_\frac{1}{2}}$. Thus for fixed $(r, z_2)\in U$, Cauchy's theorem gives
\begin{equation*}
v(r, z_2) =\int_{|z_1|=r} (z_2-1)^{k-\frac{2}{{p}}}\bar z_1dz_1 = 2\pi r^2i (z_2-1)^{k-\frac{2}{{p}}},
\end{equation*}
which does not belong to $W^{k,p}(U)$. A contradiction!
\end{proof}
\bibliographystyle{alphaspecial}
\section{Introduction} \label{sec:introduction}
Computing optimal transport (OT) distances between pairs of probability measures or histograms, such as the earth mover's distance~\citep{werman1985,Rubner2000} and the Monge-Kantorovich or Wasserstein distance~\citep{villani09optimal}, is currently attracting increasing interest in different machine learning tasks~\citep{pmlr-v32-solomon14,kusnerb2015,pmlr-v70-arjovsky17a,ho2017}, statistics~\citep{frogner2015nips,panaretos2016,ebert2017ConstructionON,bigot2017,flamary2018WDA}, and computer vision~\citep{bonnel2011,Rubner2000,solomon2015}, among other applications~\citep{klouri17,peyre2019COTnowpublisher}.
In many of these problems, OT exploits the geometric features of the objects at hand in the underlying spaces, which can be leveraged when comparing probability measures.
This effectively leads to improved performance over methods that are oblivious to the geometry, for example the chi-squared distances or the Kullback-Leibler divergence.
Unfortunately, this advantage comes at the price of an enormous computational cost of solving the OT problem, which can be prohibitive in large-scale applications.
For instance, the OT between two histograms with supports of equal size $n$ can be formulated as a linear programming problem that generally requires $\mathcal{O}(n^{2.5})$ arithmetic operations~\citep{leeSidford2013PathFI}, which is problematic when $n$ becomes large.
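To fix ideas (the precise setup is recalled in Section~\ref{sec:regularized_discrete_ot}), given two histograms $a\in{\mathbb{R}}_+^n$ and $b\in{\mathbb{R}}_+^m$ with equal total mass and a ground cost matrix $C\in{\mathbb{R}}_+^{n\times m}$, the OT problem is the linear program
$$ \min_{T\in{\mathbb{R}}_+^{n\times m}:\ r(T)=a,\ c(T)=b}\ \inr{T, C}, $$
where the notation $r(\cdot), c(\cdot)$ and $\inr{\cdot,\cdot}$ is defined at the end of this section.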
A remedy to the heavy computational burden of OT lies in a prevalent approach referred to as regularized OT~\citep{cuturinips13}, which operates by adding an entropic regularization penalty to the original problem.
Such a regularization guarantees a unique solution, since the objective function is strongly convex, and greater computational stability.
More importantly, this regularized OT can be solved efficiently with celebrated matrix scaling algorithms, such as Sinkhorn's fixed point iteration method~\citep{sinkhorn1967,knight2008,kalantari2008}.
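Schematically, with a regularization parameter $\eta>0$ (conventions on its placement vary), the entropic OT problem reads
$$ \min_{T\in{\mathbb{R}}_+^{n\times m}:\ r(T)=a,\ c(T)=b}\ \inr{T, C} - \eta H(T), $$
whose solution has the form $\Delta(u)K\Delta(v)$ with $K = e^{-C/\eta}$ entrywise; Sinkhorn's algorithm alternates the entrywise updates $u \leftarrow a/(Kv)$ and $v \leftarrow b/(K^\top u)$ until the marginal constraints are met.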
Several works have considered further improvements in the resolution of this regularized OT problem.
A greedy version of the Sinkhorn algorithm, called Greenkhorn~\cite{altschulernips17}, selects and updates the columns and rows that most violate the polytope constraints.
Another approach, based on a low-rank approximation of the cost matrix using the Nystr\"om method, yields the Nys-Sink algorithm~\citep{altschuler2018Nystrom}.
Other classical optimization algorithms have been considered for approximating the OT, for instance accelerated gradient descent~\citep{xie2018proxpointOT,dvurechensky18aICML,lin2019}, quasi-Newton methods~\citep{blondel2018ICML,cuturi2016SIAM} and stochastic gradient descent~\citep{genevay2016stochOT,khalilabid2018}.
{In this paper, we propose a novel technique for accelerating the Sinkhorn algorithm when computing the regularized OT distance between discrete measures. Our idea
is strongly related to the screening strategy used when solving a \emph{Lasso}
problem in sparse supervised learning \citep{Ghaoui2010SafeFE}. Based on the fact
that a transport plan resulting from an OT problem is sparse or presents a large
number of negligible values \citep{blondel2018ICML}, our objective is to identify the dual variables of an approximate Sinkhorn problem that are smaller than a predefined threshold and can thus be safely removed before optimization, while not altering too much the solution of the problem.
Within this global context, our contributions are the following:
\begin{itemize}
\setlength\itemsep{-0.1cm}
\item From a methodological point of view, we propose a new formulation of the dual of the Sinkhorn divergence problem by constraining the variables to be larger than a threshold.
This formulation allows us to introduce sufficient conditions, computable beforehand, for a variable to strictly satisfy its constraint, leading to
a ``screened'' version of the dual of Sinkhorn divergence.
\item We provide some theoretical analysis of the solution of the ``screened'' Sinkhorn divergence, showing that its objective value and the marginal constraint satisfaction are properly controlled as the number of screened variables decreases.
\item From an algorithmic standpoint, we use a constrained L-BFGS-B algorithm \citep{nocedal1980,byrd1995L-BFGS-B} but provide a careful analysis of the lower and upper bounds of the dual variables, resulting in a well-posed and efficient algorithm denoted as \textsc{Screenkhorn}.
\item Our empirical analysis depicts how the approach behaves in a simple Sinkhorn divergence computation context. When considered in complex machine learning
pipelines, we show that \textsc{Screenkhorn} can lead to strong gains in efficiency
while not compromising on accuracy.
\end{itemize}}
The remainder of the paper is organized as follows. In Section~\ref{sec:regularized_discrete_ot} we briefly review the basic setup of regularized discrete OT.
Section~\ref{sec:screened_dual_of_sinkhorn_divergence} contains our main contribution, that is, the \textsc{Screenkhorn} algorithm.
Section~\ref{sec:analysis_of_marginal_violations} is devoted to theoretical guarantees for marginal violations of \textsc{Screenkhorn}.
In Section~\ref{sec:numerical_experiments} we present numerical results for the proposed algorithm, compared with the state-of-the-art Sinkhorn algorithm as implemented in~\cite{flamary2017pot}.
The proofs of the theoretical results, as well as additional empirical results, are postponed to the supplementary material.
\emph{Notation.} For any positive matrix $T \in {\mathbb{R}}^{n\times m}$, we define its entropy as $H(T) = -\sum_{i,j} T_{ij} \log(T_{ij}).$
Let $r(T) = T\mathbf 1_m \in {\mathbb{R}}^n$ and $c(T) = T^\top\mathbf 1_n \in {\mathbb{R}}^m$ denote the row and column sums of $T$, respectively. The coordinates $r_i(T)$ and $c_j(T)$ denote the $i$-th row sum and the $j$-th column sum of $T$, respectively.
The scalar product between two matrices denotes the usual inner product, that is $\inr{T, W} = \text{tr}(T^\top W) = \sum_{i,j}T_{ij}W_{ij},$ where $T^\top$ is the transpose of $T$.
We write $\mathbf{1}$ (resp. $\mathbf{0}$) the vector having all coordinates equal to one (resp. zero).
$\Delta(w)$ denotes the diag operator, such that if $w \in {\mathbb{R}}^n$, then $\Delta(w) = \text{diag}(w_1, \ldots, w_n)\in {\mathbb{R}}^{n\times n}$.
For a set of indices $L=\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}$ satisfying $i_1 < \cdots <i_k,$ we denote the complementary set of $L$ by $L^\complement = \{1, \ldots, n\} \backslash L$. We also denote $|L|$ the cardinality of $L$.
Given a vector $w \in {\mathbb{R}}^n$, we denote $w_L= (w_{i_1}, \ldots, w_{i_k})^\top \in {\mathbb{R}}^k$ and its complementary $w_{L^\complement} \in {\mathbb{R}}^{n- k}$. The notation is similar for matrices; given another subset of indices $S = \{j_1, \ldots, j_l\} \subseteq \{1, \ldots, m\}$ with $j_1 < \cdots <j_l,$ and a matrix $T\in {\mathbb{R}}^{n\times m}$, we use $T_{(L,S)}$, to denote the submatrix of $T$, namely the rows and columns of $T_{(L,S)}$ are indexed by $L$ and $S$ respectively.
When applied to matrices and vectors, $\odot$ and $\oslash$ (Hadamard product and division) and exponential notations refer to elementwise operators.
Given two real numbers $a$ and $b$, we write $a\vee b = \max(a,b)$ and $a\wedge b = \min(a,b).$
\section{Regularized discrete OT} \label{sec:regularized_discrete_ot}
We briefly expose in this section the setup of OT between two discrete measures. We then consider the case when those distributions are only available through a finite number of samples, that is $\mu = \sum_{i=1}^n \mu_i \delta_{x_i} \in \Sigma_n$ and $\nu = \sum_{j=1}^m \nu_j \delta_{y_j} \in \Sigma_m$, where $\Sigma_n$ is the probability simplex with $n$ bins, namely the set of probability vectors in ${\mathbb{R}}_+^n$, i.e., $\Sigma_n = \{w \in {\mathbb{R}}_+^n: \sum_{i=1}^n w_i = 1\}.$
We denote their probabilistic couplings set as $\Pi(\mu, \nu) = \{P \in {\mathbb{R}}_+^{n\times m}, P\mathbf{1}_m = \mu, P^\top \mathbf{1}_n = \nu\}.$
\paragraph{Sinkhorn divergence.}
Computing OT distance between the two discrete measures $\mu$ and $\nu$ amounts to solving a linear problem~\citep{kantorovich1942} given by
\begin{equation*}
\label{monge-kantorovich}
\mathcal{S}(\mu, \nu) = \min_{P\in \Pi(\mu, \nu)} \inr{C, P},
\end{equation*}
where $P= (P_{ij}) \in {\mathbb{R}}^{n\times m}$ is called the transportation plan, namely each entry $P_{ij}$ represents the fraction of mass moving from $x_i$ to $y_j$, and $C= (C_{ij}) \in {\mathbb{R}}^{n\times m}$ is a cost matrix comprised of nonnegative elements and related to the energy needed to move a probability mass from $x_i$ to $y_j$.
The entropic regularization of OT distances~\citep{cuturinips13} relies on the addition of a penalty term as follows:
\begin{equation}
\label{sinkhorn-primal}
\mathcal{S}_\eta(\mu, \nu) = \min_{P\in \Pi(\mu, \nu)} \{\inr{C, P} - \eta H(P)\},
\end{equation}
where $\eta > 0$ is a regularization parameter. We refer to $\mathcal{S}_\eta(\mu, \nu) $ as the \emph{Sinkhorn divergence}~\citep{cuturinips13}.
\paragraph{Dual of Sinkhorn divergence.}
Below we provide the derivation of the dual problem for the regularized OT problem~\eqref{sinkhorn-primal}. Towards this end, we begin with writing its Lagrangian dual function:
\begin{equation*}
\mathscr{L}(P,w, z) = \inr{C,P} + \eta \inr{\log P, P} + \inr{w, P\mathbf{1}_m - \mu} + \inr{z,P^\top \mathbf{1}_n - \nu}.
\end{equation*}
The dual of Sinkhorn divergence can be derived by solving $\min_{P \in {\mathbb{R}}_+^{n\times m}}\mathscr{L}(P,w, z)$. It is easy to check that the objective function $P\mapsto \mathscr{L}(P,w, z)$ is strongly convex and differentiable. Hence, one can solve this minimization by setting $\nabla_P \mathscr{L}(P,w, z)$ to $\mathbf{0}_{n\times m}$. Therefore, we get $ P^\star_{ij} = \exp\big(- \frac{1}{\eta} (w_i + z_j + C_{ij}) - 1\big),
$ for all $i=1, \ldots, n$ and $j=1, \ldots, m$. Plugging in this solution and applying the change of variables $u = -w/\eta - 1/2$ and $v = - z/\eta - 1/2$, the dual problem is given by
\begin{equation}
\label{sinkhorn-dual}
\min_{u \in {\mathbb{R}}^n, v\in{\mathbb{R}}^m}\big\{\Psi(u,v):= \mathbf{1}_n^\top B(u,v)\mathbf{1}_m - \inr{u, \mu} - \inr{v, \nu} \big\},
\end{equation}
where $B(u,v) := \Delta(e^{u}) K \Delta(e^{v})$ and $K := e^{-C/\eta}$ stands for the Gibbs kernel associated to the cost matrix $C$.
We refer to problem~\eqref{sinkhorn-dual} as the \emph{dual of Sinkhorn divergence}. Then, the optimal solution $P^\star$ of the primal problem~\eqref{sinkhorn-primal} takes the form $P^\star = \Delta(e^{u^\star}) K \Delta(e^{v^\star})$
where the couple $(u^\star, v^\star)$ satisfies:
\begin{align*}
(u^\star, v^\star) &= \argmin_{u \in {\mathbb{R}}^{n}, v\in {\mathbb{R}}^m} \{\Psi(u,v)\}.
\end{align*}
Note that the matrices $\Delta(e^{u^\star})$ and $\Delta(e^{v^\star})$ are unique up to a constant factor~\citep{sinkhorn1967}. Moreover, $P^\star$ can be solved efficiently by iterative Bregman projections~\citep{benamou2015IterativeBP} referred to as Sinkhorn iterations, and the method is referred to as \textsc{Sinkhorn} algorithm which, recently, has been proven to achieve a near-$\mathcal{O}(n^2)$ complexity~\citep{altschulernips17}.
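To make the matrix-scaling view concrete, the following minimal Python sketch implements these Sinkhorn iterations (illustrative code with our own names, not POT's \texttt{sinkhorn} implementation; a practical solver would also monitor marginal violations as a stopping criterion):
\begin{verbatim}
import numpy as np

def sinkhorn(C, mu, nu, eta=1.0, n_iter=1000):
    # Iterative Bregman projections: alternately rescale the rows and
    # columns of the Gibbs kernel K = exp(-C / eta).
    K = np.exp(-C / eta)
    a, b = np.ones_like(mu), np.ones_like(nu)   # a = e^u, b = e^v
    for _ in range(n_iter):
        a = mu / (K @ b)        # enforce the row marginals    r(P) = mu
        b = nu / (K.T @ a)      # enforce the column marginals c(P) = nu
    return a[:, None] * K * b[None, :]          # P = diag(a) K diag(b)
\end{verbatim}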
\section{Screened dual of Sinkhorn divergence} \label{sec:screened_dual_of_sinkhorn_divergence}
\begin{wrapfigure}{o}{0.3\textwidth}
\vspace{-12pt}
\centering
\includegraphics[width=0.3\textwidth]{motivations.pdf}
\caption{Plots of $(e^{u^\star}, e^{v^\star})$, where $(u^\star, v^\star)$ is the solution pair of the dual of Sinkhorn divergence~\eqref{sinkhorn-dual}, together with the thresholds $\alpha_u, \alpha_v$.}
\label{fig:motivations}
\vspace{-12pt}
\end{wrapfigure}
\paragraph{Motivation.}
The key idea of our approach is motivated by the so-called \emph{static screening test}~\citep{Ghaoui2010SafeFE} in supervised learning, which is a method able to {safely} identify inactive features, i.e., features that have zero components in the solution vector.
Then, these inactive features can be removed from the optimization problem to reduce its scale.
Before diving into the detailed algorithmic analysis, let us present a brief illustration of how we adapt the static screening test to the dual of Sinkhorn divergence.
Towards this end, we define the convex set $\mathcal{C}^r_{\alpha} \subseteq {\mathbb{R}}^r$, for $r\in \mathbb N$ and $\alpha >0$, by $\mathcal{C}^r_{\alpha} = \{w\in {\mathbb{R}}^{r}: e^{w_i} \geq \alpha \text{ for all } i=1,\ldots,r\}$.
In Figure~\ref{fig:motivations}, we plot $(e^{u^\star}, e^{v^\star})$ where $(u^\star, v^\star)$ is the pair solution of the dual of Sinkhorn divergence~(\ref{sinkhorn-dual}) in the particular case of: $n=m=500, \eta=1, \mu = \nu = \frac 1n \mathbf 1_n, x_i \sim\mathcal{N}((0,0)^\top, \begin{psmallmatrix}1 & 0\\0 & 1\end{psmallmatrix}), y_j \sim\mathcal{N}((3,3)^\top, \begin{psmallmatrix}1 &-0.8 \\ -0.8 &1 \end{psmallmatrix})$ and the cost matrix $C$ corresponds to the pairwise euclidean distance, i.e., $C_{ij} = \norm{x_i - y_j}_2$.
We also plot two lines corresponding to $e^{u^\star} \equiv \alpha_u$ and $e^{v^\star} \equiv \alpha_v$ for some $\alpha_u>0$ and $\alpha_v >0$, chosen randomly and playing the role of thresholds used to select the indices to be discarded. {If we are able to identify these indices before solving the problem, they can be fixed at the thresholds and then removed from the optimization procedure, yielding an approximate solution.}
\paragraph{Static screening test.} Based on this idea, we define a so-called \emph{approximate dual of Sinkhorn divergence}
\begin{equation}
\label{screen-sinkhorn}
\min_{u \in \mathcal{C}^n_{\frac \varepsilon \kappa}, v\in \mathcal{C}^m_{\varepsilon\kappa}} \big\{\Psi_{\kappa}(u,v):= \mathbf{1}_n^\top B(u,v)\mathbf{1}_m - \inr{\kappa u, \mu} - \inr{\frac v\kappa, \nu} \big\},
\end{equation}
which is simply the dual of Sinkhorn divergence with lower-bounded variables, where the bounds
are $\alpha_u = \varepsilon \kappa^{-1}$ and $\alpha_v = \varepsilon \kappa$, with $\varepsilon > 0$ and $\kappa > 0$ fixed numerical constants whose values will be made
clear later.
The new formulation~\eqref{screen-sinkhorn} has the form of a $(\kappa\mu, \nu / \kappa)$-scaling problem under constraints on the variables $u$ and $v$. These constraints make the problem significantly different from standard scaling problems~\citep{KALANTARI199687}.
We further emphasize that $\kappa$ plays a key role in our screening strategy. Indeed, without $\kappa$, $e^u$ and $e^v$ could have inversely related scales, leading, for instance, to $e^u$ being too large and $e^v$ too small, a situation in which the screening test would apply only to the coefficients of $e^u$ or of $e^v$, and not to both of them.
Moreover, it is clear that the approximate dual of Sinkhorn divergence coincides with the dual of Sinkhorn divergence~\eqref{sinkhorn-dual} when $\varepsilon=0$ and $\kappa=1$.
Intuitively, our hope is to gain efficiency in solving
problem~\eqref{screen-sinkhorn}, compared to the original one in Equation~\eqref{sinkhorn-dual}, by avoiding the optimization of variables smaller than the threshold and by identifying
those that make the constraints active. More formally,
the core of the static screening test aims at locating two subsets of indices $(I, J)$ in $\{1, \ldots, n\}\times\{1, \ldots, m\}$ satisfying: $e^{u_i} > \alpha_u, \text{ and } e^{v_j} > \alpha_v, \text{ for all } (i,j) \in I \times J$ and
$e^{u_{i'}} = \alpha_u, \text{ and } e^{v_{j'}} = \alpha_v, \text{ for all } (i',j') \in I^\complement \times J^\complement$, namely $(u,v) \in \mathcal{C}^n_{\alpha_u}\times \mathcal{C}^m_{\alpha_v}$. {The following key result states sufficient conditions for identifying variables in $I^\complement$ and $J^\complement$.}
\begin{lemma}
\label{lemma_actives_sets}
Let $(u^{*}, v^{*})$ be an optimal solution of problem~\eqref{screen-sinkhorn}.
Define
\begin{equation}
\label{I_epsilon_kappa_J_epsilon_kappa}
I_{\varepsilon,\kappa} = \big\{i=1, \ldots, n: \mu_i \geq \tfrac{\varepsilon^2}{\kappa}\, r_i(K)\big\}, \quad J_{\varepsilon,\kappa} = \big\{j=1, \ldots, m: \nu_j \geq \kappa\varepsilon^2\, c_j(K)\big\}.
\end{equation}
Then one has $e^{u^{*}_i} = \varepsilon\kappa^{-1}$ and $e^{v^{*}_j} = \varepsilon\kappa$ for all $i \in I^\complement_{\varepsilon,\kappa} $ and $j\in J^\complement_{\varepsilon,\kappa} .$
\end{lemma}
Proof of Lemma~\ref{lemma_actives_sets} is postponed to the supplementary material. It is worth noting that the first order optimality conditions applied to $(u^{*}, v^{*})$ ensure that if $e^{u^{*}_i} > \varepsilon\kappa^{-1}$ then $e^{u^{*}_i} (Ke^{v^{*}})_i = \kappa\mu_i$, and if $e^{v^{*}_j} > \varepsilon\kappa$ then $e^{v^{*}_j} (K^\top e^{u^{*}})_j = \kappa^{-1}\nu_j$; these correspond to the Sinkhorn marginal conditions~\citep{peyre2019COTnowpublisher} up to the scaling factor $\kappa$.
\paragraph{Screening with a fixed number budget of points.}
The approximate dual of Sinkhorn divergence is defined with respect to $\varepsilon$ and $\kappa$. As those parameters are
difficult to interpret, we relate them to a fixed budget of points from the supports of $\mu$ and $\nu$.
In the sequel, we denote by $n_b \in\{1, \ldots, n\}$ and $m_b\in\{1, \ldots, m\}$ the numbers of points that are going to be optimized in problem~\eqref{screen-sinkhorn}, \emph{i.e.}, the points for which we cannot guarantee
that $e^{u^{*}_i} = \varepsilon\kappa^{-1}$ and $e^{v^{*}_j} = \varepsilon\kappa$.
Let us define $\xi \in {\mathbb{R}}^n$ and $\zeta \in {\mathbb{R}}^m$ to be the ordered decreasing vectors of $\mu \oslash r(K)$ and $\nu \oslash c(K)$ respectively, that is $\xi_1 \geq \xi_2 \geq \cdots \geq \xi_n$ and $\zeta_1 \geq \zeta_2 \geq \cdots \geq \zeta_m$.
To keep only an $n_b$-budget and an $m_b$-budget of points, the parameters $\kappa$ and $\varepsilon$ must satisfy ${\varepsilon^2}\kappa^{-1} = \xi_{n_b}$ and $\varepsilon^2\kappa = \zeta_{m_b}$. Hence
\begin{equation}
\label{epsilon_kappa}
\varepsilon = (\xi_{n_b}\zeta_{m_b})^{1/4} \text{ and } \kappa = \sqrt{\frac{\zeta_{m_b}}{\xi_{n_b}}}.
\end{equation}
This guarantees that $|I_{\varepsilon, \kappa}| = n_b$ and $|J_{\varepsilon, \kappa}| = m_b$ by construction. In addition, when $(n_b,m_b)$ tends to the full budget of points $(n,m)$, the objective in problem \eqref{screen-sinkhorn} converges to the objective of the dual of Sinkhorn divergence~\eqref{sinkhorn-dual}.
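As an illustration, the whole screening pre-processing, computing $(\varepsilon, \kappa)$ as in Equation~\eqref{epsilon_kappa} together with the active sets of Lemma~\ref{lemma_actives_sets}, amounts to a few vectorized operations. A sketch with our own variable names (ties among the sorted ratios could make the active sets slightly larger than the budgets):
\begin{verbatim}
import numpy as np

def screening_params(K, mu, nu, n_budget, m_budget):
    # Ratios mu ./ r(K) and nu ./ c(K), sorted in decreasing order
    xi = np.sort(mu / K.sum(axis=1))[::-1]
    zeta = np.sort(nu / K.sum(axis=0))[::-1]
    eps = (xi[n_budget - 1] * zeta[m_budget - 1]) ** 0.25
    kappa = np.sqrt(zeta[m_budget - 1] / xi[n_budget - 1])
    # Active sets of Lemma 1: indices kept in the optimization
    I = np.where(mu >= (eps**2 / kappa) * K.sum(axis=1))[0]
    J = np.where(nu >= (eps**2 * kappa) * K.sum(axis=0))[0]
    return eps, kappa, I, J
\end{verbatim}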
We are now in a position to formulate the
optimization problem related to the screened dual of Sinkhorn divergence. Indeed, using the above analysis, any solution $(u^*, v^*)$ of problem~\eqref{screen-sinkhorn} satisfies $e^{u^*_i} \geq \varepsilon\kappa^{-1}$ and $e^{v^*_j} \geq \varepsilon\kappa$ for all $(i,j) \in (I_{\varepsilon,\kappa}\times J_{\varepsilon,\kappa}),$ and $e^{u^*_i} = \varepsilon\kappa^{-1}$ and $e^{v^*_j} = \varepsilon\kappa$ for all $(i,j) \in (I^\complement_{\varepsilon,\kappa}\times J^\complement_{\varepsilon,\kappa})$.
Hence, we can restrict problem \eqref{screen-sinkhorn} to the
variables in $I_{\varepsilon,\kappa}$ and $J_{\varepsilon,\kappa}$. This boils down
to restricting the feasibility domain $\mathcal{C}^n_{\frac \varepsilon \kappa} \times \mathcal{C}^m_{\varepsilon\kappa}$ to the screened domain defined by $\mathcal{U}_{\text{sc}} \times \mathcal{V}_{\text{sc}}$,
\begin{equation*}
\mathcal{U}_{\text{sc}} = \{u \in {\mathbb{R}}^{n_b}: e^{u_{I_{\varepsilon,\kappa}}} \succeq \frac \varepsilon\kappa\mathbf 1_{n_b}\} \text{ and } \mathcal{V}_{\text{sc}} =\{v\in{\mathbb{R}}^{m_b}: e^{v_{J_{\varepsilon,\kappa}}} \succeq \varepsilon\kappa \mathbf{1}_{m_b}\}\end{equation*}
where the vector comparison $\succeq$ has to be understood elementwise. By replacing, in Equation \eqref{screen-sinkhorn}, the variables belonging to $(I^\complement_{\varepsilon,\kappa}\times J^\complement_{\varepsilon,\kappa})$ by $\varepsilon\kappa^{-1}$ and
$\varepsilon\kappa$, we derive the \emph{screened dual of Sinkhorn divergence problem} as
\begin{align}
\label{screen-sinkhorn_second_def}
\min_{u \in \mathcal{U}_{\text{sc}}, v \in \mathcal{V}_{\text{sc}}}\{\Psi_{\varepsilon, \kappa}(u,v)\}
\end{align}
where
\begin{align*}
\Psi_{\varepsilon,\kappa}(u, v) &= (e^{u_{I_{\varepsilon,\kappa}}})^\top K_{(I_{\varepsilon,\kappa}, J_{\varepsilon,\kappa})} e^{v_{J_{\varepsilon,\kappa}}} +
\varepsilon \kappa (e^{u_{I_{\varepsilon,\kappa}}})^\top K_{(I_{\varepsilon,\kappa}, J^\complement_{\varepsilon,\kappa})}\mathbf 1_{m-m_b} + \varepsilon \kappa^{-1} \mathbf 1_{n-n_b}^\top K_{(I^\complement_{\varepsilon,\kappa}, J_{\varepsilon,\kappa})}e^{v_{J_{\varepsilon,\kappa}}}\\
&\qquad - \kappa \mu_{I_{\varepsilon,\kappa}}^\top u_{I_{\varepsilon,\kappa}} - \kappa^{-1} \nu_{J_{\varepsilon,\kappa}}^\top v_{J_{\varepsilon,\kappa}} + \Xi
\end{align*}
with $\Xi = \varepsilon^2 \sum_{i \in I^\complement_{\varepsilon,\kappa}, j \in J^\complement_{\varepsilon,\kappa}} K_{ij} -\kappa \log(\varepsilon\kappa^{-1})\sum_{i \in I^\complement_{\varepsilon,\kappa}}\mu_i - \kappa^{-1} \log(\varepsilon\kappa)\sum_{j\in J^\complement_{\varepsilon,\kappa}} \nu_j$.
The above problem uses only the restricted parts $K_{(I_{\varepsilon,\kappa}, J_{\varepsilon,\kappa})},$ $K_{(I_{\varepsilon,\kappa}, J^\complement_{\varepsilon,\kappa})},$ and $K_{(I^\complement_{\varepsilon,\kappa}, J_{\varepsilon,\kappa})}$ of the Gibbs kernel $K$ for calculating the objective function $\Psi_{\varepsilon, \kappa}$. Hence, a gradient descent scheme will also need only those rows/columns of $K$. This is in contrast to Sinkhorn algorithm which performs alternating updates of all rows and columns of $K$. In summary, \textsc{Screenkhorn} consists of two steps: the first one is a screening pre-processing providing the active sets $I_{\varepsilon,\kappa}$, $J_{\varepsilon,\kappa}$.
The second one consists in solving Equation \eqref{screen-sinkhorn_second_def}
using a constrained L-BFGS-B \citep{byrd1995L-BFGS-B} for the stacked variable $\theta=(u_{I_{\varepsilon,\kappa}},v_{J_{\varepsilon,\kappa}}).$
Pseudocode of our proposed algorithm is shown in Algorithm~\ref{screenkhorn}.
Note that in practice we initialize the L-BFGS-B algorithm with the output of a method called \textsc{Restricted Sinkhorn} (see Algorithm~\ref{restricted_sinkhorn} in the supplementary), which is a Sinkhorn-like algorithm applied to the active dual variables $\theta=(u_{I_{\varepsilon,\kappa}},v_{J_{\varepsilon,\kappa}}).$ While simple and efficient, the solution of this
\textsc{Restricted Sinkhorn} algorithm does not satisfy the lower bound constraints of Problem \eqref{screen-sinkhorn_second_def}, but it provides a good candidate solution.
Also note that L-BFGS-B handles box constraints on variables, but it becomes more efficient when these box bounds are carefully determined for problem~\eqref{screen-sinkhorn_second_def}.
The following proposition (proof in supplementary material) expresses these bounds that are pre-calculated in the initialization step of \textsc{Screenkhorn}.
\begin{proposition}
\label{prop:bounds_of_usc_and_vsc}
Let $(u^{\text{sc}}, v^{\text{sc}})$ be an optimal pair solution of problem~\eqref{screen-sinkhorn_second_def} and $K_{\min} = \min\limits_{i\in I_{\varepsilon,\kappa},j \in J_{\varepsilon,\kappa}}K_{ij}$. Then,
one has
\begin{equation}
\label{bound_on_u}
\frac \varepsilon\kappa \vee \frac{\min_{i \in I_{\varepsilon,\kappa}}\mu_i}{\varepsilon (m- m_b) + \frac{\max_{j\in J_{\varepsilon,\kappa}} \nu_j}{n\varepsilon\kappa K_{\min}} m_b} \leq e^{u^{\text{sc}}_i} \leq \frac{\max_{i \in I_{\varepsilon,\kappa}} \mu_i}{m\varepsilon K_{\min}},
\end{equation}
and
\begin{equation}
\label{bound_on_v}
\varepsilon\kappa \vee \frac{\min_{j \in J_{\varepsilon,\kappa}}\nu_j}{\varepsilon(n- n_b) + \frac{\kappa\max_{i\in I_{\varepsilon,\kappa}} \mu_i}{m\varepsilon K_{\min} } n_b} \leq e^{v^{\text{sc}}_j} \leq \frac{\max_{j \in J_{\varepsilon,\kappa}} \nu_j}{n\varepsilon K_{\min} }
\end{equation}
for all $i\in I_{\varepsilon,\kappa}$ and $j\in J_{\varepsilon,\kappa}$.
\end{proposition}
\LinesNotNumbered
\begin{algorithm}[tbp]
\SetNlSty{textbf}{}{.}
\DontPrintSemicolon
\caption{\textsc{Screenkhorn}$(C,\eta,\mu,\nu,n_b,m_b)$}
\label{screenkhorn}
\textbf{Step 1:} \textcolor{black}{{Screening pre-processing}}\vspace{.1cm}\\
\nl $\xi \gets \texttt{sort}(\mu \oslash r(K)),$ $\zeta \gets \texttt{sort}(\nu \oslash c(K));$ //(decreasing order)\\
\nl $\varepsilon \gets (\xi_{n_b}\zeta_{m_b})^{1/4}, \text{ } \kappa \gets \sqrt{{\zeta_{m_b}}/{\xi_{n_b}}}$;\\
\nl $I_{\varepsilon,\kappa} \gets \{i=1, \ldots, n: \mu_i \geq {\varepsilon^2} \kappa^{-1} r_i(K)\}, J_{\varepsilon,\kappa} \gets \{j=1, \ldots, m: \nu_j \geq \varepsilon^2\kappa c_j(K)\};$\\
\nl $\underline{\mu} \gets \min_{i \in I_{\varepsilon,\kappa}} \mu_i, \bar{\mu} \gets \max_{i \in I_{\varepsilon,\kappa}} \mu_i, \underline{\nu} \gets \min_{j \in J_{\varepsilon,\kappa}} \nu_j, \bar{\nu} \gets \max_{j \in J_{\varepsilon,\kappa}} \nu_j$; \\
\nl $\underline{u} \gets \log\big(\frac \varepsilon\kappa \vee \frac{\underline{\mu}}{\varepsilon (m-m_b) + \varepsilon \vee \frac{\bar{\nu}}{n\varepsilon\kappa K_{\min}} m_b}\big), \bar{u} \gets \log\big(\frac{\bar{\mu}}{m\varepsilon K_{\min}}\big);$\\
\nl $\underline{v} \gets \log\big(\varepsilon\kappa \vee \frac{\underline{\nu}}{\varepsilon(n-n_b) + \varepsilon \vee \frac{\kappa\bar{\mu}}{m\varepsilon K_{\min}} n_b}\big), \bar{v} \gets \log\big(\frac{\bar{\nu}}{n\varepsilon K_{\min}}\big);$\\
\nl $ \bar{\theta} \gets \texttt{stack}(\bar{u}\mathbf 1_{n_b}, \bar{v}\mathbf 1_{m_b}),$ $ \underline{\theta} \gets \texttt{stack}(\underline{u}\mathbf 1_{n_b}, \underline{v}\mathbf 1_{m_b}) ;$\\
\vspace{.2cm}
\noindent \textbf{Step 2:} \textcolor{black}{{L-BFGS-B solver on the screened variables}}\vspace{.1cm}\\
{
\nl $u^{(0)}\gets \log(\varepsilon\kappa^{-1}) \mathbf 1_{n_b},$ $v^{(0)} \gets \log(\varepsilon\kappa) \mathbf 1_{m_b}$;\\
\nl $\hat u, \hat v \gets$ \textsc{Restricted~Sinkhorn}($u^{(0)},v^{(0)}$), $\theta^{(0)} \gets \texttt{stack}(\hat u, \hat v);$\\
}
\nl $\theta \gets \text{L-BFGS-B}(\theta^{(0)}, \underline{\theta}, \bar{\theta});$\\
\nl $\theta_u \gets (\theta_1, \ldots, \theta_{n_b})^\top, \theta_v \gets(\theta_{n_b+1}, \ldots, \theta_{n_b+m_b})^\top;$\\
\nl {$u^{\text{sc}}_i \gets (\theta_u)_i$ if $i \in I_{\varepsilon,\kappa}$ and $u^{\text{sc}}_i \gets \log(\varepsilon\kappa^{-1})$ if $i \in I^\complement_{\varepsilon,\kappa};$}\\
\nl {$v^{\text{sc}}_j \gets (\theta_v)_j$ if $j \in J_{\varepsilon,\kappa}$ and $v^{\text{sc}}_j \gets \log(\varepsilon\kappa)$ if $j \in J^\complement_{\varepsilon,\kappa};$}\\
\nl \Return{$B(u^{\text{sc}},v^{\text{sc}})$.}
\end{algorithm}
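For concreteness, the following Python sketch assembles both steps with SciPy's L-BFGS-B, reusing \texttt{screening\_params} from the sketch above. This is a simplified rendering of Algorithm~\ref{screenkhorn}, not the authors' released code: it only imposes the lower bounds defining $\mathcal{U}_{\text{sc}}$ and $\mathcal{V}_{\text{sc}}$, and it skips the \textsc{Restricted Sinkhorn} warm start as well as the refined bounds of Proposition~\ref{prop:bounds_of_usc_and_vsc}:
\begin{verbatim}
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def screenkhorn(C, mu, nu, eta, n_budget, m_budget):
    K = np.exp(-C / eta)
    eps, kappa, I, J = screening_params(K, mu, nu, n_budget, m_budget)
    Ic = np.setdiff1d(np.arange(len(mu)), I)
    Jc = np.setdiff1d(np.arange(len(nu)), J)
    K_IJ = K[np.ix_(I, J)]
    cst_u = eps * kappa * K[np.ix_(I, Jc)].sum(axis=1)
    cst_v = (eps / kappa) * K[np.ix_(Ic, J)].sum(axis=0)

    def obj(theta):
        # Psi_{eps,kappa} restricted to the active variables; the
        # additive constant Xi is dropped (it does not move the minimizer)
        u, v = theta[:len(I)], theta[len(I):]
        eu, ev = np.exp(u), np.exp(v)
        f = eu @ (K_IJ @ ev) + eu @ cst_u + cst_v @ ev \
            - kappa * mu[I] @ u - nu[J] @ v / kappa
        gu = eu * (K_IJ @ ev + cst_u) - kappa * mu[I]
        gv = ev * (K_IJ.T @ eu + cst_v) - nu[J] / kappa
        return f, np.concatenate([gu, gv])

    lb_u, lb_v = np.log(eps / kappa), np.log(eps * kappa)
    bounds = [(lb_u, None)] * len(I) + [(lb_v, None)] * len(J)
    theta0 = np.concatenate([np.full(len(I), lb_u), np.full(len(J), lb_v)])
    theta, _, _ = fmin_l_bfgs_b(obj, theta0, bounds=bounds)

    u = np.full(len(mu), lb_u); u[I] = theta[:len(I)]
    v = np.full(len(nu), lb_v); v[J] = theta[len(I):]
    return np.exp(u)[:, None] * K * np.exp(v)[None, :]
\end{verbatim}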
\section{Theoretical analysis and guarantees} \label{sec:analysis_of_marginal_violations}
This section is devoted to establishing theoretical guarantees for the \textsc{Screenkhorn} algorithm. We first define the screened marginals $\mu^{\text{sc}} = B(u^{\text{sc}}, v^{\text{sc}}) \mathbf 1_m$ and $\nu^{\text{sc}} = B(u^{\text{sc}}, v^{\text{sc}})^\top \mathbf 1_n.$
Our first theoretical result, Proposition~\ref{proposition_error_in_marginals}, gives an upper bound on the screened marginal violations with respect to the $\ell_1$-norm.
\begin{proposition}
\label{proposition_error_in_marginals}
Let $(u^{\text{sc}}, v^{\text{sc}})$ be an optimal pair solution of problem~\eqref{screen-sinkhorn_second_def}.
Then one has
{\small{
\begin{align}
\label{marginal-error-mu}
{\norm{{\mu} -{\mu}^{\text{sc}}}^2_1} = \mathcal{O}\Big(n_bc_\kappa + (n- n_b) \Big(\frac{\norm{C}_\infty}{\eta} + \frac{m_b}{\sqrt{nmc_{\mu\nu}}K_{\min}^{3/2}} &+ \frac{m-m_b}{\sqrt{nm}K_{\min}}
+ \log\Big(\frac{\sqrt{nm}}{m_bc_{\mu\nu}^{5/2}}
\Big)\Big)\Big)
\end{align}
}}
and
{\small{
\begin{align}
\label{marginal-error-nu}
{\norm{{\nu} -{\nu}^{\text{sc}}}^2_1} = \mathcal{O}\Big(m_bc_{\frac1\kappa} + (m- m_b) \Big(\frac{\norm{C}_\infty}{\eta} + \frac{n_b}{\sqrt{nmc_{\mu\nu}}K_{\min}^{3/2}} &+ \frac{n-n_b}{\sqrt{nm}K_{\min}}
+ \log\Big(\frac{\sqrt{nm}}{n_b c_{\mu\nu}^{5/2}}
\Big)\Big)\Big),
\end{align}
}}
where $c_z = z - \log z - 1$ for $z>0$ and $c_{\mu\nu} = \underline{\mu}\wedge \underline{\nu}$ with $\underline{\mu} = \min_{i\in I_{\varepsilon,\kappa}}\mu_i$ and $\underline{\nu} = \min_{j\in J_{\varepsilon,\kappa}}\nu_j$.
\end{proposition}
Proof of Proposition~\ref{proposition_error_in_marginals} is presented in the supplementary material; it is based on first order optimality conditions for problem~\eqref{screen-sinkhorn_second_def} and on a generalization of the Pinsker inequality (see Lemma~\ref{lem:pinsker} in the supplementary).
Our second theoretical result, Proposition~\ref{prop:objective-error}, is an upper bound of the difference between objective values of \textsc{Screenkhorn} and dual of Sinkhorn divergence~\eqref{sinkhorn-dual}.
\begin{proposition}
\label{prop:objective-error}
Let $(u^{\text{sc}}, v^{\text{sc}})$ be an optimal pair solution of problem~\eqref{screen-sinkhorn_second_def} and let $(u^\star, v^\star)$ be the solution pair of the dual of Sinkhorn divergence~\eqref{sinkhorn-dual}. Then we have
\begin{align*}
\Psi_{\varepsilon, \kappa}(u^{\text{sc}} ,v^{\text{sc}}) -\Psi(u^\star, v^\star)
= \mathcal{O}\big(R(\norm{\mu - \mu^{\text{sc}}}_1 + \norm{\nu - \nu^{\text{sc}}}_1 + \omega_{\kappa})\big),
\end{align*}
where $R = \frac{\norm{C}_\infty}{\eta} + \log\big(\frac{(n\vee m)^2}{nmc_{\mu\nu}^{7/2}}\big)$ and $\omega_{\kappa} = |1- \kappa|\norm{\mu^{\text{sc}}}_1 + |1 - \kappa^{-1}|\norm{\nu^{\text{sc}}}_1 + |1- \kappa| + |1 - \kappa^{-1}|$.
\end{proposition}
Proof of Proposition~\ref{prop:objective-error} is given in the supplementary material.
Compared to other analyses of this quantity, see for instance Lemma 2 in~\cite{dvurechensky18aICML} and Lemma 3.1 in~\cite{lin2019}, our bound involves an additional term $\omega_{\kappa}$ (with $\omega_1 =0$). {To better characterize $\omega_\kappa$, a control of the $\ell_1$-norms of the screened marginals $\mu^{\text{sc}}$ and $\nu^{\text{sc}}$ is given in Lemma 2 of the supplementary material.}
\section{Numerical experiments} \label{sec:numerical_experiments}
In this section, we present some numerical analyses of our
\textsc{Screenkhorn} algorithm and show how it behaves when
integrated into some complex machine learning pipelines.
\subsection{Setup}
We have implemented our \textsc{Screenkhorn} algorithm in Python, using the L-BFGS-B solver of
SciPy. Regarding the machine-learning based comparisons, we have based our code
on that of the Python Optimal Transport toolbox (POT)~\citep{flamary2017pot} and simply replaced the \texttt{sinkhorn} function call with a \texttt{screenkhorn} one. We have used POT's default \textsc{Sinkhorn} stopping criterion, and for \textsc{Screenkhorn} the L-BFGS-B algorithm is stopped when the
largest component of the projected gradient is smaller than $10^{-6}$, or when the number of iterations or of objective function evaluations reaches $10^{5}$. For all applications, we have set $\eta=1$ unless otherwise specified.
\subsection{Analysing on toy problem}
\label{subsec:analysing_toy_problem}
We compare \textsc{Screenkhorn} to \textsc{Sinkhorn} as implemented in the POT toolbox\footnote{\url{https://pot.readthedocs.io/en/stable/index.html}} on a synthetic example. The dataset we use consists of source samples generated from a two-dimensional Gaussian mixture and target samples following the same distribution but with different Gaussian means. We consider an unsupervised domain adaptation using optimal transport with entropic regularization. Several settings are explored: different values of the regularization parameter $\eta$, an allowed budget $\frac{n_b}{n} = \frac{m_b}{m}$ ranging from $0.01$ to $0.99$, and different values of $n$ and $m$.
We empirically measure the marginal violations as the norms $\norm{{\mu} -{\mu}^{\text{sc}}}_1$ and $\norm{{\nu} -{\nu}^{\text{sc}}}_1$, the running time ratio $\frac{T_{\textsc{Sinkhorn}}}{T_{\text{\textsc{Screenkhorn}}}}$, and the relative divergence difference $| \inr{C, P^\star} - \inr{C, P^{\text{sc}}}|/\inr{C, P^\star}$ between \textsc{Screenkhorn} and \textsc{Sinkhorn}, where $P^\star = \Delta(e^{u^\star}) K \Delta(e^{v^\star})$ and $P^{\text{sc}} = \Delta(e^{u^{\text{sc}}}) K \Delta(e^{v^{\text{sc}}}).$
Figure \ref{fig:margin_expe} summarizes the observed behaviors of both algorithms under these settings. We report only results for $n=m=1000$, as we obtained similar findings for other values of $n$ and $m$.
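The three quantities above can be computed directly from the transport plans; a small helper in the spirit of this section (a sketch with our own names; \texttt{P\_star} and \texttt{P\_sc} denote the plans returned by \textsc{Sinkhorn} and \textsc{Screenkhorn}):
\begin{verbatim}
import numpy as np

def eval_metrics(C, mu, nu, P_star, P_sc):
    marg_mu = np.abs(mu - P_sc.sum(axis=1)).sum()   # ||mu - mu_sc||_1
    marg_nu = np.abs(nu - P_sc.sum(axis=0)).sum()   # ||nu - nu_sc||_1
    d_star, d_sc = np.sum(C * P_star), np.sum(C * P_sc)
    rel_div = abs(d_star - d_sc) / d_star           # relative divergence gap
    return marg_mu, marg_nu, rel_div
\end{verbatim}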
\begin{figure*}[t]
\begin{center}
~\hfill\includegraphics[width=0.24\textwidth]{norm_M_Mu_marginals_toy_n1000}
\hfill
\includegraphics[width=0.24\textwidth]{norm_M_Nu_marginals_toy_n1000}
\hfill
\includegraphics[width=0.24\textwidth]{norm_M_time_toy_n1000}
\hfill
\includegraphics[width=0.24\textwidth]{norm_M_div_toy_n1000}\hfill~
\end{center}
\caption{Empirical evaluation of \textsc{Screenkhorn} vs \textsc{Sinkhorn} for a normalized cost matrix, \emph{i.e.} $\norm{C}_\infty=1$: (two leftmost panels) marginal violations in relation to the budget of points on $n$ and $m$; (center right) ratio of computation times $\frac{T_{\textsc{Sinkhorn}}}{T_{\text{\textsc{Screenkhorn}}}}$; (right) relative divergence variation. The results are averaged over $30$ trials.}
\label{fig:margin_expe}
\end{figure*}
\textsc{Screenkhorn} provides a good approximation of the marginals $\mu$ and $\nu$ for ``high'' values of the regularization parameter $\eta$ ($\eta > 1$), and the approximation quality diminishes for small $\eta$. As expected, $\norm{{\mu} -{\mu}^{\text{sc}}}_1$ and $\norm{{\nu} -{\nu}^{\text{sc}}}_1$ converge towards zero as the budget of points increases. Remarkably, the marginal violations are almost negligible, whatever the budget, for high $\eta$. Regarding the computational gain, \textsc{Screenkhorn} is almost 2 times faster than \textsc{Sinkhorn} at a high decimation factor $n/n_b$ (low budget), while the reverse holds when $n/n_b$ gets close to 1. The computational benefit of \textsc{Screenkhorn} also depends on $\eta$, appropriate values being $\eta \leq 1$. Finally, except for $\eta=0.1$, \textsc{Screenkhorn} achieves a divergence $\inr{C, P}$ close to that of Sinkhorn, showing that our static screening test provides a
reasonable approximation of the Sinkhorn divergence. As such, we believe that \textsc{Screenkhorn} will be practically useful in cases where modest accuracy on the divergence is sufficient. This may be the case when it serves as a loss function for a gradient descent method (see next section).
\subsection{Integrating \textsc{Screenkhorn} into machine learning pipelines}
Here, we analyse the impact of using \textsc{Screenkhorn}
instead of \textsc{Sinkhorn} in a complex machine learning pipeline. Our two applications
are a dimensionality reduction technique, denoted as Wasserstein Discriminant Analysis (WDA), based on the Wasserstein distance approximated
through the Sinkhorn divergence \citep{flamary2018WDA}, and a domain adaptation method using optimal transport mapping \citep{courty2017optimal}, named OTDA.
WDA aims at finding a linear projection which minimizes the ratio of the distances between intra-class samples to the distances between inter-class samples, where distance is understood
in the Sinkhorn divergence sense. We have used a toy problem involving Gaussian classes with $2$ discriminative features and $8$ noisy features, and the MNIST dataset. For the
former problem, we aim at finding the best two-dimensional linear subspace in the WDA sense, whereas for MNIST we look for a subspace of dimension $20$ starting from the original
$784$ dimensions. The quality of the retrieved subspace is evaluated on a classification task based on a $1$-nearest-neighbour approach.
Figure \ref{fig:wda} presents the average gain (over $30$ trials) in computational time as the number of examples evolves and for different decimation factors of the \textsc{Screenkhorn} problem.
The analysis of the quality of the subspaces has been deferred to the supplementary material (see Figure~\ref{fig:wda_gain}), but we can note a small loss of performance of \textsc{Screenkhorn} on the toy problem, while
for MNIST the accuracies are equivalent regardless of the decimation factor. We can also note
that the minimal gains are respectively $2$ and $4.5$ for the toy and MNIST problems,
whereas the maximal gain for $4000$ samples is slightly larger than an order of magnitude.
\begin{figure*}[t]
\centering
~\hfill
\includegraphics[width=0.37\textwidth]{wda_gain_toy.pdf}~\hfill~
\includegraphics[width=0.37\textwidth]{wda_gain_mnist.pdf}
\hfill~
\caption{Wasserstein Discriminant Analysis: running time gain for (left) a toy dataset and (right) MNIST as a function of the number of examples and the data decimation factor in \textsc{Screenkhorn}.}
\label{fig:wda}
\end{figure*}
\begin{figure*}[t]
\centering
~\hfill\includegraphics[width=0.37\textwidth]{da_gain_mnist_regcl1.pdf}~\hfill~
\includegraphics[width=0.37\textwidth]{da_gain_mnist_regcl10.pdf}\hfill~
\caption{OT domain adaptation: running time gain for MNIST as a function of the number of examples and the data decimation factor in \textsc{Screenkhorn}. Group-lasso hyperparameter values: (left) $1$; (right) $10$.}
\label{fig:otda}
\end{figure*}
For the OT-based domain adaptation problem, we have considered
OTDA with an $\ell_{\frac 12,1}$ group-lasso regularizer, which helps in exploiting the labels available in the source domain. The problem is solved using a majorization-minimization approach
to handle its non-convexity. Hence, at each iteration a \textsc{Sinkhorn}/\textsc{Screenkhorn} problem has to be solved, and the number of iterations is
sensitive to the regularizer strength. As a domain-adaptation problem, we have
used an MNIST-to-USPS task in which the features have been
computed from the first layers
of a domain-adversarial neural network \citep{ganin2016domain} before full convergence of the network (so as to leave room for OT adaptation).
Figure \ref{fig:otda} reports the gain in running time for $2$ different values
of the group-lasso regularizer hyperparameter, while the performance curves are
reported in the supplementary material. We can note that, for all versions of \textsc{Screenkhorn} with different decimation factors, the computational gain ranges from a factor of $4$ to $12$, without any loss of accuracy.
\section{Conclusion}
This paper introduces a novel, efficient approximation of the Sinkhorn divergence
based on a screening strategy. Screening some of the Sinkhorn dual variables
has been made possible by defining a novel constrained dual problem and by
carefully analyzing its optimality conditions. From the latter, we derived
sufficient conditions, depending on the ground cost matrix, for some dual variables to be smaller than a given threshold. Hence, we just need to solve a restricted
dual Sinkhorn problem using an off-the-shelf L-BFGS-B algorithm. We also provide
theoretical guarantees on the quality of the approximation with respect to
the number of variables that have been screened. Numerical experiments illustrate
the behaviour of our \textsc{Screenkhorn} algorithm and the computational time gain it can
achieve when integrated into complex machine learning pipelines.
\subsubsection*{Acknowledgments}
This work was supported by grants from the Normandie Projet GRR-DAISI, European funding FEDER DAISI and OATMIL ANR-17-CE23-0012 Project of the French National Research Agency (ANR).
\small
\section{Introduction}
The following is well known
\begin{theorem}{\rm (The Jordan theorem)}\label{theo1.1}
Let $f:[0,1] \to \bR^2$ be a simple closed curve in the plane ($f$ is continuous, $f(0) = f(1)$ and $f(u) \not= f(v)$ for $0 < u < v \le 1$).
Define $P=_{\rm def}$ {\rm image}$f= \{f(u) : 0 \le u \le 1\}$, the image of $f$. Then $\bR^2 \setminus P = U_0 \cup U_1$, where $U_0, U_1$ are connected open,
non-empty mutually disjoint sets, $U_0$ is bounded (interior), $U_1$ is unbounded (exterior), and $P = {\rm bd}(U_0) = {\rm bd} (U_1)$.
\end{theorem}
The proof of this theorem is not easy; see \cite{Bertoglio}, \cite{Lawson}, \cite{Thomassen}, \cite[p. 37 ff.]{Moise}, \cite[vol. I, pp. 39-64]{Aleksandrov},
\cite[pp. 285 ff.]{Kuratowski}, and the survey \cite{Dostal}.
When the curve $P$ is polygonal, however, i.e., when $f$ is piecewise affine, the theorem becomes elementary:
\begin{theorem} {\rm(The piecewise affine Jordan theorem)} \label{theo1.2}
Let $p_0,p_1,\dots,p_{n-1},p_n=p_0, n \ge 3$, be ($n$ distinct) points in $\bR^2$. Assume that the polygon $P=_{\rm def} \bigcup\limits^n_{i=1}
[p_{i-1},p_i]$ is simple, i.e., the segments $[p_{i-1},p_i]$ do not intersect except for common endpoints: $\{p_i\} = [p_{i-1},p_i] \cap [p_i,p_{i+1}]$
for $1 \le i \le n-1, \{p_0\} = [p_0,p_1] \cap [p_{n-1},p_0]$. Then $\bR^2 \setminus P = U_0 \cup U_1$ with the same properties of $U_0,U_1$ listed
above {\rm (Theorem \ref{theo1.1})}.
\end{theorem}
\begin{definition}\label{def1.1}
A polygon $P$ satisfying the conditions of Theorem \ref{theo1.2} is a \emph{simple closed $n$-gon}. The bounded [resp. unbounded] domain $U_0$
[resp. $U_1$] is the \emph{interior} [resp. \emph{exterior}], denoted by int$P$ [resp. ext$P$], of $P$.
\end{definition}
A particularly simple proof of Theorem \ref{theo1.2} is known as the ``raindrop proof'', see \cite[pp. 267-269]{Courant}, \cite[pp. 281-285]{Hille},
\cite[pp. 27-29]{Bensen}, or \cite[pp. 16-18]{Moise}. We reproduce this proof in a somewhat more complete and formal form than is usually given in the literature, for later
reference to some of its parts.
So we first prove Theorem \ref{theo1.2} (in Paragraphs 2 and 3 below). Then, squeezing this proof, a \emph{tight} upper bound on the polygonal diameter
of int$P$ [resp. ext$P$] (see Definition \ref{def3.2} below) is given as a function of $n$, and an $n$-gon $(n \ge 3)$ for which both upper bounds are
attained \emph{simultaneously} is described (see Theorem \ref{theo4.1} below). The $d$-dimensional analogue $(d \ge 2)$ of this problem was discussed
in \cite[Theorem 3.2]{Perles}. There we gave upper bounds on the polygonal diameter of int$\cC$, resp. ext$\cC$, for a polyhedral $(d-1)$-pseudomanifold $\cC$
in $\bR^d$ as a function of the number $n$ of its facets and $d$. The bounds given there are shown to be \emph{almost} tight (see \cite[Section 4]{Perles}), whereas the bounds given here (for $d = 2$) are tight. Another novelty of the present paper is that there is an $n$-gon $P$ in $\bR^2$ for which \emph{both} upper bounds (on the polygonal diameter of int$P$ and ext$P$) are attained
(simultaneously), as said above, whereas for $d \ge 3$ the examples given in \cite[Section 4]{Perles} (namely one for int$\cC$ and another
one for ext$\cC$) are \emph{different} from each other.
For the sake of the proof of Theorem \ref{theo1.2}, we split it into two statements: Let $P$ be a simple closed polygon in $\bR^2$.
(E) (separation): $\bR^2 \setminus P$ is the disjoint union of two open sets, int$P$ and ext$P$. The boundary of each one of these sets
is $P$; int$P$ is bounded and ext$P$ is unbounded.
(F) (connectivity): The sets int$P$ and ext$P$ are [polygonally] connected.
We shall prove (E) (Paragraph 2) by constructing a continuous function $f: \bR^2 \setminus P \to \{0,1\}$ which attains both values $0$ and $1$ in every
neighborhood of every point $x \in P$, and defining ext$P = f^{-1}(0)$, int$P = f^{-1}(1)$. Statement (F) (polygonal connectivity of int$P$ and of ext$P$) follows
from Theorem \ref{theo3.1} below.
\section{A ``raindrop'' proof of (E)}
The construction of $f$ will be performed in three steps:
\textbf{Preliminary step:} Choosing a ``generic'' direction.
Choose an orthogonal basis $(u,v)$ for $\bR^2$ so that no two
vertices of $P$ have the same $x$-coordinate. Intuitively: the polygon $P$ is drawn on a sheet of paper; rotate the paper so that no two vertices
lie one above the other. Formally: let $L_1, \dots, L_t$ be all lines spanned by subsets of $\{p_1,\dots,p_n\}$. For $i=1,\dots,t$ let
$L^0_i =_{\rm def} L_i-L_i$ be the linear ($1$-dimensional) subspace parallel to $L_i$. Choose a unit vector $v \in \bR^2 \setminus \bigcup \limits^t_{i=1} L^0_i$
(``$v$'' for ``vertical'').
The vector $v$ is our direction ``up'', and $-v$ is pointing ``down''. By our choice of $v$, a line $L$, spanned by the vertices of $P$, will meet a line parallel
to $v$ in at most one point.
For a point $p \in \bR^2 \setminus P$ denote by $R(p)$ the closed vertical ``pointing down'' half-line $R(p) =_{\rm def}
\{p-\lambda v: 0 \le \lambda < \infty\}$. $R(p)$ is the path of a ``raindrop'' emanating from $p$. We divide $\bR^2 \setminus P$ into two disjoint sets
\[
\begin{array}{lll}
S_0 & =_{\rm def} & \{p \in \bR^2 \setminus P: R(p) \, \mbox{ does not meet any vertex of }\, P\}\,,\\
S_1 & =_{\rm def} & \{ p \in \bR^2 \setminus P: R(p) \, \mbox{ meets exactly one vertex of } \, P\}\,.
\end{array}
\]
(By our choice of $v$, we have $\bR^2 \setminus P = S_0 \cup S_1$.)
We shall define $f$ on $S_0$ (= Step I), then extend it (continuously) to $S_1$ (= Step II).
The following notation will be used: For a set $A \subset \bR^2$,
$A^+ =_{\rm def} \{a + \lambda v: a \in A, \lambda \ge 0\}$.
Thus $A^+$ is the set of points that lie
``above'' $A$. If $A$ is closed, then $A^+$ is closed. Note that (for all $p \in \bR^2$ and $A \subset \bR^2$):
\begin{equation}\label{eq1}
R(p) \, \mbox{ meets } \, A \, \mbox{ iff } \, p \in A^+\,.
\end{equation}
\textbf{Step I:} Define $f$ on $S_0$.
For $p \in S_0$ denote by $r(p)$ the number of edges of $P$ met by $R(p)$, and define $f(p) =_{\rm def} {\rm par} (r(p)) =_{\rm def} \frac{1}{2} (1-(-1)^{r(p)})$, the parity
of $r(p)$ ($f(p) = 0$ if $r(p)$ is even, $1$ if $r(p)$ is odd).
\begin{center}
Fig. 1: the function $r(p)$ \hfill Fig. 2: the parity function $f(p) = {\rm par}(r(p))$
\end{center}
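In algorithmic terms, Step I is precisely the even-odd crossing rule used for point-in-polygon tests in computational geometry. A small Python sketch (ours, purely for illustration) that computes ${\rm par}(r(p))$ for a point $p \in S_0$:
\begin{verbatim}
def raindrop_parity(p, vertices):
    # Parity of r(p): count the edges of the polygon hit by the downward
    # vertical ray R(p).  Assumes p lies in S_0, i.e. the ray misses all
    # vertices -- true generically, e.g. after the small rotation of the
    # coordinates performed in the preliminary step.
    crossings = 0
    for i in range(len(vertices)):
        a, b = vertices[i - 1], vertices[i]   # edge [p_{i-1}, p_i]
        if (a[0] < p[0]) != (b[0] < p[0]):    # edge straddles the line x = p_x
            y = a[1] + (p[0] - a[0]) * (b[1] - a[1]) / (b[0] - a[0])
            if y < p[1]:                      # the crossing lies below p
                crossings += 1
    return crossings % 2                      # 1 on int P, 0 on ext P

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert raindrop_parity((0.5, 0.5), square) == 1   # interior point
assert raindrop_parity((2.0, 0.5), square) == 0   # exterior point
\end{verbatim}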
Next we show that $S_0$ is a dense open subset of $\bR^2$, and that $f: S_0 \to \{0,1\}$ is a locally constant, hence continuous, function. Using vert$P$ for the set of vertices of $P$,
we have in view of (\ref{eq1})
\begin{equation}\label{eq2}
S_0 = \bR^2 \setminus (P \cup (\mbox{vert}P)^+)\,.
\end{equation}
The set $({\rm vert}P)^+$ is closed, same as $P$. Thus $S_0$ is an open subset of $\bR^2$. Moreover, the set $P \cup ({\rm vert}P)^+$ can be covered by a finite
number of lines in $\bR^2$. It follows that $S_0$ is dense in $\bR^2$.
Continuity of $f$: Assume $x \in S_0$. Let $\varepsilon$ be the (positive) distance from $x$ to $P \cup ({\rm vert}P)^+ (= \bR^2 \setminus S_0)$.
If $x' \in \bR^2, \|x-x'\| < \varepsilon$, then the segment $[x,x']$ does not meet $P \cup ({\rm vert}P)^+$. Let $e = [p_{i-1},p_i] \, (1 \le i \le n)$ be any
edge of $P$. The set $e^+$ is a closed, convex, unbounded and full-dimensional polyhedral subset of $\bR^2$, whose boundary consists of the lower edge $e$ and the side
edges $p^+_{i-1}, p^+_i$. Thus bd$e^+ \subset P \cup ({\rm vert} P)^+$, and therefore the segment $[x,x']$ does not meet the boundary of $e^+$. It follows that
$x' \in e^+$ iff $x \in e^+$, i.e., $R(x)$ meets $e$ iff $R(x')$ meets $e$. This is true for all edges $e$ of $P$.
Therefore $r(x) = r(x')$, hence $f(x) = f(x')$. This shows that the function $f: S_0 \to \{0,1\}$ is locally constant, hence continuous (in $S_0)$.
\textbf{Step II:} Extend $f$ continuously from $S_0$ to $S_0 \cup S_1 = \bR^2 \setminus P$.
Suppose $p \in S_1$. Let $p_i$ be the unique vertex of $P$ that meets
$R(p)$, i.e., $p \in p^+_i$.
Note that $p \not= p_i$, i.e., $p \in \mbox{ relint } p^+_i$. Let $e_1 = [p_{i-1},p_i], e_2 = [p_i,p_{i+1}]$ be the two edges of $P$ incident with
$p_i$. Define $L = p+\bR v$. $L$ is the vertical line through $p$. Denote by $L^-, L^+$ the two closed half-planes of $\bR^2$ bounded by $L$. None of the edges
$e_1,e_2$ is included in $L$, and they may be either in the same half-plane $L^-$ or $L^+$, or in different half-planes. Choose the notation
so that either $(\alpha)$ $e_1 \subset L^-, e_2 \subset L^+$ (Fig. 3) or $(\beta)$ $e_1 \cup e_2 \subset L^+$ (Fig. 4).
\begin{center}
Fig. 3: case $\alpha$ \hspace{1cm} ~~~~~~~~~~~~~~~~~~~~~~Fig. 4: case $\beta$
\end{center}
A glance at Figures 3 and 4 shows that, for a point $x$ in the vicinity of $p$ but not lying on $L$, the parity of $r(x)$ is the same on either
side of $L$. Hence we can extend the definition of $f$ to $p$ by defining $f(p)$ to be this parity. To make this into a formal argument consider the
closed set $\triangle =_{\rm def} P \cup ({\rm vert} P \setminus \{p_i\})^+$. This set includes the boundary of $e^+$, for every edge $e$ of $P$, except for
$e^+_1$ and $e^+_2$. It also includes the boundaries of $e^+_1$ and $e^+_2$, except for $p^+_i \setminus \{p_i\}$, and it does not contain the point $p$. Put
$\varepsilon =_{\rm def} \mbox{ dist}(p,\triangle) > 0$, and define $U =_{\rm def} \{x \in \bR^2: \|x-p\| < \varepsilon\} = {\rm int} B^2(p,\varepsilon)$.
Note that if $x \in U$, then the closed interval $[p,x]$ misses $\triangle$. Now make the following observations.
\begin{enumerate}
\item[(I)] If $e$ is any edge of $P$, other than $e_1$ and $e_2$, then the interval $[p,x]$ does not meet the boundary of $e^+$, and therefore $p$ and $x$
are either both in $e^+$, or both not in $e^+$.
\item[(II)] If, say, $e_1 \subset L^-$ and $x \in {\rm int} L^-$ then, moving along the interval $[p,x]$ from $p$ to $x$, we start at a point $p \in p^+_i \subset
{\rm bd} e^+_1$, move into int$e^+_1$, and do not hit the boundary of $e^+_1$ again. Therefore $x \in {\rm int} e^+_1$. The same holds with $L^-$ replaced by
$L^+$, and/or $e_1$ replaced by $e_2$. It follows that in case $(\alpha)$: if $x \in U \setminus L$, then $x$ belongs to exactly one of the sets $e^+_1,e^+_2$. And it
follows that in case $(\beta)$: if $x \in U \cap {\rm int} L^-$, then $x$ belongs to none of the sets $e^+_1,e^+_2$; if $x \in U \cap L^+$, then $x$ belongs to both of them.
\item[(III)] If $p_j \in {\rm vert} P \setminus \{p_i\}$, then $p^+_j \subset \triangle$, and therefore $x \notin p^+_j$, for every $x \in U$.
\item[(IV)] If $x \in U \setminus L$, then clearly $x \notin p^+_i$. If $x \in U \cap L$, then the interval $[p,x]$ lies on $L$, contains a point $p \in p^+_i \setminus \{p_i\}$
and does not meet $p_i$; therefore $x \in p^+_i \setminus \{p_i\}$ (= relint$p^+_i$). From these observations we infer:
\begin{enumerate}
\item[(A)] $U \setminus L \subset S_0$ and $f$ is constant on $U \setminus L$.
\item[(B)] $U \cap L \subset S_1$.
\end{enumerate}
\end{enumerate}
Now define $f(p)$ to be the constant value that $f$ takes on $U \setminus L$. Clearly, if we apply the same procedure to any point $p' \in U \cap L$, we will end up
with a value $f(p')$ equal to the value $f(p)$ just defined. (Note that any $\varepsilon'$-neighborhood of $p' \,(\varepsilon' > 0)$ contains points of $U \setminus L$.) Thus we
have extended $f$ to a locally constant, hence continuous function $f: \bR^2 \setminus P \to \{0,1\}$.
To complete the proof of statement (E), we define, as indicated after (F) above, the sets ext$P =_{\rm def} f^{-1}(0)$ and int$P =_{\rm def} f^{-1} (1)$. These are clearly
two disjoint open sets in $\bR^2$, whose union is dom$f = \bR^2 \setminus P$. Note that $\bR^2 \setminus {\rm conv}P \subset {\rm ext}P$ and, therefore, int$P \subset {\rm conv} P$.
Thus ext$P$ is unbounded and int$P$ is bounded.
We still have to show that every point of $P$ is a boundary point of both int$P$ and ext$P$ (and therefore int$P \not= \emptyset, {\rm ext} P \not= \emptyset$).
Since the boundaries of int$P$ and of ext$P$ are closed sets, it suffices to show that the common boundary points of int$P$ and ext$P$ are dense in $P$.
For any vertex $p_i \, (1 \le i \le n)$ the intersection of the vertical line $p_i + \bR v$ with an edge $e$ of $P$ is at most a singleton. Thus $e \setminus \cup \{p_i +
\bR v: 1 \le i \le n\}$ is dense in $e$, and $P \setminus \cup \{p_i + \bR v: 1 \le i \le n\}$ is dense in $P$. If $x \in P \setminus \cup \{p_i + \bR v: 1 \le i \le n\}$, then
$x$ belongs to the relative interior of some edge $e$ of $P$. If $\varepsilon > 0$ is sufficiently small, then the points $x + \varepsilon v, x - \varepsilon v$ are both in
$S_0$, the half-line $R(x + \varepsilon v)$ meets $e$, in addition to all edges met by $R(x-\varepsilon v)$. Thus $r(x+ \varepsilon v) = 1 + r (x-\varepsilon v)$, and $f(x+\varepsilon v)
\not= f(x-\varepsilon v)$, i.e., $\{f(x-\varepsilon v), f(x+\varepsilon v)\} = \{0,1\}$. Thus $x$ is a common boundary point of int$P$ and ext$P$. This finishes the proof
of (E).
\section{Proof of (F)}
Put $I_i =_{\rm def} [p_{i-1},p_i], 1 \le i \le n$, the edges of $P$, and for $i = 1,2, \dots, n$ let $u_i$ be a unit vector perpendicular to aff$I_i$. Choose
the orientation of $u_i$ in such a way that for each point $b \in {\rm relint} I_i$ and for all sufficiently small positive value of $\varepsilon, b + \varepsilon u_i \in
{\rm ext}P$ and $b- \varepsilon u_i \in {\rm int}P$. Define $u_{i,i+1} =_{\rm def} u_i + u_{i+1}, \, 1 \le i \le n$ (the indices are taken modulo $n$, i.e., $p_n = p_0,
u_{n+1} = u_1, u_{n,n+1} = u_{n,1} = u_n+u_1$).
\begin{lemma}\label{lem3.1}
If $\varepsilon$ is a sufficiently small positive number, then $p_i + \varepsilon u_{i,i+1} \in {\rm ext}P$, and $p_i - \varepsilon u_{i,i+1} \in
{\rm int}P$ for $1 \le i \le n$.
\end{lemma}
\textbf{Proof:} The edges $I_i,I_{i+1}$ lie in two rays (half-lines) $L_i,L_{i+1}$ bounded by $p_i$, say $L_i = p_i + \bR^+ v_i, L_{i+1} = p_i + \bR^+ v_{i+1}$, where
$v_i, v_{i+1}$ are suitable unit vectors orthogonal to $u_i$, $u_{i+1}$, respectively.
\begin{center}
Fig. 5: (a), (b), (c)
\end{center}
If $\varepsilon$ is a sufficiently small positive number $(0 < \varepsilon < {\rm dist}(p_i,P \setminus {\rm relint} (I_i \cup I_{i+1})))$, then $B^2 (p_i,\varepsilon)
\setminus P = B^2 (p_i,\varepsilon) \setminus (L_i \cup L_{i+1})$. The union $L_i \cup L_{i+1}$ divides $B^2 (p_i, \varepsilon)$ into two open sectors, $B^2 (p_i, \varepsilon)
\cap {\rm int} P$ and $B^2 (p_i, \varepsilon) \cap {\rm ext} P$. If $L_i,L_{i+1}$ are collinear $(v_{i+1} = -v_i)$, then each one of these two sectors is an open half disc.
In this case $u_i = u_{i+1}$ (Fig. 5(a)), $u_{i,i+1} = 2u_i = 2u_{i+1}$, and the lemma holds trivially. If $u_i,u_{i+1}$ are not collinear, then one of the sectors
is larger than a half disc, and the other is smaller.
In both cases we have
\begin{equation}\label{eq3}
\langle u_i, v_{i+1} \rangle = \langle u_{i+1}, v_i \rangle = \sin \alpha\,,
\end{equation}
where $\alpha$ is the central angle of the sector $B^2 (p_i,\varepsilon) \cap {\rm ext}P$ at $p_i \, (0 \le \alpha \le 360^\circ)$.
If $\langle u_i, v_{i+1}\rangle < 0$, then $B^2 (p_i, \varepsilon) \cap {\rm ext}P$ is the larger sector (Fig. 5(b)), and if $\langle u_i,v_{i+1}\rangle > 0$, then
$B^2(p_i,\varepsilon) \cap {\rm int}P$ is the larger sector (Fig. 5(c)). Summing up the equalities
\[
\begin{array}{lll}
u_i & = & \langle u_i, u_{i+1}\rangle u_{i+1} + \langle u_i,v_{i+1}\rangle v_{i+1} \,,\\
u_{i+1} & = & \langle u_{i+1}, u_i\rangle u_i + \langle u_{i+1}, v_i\rangle v_i
\end{array}
\]
and using (\ref{eq3}), we find $(1 - \langle u_i, u_{i+1}\rangle) \, (u_i + u_{i+1}) = \sin \alpha\, (v_i + v_{i+1})$.
If $u_i \not= u_{i+1}$, then $1-\langle u_i,u_{i+1}\rangle > 0$, and
\[
u_{i,i+1} = u_i + u_{i+1} = \frac{\sin \alpha}{1-\langle u_i,u_{i+1}\rangle} \cdot (v_i + v_{i+1})\,.
\]
Thus $u_{i,i+1}$ is a positive [resp., negative] multiple of $v_i + v_{i+1}$ when $\sin \alpha > 0$
[resp., $\sin \alpha < 0$]. In both cases, $u_{i,i+1}$ points towards ext$P$, and $-u_{i,i+1}$ towards int$P$. \hfill \rule{2mm}{2mm}
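To illustrate Lemma~\ref{lem3.1} numerically, one can compute $u_{i,i+1}$ for a polygon whose vertices are listed counter-clockwise (so that the outward unit normal of an edge is its direction vector rotated clockwise by $90^\circ$) and check, with \texttt{raindrop\_parity} from the sketch above, that the probe points $p_i \pm \varepsilon u_{i,i+1}$ are classified as the lemma predicts. A sketch under these assumptions:
\begin{verbatim}
def probe_points(vertices, i, eps=1e-6):
    # Probe points p_i + eps*u_{i,i+1} and p_i - eps*u_{i,i+1} for a
    # counter-clockwise polygon; u_i is the outward unit normal of the
    # edge [p_{i-1}, p_i], obtained by a clockwise 90-degree rotation.
    def outward_normal(a, b):
        dx, dy = b[0] - a[0], b[1] - a[1]
        norm = (dx * dx + dy * dy) ** 0.5
        return (dy / norm, -dx / norm)
    n = len(vertices)
    u_i = outward_normal(vertices[i - 1], vertices[i])
    u_next = outward_normal(vertices[i], vertices[(i + 1) % n])
    u = (u_i[0] + u_next[0], u_i[1] + u_next[1])   # u_{i,i+1}
    p = vertices[i]
    return ((p[0] + eps * u[0], p[1] + eps * u[1]),
            (p[0] - eps * u[0], p[1] - eps * u[1]))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
outer, inner = probe_points(square, 1)        # probes at the vertex (1, 0)
assert raindrop_parity(outer, square) == 0    # p_i + eps*u_{i,i+1} in ext P
assert raindrop_parity(inner, square) == 1    # p_i - eps*u_{i,i+1} in int P
\end{verbatim}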
\begin{lemma}\textbf{{\rm (``Push away from \boldmath$P$''\unboldmath)}}\label{lem3.2}
\begin{enumerate}
\item[(a)] Fix $i, \, 1 \le i \le n$, suppose $b \in {\rm relint} I_i$ and $u$ is a vector satisfying $\langle u,u_i \rangle > 0$. Define
$I^0 =_{\rm def} [b,p_i],I^\varepsilon =_{\rm def} [b+ \varepsilon u, p_i + \varepsilon u_{i,i+1}]$ ($u_i, u_{i+1}$ and $u_{i,i+1} = u_i + u_{i+1}$ denote
the same vectors as in the previous lemma). If $\varepsilon$ is a sufficiently small positive number, then $I^\varepsilon \subset {\rm ext}P$ and $I^{-\varepsilon}
\subset {\rm int}P$. (The required smallness of $\varepsilon$ may depend on the choice of the point $b$ and of the vector $u$.)
\item[(b)] Fix $i, 1 \le i \le n$, and define
$J^0 =_{\rm def} [p_i,p_{i+1}] = I_{i+1}, J^\varepsilon =_{\rm def} [p_i + \varepsilon u_{i,i+1}, p_{i+1} + \varepsilon u_{i+1,i+2}]$.
If $\varepsilon$ is a sufficiently small positive number, then $J^\varepsilon \subset {\rm ext}P$ and $J^{-\varepsilon} \subset {\rm int}P$.
\end{enumerate}
\end{lemma}
\textbf{Proof:}
\begin{enumerate}
\item[(a)] First note that $I^0$ does not meet any edge of $P$ except $I_i$ and $I_{i+1}$. The same holds for $I^\varepsilon$, provided
\[
|\varepsilon| < \min \left(\frac{1}{2}, \frac{1}{\|u\|}\right) \cdot {\rm dist} \left(I^0, P \setminus({\rm relint} (I_i \cup I_{i+1}))\right)\,.
\]
By Lemma \ref{lem3.1}, $p_i + \varepsilon u_{i,i+1} \in {\rm ext}P$ and $p_i - \varepsilon u_{i,i+1} \in {\rm int}P$, provided $\varepsilon$ is
positive and sufficiently small. To complete the proof, it suffices to show that $I^\varepsilon \cap I_i = \emptyset$ and $I^\varepsilon \cap I_{i+1} = \emptyset$
(for sufficiently small $|\varepsilon|, \, \varepsilon \not= 0$).
As for $I_i : \langle u_i, u \rangle > 0$ (given) and $\langle u_i, u_{i,i+1}\rangle = 1 + \langle u_i, u_{i+1}\rangle > 0$. Therefore, for any $\varepsilon \not= 0$
both endpoints of $I^\varepsilon$ lie (strictly) on the same side of the line aff$I_i$, hence $I_i \cap I^\varepsilon = \emptyset$.
As for
$I_{i+1}$: If $I_{i+1}$ and $I_i$ lie on the same line $(u_i = u_{i+1})$, then the previous argument shows that $I_{i+1} \cap I^\varepsilon = \emptyset$ for
all $\varepsilon \not= 0$ as well. If $u_i \not= u_{i+1}$, consider first the case $\langle u_i, v_{i+1}\rangle < 0$. (Fig. 5(b)). For $\varepsilon > 0, I^\varepsilon$
lies in the open half-plane $\{x \in \bR^2 : \langle u_i, x \rangle > \langle u_i,p_i\rangle\}$, whereas $I_{i+1}$ lies in the closed half-plane $\{x \in \bR^2: \langle
u_i, x \rangle \le \langle u_i, p_i \rangle \}$. Therefore $I^\varepsilon \cap I_{i+1} = \emptyset$. For $\varepsilon < 0$,
\[
\langle u_{i+1}, p_i + \varepsilon u_{i,i+1}\rangle =
\langle u_{i+1}, p_i \rangle + \varepsilon (1 + \langle u_i, u_{i+1}\rangle) < \langle u_{i+1},p_i\rangle\,.
\]
On the other hand, $\langle u_{i+1},b\rangle < \langle u_{i+1},p_i\rangle$ (for any point $b \in {\rm relint} I_i$, since $\langle u_{i+1},v_i \rangle < 0$), and
therefore $\langle u_{i+1},b + \varepsilon u\rangle < \langle u_{i+1},p_i\rangle$ for sufficiently small $|\varepsilon|, \varepsilon \not= 0$. Thus both endpoints of
$I^\varepsilon$ lie on the same open side of the line aff$I_{i+1}$, hence $I^\varepsilon \cap I_{i+1} = \emptyset$.
In the case $\langle u_i, v_{i+1}\rangle > 0$ (Fig. 5(c) above), just repeat the previous argument with the roles of $\varepsilon > 0$ and $\varepsilon < 0$
interchanged.
\item[(b)] The proof is similar to that of (a). First, note that $J^0$ does not meet any edge of $P$ except $I_i,I_{i+1}$ and $I_{i+2}$. The same holds for $J^\varepsilon$, provided
\[
|\varepsilon| < \min \left(\frac{1}{2}, \frac{1}{\|u\|}\right) \cdot {\rm dist } \left(J^0, P \setminus {\rm relint} (I_i \cup I_{i+1} \cup I_{i+2})\right)\,.
\]
By Lemma \ref{lem3.1}, $p_i + \varepsilon u_{i,i+1}, p_{i+1} + \varepsilon u_{i+1,i+2} \in {\rm ext}P$ and $p_i - \varepsilon u_{i,i+1}, p_{i+1} - \varepsilon u_{i+1,i+2} \in
{\rm int} P$, provided $\varepsilon$ is positive and sufficiently small. To complete the proof, it suffices to show that $J^\varepsilon \cap I_i = \emptyset,
J^\varepsilon \cap I_{i+1} = \emptyset$ and $J^\varepsilon \cap I_{i+2} = \emptyset$ (for sufficiently small $|\varepsilon|, \varepsilon \not= 0$).
As for $I_{i+1}\!: \langle u_{i+1},u_{i,i+1}\rangle = 1 + \langle u_{i+1},u_i\rangle > 0$ and $\langle u_{i+1},u_{i+1,i+2}\rangle = 1 +
\langle u_{i+1},u_{i+2} \rangle > 0$. Therefore, for any $\varepsilon > 0$, both endpoints of $J^\varepsilon$ lie on the same open side of the line aff$I_{i+1}$,
hence $I_{i+1} \cap J^\varepsilon = \emptyset$.
As for $I_i$: If $I_{i+1}$ and $I_i$ lie on the same line $(u_i = u_{i+1})$, then the previous argument shows that
$I_i \cap J^\varepsilon = \emptyset$ for all $\varepsilon \not= 0$ as well. If $u_i \not= u_{i+1}$, consider first the case $\langle u_i, v_{i+1}\rangle < 0$ (Fig. 5(b)).
For $\varepsilon > 0, J^\varepsilon$ lies in the open half-plane $\{x \in \bR^2: \langle u_{i+1},x\rangle > \langle u_{i+1},p_i\rangle\}$, whereas $I_i$ lies in the closed
half-plane $\{x \in \bR^2: \langle u_{i+1},x\rangle \le \langle u_{i+1},p_i\rangle\}$.
Therefore, $J^\varepsilon \cap I_i = \emptyset$.
For $\varepsilon < 0$, we have $\langle u_i,p_i + \varepsilon u_{i,i+1}\rangle = \langle u_i, p_i \rangle + \varepsilon (1 + \langle u_i,u_{i+1}\rangle) < \langle u_i,p_i\rangle$.
On the other hand, $\langle u_i, p_{i+1}\rangle < \langle u_i,p_i\rangle$ (since $\langle u_i,v_{i+1}\rangle < 0$), and therefore $\langle u_i, p_{i+1} + \varepsilon u_{i+1,i+2} \rangle
< \langle u_i,p_i\rangle$ for sufficiently small $|\varepsilon|$. Thus both endpoints of $J^\varepsilon$ lie on the same open side of the line aff$I_i$, hence
$J^\varepsilon \cap I_i = \emptyset$.
In the case $\langle u_i, v_{i+1}\rangle > 0$ (Fig. 5(c)), just repeat the previous argument with the roles of $\varepsilon > 0$ and $\varepsilon < 0$ interchanged.
As for $I_{i+2}$: Since the roles of $I_i$ and $I_{i+2}$ are interchangeable, the statement proved above for $I_i$ applies to $I_{i+2}$ as well. \hfill \rule{2mm}{2mm}
\end{enumerate}
\begin{definition}\label{def3.1}
Let $p$ be a point in $\bR^2 \setminus P$ (= ${\rm ext}P \cup {\rm int}P$), and $I$ be an edge of $P$. We say that $p$ \emph{sees} $I$ if, for some point $a \in {\rm relint}\ I, [p,a]\cap
P = \{a\}$.
\end{definition}
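The visibility predicate of this definition is easy to test numerically. The following Python sketch is our own illustration (it is not used in the proofs and assumes the polygon is in general position); it samples finitely many candidate points $a$ in the relative interior of the edge:
\begin{verbatim}
# Numerical illustration of Definition 3.1 ("p sees the edge I"); a sketch
# under a general-position assumption, not part of the proofs.

def orient(p, q, r):
    # Sign of the cross product (q - p) x (r - p).
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

def segments_cross(a, b, c, d):
    # True if the segments [a,b] and [c,d] cross transversally.
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def sees_edge(P, p, i, samples=101):
    # P: vertex list of a simple closed polygon; edge i is [P[i], P[(i+1)%n]].
    # Sample points a in relint of edge i and test whether [p, a] crosses
    # some other edge of P.
    n = len(P)
    v, w = P[i], P[(i + 1) % n]
    for k in range(1, samples):
        t = k / samples
        a = (v[0] + t * (w[0] - v[0]), v[1] + t * (w[1] - v[1]))
        if not any(segments_cross(p, a, P[j], P[(j + 1) % n])
                   for j in range(n) if j != i):
            return True
    return False

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(sees_edge(square, (0.5, -1.0), 0))   # True: the bottom edge is visible
print(sees_edge(square, (0.5, -1.0), 2))   # False: the top edge is blocked
\end{verbatim}
In accordance with the lemma below, scanning all edges from any point of $\bR^2 \setminus P$ should report at least one visible edge.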
\begin{lemma}\label{lem3.3}
Assume $p \in \bR^2 \setminus P$. Then $p$ sees at least one edge of $P$.
\end{lemma}
\textbf{Proof:} Assume, w.l.o.g., that $p \in {\rm ext}P$. Let $q$ be a point in int$P$. Let $U$ be a neighborhood of $q$ that lies entirely in int$P$. Choose a point $q' \in U$
such that the line aff$(p,q')$ does not meet any vertex of $P$. (This condition can be met by avoiding a finite number of lines through $p$.) Then the line segment $[p,q']$ must meet
$P$. Let $a$ be the first point of $P$ on $[p,q']$ (starting from $p$). Then $a$ is a relative interior point of some edge $I$ of $P$, and $[p,a]\cap P = \{a\}$. \rule{2mm}{2mm}
\begin{definition}{(poldiam(\boldmath$\cdot$\unboldmath))}:\label{def3.2}
For a set $S \subset \bR^2$ and points $a,b \in S$, denote by $\pi_S (a,b)$ the smallest number of edges of a polygonal path that connects $a$ to $b$ within $S$ ($\pi_S(a,b) =_{\rm def}
\infty$ if no such polygonal path exists). If $S$ is polygonally connected, then $\pi_S (\cdot,\cdot)$ is an integer valued metric on $S$. The \emph{polygonal diameter} of $S$ is defined
as poldiam$(S)=_{\rm def}$ ${\rm sup}\{\pi_S(a,b) : a,b \in S\}$.
\end{definition}
To prove (F) in Section 1 above, it suffices to show that poldiam(int$P$)$< \infty$ and poldiam(ext$P$)$< \infty$.
The following theorem does it.
\begin{theorem}{\rm \textbf{(straightforward upper bound on poldiam(int\boldmath$P$\unboldmath) and poldiam(ext\boldmath$P$\unboldmath))}}\label{theo3.1}
If $P$ is a simple closed $n$-gon $(n \ge 3)$ in $\bR^2$, then we have that {\rm poldiam(int$P$)} and {\rm poldiam(ext$P$)} are both $\le \lfloor\frac{n}{2}\rfloor + 3$.
\end{theorem}
\textbf{Proof:} Assume that $a,b$ are two points in the same component (int$P$ or ext$P$) of $\bR^2 \setminus P$. By Lemma \ref{lem3.3}, $a\,[b]$ sees at least one edge
$I'\,[I'']$ of $P$ via $\bR^2 \setminus P$ (possibly $I' = I''$). The set $P \setminus ({\rm relint} (I' \cup I''))$ consists of at most two simple polygonal paths
$P',P''$, the shorter one of which, say $P'$, concatenated by $I',I''$ in both of its endpoints is of the form $\langle J_0,J_1,\dots,J_m,J_{m+1}\rangle$, where $m \le
\lfloor\frac{n-2}{2}\rfloor = \lfloor\frac{n}{2}\rfloor-1, J_0, J_1\dots,J_{m+1}$ are edges of $P \, (\{J_0,J_{m+1}\} = \{I',I''\})$, $J_{i-1}$ and $J_i$ share a vertex $q_i$
for $i=1, \, 2,\dots,m+1$, $a$ sees via $\bR^2 \setminus P$ a point $a' \in {\rm relint} J_0$, and $b$ sees via $\bR^2 \setminus P$ a point $b' \in {\rm relint} J_{m+1}$.
Thus $\langle a,a',q_1,q_2,\dots,q_m,q_{m+1},b',b\rangle$ is a polygonal path of $m + 4 \le \lfloor\frac{n}{2}\rfloor - 1 + 4 = \lfloor\frac{n}{2}\rfloor + 3$ edges that connects
$a$ to $b$ and runs along $P$ except for $[a,a']$ and $[b',b]$. By Lemma \ref{lem3.2}, this path can be pushed away from $P$ into $\bR^2 \setminus P$, thus producing a polygonal
path of $m + 4 \le \lfloor\frac{n}{2}\rfloor +3$ edges that connects $a$ to $b$ via $\bR^2 \setminus P$. \hfill \rule{2mm}{2mm}
\section{Tight upper bounds on poldiam(int\boldmath$P$\unboldmath) and on poldiam(ext\boldmath$P$)}
Theorem \ref{theo3.1} gives an upper bound on poldiam(int$P$)~ [poldiam(ext$P$)] which is somewhat ``naive'', but sufficient to prove (F) in Section 1 above. Here we ``squeeze''
the proof of Theorem \ref{theo3.1} to obtain a tight result.
\begin{theorem}{\rm (Main Theorem)}\label{theo4.1}
Let $P$ be a simple closed $n$-gon in $\bR^2, n \ge 3$. Then
\begin{enumerate}
\item[(a)] the polygonal diameter of {\rm int}$P$ is $\le \lfloor\frac{n}{2}\rfloor$, and the polygonal diameter of {\rm ext}$P$ is $\le \lceil\frac{n}{2}\rceil$;
\item[(b)] for every $n \ge 3$, there is an $n$-gon $P_n$ for which \emph{both} bounds are attained.
\end{enumerate}
\end{theorem}
\textbf{Proof of Theorem 4.1(a):} First note that if $P$ is a convex polygon, then poldiam(int$P)= 1 \le \lfloor\frac{n}{2}\rfloor$, and it can be easily checked that
poldiam(ext$P)=2 \le \lceil\frac{n}{2}\rceil$. (If we consider the closures, however, we find that poldiam(cl int$P)=1$, whereas poldiam(cl ext$P)= 3$ if $P$ has
parallel edges, and equals $2$ otherwise.) This settles the case $n=3$ ($P_3$ is just a triangle). If $n=4$ and $P$ is not convex, then ext$P$ is the union of three convex sets
(two open half-planes and a wedge), any two of which have a point in common, and therefore poldiam (ext$P) = 2 = \lceil\frac{n}{2}\rceil$. This settles the case $n=4$ for ext$P$.
In view of the proof of Theorem \ref{theo3.1} and the foregoing discussion, we can establish the bounds on poldiam(int$P$) and poldiam(ext$P$) as claimed in Theorem \ref{theo4.1}(a) by showing the following:
\begin{theorem}\label{theo4.2}
Let $P$ be a closed simple $n$-gon in $\bR^2$.
\begin{enumerate}
\item[(i)]
If $n \ge 4$ and $a,b \in {\rm int}P$, then there are two vertices $a',b'$ of $P$ such that $a$ sees $a'$ via {\rm int}$P$, $b$ sees $b'$ via {\rm int}$P$, and $a',b'$ are at
most $\lfloor\frac{n}{2}\rfloor-2$ edges apart on $P$. (Recall that ``$a$ sees $a'$ via {\rm int}$P$'' means just: $]a,a'[\subset {\rm int}P$.)
\item[(ii)] If $n \ge 5$ and $a,b \in {\rm ext}P$, then there are two vertices $a',b'$ of $P$ such that $a$ sees $a'$ via {\rm ext}$P$, $b$ sees $b'$ via {\rm ext}$P$, and $a',b'$ are at
most $\lceil\frac{n}{2}\rceil-2$ edges apart on $P$,\\
\emph{or:} $\pi_{{\rm ext P}}(a,b) \le 3 \left(\le \lceil\frac{n}{2}\rceil\right.$ for $\left.n \ge 5\right)$.
\end{enumerate}
\end{theorem}
\begin{remark}\label{rem1}
The condition $n \ge 5$ in the first part of Theorem \ref{theo4.2} (ii) cannot be relaxed to $n \ge 4$: Let $P_4 = \langle p_0,p_1,p_2,p_3\rangle$ be a convex
quadrilateral, and let $a,b \in {\rm ext}P_4$,
$a$ close to $[p_0,p_1]$ and $b$ close to $[p_2,p_3]$. Then $a$ and $b$ do not see a common vertex of $P_4$.
\end{remark}
\begin{lemma}\label{lem4.1}
Let $P$ be a simple closed polygon in $\bR^2$. Let $[b',p]$ be an edge of $P$, $a,b$ two points such that $a \in \bR^2 \setminus P$, $b \in ]b',p]$ $($=$[b',p]
\setminus \{b'\})$ and $a$ sees $b$ $($via $\bR^2 \setminus P$$)$. Then $a$ sees $($via $\bR^2 \setminus P$$)$ a vertex of $P$ included in $[a,b',b] \setminus [a,b]$.
\end{lemma}
\textbf{Proof:} If $a$ sees $b'$ then we are done. Otherwise the polygon $P \setminus ]b',p[$ meets the set $[a,b,b'] \setminus [b',b]$. For $0 \le \lambda \le 1$, define
$b(\lambda) =_{\rm def} (1-\lambda) b + \lambda b'$, and let $\lambda_0$ be the smallest value of $\lambda$, $0 \le \lambda \le 1$, such that
$[a,b (\lambda)] \cap (P \setminus ]b',p[) \not= \emptyset$
$(0 < \lambda_0 \le 1; \lambda_0 = 1$ is possible). Let $c'$ be the point of $[a,b(\lambda_0) ] \cap P$ nearest to $a$. Then $c'$ is a vertex of $P$, $c' \in [a,b,b'] \setminus
[a,b]$ and $a$ sees $c'$. \hfill \rule{2mm}{2mm}
\begin{corollary}\label{cor4.1}
Let $P$ be a simple closed $n$-gon, $n \ge 3$, in $\bR^2$. Every point $a \in \bR^2 \setminus P$ sees via $\bR^2 \setminus P$ at least two vertices of $P$.
\end{corollary}
\textbf{Proof:} Let $R$ be a ray emanating from $a$ that meets $P$. By a slight rotation of $R$ around $a$ we may assume that $R$ does not meet any vertex of $P$, but still
$R \cap P \not= \emptyset$. Let $b$ be the first point of $R$ that belongs to $P$ (starting from $a$). By assumption $b \in [b',b''[$ for some edge $[b',b'']$ of $P$. By Lemma \ref{lem4.1}, $a$ sees
via $\bR^2 \setminus P$ a vertex $c'$ $[c'']$ of $P$ included in $[a,b,b'] \setminus [a,b]$ [included in $[a,b,b''] \setminus [a,b]$], and clearly $c' \not= c''$.
\hfill \rule{2mm}{2mm}
\begin{lemma}\label{lem4.2}
Let $P$ be a simple closed $n$-gon, $n \ge 4$, in $\bR^2$, and let $a \in \bR^2 \setminus P$. If every ray emanating from $a$ meets $P$, then $a$ sees via $\bR^2 \setminus P$ two
\emph{non-adjacent} vertices of $P$.
\end{lemma}
\begin{remark}\label{rem2}
The condition that every ray emanating from $a$ meets $P$ is met by every point $a \in {\rm int}P$.
\end{remark}
\textbf{Proof:} By Corollary \ref{cor4.1}, $a$ sees a vertex $c$ of $P$ via $\bR^2 \setminus P$. Consider the ray $R =_{\rm def} \{a + \lambda (a-c) : \lambda \ge 0\}$ that
emanates from $a$ in a direction \emph{opposite} to $c$. By our assumption, $R$ meets $P$. Let $b$ be the first point of $R$ that belongs to $P$. If $b$ is a vertex
of $P$, then $a$ sees the two vertices $b,c$ via
$\bR^2 \setminus P$. These vertices are \emph{not adjacent}, since $[c,b] \cap P = \{c,b\}$. Otherwise, if $b$ is not a vertex of $P$, then $b$ is a relative interior point
of an edge $[b',b'']$ of $P$ $(R \cap ]b',b''[ = \{b\}$). By Lemma \ref{lem4.1}, $a$ sees via $\bR^2 \setminus P$ a vertex $c'$ $[c'']$ of $P$ included in $[a,b,b'] \setminus [a,b]$
[included in $[a,b,b''] \setminus [a,b]$]. Clearly, $c' \not= c''$ and $c',c''$ are non-adjacent in $P$ unless $c' = b'$ and $c'' = b''$. In this
case $a$ sees via $\bR^2 \setminus P$ both couples of vertices $\{c,b'\}$ and $\{c,b''\}$. At least one of these couples is \emph{non-adjacent} in $P$, otherwise $P$ would
be a triangle, contrary to the assumption that $n \ge 4$. \hfill \rule{2mm}{2mm}
\textbf{Proof of Theorem 4.2:}\label{theo4.2}
\begin{enumerate}
\item[(i)] Suppose $P$ is a simple closed $n$-gon, $n \ge 4$, in $\bR^2$. Define $S =_{\rm def} {\rm int}P$, and assume $a,b \in S$. If $n = 4,5$, then cl$S$ (=$P \cup {\rm int}P$) is
starshaped with respect to a vertex of $P$. (If $n = 5$, then $S$ can be triangulated by two interior diagonals with a common vertex.) In this case $a$ and $b$ see via $S$ a
common vertex $a'$ of $P$. Define $b' =_{\rm def} a'$; we find that $a',b'$ are zero edges apart on $P$. But $0 \le 0 = \lfloor\frac{n}{2}\rfloor-2$ for $n=4,5$.
Assume, therefore, that $n \ge 6$, and that $a$ and $b$ do not see a common vertex of $P$ via $S$. By Lemma \ref{lem4.2}, $a$ sees via $S$ two non-adjacent vertices $a',a''$ of
$P$. These vertices divide $P$ into two paths $P_1,P_2$, each having $\le n-2$ edges. Applying Lemma \ref{lem4.2} again, we find that $b$ sees via $S$ two non-adjacent
vertices $b',b''$ of $P$ and $\{a',a''\} \cap \{b',b''\} = \emptyset$.
If both $b'$ and $b''$ are interior vertices of the same path, say $P_1$, then they divide $P_1$ into three parts. The middle part has at least two edges, and the two
extreme parts together have at most $n-4$ edges. The shorter extreme part, with endpoints (say) $a',b'$, has at most $\lfloor\frac{n-4}{2}\rfloor = \lfloor\frac{n}{2}\rfloor-2$
edges.
If, however, $b'$ is an interior vertex of $P_1$ and $b''$ is an interior vertex of $P_2$, then they divide $P_1$ and $P_2$ into four polygonal paths, each
having $b'$ or $b''$ as an endpoint. The shortest of these paths has at most $\lfloor\frac{n}{4}\rfloor$ edges. But $\lfloor\frac{n}{4}\rfloor \le \lfloor\frac{n}{2}\rfloor-2$
for $n \ge 6$.
\item[(ii)] Assume $n \ge 5$, define $T = {\rm ext}P$, and let $a,b \in T$. Then either
\begin{enumerate}
\item[(A1)] every ray emanating from $a$ meets $P$, \emph{or}
\item[(A2)] some ray emanating from $a$ misses $P$.
\end{enumerate}
Similarly, either
\begin{enumerate}
\item[(B1)] every ray emanating from $b$ meets $P$, \emph{or}
\item[(B2)] some ray emanating from $b$ misses $P$.
\end{enumerate}
If (A1) and (B1) hold, then both $a$ and $b$ see via $T$ two non-adjacent vertices of $P$ (Lemma \ref{lem4.2}).
If $n \ge 6$, this implies that $a[b]$ sees a vertex $a'$ $[b']$ of $P$ such that $a',b'$ are at most $\lfloor\frac{n-4}{2}\rfloor = \lfloor\frac{n}{2}\rfloor-2 \le
\lceil\frac{n}{2}\rceil-2$ or $\lfloor\frac{n}{4}\rfloor \le \lfloor\frac{n}{2}\rfloor-2 \le \lceil\frac{n}{2}\rceil-2$ edges apart on $P$, as in the proof of part (i) above.
If $n=5$, then $a$ sees via $T$ a vertex $a'$ of $P$, and $b$ sees via $T$ a vertex $b'$ of $P$, where $a'$ and $b'$ are either equal or adjacent, i.e., $a',b'$ are at most one
edge apart on $P$. But for $n=5$ one has $1 \le \lceil\frac{n}{2}\rceil-2$.
If (A2) and (B2) hold, then, due to the compactness of $P$, we can find rays $R_a = \{a + \lambda u : \lambda \ge 0\}$ and $R_b = \{b + \lambda v : \lambda \ge 0\}$ that
miss $P$, where the direction vectors $u$ and $v$ are \emph{linearly independent}. When $\lambda$ is sufficiently large, the segment $[a+\lambda u, b + \lambda v]$ misses $P$.
Therefore $\pi_T (a,b) \le 3 \left(\le \lceil\frac{n}{2}\rceil\right.$ for $\left.n \ge 5\right)$ if $R_a \cap R_b = \emptyset$, and $\pi_T(a,b) = 2 < 3 \left(\le\lceil\frac{n}{2}\rceil \mbox{ for } n \ge 5 \right)$ if $R_a \cap R_b \not= \emptyset$.
If (A1) and (B2) hold, then $a$ sees via $T$ two non-adjacent vertices $a',a''$ of $P$, which divide $P$ into two paths $P_1,P_2$ (with disjoint relative interiors),
each having at most $n-2$ edges. The point $b$,
however, sees two distinct vertices $b',b''$ of $P$, which may be adjacent (Corollary \ref{cor4.1}).
If $\{a',a''\} \cap \{b',b''\} \not= \emptyset$, then again $\pi_T(a,b) \le 2 < 3 \left(\le\lceil\frac{n}{2}\rceil\right.$ for $\left.n \ge 5\right)$. If $\{a',a''\}
\cap \{b',b''\} = \emptyset$, then $b'$ and $b''$ are interior vertices of $P_1$ or $P_2$, or both. If $b'$ and $b''$ belong to different paths, then (as in the proof of part
(i) above) they divide $P_1$ and $P_2$ into four polygonal paths, each having $b'$ or $b''$ as an endpoint. The shortest one of these paths has at most $\lfloor\frac{n}{4}\rfloor$ edges. But $\lfloor\frac{n}{4}\rfloor \le
\lceil\frac{n}{2}\rceil-2$ for $n \ge 5$. If both $b'$ and $b''$ are interior vertices of the same path, say $P_1$, then (as in the proof of part (i) above) they divide $P_1$
into three parts. The two extreme parts together have at most $n-2-1=n-3$ edges. The shortest extreme part with endpoints (say) $a',b'$ has at most $\lfloor\frac{n-3}{2}\rfloor$
edges. But
$\lfloor\frac{n-3}{2}\rfloor = \lfloor\frac{n-1}{2}\rfloor-1 = \lceil\frac{n}{2}\rceil-2$ for all $n \in \bN $.
The same applies when (A2) and (B1) hold. This finishes the proof of Theorem 4.2. \hfill \rule{2mm}{2mm}
\end{enumerate}
With this, the proof of Theorem 4.1(a) is also finished. \hfill \rule{2mm}{2mm}
\textbf{Proof of Theorem 4.1(b):}
We split our examples into two cases, namely even $n$ and odd $n$, $n \ge 3$.
\begin{example}\label{ex4.1} \textbf{\boldmath$n=2m$\unboldmath~ (even), \boldmath$m \ge 2$\unboldmath}.
Figure 6 shows the example for the case $m = 3\, (n=6)$.
\begin{center}
Fig. 6: $m = 3 \, (n=6)$
\end{center}
Here we have
$\pi_{{\rm int} P} (a,b) = m \,(=3) = \lfloor\frac{n}{2}\rfloor$ and
$\pi_{{\rm ext} P} (c,d) = m \,(=3) = \lceil\frac{n}{2}\rceil $.
One can extend the figure inward beyond vertex $\# 4$.
\end{example}
\begin{example}\textbf{\boldmath$n=2m+1$\unboldmath~ (odd), \boldmath$m \ge 1$\unboldmath.}\label{ex4.2}
Figure 7 shows the example for the case $m = 3$ $(n=7)$
\begin{center}
Fig. 7: $m = 3 \, (n=7)$
\end{center}
We have
$\pi_{{\rm int} P} (a,b) = m\, (=3) = \lfloor\frac{n}{2}\rfloor$ and
$\pi_{{\rm ext} P} (c,d) = m+1\, (=4) = \lceil\frac{n}{2}\rceil$.
Again, one can extend the figure inward beyond vertex $\# 4$.
\end{example}
\section{Introduction}
Currently most laboratory experiments are described to a very good
precision by the Standard Model of particle interactions. However,
recent developments show that effects beyond the Standard Model surely
exist. The anomaly in atmospheric neutrinos is now explained by
$\nu_\mu \rightarrow \nu_\tau$
oscillation~\cite{Ashie:2005ik,Ashie:2004mr}, while the solar neutrino
puzzle is solved by the oscillation $\nu_e \rightarrow \nu_{\mu,
\tau}$~\cite{Ahmed:2003kj,Araki:2004mb} incorporating the MSW LMA
solution
\cite{Wolfenstein:1977ue,Mikheev:1986wj,Mikheev:1986if,Mikheev:1986gs}.
Current data are consistent with flavor oscillations between three
active neutrinos\footnote{We do not include the LSND
anomaly~\cite{Aguilar:2001ty} in present analysis.} with parameters
given in table~\ref{tab:noscdat}. The definition of the mixing angles is
the usual one:
\begin{equation}\label{e:u3}
\left(\begin{matrix}
\nu_e \\ \nu_{\mu} \\ \nu_{\tau}
\end{matrix}\right)
=
\left(\begin{matrix}
c_{12}c_{13} & s_{12}c_{13} & s_{13}
\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}\mathrm{e}^{i\delta} &
c_{12}c_{23}-s_{12}s_{23}s_{13}\mathrm{e}^{i\delta} &
s_{23}c_{13} \mathrm{e}^{i\delta}
\\
s_{12}s_{23}-c_{12}c_{23}s_{13}\mathrm{e}^{i\delta} &
-c_{12}s_{23}-s_{12}c_{23}s_{13}\mathrm{e}^{i\delta} &
c_{23}c_{13} \mathrm{e}^{i\delta}
\end{matrix}\right)
\times \left(\begin{matrix}
\mathrm{e}^{i\alpha_1/2}~\nu_1 \\ \mathrm{e}^{i\alpha_2/2}~\nu_2 \\ \nu_3
\end{matrix}\right)
\;,
\end{equation}
where $s_{ij}\equiv\sin\theta_{ij}$, $c_{ij}\equiv\cos\theta_{ij}$,
$\delta$ is the usual CP-violating phase and $\alpha_1$, $\alpha_2$
are Majorana phases. The three neutrino masses $m_i$ should be added
to the parameter set that describes the matrix (\ref{e:u3}),
representing therefore nine unknown parameters altogether.
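As a computational aside, the matrix (\ref{e:u3}) can be assembled directly. The following numpy sketch is our own illustration (the function name is ours, and the sample angles are taken from the central values and bounds of table~\ref{tab:noscdat}):
\begin{verbatim}
import numpy as np

def mixing_matrix(th12, th13, th23, delta=0.0, a1=0.0, a2=0.0):
    # Literal transcription of eq. (1); angles and phases in radians.
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    e = np.exp(1j * delta)
    U = np.array([
        [c12*c13,                  s12*c13,                  s13],
        [-s12*c23 - c12*s23*s13*e, c12*c23 - s12*s23*s13*e,  s23*c13*e],
        [s12*s23 - c12*c23*s13*e,  -c12*s23 - s12*c23*s13*e, c23*c13*e]])
    # The Majorana phases multiply the nu_1 and nu_2 columns.
    return U @ np.diag([np.exp(1j*a1/2), np.exp(1j*a2/2), 1.0])

th12 = np.arctan(np.sqrt(0.40))      # tan^2(theta_12) = 0.40
th23 = np.pi / 4                     # sin^2(2 theta_23) maximal
th13 = np.arcsin(np.sqrt(0.016))     # 1 sigma upper bound on sin^2(theta_13)
U = mixing_matrix(th12, th13, th23)
print(np.round(np.abs(U), 3))
print(np.round(np.abs(U @ U.conj().T), 3))   # unitarity check
\end{verbatim}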
\begin{table}
\caption{Neutrino oscillation parameters (2004 status)}
\label{tab:noscdat}
\begin{center}
\begin{tabular}{ccl}
Parameter & Value $\pm1\sigma$ & Comment \\
\hline
$\Delta m_{21}^2$ & $7.9_{-0.5}^{+0.6}\times10^{-5}\mbox{ \upshape\textrm{eV}}^2$
& Solar $\nu$ \cite{Araki:2004mb,Ahmed:2003kj}\\
\hline
$\tan^2\theta_{12}$ & $0.40_{-0.07}^{+0.10}$
& For $\theta_{13}=0$ \cite{Araki:2004mb,Ahmed:2003kj} \\
\hline
$|\Delta m_{32}^2|$ & $2.0_{-0.4}^{+0.6}\times10^{-3}\mbox{ \upshape\textrm{eV}}^2$
& Atmospheric $\nu$ \cite{Ashie:2005ik} \\
\hline
$\sin^2 2\theta_{23}$ & $>0.95$
& For $\theta_{13}=0$ \cite{Ashie:2005ik}\\
\hline
$\sin^2 \theta_{13}$ & $<0.016$
& \cite{Bahcall:2004ut} \\
\hline
\end{tabular}
\end{center}
\end{table}
Another strong indication that the Standard Model is not complete
comes from cosmology. Recently, various cosmological observations
have revealed that the universe is almost spatially flat and mainly
composed of dark energy ($\Omega_{\Lambda}=0.73 \pm 0.04$), dark
matter ($\Omega_\mathrm{dm} = 0.22 \pm 0.04$) and baryons ($\Omega_b =
0.044 \pm 0.004$)~\cite{Eidelman:2004wy}.
A promising way of extending the Minimal Standard Model leading to
explanation of these facts was proposed in~\cite{Asaka:2005an}. The
idea is to add 3 right-handed neutrinos to the model with the most
general gauge-invariant and renormalizable Lagrangian. One then
requires that active neutrinos satisfy the known oscillation data, and
the (Warm) Dark
Matter~\cite{Peebles:1982ib,Olive:1981ak,Dodelson:1993je,Shi:1998km,%
Dolgov:2000ew,Abazajian:2001nj} %
is given by the right-handed (sterile) neutrinos (one could also try
to add only 2 right-handed neutrinos, which is enough to explain the
oscillation data, but it turns out to be inconsistent with the sterile
neutrino being Dark Matter). This surprisingly leads to a stringent
constraint on the active neutrino masses---the lightest neutrino
should have a mass less than about $10^{-5}\mbox{ \upshape\textrm{eV}}$.
Baryon number asymmetry of the Universe can also be explained in
$\nu$MSM, see Ref.~\cite{Asaka:2005pn}. More constraints on the
parameters of the sterile neutrinos appear from that consideration,
but no additional restrictions are introduced on the active neutrino
parameters relevant for the current discussion.
We are going to analyze the effective Majorana mass for neutrinoless
double beta decay emerging in this model. Section~\ref{sec:nuMSM}
reviews the main points of the $\nu$MSM relevant for our discussion, and
section~\ref{sec:0nubb} provides the estimate of the effective
Majorana mass for neutrinoless double $\beta$ decay in the model.
\section{The $\nu$MSM Model}
\label{sec:nuMSM}
The Lagrangian of the $\nu$MSM, introduced in~\cite{Asaka:2005an}, adds 3
right-handed neutrinos to the Standard Model, which are SU(2)$\times$U(1)
singlets and have the most general gauge-invariant and renormalizable
interactions:
\begin{eqnarray*}
\delta {\cal L}
= \overline{N_I} i \partial_\mu \gamma^\mu N_I
- f^{\nu}_{I\alpha} \, \Phi \overline{N_I} L_\alpha
- \frac{M_I}{2} \; \overline{N_I^c} N_I + h.c. \,,
\end{eqnarray*}
where $\Phi$ and $L_\alpha$ ($\alpha=e,\mu,\tau$) are the Higgs and
lepton doublets, respectively, and both Dirac ($M^D = f^\nu \langle
\Phi \rangle$) and Majorana ($M_I$) masses for neutrinos are
introduced. We have taken a basis in which mass matrices of charged
leptons and right-handed neutrinos are real and diagonal.
In~\cite{Asaka:2005an} this model was called ``the \emph{$\nu$ Minimal
Standard Model} (the $\nu$MSM)''.
Let us first discuss neutrino masses and mixing in the $\nu$MSM. We
will restrict ourselves to the region in which the Majorana neutrino
masses are larger than the Dirac masses, so that the seesaw
mechanism~\cite{Yanagida:1980xy} can be applied. Note that this does
not reduce generality since the latter situation automatically appears
when we require the sterile neutrinos to play a role of dark matter,
as we shall see. Then, right-handed neutrinos $N_I$ become
approximately the mass eigenstates with $M_1 \le M_2 \le M_3$, while
other eigenstates can be found by diagonalizing the mass matrix:
\begin{eqnarray*}
M^\nu = \left(M^D\right)^T \; M_I^{-1} \; M^D \,,
\end{eqnarray*}
which we call the seesaw matrix. The mass eigenstates $\nu_i$
($i=1,2,3$) are found from
\begin{eqnarray*}
U^T M^\nu U = M^\nu_\mathrm{diag} =
\mathrm{diag}(m_1, m_2, m_3 ) \,,
\end{eqnarray*}
and the mixing in the charged current is expressed by
$\nu_\alpha = U_{\alpha i} \, \nu_i + \Theta_{\alpha I}\, N_I^c$
where $\Theta_{\alpha I} = (M^D)^\dagger_{\alpha I} M_I^{-1} \ll
1$ under our assumption. This is the reason why right-handed
neutrinos $N_I$ are often called ``sterile'' while $\nu_i$
``active''.
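As a toy illustration of these formulas (our own sketch; the numerical entries are placeholders chosen only to exhibit the scales, not a fit to the oscillation data), one can evaluate the seesaw matrix and the mixing $\Theta$ numerically:
\begin{verbatim}
import numpy as np

# Toy seesaw: illustrative numbers only, not a fit to oscillation data.
rng = np.random.default_rng(0)
MD = rng.normal(scale=0.1, size=(3, 3))   # Dirac mass entries, in eV
MI = np.diag([2e3, 1e10, 1e10])           # 2 keV + two heavy steriles, in eV

Mnu = MD.T @ np.linalg.inv(MI) @ MD       # seesaw matrix (real symmetric here)
masses = np.sort(np.abs(np.linalg.eigvalsh(Mnu)))   # active neutrino masses
Theta = MD.conj().T @ np.linalg.inv(MI)   # active-sterile mixing
print(masses)                  # tiny masses << 1 eV
print(np.abs(Theta).max())     # |Theta| << 1, as assumed in the text
\end{verbatim}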
For three sterile neutrinos added to the SM all active neutrinos
acquire masses, and the smallest mass can be in the range $0 \le
m_\mathrm{min} \lesssim \mathcal{O}(0.1)\mbox{ \upshape\textrm{eV}}$~\cite{Seljak:2004xh}. In
particular, the degenerate mass spectra of active neutrinos are
possible when $m_\mathrm{min}^2 \gtrsim \Delta m_\mathrm{atm}^2$.
Note also that there are two possible hierarchies in the masses of
active neutrinos, i.e.\ ``normal'' hierarchy $\Delta m_{32}^2>0$
leading to $m_1<m_2<m_3$, and ``inverted'' hierarchy $\Delta
m_{32}^2<0$ with $m_3<m_1<m_2$. Note that here $\nu_1$ is the mass
state maximally mixed with the electron flavor neutrino and $\nu_3$ is
the mass state maximally mixed with $\tau$ neutrino (this is different
from the convention $m_1<m_2<m_3$ used in \cite{Asaka:2005an}).
When the active-sterile neutrino mixing $\abs{\Theta_{\alpha I}}$ is
sufficiently small, the sterile neutrino $N_I$ has never been in
thermal equilibrium and is produced in non-equilibrium reactions. The
production processes include various particle decays and conversions
of active into sterile neutrinos. Requirement that enough of Dark
Matter neutrino is produced leads to the following constraint on the
Dirac mass term in the Lagrangian (see Ref.~\cite{Asaka:2005an})
\begin{equation}
\label{eq:DMCondition}
\sum_I \sum_{\alpha = e,\mu,\tau} \abs{ M^D_{I \alpha}}^2
= m_0^2 \,,
\end{equation}
where $m_0 = {\cal O}(0.1)\mbox{ \upshape\textrm{eV}}$ and the summation over $I$ is only over
sterile neutrinos being Warm Dark Matter. Notice that this constraint
on dark-matter sterile neutrinos is independent of their masses, at
least for $M_I$ in the range discussed below.
The sterile neutrino, being warm dark matter, further receives
constraints from various cosmological observations, and the
possible mass range is restricted to
\begin{equation*}
2 \mbox{ \upshape\textrm{keV}} \lesssim M_I \lesssim 5 \mbox{ \upshape\textrm{keV}} \,,
\end{equation*}
where the lower bound comes from the cosmic microwave background and
the matter power spectrum inferred from Lyman-$\alpha$ forest
data~\cite{Viel:2005qj}, while the upper bound is given by the
radiative decays of sterile neutrinos in dark matter halos limited by
X-ray observations~\cite{Abazajian:2001vt}.
The constraint~(\ref{eq:DMCondition}) together with the neutrino
oscillation data leads to the following conclusion. At least $3$
right-handed neutrinos are required. In case of only 3 sterile
neutrinos only one of them can play the role of WDM (let it be $M_1$,
for definiteness), and the mass of the lightest active neutrino
$m_\mathrm{min}$ should be less than about $10^{-5}\mbox{ \upshape\textrm{eV}}$ (see
Ref.~\cite{Asaka:2005an} for details). If there are more than three
sterile neutrinos no constraint is present.
In the work~\cite{Asaka:2005pn} the baryon asymmetry of the Universe was also
explained in the framework of $\nu$MSM. Additional constraints from
requirement of correct baryon asymmetry arise on the parameters of the
sterile neutrinos in $\nu$MSM, but no additional constraints appear on
the parameters of the active neutrinos relevant for our discussion
here.
\section{Neutrinoless Double Beta Decay Effective Mass}
\label{sec:0nubb}
The constraints described in the previous section allow one to determine
the effective mass for neutrinoless double beta decay. This mass is
related to the mass eigenvalues and mixings by
\begin{equation}\label{mbb}
m_{\beta\beta}=\left|
\sum_i m_iU_{ei}^2
+M_1\Theta_{e1}^2
\right|
\;,
\end{equation}
where the first term corresponds to the standard three neutrino
contribution, and the second one is the contribution from the Dark
Matter sterile neutrino. The other two sterile neutrinos are
considered heavy ($\gtrsim10\mbox{ \upshape\textrm{GeV}}$, see Ref.~\cite{Asaka:2005pn}) and
do not contribute to $m_{\beta\beta}$.
First, let us estimate the contribution from the last term. Using
definition of $\Theta_{e1}$ we get
\[
|M_1\Theta_{e1}^2| = \frac{|{M^{D}_{1e}}^2|}{M_1}
\;.
\]
The dark matter constraint~(\ref{eq:DMCondition}) requires
$|{M^{D}_{1e}}^2|\lesssim (0.1\mbox{ \upshape\textrm{eV}})^2$. So, the absolute value of the whole
contribution is (since $M_1\simeq \mathcal{O}(1)\mbox{ \upshape\textrm{keV}}$)
\[
|M_1\Theta_{e1}^2| < 10^{-5}\mbox{ \upshape\textrm{eV}}
\;.
\]
This means that it can be neglected for any reasonable contribution
from the first term in~(\ref{mbb}).
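As a quick numerical sanity check (our own; everything expressed in eV, taking the lightest allowed sterile mass $M_1 = 2\mbox{ \upshape\textrm{keV}}$):
\begin{verbatim}
# |M_1 Theta_e1^2| = |M^D_{1e}|^2 / M_1, all quantities in eV:
print(0.1**2 / 2e3)   # 5e-06 eV, consistent with the 1e-05 eV bound above
\end{verbatim}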
So, the standard analysis of the formula~(\ref{mbb}) can be applied (see
e.g.~\cite{Aalseth:2004hb}). For normal neutrino mass hierarchy we have
(since $m_1$ can be neglected)
\begin{equation*}
m_{\beta\beta}^{NH} = \left|
\sqrt{\Delta m_{21}^2}\sin^2\theta_{12}\cos^2{\theta_{13}}
+\sqrt{|\Delta m_{31}^2|}\sin^2\theta_{13}\mathrm{e}^{-i\alpha_2}
\right|
\;.
\end{equation*}
For $\theta_{13}=0$ this leads to $m_{\beta\beta}^{NH}=2.6\pm0.4\mbox{ \upshape\textrm{meV}}$. Using
$1\sigma$ bound on $\theta_{13}$ from \cite{Bahcall:2004ut}, we get
$1.3\mbox{ \upshape\textrm{meV}}<m_{\beta\beta}^{NH}<3.4\mbox{ \upshape\textrm{meV}}$. It is worth noting, however, that for
$\tan^2\theta_{13}\ge\sin^2\theta_{12}\sqrt{\Delta m_{21}^2/|\Delta
m_{31}^2|}\sim 0.06$ complete cancellation may occur, so at the $3\sigma$
level $m_{\beta\beta}^{NH}$ can be zero.
In the case of inverted hierarchy, neglecting $m_3$, one obtains
\begin{equation*}
m_{\beta\beta}^{IH} =\sqrt{|\Delta m_{31}^2|}
\cos^2\theta_{13}
\sqrt{1-\sin^22\theta_{12}\sin^2\frac{\alpha_2-\alpha_1}{2}}
\;.
\end{equation*}
So, we get $13\mbox{ \upshape\textrm{meV}}<m_{\beta\beta}^{IH}<50\mbox{ \upshape\textrm{meV}}$.
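The quoted numbers are easy to reproduce from the central values of table~\ref{tab:noscdat}. The following sketch (our own; it sets $\cos^2\theta_{13}=1$ and does not propagate the $1\sigma$ errors, so it reproduces only the central values) evaluates both hierarchies:
\begin{verbatim}
import numpy as np

dm21 = 7.9e-5                 # eV^2, solar splitting (table 1, central value)
dm31 = 2.0e-3                 # eV^2, |Delta m_31^2| ~ |Delta m_32^2|
s12sq = 0.40 / (1 + 0.40)     # sin^2(theta_12) from tan^2(theta_12) = 0.40

# Normal hierarchy, theta_13 = 0 (m_1 is negligible in the nuMSM):
print(np.sqrt(dm21) * s12sq * 1e3, "meV")   # ~2.5 meV, cf. 2.6 +/- 0.4 meV

# Inverted hierarchy: scan the Majorana phase difference alpha_2 - alpha_1.
s2_2th12 = 4 * s12sq * (1 - s12sq)
phase = np.linspace(0, 2 * np.pi, 400)
m_ih = np.sqrt(dm31) * np.sqrt(1 - s2_2th12 * np.sin(phase / 2) ** 2)
print(m_ih.min() * 1e3, m_ih.max() * 1e3, "meV")  # central band inside 13-50
\end{verbatim}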
\section{Conclusions}
In the $\nu$MSM model the lightest active neutrino has the mass
$<10^{-5}\mbox{ \upshape\textrm{eV}}$, and there is a relatively light sterile neutrino with
the mass $2\mbox{ \upshape\textrm{keV}}\lesssim M_1\lesssim5\mbox{ \upshape\textrm{keV}}$ and mixing with active
neutrinos of the order of $10^{-4}$, which plays the role of the Warm
Dark Matter. Though it is quite light, the sterile dark matter
neutrino makes a negligible contribution to the effective neutrinoless
double $\beta$ decay Majorana mass $m_{\beta\beta}$ because of its
small mixing angle. Thus, predictions for $m_{\beta\beta}$ can be
obtained from usual analysis with zero lightest neutrino mass.
Specifically, current $1\sigma$ limits are
\[
1.3\mbox{ \upshape\textrm{meV}}<m_{\beta\beta}^{NH}<3.4\mbox{ \upshape\textrm{meV}}
\]
for normal active neutrino mass hierarchy and
\[
13\mbox{ \upshape\textrm{meV}}<m_{\beta\beta}^{IH}<50\mbox{ \upshape\textrm{meV}}
\]
for inverted hierarchy.
\begin{acknowledgments}
The author is indebted to Mikhail Shaposhnikov for drawing his interest to
$\nu$MSM, numerous invaluable discussions of the properties of the
model and inspiration for the work. The work of F.B. is supported in
part by INTAS YSF 03-55-2201 and Russian Science Support Foundation.
\end{acknowledgments}
\section{Introduction}
It is a rather subjective matter to decide whether a given statement in \textbf{ZFC} belongs to the field of Elementary Number Theory or not. A typical example is Goodstein's Theorem, which, even though it concerns positive integers, has traditionally been classified as belonging to the field of Symbolic Logic (see \cite{Goodstein}).
Throughout this paper, we will be interested in theorems of the form ``$R = S$'' in \textbf{ZFC}, where $R$ and $S$ are subsets of the set of positive integers, denoted $\mathbb{Z}_{\geq 1}$. Beside the above-mentioned remark, we will say, in a rather informal way, that ``$R = S$'' is an \emph{elementary number-theoretical statement} if $R$ and $S$ concern some kind of integers traditionally studied in Elementary Number Theory, e.g. prime numbers, perfect numbers, square free numbers, integers which are the sum of two squares, etc. We will leave open the question of what is not an elementary number-theoretical theorem.
Our standpoint is to assign a word $\gamma(n) \in \Sigma^{\ast}$ over a finite alphabet $\Sigma$ to any $n \in \mathcal{U}$, where $\mathcal{U}$ is a subset of $\mathbb{Z}_{\geq 1}$. The traditional way to do it is by means of the decimal positional numeration system, where $\Sigma = [0..9]$ and $\mathcal{U} = \mathbb{Z}_{\geq 1}$. In this case, $\gamma^{-1}(w)$ is either the empty set (e.g. $\gamma^{-1}(0001) = \emptyset$) or a singleton (e.g. $\gamma^{-1}(29) = \{29\}$).
Each choice of $\mathcal{U}$, $\Sigma$ and $\gamma$ gives rise to a structure $\mathcal{T} := \left(\mathcal{U}, \Sigma, \gamma \right)$ that we will call \emph{arithm\'etique langagi\`ere}. In this structure it is natural to define a notion of proof (see Definition \ref{Defundnudsufbsubfuywwe89}) using the minimal $\sigma$-algebra containing the family of sets $\left( \gamma^{-1}(w) \right)_{w \in \Sigma^{\ast}}$. This notion of proof is a refinement of the ordinary notion of proof in \textbf{ZFC} (see Lemma \ref{lemfundu3498ur9483ur983u9ru39}). In the case of the decimal positional numeration system, considered as an arithm\'etique langagi\`ere, it is easy to write a proof that a positive integer which is divisible by $10$ is also divisible by $5$ (just look at the last character).
In this paper we are particularly interested in a family of arithm\'etiques langagi\`eres, denoted $\textbf{KR}_{\lambda}$ and parametrized by a real number $\lambda > 1$. The original motivation for the definition of $\textbf{KR}_{\lambda}$ is that, for $\lambda = 2$, $\gamma(n)$ encodes, up to an injective morphism of monoids, the non-zero coefficients of the polynomials $C_n(q)$, introduced in \cite{kassel2015counting} and \cite{kassel2016complete}. A quick way to define $C_n(q)$ is as the number of ideals $I$ of the group algebra $\mathbb{F}_q\left[ \mathbb{Z}\oplus \mathbb{Z}\right]$ such that $\mathbb{F}_q\left[ \mathbb{Z}\oplus \mathbb{Z}\right]/I$ is an $n$-dimensional vector space. It is remarkable that these polynomials are related to classical multiplicative functions via modular forms (see \cite{kassel2016fourier}).
We will show that the arithm\'etique langagi\`ere $\textbf{KR}_{2}$ can be used to prove, in a natural way, statements (Theorems \ref{teojosjdiojiosdfjs} and \ref{teo8u8439u98ur39ur93}) concerning semi-perimeters of Pythagorean triangles (Definition \ref{defu48589355hjk5h34hk34}), even-trapezoidal numbers (Definition \ref{defnur3huihuihrwiuh98u89}) and $2$-densely divisible numbers (Definition \ref{defju87fd97s7f7ds987f98sd7}). Also, we will show a statement (Theorem \ref{teojrojiorjrwde}) about generalized middle divisors (Definition \ref{defn897e7eeeee7ew7r7r98789ew7rer}), due to H\"oft \cite{Hoft}, whose proof using our approach involves the whole family of arithm\'etiques langagi\`eres $\left(\textbf{KR}_{\lambda} \right)_{\lambda > 1}$.
\section{Preliminaries}
\subsection{Symmetric Dyck words}
\begin{definition}[Definition 1 in \cite{Caballero1}]
Let $\lambda > 1$ be a real number. For any integer $n \geq 1$ define the word
$$
\langle\!\langle n \rangle\!\rangle_{\lambda} := w_1 w_2 ... w_k \in \{a,b\}^{\ast},
$$ by means of the expression
$$
w_i := \left\{ \begin{array}{c l}
a & \textrm{if } u_i \in D_n \backslash \left(\lambda D_n\right), \\
b & \textrm{if } u_i \in \left(\lambda D_n\right)\backslash D_n,
\end{array} \right.
$$
where $D_n$ is the set of divisors of $n$, $\lambda D_n := \{\lambda d: \quad d \in D_n\}$ and $u_1, u_2, ..., u_k$ are the elements of the symmetric difference $D_n \triangle \lambda D_n$ written in increasing order.
\end{definition}
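For experimentation, the word $\langle\!\langle n \rangle\!\rangle_{\lambda}$ can be computed directly from the definition. The following Python sketch is our own (the function name is ours) and uses exact rational arithmetic, so it is valid for rational $\lambda$:
\begin{verbatim}
from fractions import Fraction

def kr_word(n, lam=Fraction(2)):
    # <<n>>_lambda from Definition 1, for rational lambda (exact arithmetic).
    D = {Fraction(d) for d in range(1, n + 1) if n % d == 0}
    lamD = {lam * d for d in D}
    return "".join("a" if u in D else "b"
                   for u in sorted(D ^ lamD))   # symmetric difference, sorted

for n in (1, 2, 3, 6, 12):
    print(n, kr_word(n))        # 1 -> ab, 3 -> abab, 6 -> aabb, ...
\end{verbatim}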
\begin{definition}
For each real number $\lambda > 1$ define the language
$$
\mathcal{L}_{\lambda} := \left\{ \langle\!\langle n \rangle\!\rangle_{\lambda}: \quad n \in \mathbb{Z}_{\geq 1} \right\}.
$$
\end{definition}
The \emph{Dyck language}, denoted $\mathcal{D}$, is defined as the $\subseteq$-smallest language over the alphabet $\{a,b\}$ satisfying $\varepsilon \in \mathcal{D}$, $a\mathcal{D}b \subseteq \mathcal{D}$ and $\mathcal{D}\mathcal{D} \subseteq \mathcal{D}$. Words in $\mathcal{D}$ are called \emph{Dyck words}.
The \emph{symmetric Dyck language}, denoted $\mathcal{D}^{\textrm{sym}}$, is defined by
$$
\mathcal{D}^{\textrm{sym}} := \{w \in \mathcal{D}: \quad \widetilde{w} = \sigma\left(w \right) \},
$$
where $\widetilde{w}$ is the mirror image of $w$ and $\sigma: \{a,b\}^{\ast} \longrightarrow \{a,b\}^{\ast}$ is the morphism of monoids given by $a \mapsto b$ and $b \mapsto a$. Words in $\mathcal{D}^{\textrm{sym}}$ are called \emph{symmetric Dyck words}.
\subsection{Irreducible Dyck words}
Let $(\mathcal{D}, \cdot)$ be the monoid of Dyck words endowed with the ordinary concatenation (usually omitted in notation).
It is well-known that $\mathcal{D}$ is freely generated by the language of \emph{irreducible Dyck words} $
\mathcal{D}^{\textrm{irr}} := a\mathcal{D}b
$, i.e. every word in $\mathcal{D}$ may be formed in a unique way by concatenating a sequence of words from $\mathcal{D}^{\textrm{irr}} $. So, there is a unique morphism of monoids $\Omega: (\mathcal{D}, \cdot) \longrightarrow (\mathbb{Z}_{\geq 1},+)$, such that the diagram
$$
\begin{tikzcd}
\mathcal{D} \arrow{r}{} \arrow[swap, dashed]{rd}{\Omega} & \left(\mathcal{D}^{\textrm{irr}}\right)^{\ast} \arrow[two heads]{d}{} \\
& \mathbb{Z}_{\geq 1}
\end{tikzcd} \label{Diagr093jr9j03}
$$
commutes, where $\mathcal{D} \longrightarrow \left(\mathcal{D}^{\textrm{irr}}\right)^{\ast}$ is the identification of $\mathcal{D}$ with the free monoid $\left(\mathcal{D}^{\textrm{irr}}\right)^{\ast}$ and $\left(\mathcal{D}^{\textrm{irr}}\right)^{\ast} \longrightarrow \mathbb{Z}_{\geq 1}$ is just the length of a word in $\left(\mathcal{D}^{\textrm{irr}}\right)^{\ast}$ considering each element of the set $\mathcal{D}^{\textrm{irr}} $ as a single letter (of length $1$). In other words, $\Omega(w)$, with $w \in \mathcal{D}$, is the number of irreducible Dyck words needed to obtain $w$ as a concatenation of them.
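Concretely, $\Omega(w)$ counts the returns of the corresponding Dyck path to height $0$; a short Python sketch (ours):
\begin{verbatim}
def omega(w):
    # Number of irreducible Dyck factors of w = number of returns to height 0.
    height, factors = 0, 0
    for ch in w:
        height += 1 if ch == "a" else -1
        if height == 0:
            factors += 1
    return factors

print(omega("abab"), omega("aabb"))   # 2 and 1
\end{verbatim}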
\subsection{The central concatenation}
\begin{definition}[from \cite{Caballero3}]
Consider the set $\mathcal{S} := \{aa, ab, ba, bb\}$ endowed with the binary operation, that we will call \emph{central concatenation},
$$
u \triangleleft v := \varphi^{-1}\left( \varphi(u)\varphi(v)\right),
$$
where $\varphi: \mathcal{S}^{\ast} \longrightarrow \mathcal{S}^{\ast}$ is the bijection given by
\begin{eqnarray*}
\varphi\left( \varepsilon\right) &=& \varepsilon, \\
\varphi\left( x\,u\,y\right) &=& (xy)\,\varphi\left( u\right),
\end{eqnarray*}
for all $x,y \in \{a,b\}$ and $u \in \mathcal{S}^{\ast}$.
\end{definition}
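The bijection $\varphi$ peels letter pairs off both ends of a word of even length, and $\triangleleft$ is ordinary concatenation transported through $\varphi$. A direct Python sketch (function names ours):
\begin{verbatim}
def phi(w):
    # phi(x u y) = (xy) phi(u): peel outer letter pairs (w has even length).
    out = []
    while w:
        out.append(w[0] + w[-1])
        w = w[1:-1]
    return "".join(out)

def phi_inv(s):
    # Inverse of phi: rebuild the word from its list of 2-letter blocks.
    pairs = [s[i:i + 2] for i in range(0, len(s), 2)]
    return ("".join(p[0] for p in pairs) +
            "".join(p[1] for p in reversed(pairs)))

def central_concat(u, v):
    # The central concatenation u <| v.
    return phi_inv(phi(u) + phi(v))

print(central_concat("ab", "ab"))   # aabb: v is inserted at the center of u
\end{verbatim}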
It is easy to check that $\left( \mathcal{S}^{\ast}, \triangleleft \right)$ is a monoid freely generated by $\mathcal{S}$ and having $\varepsilon$ as identity element.
\begin{definition}[from \cite{Caballero3}]
For any $x \in \mathcal{S}$, let
$$\ell_x: \left(\mathcal{S}^{\ast}, \triangleleft \right) \longrightarrow \left( \mathbb{Z}_{\geq 0}, +\right)$$
be the unique morphism of monoids satisfying
$$
\ell_x(y) := \left\{ \begin{array}{c l}
1 & \textrm{if } x = y, \\
0 & \textrm{if } x \neq y,
\end{array}\right.
$$
for all $y \in \mathcal{S}$.
\end{definition}
It is easy to prove that $\left( \mathcal{D}, \triangleleft\right)$ is a monoid freely generated by $
\mathcal{I} := \mathcal{D}_{\bullet} \backslash \left(\mathcal{D}_{\bullet} \triangleleft\mathcal{D}_{\bullet} \right)
$, where $\mathcal{D}_{\bullet} := \mathcal{D} \backslash\{\varepsilon\}$. The following definition corresponds to the notion of \emph{centered tunnels} introduced for the first time, in an equivalent way, in \cite{Elizalde}.
\begin{definition}[from \cite{Elizalde} and \cite{Caballero3}]
Let $\textrm{ct}: \left( \mathcal{D}, \triangleleft\right) \longrightarrow \left( \mathbb{Z}_{\geq 0}, +\right)$ be the morphism of monoids given by
$$
\textrm{ct}\left(w\right) := \left\{ \begin{array}{c l}
1 & \textrm{if } w = ab, \\
0 & \textrm{if } w \neq ab,
\end{array}\right.
$$
for all $w \in \mathcal{I}$. We say that $\textrm{ct}\left(w\right)$ is the \emph{number of centered tunnels} of $w$.
\end{definition}
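In terms of the path, a centered tunnel corresponds to a matched pair of steps in mirror positions $i$ and $|w|-1-i$ (cf. \cite{Elizalde}); a stack-based Python sketch (ours) counts them:
\begin{verbatim}
def ct(w):
    # Centered tunnels of a Dyck word w: matched pairs (i, j) with
    # i + j = len(w) - 1, i.e. tunnels straddling the midpoint of the path.
    stack, count = [], 0
    for j, ch in enumerate(w):
        if ch == "a":
            stack.append(j)
        else:
            i = stack.pop()
            if i + j == len(w) - 1:
                count += 1
    return count

print(ct("ab"), ct("aabb"), ct("abab"))   # 1, 2, 0
\end{verbatim}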
\section{Logical framework}
\subsection{Th\'eorie langagi\`ere}
Let $\Sigma$ be a finite alphabet. Consider the measurable space $\left(\Sigma^{\ast}, \mathcal{P}\left(\Sigma^{\ast}\right)\right)$ of subsets of $\Sigma^{\ast}$ (languages over the alphabet $\Sigma$), where $\mathcal{P}\left(\Sigma^{\ast}\right)$ is the ordinary $\sigma$-algebra of subsets of $\Sigma^{\ast}$.
\begin{definition}
Let $\mathcal{U}$ be a set. A \emph{th\'eorie langagi\`ere}\footnote{In English we could say \emph{language-theoretic theory}, but it is longer than the French expression.} is a $3$-tuple $\left(\mathcal{U}, \Sigma, \gamma \right)$, where
$
\gamma: \mathcal{U}\longrightarrow \Sigma^{\ast}
$
is an application.
\end{definition}
\begin{definition}\label{Defundnudsufbsubfuywwe89}
Let $\mathcal{T} = \left(\mathcal{U}, \Sigma, \gamma \right)$ be a th\'eorie langagi\`ere. Denote by $\mathfrak{U}_{\mathcal{T}}$ the minimal $\sigma$-algebra containing the family of sets $\left( \gamma^{-1}(w) \right)_{w \in \Sigma^{\ast}}$. Given $R, S \in \mathcal{P}\left( \mathcal{U}\right)$, we say that the theorem ``$R=S$'' is \emph{provable in $\mathcal{T}$} if the following statements are provable in \textbf{ZFC},
\begin{enumerate}[label = (\roman*)]
\item ``$R, S \in \mathfrak{U}_{\mathcal{T}}$'',
\item ``$\gamma\left( R\right) = \gamma\left( S\right)$''.
\end{enumerate}
\end{definition}
\begin{lemma}[Fundamental Lemma of Th\'eories Langagi\`eres]\label{lemfundu3498ur9483ur983u9ru39}
Let $\mathcal{T} = \left(\mathcal{U}, \Sigma, \gamma \right)$ be a th\'eorie langagi\`ere. For all $R, S \in \mathcal{P}\left( \mathcal{U}\right)$, if ``$R=S$'' is provable in $\mathcal{T}$ then ``$R=S$'' is provable in \textbf{ZFC}.
\end{lemma}
\begin{proof}
Suppose that $R, S \in \mathfrak{U}_{\mathcal{T}}$ and $\gamma\left( R\right) = \gamma\left( S\right)$.
The statement $R, S \in \mathfrak{U}_{\mathcal{T}}$ and the minimality of $\mathfrak{U}_{\mathcal{T}}$ imply the existence of two languages $L_R, L_S \in \mathcal{P}\left( \Sigma^{\ast}\right)$ such that
$$
R = \bigcup_{w \in L_R} \gamma^{-1}(w) \textrm{ and } S = \bigcup_{w \in L_S} \gamma^{-1}(w).
$$
Without loss of generality we will assume that $\gamma^{-1}(w) \neq \emptyset$ for all $w \in L_R \cup L_S$. It follows that $\gamma(R) = L_R$ and $\gamma(S) = L_S$. The equality $\gamma\left( R\right) = \gamma\left( S\right)$ implies that $L_R = L_S$. Therefore $R = S$.
\end{proof}
A th\'eorie langagi\`ere $\mathcal{T} = \left(\mathcal{U}, \Sigma, \gamma \right)$ satisfying $\mathcal{U} \subseteq \mathbb{Z}_{\geq 1}$ will be called \emph{arithm\'etique langagi\`ere}\footnote{In English we could say \emph{language-theoretic arithmetic}.}.
\begin{definition}
Let $\lambda > 1$ be a real number. Define $\textbf{KR}_{\lambda} := \left(\mathcal{U}, \Sigma, \gamma \right)$, where $\mathcal{U} := \mathbb{Z}_{\geq 1}$, $\Sigma := \{a, b\}$ and $\gamma(n) := \langle\!\langle n \rangle\!\rangle_{\lambda}$.
\end{definition}
\section{Middle divisors}
Let $C_n(q)$ be the polynomial mentioned in the introduction. It was proved in \cite{kassel2016complete} that $C_n(q) = (q-1)^2 P_n(q)$, for some polynomial $P_n(q)$ whose coefficients are non-negative integers.
Divisors $d|n$ satisfying $\sqrt{n/2} < d \leq \sqrt{2n}$ are called \emph{middle divisors} of $n$. These divisors were studied in \cite{kassel2016complete}, \cite{Hoft} and \cite{Vatne}. The coefficient of $q^{n-1}$ in $P_n(q)$, denoted $a_{n,0}$, counts the number of middle divisors of $n$. The following definition provides a generalization of the arithmetical function $
a_{n,0}$.
\begin{definition}[from \cite{Caballero3}]\label{defn897e7eeeee7ew7r7r98789ew7rer}
Consider a real number $\lambda > 1$. Let $n \geq 1$ be an integer. The number of \emph{$\lambda$-middle divisors} of $n$, denoted $\textrm{middle}_{\lambda}(n)$, is the number of divisors $d$ of $n$ satisfying
$$
\sqrt{\frac{n}{\lambda}} < d \leq \sqrt{\lambda n}.
$$
\end{definition}
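The defining inequality can be tested without floating point by squaring: $\sqrt{n/\lambda} < d \leq \sqrt{\lambda n}$ if and only if $n < \lambda d^2 \leq \lambda^2 n$. A Python sketch (function name ours):
\begin{verbatim}
from fractions import Fraction

def middle_divisors(n, lam=Fraction(2)):
    # d | n with sqrt(n/lam) < d <= sqrt(lam n), i.e. n < lam d^2 <= lam^2 n.
    return [d for d in range(1, n + 1)
            if n % d == 0 and n < lam * d * d <= lam * lam * n]

print(middle_divisors(6))                    # [2, 3]
print(middle_divisors(4, Fraction(3, 2)))    # [2]
\end{verbatim}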
A \emph{block polynomial} is a polynomial of the form $B(q) = q^i + q^{i+1} + q^{i+2} + ... + q^j$, with $0 \leq i < j$. The smallest number $k$ of block polynomials $B_1(q), B_2(q), ..., B_k(q)$ such that
$$
P_n(q) = \alpha_1 B_1(q) + \alpha_2 B_2(q) + ... + \alpha_k B_k(q),
$$
for some $\alpha_1, \alpha_2, ..., \alpha_k \in \mathbb{Z}$, will be called the \emph{number of blocks} of $n$ and denoted $\textrm{blocks}(n) := k$. The arithmetical function $\textrm{blocks}(n) $ is generalized in the following definition.
\begin{definition}[from\footnote{In \cite{Caballero2}, the function $\textrm{blocks}_{\lambda}(n)$ is called the \emph{number of connected components} of $\mathcal{T}_{\lambda}(n)$.} \cite{Caballero2}]
Consider a real number $\lambda > 1$. Let $n \geq 1$ be an integer. We define the \emph{number of $\lambda$-blocks} of $n$, denoted $\textrm{blocks}_{\lambda}(n)$, as the number of connected components of
$$
\bigcup_{d|n} \left[d, \lambda d\right].
$$
\end{definition}
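Counting the components of this union amounts to a one-pass interval merge over the divisors in increasing order; a sketch (ours):
\begin{verbatim}
from fractions import Fraction

def blocks(n, lam=Fraction(2)):
    # Connected components of the union of the intervals [d, lam d], d | n.
    count, reach = 0, None
    for d in range(1, n + 1):
        if n % d == 0:
            if reach is None or d > reach:   # gap: a new component starts
                count += 1
                reach = lam * d
            else:
                reach = max(reach, lam * d)
    return count

print(blocks(6), blocks(3))   # 1 and 2
\end{verbatim}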
Theorem 3 in \cite{Hoft} (we call it \emph{H\"oft's theorem}) states the equivalence between $\textrm{middle}_{2}(n) > 0$ and $\textrm{blocks}_{2}(n) \equiv 1 \pmod{2}$, for any integer $n \geq 1$. The following result is a generalization of H\"oft's original result.
\begin{theorem}[Generalized H\"oft's theorem]\label{teojrojiorjrwde}
Let $\lambda > 1$ be a real number. For each integer $n \geq 1$, we have that $\textrm{middle}_{\lambda}(n) > 0$ if and only if $\textrm{blocks}_{\lambda}(n)$ is odd. Furthermore, this theorem is provable in $\textbf{KR}_{\lambda}$.
\end{theorem}
H\"oft's proof in \cite{Hoft} follows the general lines of traditional proofs in Elementary Number Theory. Our proof of Theorem \ref{teojrojiorjrwde} will be based on properties of Dyck words. We will use the following auxiliary results.
\begin{lemma}\label{lempkjsljdf90sf90s8}
For any integer $n \geq 1$ and any real number $\lambda > 1$, we have that $\langle\!\langle n \rangle\!\rangle_{\lambda}$ is a symmetric Dyck word, i.e. $\mathcal{L}_{\lambda} \subseteq \mathcal{D}^{\textrm{sym}}$.
\end{lemma}
\begin{proof}
See Theorem 2(i) in \cite{Caballero1}.
\end{proof}
\begin{lemma}\label{propjfsdfuisdhfiuhishifuhdhidshf}
Let $\lambda > 1$ be a real number. For any integer $n \geq 1$, $\textrm{ct}\left(\langle\!\langle n \rangle\!\rangle_{\lambda}\right) = \textrm{middle}_{\lambda}(n)$.
\end{lemma}
\begin{proof}
See Lemma 3.7 in \cite{Caballero3}.
\end{proof}
\begin{lemma}\label{propmmknnjknjknnbvvgcgcf}
Let $\lambda > 1$ be a real number.
For any integer $n \geq 1$, $\Omega\left(\langle\!\langle n \rangle\!\rangle_{\lambda}\right) = \textrm{blocks}_{\lambda}(n)$.
\end{lemma}
\begin{proof}
See Theorem 2 in \cite{Caballero2}.
\end{proof}
\begin{lemma}\label{lemjjfuishfishfishfishduh}
Consider the languages over the alphabet $\{a,b\}$,
\begin{eqnarray*}
L_R &:=& \left\{w \in \mathcal{D}^{\textrm{sym}}: \quad \textrm{ct}(w) > 0 \right\}, \\
L_S &:=& \left\{w \in \mathcal{D}^{\textrm{sym}}: \quad \Omega(w) \textrm{ odd} \right\}.
\end{eqnarray*}
We have that $L_R = L_S$.
\end{lemma}
\begin{proof}
Take $w \in L_S$. By definition of $L_S$, we have that $\Omega(w)$ is odd. Since $w$ is a symmetric Dyck word and $\Omega(w)$ is odd, the factorization of $w$ into irreducible Dyck words has a symmetric middle factor; hence there are $u, v \in \mathcal{D}$ such that $w = u \, v \, \sigma\left( \widetilde{u}\right)$ and $v$ is irreducible. By definition of $\mathcal{D}^{\textrm{irr}}$, there is $v^{\prime} \in \mathcal{D}$ satisfying $v = a v^{\prime} b$. So, $w = u \, a \, v^{\prime} \, b \, \sigma\left( \widetilde{u}\right)$. It follows that $\textrm{ct}(w) > 0$. Hence $w \in L_R$.
Now, take $w \in L_R$. By definition of $L_R$ we have that $\textrm{ct}(w) > 0$. Since $w$ is a symmetric Dyck word, its outermost centered tunnel yields $u, v^{\prime} \in \mathcal{D}$ such that $w = u \, a \, v^{\prime} \, b \, \sigma\left( \widetilde{u}\right)$. The Dyck word $v := a v^{\prime} b$ is irreducible and $w = u \, v \, \sigma\left( \widetilde{u}\right)$. It follows that $\Omega(w) = 1 + 2 \Omega(u)$. Hence, $w \in L_S$.
Therefore, $L_R = L_S$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{teojrojiorjrwde}]
Consider a fixed real number $\lambda > 1$. Define the sets
\begin{eqnarray*}
R &:=& \left\{n \in \mathbb{Z}_{\geq 1}: \quad \textrm{middle}_{\lambda}(n) > 0 \right\}, \\
S &:=& \left\{n \in \mathbb{Z}_{\geq 1}: \quad \textrm{blocks}_{\lambda}(n) \textrm{ odd} \right\}.
\end{eqnarray*}
Let $L_R$ and $L_S$ be the languages defined in Lemma \ref{lemjjfuishfishfishfishduh}. In virtue of Lemmas \ref{propjfsdfuisdhfiuhishifuhdhidshf} and \ref{propmmknnjknjknnbvvgcgcf},
$$
R = \bigcup_{w \in L_R} \gamma^{-1}(w)\in \mathfrak{U}_{\textbf{KR}_{\lambda}} \textrm{ and } S = \bigcup_{w \in L_S} \gamma^{-1}(w)\in \mathfrak{U}_{\textbf{KR}_{\lambda}},
$$
where $\gamma$ is from $\textbf{KR}_{\lambda} = \left(\mathcal{U}, \Sigma, \gamma \right)$. By definition of $\mathcal{L}_{\lambda}$, it follows that $\gamma\left( R\right) = L_R \cap \mathcal{L}_{\lambda}$ and $\gamma\left( S\right) = L_S \cap \mathcal{L}_{\lambda}$. In virtue of Lemma \ref{lemjjfuishfishfishfishduh}, $L_R = L_S$. So, $\gamma\left( R\right) = \gamma\left( S\right)$. By Definition \ref{Defundnudsufbsubfuywwe89}, ``$R = S$'' is provable in $\textbf{KR}_{\lambda}$. Using Lemma \ref{lemfundu3498ur9483ur983u9ru39}, we conclude that $R = S$.
\end{proof}
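Combining the Python sketches given after the definitions above (\texttt{kr\_word}, \texttt{ct}, \texttt{omega}, \texttt{middle\_divisors}, \texttt{blocks}), the theorem and the auxiliary lemmas can be checked empirically for small $n$ and several rational values of $\lambda$ (this is a sanity check of our sketches as much as of the statements):
\begin{verbatim}
from fractions import Fraction

for lam in (Fraction(3, 2), Fraction(2), Fraction(3)):
    for n in range(1, 301):
        w = kr_word(n, lam)
        assert ct(w) == len(middle_divisors(n, lam))     # ct = middle_lambda
        assert omega(w) == blocks(n, lam)                # Omega = blocks_lambda
        assert (ct(w) > 0) == (blocks(n, lam) % 2 == 1)  # generalized Hoft
print("all checks passed")
\end{verbatim}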
\section{Semi-perimeters of Pythagorean triangles}
\begin{definition}\label{defu48589355hjk5h34hk34}
Let $n \geq 1$ be an integer. We say that $n$ \emph{is the semi-perimeter of a Pythagorean triangle} if there are three integers $x, y, z \in \mathbb{Z}_{\geq 1}$ satisfying
$$
x^2 + y^2 = z^2 \textrm{ and } \frac{x+y+z}{2} = n.
$$
\end{definition}
In order to work with semi-perimeters of Pythagorean triangles, we will need the following language-theoretical characterization.
\begin{lemma}\label{Lem89ru34899r834ur9}
An integer $n \geq 1$ is not the semi-perimeter of a Pythagorean triangle if and only if $\langle\!\langle n \rangle\!\rangle_{2} \in \left( ab\right)^{\ast}$.
\end{lemma}
We will use the following auxiliary result.
\begin{lemma}\label{propklfjdkls98798798fsdsf}
For any integer $n \geq 1$ and any real number $\lambda > 1$, the height of the Dyck path $\langle\!\langle n \rangle\!\rangle_{\lambda}$ is the largest value of $h$ such that we can find $h$ divisors of $n$, denoted $d_1, d_2, ..., d_h$, satisfying
$$
d_1 < d_2 < ... < d_h < \lambda d_1.
$$
\end{lemma}
\begin{proof}
See Theorem 2(ii) in \cite{Caballero1}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{Lem89ru34899r834ur9}]
From the explicit formula for Pythagorean triples (see \cite{sierpinski2003pythagorean}), it follows in a straightforward way that an integer $n \geq 1$ is the semi-perimeter of a Pythagorean triangle if and only if there are two divisors of $n$, denoted $d_1$ and $d_2$, satisfying,
$$
d_1 < d_2 < 2d_1.
$$
By Lemma \ref{lempkjsljdf90sf90s8}, $\langle\!\langle n \rangle\!\rangle_{2}$ is a Dyck word, so its height as Dyck path is well-defined. In virtue of Lemma \ref{propklfjdkls98798798fsdsf}, such divisors $d_1$ and $d_2$ do exist if and only if the height of $\langle\!\langle n \rangle\!\rangle_{2}$ is at least $2$. Therefore, $n$ is not the semi-perimeter of a Pythagorean triangle if and only if $\langle\!\langle n \rangle\!\rangle_{2} \in \left( ab\right)^{\ast}$.
\end{proof}
\subsection{Even-trapezoidal numbers}
The number of partitions of a given integer $n \geq 1$ into an even number of consecutive parts was studied in \cite{hirschhorn2009partitions}.
\begin{definition}\label{defnur3huihuihrwiuh98u89}
Let $n \geq 1$ be an integer. We say that $n$ is \emph{even-trapezoidal} if there is at least one partition of $n$ into an even number of consecutive parts, i.e.
$$
n = \sum_{k=0}^{2m-1} \left(a + k\right)
$$
for two integers $a \geq 1$ and $m \geq 1$.
\end{definition}
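The definition translates into a direct search over the number $2m$ of parts, since $\sum_{k=0}^{2m-1}(a+k) = 2ma + m(2m-1)$. A Python sketch (ours):
\begin{verbatim}
def is_even_trapezoidal(n):
    # n = 2m*a + m*(2m-1) for some a >= 1, m >= 1 (2m consecutive parts).
    m = 1
    while m * (2 * m + 1) <= n:        # minimal sum for given m is m*(2m+1)
        r = n - m * (2 * m - 1)
        if r > 0 and r % (2 * m) == 0:
            return True
        m += 1
    return False

print([n for n in range(1, 33) if not is_even_trapezoidal(n)])
# [1, 2, 4, 6, 8, 12, 16, 20, 24, 28, 32]: all powers of 2, among others
\end{verbatim}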
It is rather easy to check that a power of $2$ is neither even-trapezoidal nor the semi-perimeter of a Pythagorean triangle. Nevertheless, the converse statement is non-trivial.
\begin{theorem}\label{teojosjdiojiosdfjs}
Let $n \geq 1$ be an integer. We have that $n$ is a power of $2$ (including $n = 1$) if and only if $n$ is neither even-trapezoidal nor the semi-perimeter of a Pythagorean triangle. Furthermore, this theorem is provable in $\textbf{KR}_{2}$.
\end{theorem}
We will use the following auxiliary results.
\begin{lemma}\label{Lemj89jw9efw98j98je9}
For all integers $n \geq 1$, we have that $n$ is a power of $2$ (including $n = 1$) if and only if $\langle\!\langle n \rangle\!\rangle_2 = ab$.
\end{lemma}
\begin{proof}
Take $n \in \mathbb{Z}_{\geq 1}$.
Suppose that $\langle\!\langle n \rangle\!\rangle_2 = ab$. By definition of $\langle\!\langle n \rangle\!\rangle_2$, the length of $\langle\!\langle n \rangle\!\rangle_2$ is two times the number of odd divisors of $n$. So, $n$ has exactly one odd divisor. It follows that $n$ is a power of $2$ (including $n = 1$).
Suppose that $n$ is a power of $2$ (including $n = 1$). It follows that
$$
D_n \triangle 2D_n = \left\{ 1, 2n\right\},
$$
with $1 \in D_n \backslash \left( 2D_n \right)$ and $2n \in \left( 2D_n \right) \backslash D_n$. By definition of $\langle\!\langle n \rangle\!\rangle_2$, we conclude that $\langle\!\langle n \rangle\!\rangle_2 = ab$.
\end{proof}
\begin{lemma}\label{lemjsiojdoifjosdjfofjs79}
For any integer $n \geq 1$ and any real number $\lambda > 1$, we have
$$
\ell_{ab}\left(\langle\!\langle n \rangle\!\rangle_{\lambda} \right) = \# \left\{ d | n: \quad d \not\in \lambda D_n \textrm{ and } d < \sqrt{\lambda n}\right\},
$$
where $D_n$ is the set of divisors of $n$.
\end{lemma}
\begin{proof}
See Lemma 3.4. in \cite{Caballero3}.
\end{proof}
\begin{lemma}\label{Lemksjdlkjfkljsfl}
For all $n \geq 1$, we have that $n$ is not even-trapezoidal if and only if $\langle\!\langle n \rangle\!\rangle_2 \in \left\{ a^k \, b^k: \quad k \in \mathbb{Z}_{\geq 1} \right\}$.
\end{lemma}
\begin{proof}
It was proved in \cite{hirschhorn2009partitions} that the number of partitions of $n$ into an even number of consecutive parts is precisely the cardinality of the set
$$
\left\{ d | n: \quad d \not\in 2 D_n \textrm{ and } d > \sqrt{2 n}\right\}.
$$
Notice that if $d = \sqrt{2 n}$ is a divisor of $n$, then $d = 2\frac{n}{d}$ is even. So, an integer $n \geq 1$ is not even-trapezoidal if and only if
$$
\# \left\{ d | n: \quad d \not\in 2 D_n \textrm{ and } d < \sqrt{2 n}\right\} = \frac{1}{2}\,\left|\langle\!\langle n \rangle\!\rangle_2 \right|.
$$
By Lemma \ref{lempkjsljdf90sf90s8}, $\langle\!\langle n \rangle\!\rangle_{2}$ is a Dyck word, so $\ell_{ab}\left(\langle\!\langle n \rangle\!\rangle_{2} \right)$ is well-defined. In virtue of Lemma \ref{lemjsiojdoifjosdjfofjs79}, an integer $n \geq 1$ is not even-trapezoidal if and only if
$$
\ell_{ab}\left(\langle\!\langle n \rangle\!\rangle_{2} \right) = \frac{1}{2}\,\left|\langle\!\langle n \rangle\!\rangle_2 \right|.
$$
This last condition holds if and only if there is $k \in \mathbb{Z}_{\geq 1}$ such that $\langle\!\langle n \rangle\!\rangle_{2} = a^k \, b^k$, because $\langle\!\langle n \rangle\!\rangle_{2}$ is a Dyck word. Therefore, $n$ is not even-trapezoidal if and only if $\langle\!\langle n \rangle\!\rangle_2 \in \left\{ a^k \, b^k: \quad k \in \mathbb{Z}_{\geq 1} \right\}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{teojosjdiojiosdfjs}]
Define the sets
\begin{eqnarray*}
R &:=& \left\{2^m: \quad m \in \mathbb{Z}_{\geq 0} \right\}, \\
S &:=& \left\{n \in \mathbb{Z}_{\geq 1}: \quad \begin{array}{c}
\neg\left(n \textrm{ even-trapezoidal}\right) \textrm{ and} \\
\neg\left(n \textrm{ semi-perimeter of a Pythagorean triangle}\right)
\end{array} \right\}.
\end{eqnarray*}
Consider the languages
\begin{eqnarray*}
L_R &=& \left\{ ab \right\}, \\
L_S &=& \left\{ a^k \, b^k: \quad k \in \mathbb{Z}_{\geq 1} \right\} \cap \left(ab \right)^{\ast}.
\end{eqnarray*}
In virtue of Lemmas \ref{Lemj89jw9efw98j98je9}, \ref{Lem89ru34899r834ur9} and \ref{Lemksjdlkjfkljsfl},
$$
R = \bigcup_{w \in L_R} \gamma^{-1}(w)\in \mathfrak{U}_{\textbf{KR}_{2}} \textrm{ and } S = \bigcup_{w \in L_S} \gamma^{-1}(w)\in \mathfrak{U}_{\textbf{KR}_{2}},
$$
where $\gamma$ is from $\textbf{KR}_{\lambda} = \left(\mathcal{U}, \Sigma, \gamma \right)$. Furthermore, $\gamma\left( R\right) = L_R \cap \mathcal{L}_{2}$ and $\gamma\left( S\right) = L_S \cap \mathcal{L}_{2}$.
It easily follows that $L_R = L_S$. So, $\gamma\left( R\right) = \gamma\left( S\right)$. By Definition \ref{Defundnudsufbsubfuywwe89}, ``$R = S$'' is provable in $\textbf{KR}_{2}$. Using Lemma \ref{lemfundu3498ur9483ur983u9ru39}, we conclude that $R = S$.
\end{proof}
\subsection{Densely divisible numbers}
The so-called $\lambda$-densely divisible numbers were introduced in \cite{polymath8} by the project \emph{polymath8}, led by Terence Tao.
\begin{definition}\label{defju87fd97s7f7ds987f98sd7}
Consider a real number $\lambda > 1$. Let $n \geq 1$ be an integer. We say that $n$ is \emph{$\lambda$-densely divisible} if $\textrm{blocks}_{\lambda}(n) = 1$.
\end{definition}
Again, it can be proved in a straightforward way that powers of $2$ are $2$-densely divisible numbers. But it is more complicated to prove the converse: every
$2$-densely divisible number that is not the semi-perimeter of a Pythagorean triangle must be a power of $2$.
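For experimentation, $2$-dense divisibility can also be tested directly. The sketch below assumes the standard characterisation (equivalent to $\textrm{blocks}_{\lambda}(n) = 1$) that $n$ is $\lambda$-densely divisible exactly when consecutive divisors of $n$ never differ by more than a factor of $\lambda$; the function name is our own:
\begin{verbatim}
# n is lambda-densely divisible iff the sorted divisors
# 1 = d_1 < ... < d_k = n satisfy d_{i+1} <= lambda * d_i for all i
def densely_divisible(n, lam=2.0):
    divs = sorted(d for d in range(1, n + 1) if n % d == 0)
    return all(b <= lam * a for a, b in zip(divs, divs[1:]))

# powers of 2 are 2-densely divisible: consecutive divisors
# differ by exactly a factor of 2
assert all(densely_divisible(2 ** m) for m in range(12))
\end{verbatim}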
\begin{theorem}\label{teo8u8439u98ur39ur93}
Let $n \geq 1$ be an integer. We have that $n$ is a power of $2$ (including $n = 1$) if and only if both $n$ is $2$-densely divisible and it is not the semi-perimeter of a Pythagorean triangle. Furthermore, this theorem is provable in $\textbf{KR}_{2}$.
\end{theorem}
We will use the following auxiliary results.
\begin{lemma}\label{propiojsirjsdoijfdoisjfdo}
Let $\lambda > 1$ be a real number.
For any integer $n \geq 1$, we have that $\langle\!\langle n \rangle\!\rangle_{\lambda}$ is irreducible (i.e. $\langle\!\langle n \rangle\!\rangle_{\lambda} \in \mathcal{D}^{irr}$) if and only if $n$ is $\lambda$-densely divisible.
\end{lemma}
\begin{proof}
It is the case corresponding to $\textrm{blocks}_{\lambda}(n) = 1$ in Lemma \ref{propmmknnjknjknnbvvgcgcf}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{teo8u8439u98ur39ur93}]
Define the sets
\begin{eqnarray*}
R &:=& \left\{2^m: \quad m \in \mathbb{Z}_{\geq 0} \right\}, \\
S &:=& \left\{n \in \mathbb{Z}_{\geq 1}: \quad \begin{array}{c}
\left(n \textrm{ 2-densely divisible}\right) \textrm{ and} \\
\neg\left(n \textrm{ semi-perimeter of a Pythagorean triangle}\right)
\end{array} \right\}.
\end{eqnarray*}
Consider the languages
\begin{eqnarray*}
L_R &=& \left\{ ab \right\}, \\
L_S &=& \left\{ w \in \mathcal{D}: \quad w \textrm{ irreducible} \right\} \cap \left(ab \right)^{\ast}.
\end{eqnarray*}
In virtue of Lemmas \ref{Lemj89jw9efw98j98je9}, \ref{Lem89ru34899r834ur9} and \ref{propiojsirjsdoijfdoisjfdo},
$$
R = \bigcup_{w \in L_R} \gamma^{-1}(w)\in \mathfrak{U}_{\textbf{KR}_{2}} \textrm{ and } S = \bigcup_{w \in L_S} \gamma^{-1}(w)\in \mathfrak{U}_{\textbf{KR}_{2}},
$$
where $\gamma$ is from $\textbf{KR}_{2} = \left(\mathcal{U}, \Sigma, \gamma \right)$. Furthermore, $\gamma\left( R\right) = L_R \cap \mathcal{L}_{2}$ and $\gamma\left( S\right) = L_S \cap \mathcal{L}_{2}$.
It easily follows that $L_R = L_S$. So, $\gamma\left( R\right) = \gamma\left( S\right)$. By Definition \ref{Defundnudsufbsubfuywwe89}, ``$R = S$'' is provable in $\textbf{KR}_{2}$. Using Lemma \ref{lemfundu3498ur9483ur983u9ru39}, we conclude that $R = S$.
\end{proof}
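As a numerical sanity check of Theorem \ref{teo8u8439u98ur39ur93} for small $n$, one can combine the \texttt{densely\_divisible} sketch above with a brute-force test for semi-perimeters of Pythagorean triangles:
\begin{verbatim}
# n is the semi-perimeter of a Pythagorean triangle iff
# 2n = a + b + c for some a <= b < c with a^2 + b^2 = c^2
def pythagorean_semiperimeter(n):
    s = 2 * n
    for a in range(1, s // 3 + 1):
        for b in range(a, (s - a) // 2 + 1):
            c = s - a - b
            if a * a + b * b == c * c:
                return True
    return False

for n in range(1, 200):
    power_of_two = n & (n - 1) == 0
    assert power_of_two == (densely_divisible(n, 2.0)
                            and not pythagorean_semiperimeter(n))
\end{verbatim}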
\section{Conclusions}
In this paper we showed that some non-trivial elementary number-theoretic theorems can be transformed into relationships among formal languages and then proved by rather trivial arguments from language theory.
\section*{Acknowledgements}
The author thanks S. Brlek and C. Reutenauer for their valuable comments and suggestions concerning this research.
\section{Introduction}
Theoretical prediction of two inertial ranges, a consequence of
both the energy and enstrophy
conservation laws of the two-dimensional Euler equations, was and still
is
one of the most remarkable achievements of statistical hydrodynamics
[1]. A direct
and most important outcome of these conservation laws is the fact that if
a fluid is stirred by a random (or non-random) forcing, acting on a scale
$l_{f}=1/k_{f}$, the produced energy is spent on creation of the large-scale
($l>l_{f}$)
flow which cannot be dissipated in the limit of large Reynolds number
$\nu\rightarrow 0$. This means that the dissipation terms are irrelevant
in the inverse cascade range. Since the dissipation contributions are one of
the most difficult obstacles on the road toward turbulence theory
(see below), one can
hope that in two dimensions the situation is greatly simplified. This hope is
supported by recent numerical and physical experiments showing that as long as
the integral scale $L_{i}\propto t^{\frac{3}{2}}$ is much smaller than the
size of the system, the velocity field at the scales $L_{i}>>l>>l_{f}$ is a
stationary close-to-gaussian process characterized by the structure functions
\begin{equation}
S_{n}=\overline{(u(x+r)-u(x))^{n}}\equiv \overline{(\Delta u)^{n}}\propto
(Pr)^{\frac{n}{3}}
\end{equation}
\noindent where the pumping rate $P$ is defined below [2]-[4]. Moreover, both
numerical and physical experiments were not accurate enough to measure
\begin{equation}
s_{2n+1}=\frac{S_{2n+1}}{S_{2}^{\frac{2n+1}{2}}}<<1
\end{equation}
\noindent which were too small. This means that the observed
probability density
$P(\Delta u)$ was very close to a symmetric one. This experimental fact
differs from the
outcome of the measurements in three dimensions
where $s_{n}$'s are very large
when $n$ is not small. Thus, the absence of strong (maybe any)
intermittency
in two-dimensional turbulence and proximity of the statistics of velocity
field to gaussian make the problem seem tractable.
\noindent The equations of motion are (density $\rho\equiv 1$):
\begin{equation}
\partial_{t}v_{i}+v_{j}\partial_{j}v_{i}=-\partial_{i}p+\nu\nabla^{2}v_{i}+f_{i}
\end{equation}
\noindent and
\begin{equation}
\partial_{i}v_{i}=0
\end{equation}
\noindent where $\bf f$ is a forcing function mimicking the
large-scale turbulence production mechanism and in a statistically
steady state the mean pumping rate $P
=\overline{{\bf f\cdot v}}$. In the inverse cascade range the
dissipation terms in (3) will be irrelevant. Neglecting them and multiplying (3)
by $v_{i}$ we
readily obtain
\begin{equation}
E=\frac{1}{2}\overline{v^{2}}=Pt
\end{equation}
\noindent Thus, in this case the energy linearly grows with time.
In this paper we define the force correlation function as:
\begin{equation}
<f_{i}({\bf k})f_{j}({\bf k'})>\propto
P(\delta_{ij}-\frac{k_{i}k_{j}}{k^{2}})
\frac{\delta(k-k_{f})}{k}\delta({\bf k+k'})\delta(t-t')
\end{equation}
\noindent so that
\begin{equation}
\overline{(f(x+r)-f(r))^{2}}\propto P(1-Cos(k_{f}r))
\end{equation}
\noindent It will be clear below that the forcing term enters the equations
for the probability density of velocity differences exclusively through the
expression (7) and that in the limit $k_{f}r<<1$ its contribution is
$O((k_{f}r)^{2})$,
which is a well-known fact. In the energy range of interest in this work,
$k_{f}r>>1$ and the oscillating contribution can be neglected, leading to
disappearance of the forcing scale from the equation for the PDF. Thus the general
expression for the structure functions is:
\begin{equation}
S_{n}(r)\propto (Pr)^{\frac{n}{3}}(\frac{r}{L_{i}(t)})^{\delta_{n}}
\end{equation}
\noindent where the exponents $\delta_{n}$ denote possible deviations from the
Kolmogorov scaling. If a statistically steady state exists in the limit
$L_{i}>>l>>l_{f}$, then all $\delta_{n}=0$ since $L_{i}\propto
t^{\frac{3}{2}}$. This would be a proof of ``normal'' (Kolmogorov) scaling
in the inverse cascade range, provided one can show that the PDF $P(\Delta u)$
in the inertial range is independent of its counterpart in the interval
$l\approx l_{f}$. This is the subject of the present paper which is organized
as follows. In the next Section the equations for the generating functions
are introduced. Section 3 is devoted to a short analysis of the Polyakov
theory of Burgers turbulence some aspects of which are used in this paper.
Some physical considerations, which are the basis for the developing theory,
are presented in Section 4.
In Sections 5 and 6 the equations for the transverse and longitudinal
probability density functions are derived and solved. Summary and discussion
are presented in Section 7.
Now we would like to recall some well-known properties of velocity
correlation functions in incompressible fluids, needed below.
Consider two points
${\bf x}$ and ${\bf x'}$ and define ${\bf r}={\bf x-x'}$. Assuming that
the $x$-axis is parallel to the displacement vector ${\bf r}$, one
can find that in the two-dimensional flow $d=2$
for the separation $r$ in the
inertial range [5]-[7]:
\begin{equation}
\frac{1}{r^{d+1}}\partial_{r}r^{d+1}S_{3}=\frac{12}{d}{\cal E}
\end{equation}
\noindent giving
\begin{equation}
S_{3}=\overline{(\Delta u)^{3}}
\equiv\overline{(u(x')-u(x))^{3}}\approx \frac{12}{d(d+2)}{\cal E}r
\end{equation}
\noindent and
\begin{equation}
S^{t}_{3}=\overline{(\Delta v)^{3}}
\equiv\overline{(v(x')-v(x))^{3}}\approx 0
\end{equation}
\noindent where $u$ and $v$ are the components of velocity field parallel
and perpendicular to the $x$-axis (vector ${\bf r})$. The relations (9)-(11)
resulting from equations of motion (3) are dynamic properties of the velocity
field. Kinematics also gives something interesting:
\begin{equation}
\frac{1}{r^{d-1}}\frac{d}{dr}r^{d-1}S_{2}=(d-1)S_{2}^{t}\equiv\overline{(\Delta
v)^{2}}
\end{equation}
\noindent and in two dimensions we have:
\begin{equation}
S_{3t}\equiv \overline{\Delta u (\Delta v)^{2}}=\frac{1}{3}\frac{d}{dr}S_{3}
\end{equation}
\section{Equation for Generating Function}
\noindent
We consider the $N$-point generating function:
\begin{equation}
Z=<e^{\lambda_{i}\cdot {\bf v(x_{i})}}>
\end{equation}
\noindent where the vectors ${\bf x_{i}}$ define the positions of the points
denoted $1\leq i \leq N$.
Using the incompressibility condition,
the equation for $Z$ can be written:
\begin{equation}
\frac{\partial Z}{\partial t}+\frac{\partial^{2} Z
}{\partial \lambda_{i,\mu}\partial x_{i,\mu}}=I_{f}+I_{p}
\end{equation}
\noindent with
\begin{equation}
I_{f}=\sum_{j} <{\bf \lambda_{j}\cdot f(x_{j})}e^{\lambda_{i}u(x_{i})}>
\end{equation}
\begin{equation}
I_{p}=-\sum_{j}\lambda_{j}<e^{\lambda_{i}u(x_{i})}\frac{\partial p(x_{j})}{\partial x_{j}}>
\end{equation}
\noindent The dissipation contributions have been neglected here as
irrelevant.
In what follows we will be mainly interested
in the probability density function of the two-point velocity
differences, which is obtained from (7)-(10), setting
$\bf{\lambda_{1}+\lambda_{2}}=0$ (see Ref. [8] and the theory developed
below),
so that
\begin{equation}
Z=<exp{(\bf{\lambda\cdot U})}>
\end{equation}
\noindent where
\begin{equation}
{\bf U}={\bf u(x')-u(x)}\equiv \Delta {\bf u}
\end{equation}
\noindent
The moments of the two-point velocity differences
in
homogeneous and isotropic turbulence can depend only on
the absolute values of two vectors
(velocity difference ${\bf v(x')-v(x)}$ and displacement
${\bf r\equiv x'-x}$) and the angle $\theta$ between them, with $\theta=\pi/2$
and $\theta=0$ corresponding to transverse and longitudinal structure
functions, respectively.
\noindent
It is easy to show [5]-[6] that the
general form of the second-order
structure function in the inertial range is:
\begin{equation}
S_{2}(r,\theta)= \frac{2+\xi_{2}}{2}D_{LL}(r)(1-\frac{\xi_{2}}{2+\xi_{2}}cos^{2}(\theta))
\end{equation}
\noindent with $D_{LL}(r)=<(u(x)-u(x+r))^{2}>$.
A more involved relation can
be written for the fourth-order moment:
\begin{equation}
S_{4}(r,\theta)=D_{LLLL}(r)cos^{4}(\theta)-3D_{LLNN}(r)sin^{2}(2\theta)+
D_{NNNN}(r)sin^{2}(\theta)
\end{equation}
\noindent where $D_{LLNN}=<(v(x)-v(x+r))^{2}(u(x)-u(x+r))^{2}>$
and $v$ and $u$ are the components of the velocity field perpendicular
and parallel to the $x$-axis, respectively. In general,
in the limit $cos(\theta)\equiv s\rightarrow \pm 1$, corresponding to the moments of the
longitudinal velocity differences
$S_{n}(r,s)\rightarrow S_{n}(r)cos^{n}(\theta)$.
This means that in this limit
$Z(\lambda,r,s)\rightarrow Z(\lambda s,r)\equiv Z(\lambda_{x},r)$.
The generating function can depend only on three variables:
$$\eta_{1}=r;~~ \eta_{2}=\frac{{\bf \lambda\cdot r}}{r}\equiv
\lambda cos(\theta);~~ \eta_{3}=\sqrt{\lambda^{2}-\eta_{2}^{2}};$$
In these variables:
\begin{equation}
Z_{t}+[\partial_{\eta_{1}}\partial_{\eta_{2}}+\frac{d-1}{r}\partial_{\eta_{2}}
+\frac{\eta_{3}}{r}\partial_{\eta_{2}}\partial_{\eta_{3}}+\frac{(2-d)\eta_{2}}{r\eta_{3}}\partial_{\eta_{3}}-\frac{\eta_{2}}{r}\partial^{2}_{\eta_{3}}]Z=
I_{f}+I_{p}
\end{equation}
\noindent where
\begin{equation}
I_{p}=
\lambda_{i}<(\partial_{2,i} p(2)-\partial_{1,i} p(1))e^{\bf \lambda\cdot U}>
\end{equation}
\noindent and
\begin{equation}
I_{f}=(\eta_{2}^{2}+\eta_{3}^{2})P(1-Cos(k_{f}r))Z
\end{equation}
\noindent where, to simplify notation, we set $\partial_{i,\alpha}\equiv
\frac{\partial}{\partial x_{i,\alpha}}$ and $v(i)\equiv v({\bf x_{i}})$.
\noindent In two dimensions
the equation for the generating function becomes, with $P=1$
(the subscript $o$ is omitted hereafter):
\begin{equation}
[\partial_{\eta_{1}}\partial_{\eta_{2}}+\frac{1}{r}\partial_{\eta_{2}}+
\frac{\eta_{3}}{r}\frac{\partial^{2}}{\partial_{\eta_{2}}\partial{\eta_{3}}}
-\frac{\eta_{2}}{r}\frac{\partial^{2}}{\partial \eta_{3}^{2}}-
(\eta_{2}^{2}+\eta_{3}^{2})]Z=I_{p}
\end{equation}
\noindent The generating function can be written as:
\begin{equation}
Z=<e^{\eta_{2}\Delta u + \eta_{3}\Delta v}>
\end{equation}
\noindent so that any correlation function
\begin{equation}
<(\Delta u)^{n}(\Delta v)^{m}>=\frac{\partial^{n}}{\partial
\eta_{2}^{n}}\frac{\partial^{m}}{\partial \eta_{3}^{m}}Z(\eta_{2}=\eta_{3}=0)
\end{equation}
\noindent Neglecting the pressure term $I_{p}$ and differentiating (25) once
over $\eta_{2}$ we obtain immediately
\begin{equation}
\frac{1}{r}\frac{d}{dr}rS_{2}=S_{2}^{t}
\end{equation}
\noindent Second differentiation (again neglecting $I_{p}$) gives:
\begin{equation}
\frac{1}{r}\frac{d}{dr}rS_{3}-\frac{2}{r}S_{3t}-2=0
\end{equation}
\noindent Combined with (13) this expression gives
\begin{equation}
\frac{1}{r^{3}}\frac{d}{dr}r^{3}S_{3}-6=0
\end{equation}
\noindent which is nothing but the Kolmogorov relation, derived in 2d without
contributions from the pressure terms. It follows from (25) that it is
reasonable
to look for a scaling solution $Z(\eta_{2},\eta_{3},r)=Z(X_{2},X_{3})$ where
$X_{i}=\eta_{i}r^{\frac{1}{3}}$.
\section{Polyakov's theory of Burgers turbulence}
The dissipation-generated
contributions $O(\nu \nabla^{2} \overline{u_{i}u_{j}})$ do not vanish in the limit
$\nu\rightarrow 0$. This is a consequence of the ultra-violet singularity
$\nabla^{2}\overline{u_{i}(x)u_{j}(x+r)}\rightarrow \infty$ when $r\rightarrow 0$ making the
theory (the closure problem) extremely difficult.
The expression for this ``dissipation anomaly'', part of the equation
for
the generating function, was developed by Polyakov for
the problem of the one-dimensional Burgers equation stirred by the random
force [8]. Theory of two-dimensional turbulence is free from the troubles
coming from the
ultra-violet (dissipation) singularities. Still, here we review some of
the aspects of
Polyakov's theory which
we believe are
of general interest and which will be most helpful below.
Polyakov considered a one-dimensional problem [8]:
\begin{equation}
u_{t}+uu_{x}=f+\nu u_{xx}
\end{equation}
\noindent where the random force is defined by the correlation function
\begin{equation}
\overline{f(x,t)f(x+r,t')}=\kappa(r)\delta(t-t')
\end{equation}
\noindent The equation for the generating function, analogous to (14), is written
readily:
\begin{equation}
Z_{t}+\lambda_{j}\frac{\partial}{\partial
\lambda_{j}}\frac{1}{\lambda_{j}}\frac{\partial Z}{\partial r}=
\kappa(r_{ij})\lambda_{i}\lambda_{j}Z+D
\end{equation}
\noindent where
\begin{equation}
D=\nu \lambda_{j}<u''(x_{j},t)e^{\lambda_{k}u(x_{k},t)}>
\end{equation}
\noindent In the limit $r_{ij}\rightarrow 0$ the force correlation function
$\kappa(r_{ij})=O(1-r_{ij}^{2})$ which imposes scaling properties of the
velocity correlation functions. In general, the generating function depends
on both velocity differences $U_{-}=\Delta u=u(x_{i})-u(x_{j})$ and sums
$U_{+}=u(x_{i})+u(x_{j})$ which makes the problem very difficult.
Defining Galilean invariance as independence of the correlation functions of
``non-universal'' single-point $u_{rms}^{2}=\overline{u^{2}}$,
Polyakov assumed that if all $|U_{-}|<<u_{rms}$ then $U_{-}$ and $U_{+}$ are
statistically independent and $\sum \lambda_{i}=0$.
In this case (see (8)), introducing
$\mu=\lambda_{2}-\lambda_{1}$ and the two-point generating function
\begin{equation}
Z(\mu)=<e^{\mu \Delta u}>
\end{equation}
\noindent the equation for $Z$ reads in a steady
state:
\begin{equation}
(\frac{\partial}{\partial \mu}-\frac{1}{\mu})\frac{\partial }{\partial r}Z=
-r^{2}\mu^{2}Z+D
\end{equation}
\noindent where
\begin{equation}
D=\mu \nu<(u''(x+r)-u''(x))e^{\mu\Delta u}>
\end{equation}
\noindent It is clear that the $O(r^2)$ forcing term imposes the scaling
variable $\xi=\mu r$ and $Z=F(\mu r)$ where $F$ is a solution of the following equation:
\begin{equation}
\xi F''-F'+\xi^{2}F=D
\end{equation}
\noindent The problem is in evaluation of the dissipation
contribution $D$.
At first glance one can attempt to neglect $D$ and solve the resulting
equation. This is not so simple, however. The Laplace transform of (38) gives an
equation for the probability density $P=\frac{1}{r}\Phi(\frac{U}{r})
\equiv \frac{1}{r}\Phi(X)$
$$\Phi''+X^{2}\Phi'+X\Phi=0$$
\noindent Introducing
\begin{equation}
\Phi=Exp(-\frac{X^{3}}{6})\Psi
\end{equation}
\noindent gives
\begin{equation}
\Psi''=(\frac{X^{4}}{4}-2X)\Psi
\end{equation}
\noindent which is the Schr\"{o}dinger equation for a particle in a potential
$U(X)=X^{4}/4-2X$ not having any positive solutions.
\noindent The positivity of
the probability density is a severe constraint on a possible solution of the
equation of motion. That is where the dissipation contribution $D$ comes to
the rescue. Polyakov proposed a self-consistent conjecture about the structure
of the dissipation term
\begin{equation}
D=(\frac{b}{\mu}+a)Z
\end{equation}
\noindent modifying the potential in the Schr\"{o}dinger equation with the
coefficients $b$ and $a$
chosen to produce the zero-energy ground state corresponding to positive PDF.
According to Ref. [8] this expression is the only one satisfying the Galilean
invariance of the small-scale dynamics.
\noindent The fact that the one or multi-dimensional advection
contributions to the equation for the generating function do not lead to
positive solutions for the PDF is a general phenomenon (see below). The
importance of Polyakov's theory is, among other things,
in realization that the dynamic closures for the remaining terms
must remove this problem. This
dramatically narrows the allowed classes of closures. Thus, the
expressions for $D$ or the pressure terms (see below), combined with
advective contributions to the equation for $Z$, can be correct if and only if
they lead
to positive
solutions for the PDF's in the entire range where $|\Delta u|<<u_{rms}$ and $r<<L_{i}$.
\section{Physical Considerations}
The problem of two-dimensional turbulence is simplified by the fact that the
dissipation contributions are irrelevant on the scales $l>>l_{f}$ we are
interested in. Moreover, since $u_{rms}$ grows with time,
the statistically steady small-scale velocity differences
$U_{-}=\Delta u$ with $r<<L(t)$ must be
decoupled from $U_{+}$ in (25). This means that the terms
\begin{equation}
\overline{(\Delta u)^{n}(\Delta v)^{m}}
\end{equation}
\noindent can enter the equation for $P(\Delta u,r)$ while the ones,
involving
\begin{equation}
\overline{(\Delta u)^{n}(\Delta v)^{m}U_{+}^{p}}
\end{equation}
\noindent cannot. In principle, it can happen that the
$U_{-}U_{+}$-correlation functions can sum up into something time-independent.
However, at present we discard this bizarre possibility.
\noindent Next, the pressure gradients
\begin{equation}
\nabla p(x+r)-\nabla p(x)
\end{equation}
\noindent appearing in the equations (22)-(24) for $Z$
involve integrals
over the entire
space. It is clear that, if the steady state exists,
the large-scale contribution to the pressure integrals,
depending on $L=L(t)$ cannot
contribute to the small-scale steady-state dynamics, described by (25).
That is why the pressure contributions to $I_{p}$ (23)
must depend exclusively on the local scale $r$. This leads us to
an assumption that the pressure gradients in (23) are local
in the sense that they can be expressed in terms of the velocity field at the
points $x$ and $x+r$. Applications of these
considerations are presented below.
The theory of Burgers turbulence dealt with the ``universal'' part of
the dynamics, i.e. with the moments of velocity difference $S_{n}$ with
$n<1$. The theory of
two-dimensional turbulence, we are interested in, must produce the
moments with $n<\infty$ and that is why the algebraic expressions for the
PDF's, characteristic of Burgers dynamics, are irrelevant. In addition, we
expect the small-scale dynamics in 2d to be independent of the forcing
function. This makes this problem very different.
\section{Transverse Structure Functions}
Unlike the probability density function for the longitudinal
velocity differences
$P(\Delta u,r)$,
the transverse velocity difference probability density
is symmetric, i.e. $P(\Delta v,r)=P(-\Delta v,r)$.
We are interested in the equation (25) in the limit $\eta_{2}\rightarrow 0$.
Let us first discuss some of the general properties of incompressible
turbulence. Consider the forcing function
$${\bf f}(x,y)=(f_{x}(x,y),0)$$
In this case the equation (25) is:
\begin{equation}
[\partial_{\eta_{1}}\partial_{\eta_{2}}+\frac{1}{r}\partial_{\eta_{2}}+
\frac{\eta_{3}}{r}\frac{\partial^{2}}{\partial_{\eta_{2}}\partial{\eta_{3}}}
-\frac{\eta_{2}}{r}\frac{\partial^{2}}{\partial \eta_{3}^{2}}-
\eta_{2}^{2}]Z=I_{p}
\end{equation}
\noindent Then, setting $\eta_{2}=0$ removes all
information about the forcing
function from the equation of motion. Based on our general intuition and
numerical data we know that two flows stirred by a one-component
or by a two-component (statistically isotropic) forcing function are
identical at the scales $l>>l_{f}$, provided the total fluxes
generated by these
forcing functions are equal. This happens due to pressure terms
$$\Delta p=-\nabla_{i}\nabla_{j}v_{i}v_{j}$$
\noindent effectively mixing various components of the velocity field. This
universality, i.e. independence of the small-scale turbulence on the
symmetries of
the forcing, enables us to write an expression for the $I_{p}$
contribution to (25).
According to the considerations presented in the previous section, the pressure
gradients in the equation (25) are local and their dynamic
role is in mixing various components of velocity
field. Thus the only contribution to $I_{p}$, not vanishing in the limit
$\eta_{2}\rightarrow 0$, can be estimated as:
\begin{equation}
b\frac{\eta_{3}}{r}<\Delta u \Delta v e^{\eta_{2}\Delta u+\eta_{3}\Delta v}>=
b\frac{\eta_{3}}{r}
\frac{\partial}{\partial \eta_{2}}
<\Delta v e^{\eta_{2}\Delta u+\eta_{3}\Delta v}>
\end{equation}
\noindent Using a theorem (see Frisch [7], for example)
that for a random gaussian
process $\xi$ (see below)
\begin{equation}
<\xi F(\xi)>=\overline{\xi^{2}}<\frac{\partial F(\xi)}{\partial \xi}>
\end{equation}
\noindent we derive in the limit $\eta_{2}\rightarrow 0$
\begin{equation}
I_{p}\approx b\eta_{3}^{2}\frac{\overline{(\Delta v)^{2}}}{r}\frac{\partial
Z_{3}}{\partial \eta_{2}}
\end{equation}
\noindent
Substituting this into (25) and integrating over $\eta_{2}$ gives in the limit
$\eta_{2}\rightarrow 0$:
\begin{equation}
\frac{\partial Z_{3}}{\partial r}+\frac{Z_{3}}{r}+\frac{\eta_{3}}{r}\frac{\partial Z_{3}}{\partial
\eta_{3}}-\frac{\gamma}{r^{\frac{1}{3}}}\eta_{3}^{2}Z_{3}+
\Omega(\eta_{3})=\Gamma(\eta_{3})
\end{equation}
\noindent where $\gamma$ is an undetermined parameter and an arbitrary function
$$\Gamma(\eta_{3})=Z_{3}/r+\Omega(\eta_{3})$$
\noindent with
$$-\Omega(\eta_{3})=
\lim_{\eta_{2}\rightarrow 0}~\eta_{3}^{2}\int Z(\eta_{2},\eta_{3},r)d\eta_{2}$$
\noindent is chosen to satisfy
a trivial constraint
$Z_{3}(\eta_{3}=0,r)=1$ and the above-mentioned universality.
\noindent
This gives:
\begin{equation}
\frac{\partial Z_{3}}{\partial r}+\frac{\eta_{3}}{r}\frac{\partial Z_{3}}{\partial
\eta_{3}}-\frac{\gamma}{r^{\frac{1}{3}}}\eta_{3}^{2}Z_{3}=0
\end{equation}
\noindent where $Z_{3}=Z(\eta_{2}=0,\eta_{3})$.
This equation is invariant under
the transformation $\eta_{3}\rightarrow -\eta_{3}$. It is important that the
$O(\eta_{3}^{2})$ contribution to (50) comes from the pressure term but not
from the forcing, present in the original equation (25).
Seeking a solution
to this
equation in a scaling form
$Z_{3}(\eta_{3},r)=Z(\eta_{3}r^{\frac{1}{3}})\equiv Z(X)$ gives:
\begin{equation}
\frac{4X}{3}Z_{X}=\gamma X^{2}Z
\end{equation}
\noindent and
\begin{equation}
Z=Exp(\frac{3\gamma}{8}\eta_{3}^{2}r^{\frac{2}{3}})
\end{equation}
This generating function corresponds to the gaussian distribution of
transverse velocity differences $P(\Delta v)$ with the second-order structure
function
\begin{equation}
S_{2}^{t}(r)=\overline{(\Delta v)^{2}}=\frac{3\gamma}{4}r^{\frac{2}{3}}
\end{equation}
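As a quick consistency check, the following \texttt{sympy} sketch (a verification aid only, not part of the derivation) confirms that (52) satisfies equation (50) and reproduces (53):
\begin{verbatim}
# check that Z = exp((3*gamma/8) * eta3^2 * r^(2/3)) satisfies (50)
# and that two eta3-derivatives at eta3 = 0 give S_2^t = (3*gamma/4) r^(2/3)
import sympy as sp

eta3, r, gamma = sp.symbols('eta3 r gamma', positive=True)
Z = sp.exp(sp.Rational(3, 8) * gamma * eta3**2 * r**sp.Rational(2, 3))

lhs = (sp.diff(Z, r) + (eta3 / r) * sp.diff(Z, eta3)
       - gamma * r**sp.Rational(-1, 3) * eta3**2 * Z)
assert sp.simplify(lhs) == 0

S2t = sp.diff(Z, eta3, 2).subs(eta3, 0)
assert sp.simplify(S2t - sp.Rational(3, 4) * gamma * r**sp.Rational(2, 3)) == 0
\end{verbatim}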
The equation (50) corresponds to a one-dimensional
linear Langevin equation for the ``velocity field'' $V=v/(Pr)^{\frac{1}{3}}$
\begin{equation}
v_{\tau}(x)=-v(x)+\phi(x,\tau)
\end{equation}
\noindent where $\tau\propto tr^{-\frac{2}{3}}P^{\frac{1}{3}}$ and
the non-local gaussian ``universal'' forcing $\phi(x,\tau)$,
generated by the
nonlinearity of the original equation, is defined by the correlation function
\begin{equation}
\overline{\phi(k,\tau)\phi(k',\tau')}\propto
\delta(k+k')\delta(\tau-\tau')
\end{equation}
\noindent The generating function for the field $V$ is
$$z=<e^{XV}>$$
\noindent Since $\tau\propto tr^{-\frac{2}{3}}$ and $V\propto
vr^{-\frac{1}{3}}$
this equation is strongly non-local. It becomes local, however, in the
wave-number space. This will be discussed later.
\noindent Now we can attempt to justify the relation (46). According to (23)
and taking into account that the $x$-axis is parallel to the displacement $r$
in the limit $\eta_{2}\rightarrow 0$
$$I_{p}\approx \eta_{3}<(\partial_{y}p(0)-\partial_{y'} p(r))
Exp(\eta_{3}\Delta v+\eta_{2}\Delta u)>$$
\noindent where
$$\partial_{y}p(0)-\partial_{y'} p(r)=\int
k_{y}(1-e^{ik_{x}r})[\frac{k_{x}^{2}}{k^{2}}u(q)u(k-q)+\frac{k_{y}^{2}}{k^{2}}v(q)v(k-q)+\frac{k_{x}k_{y}}{k^{2}}u(q)v(k-q)]d^{2}kd^{2}q$$
\noindent and the exponent is expressed simply as:
$$e^{\eta_{3}\Delta v +\eta_{2}\Delta u}=
Exp[\eta_{3}\int (1-e^{iQ_{x}r})v(Q)d^{2}Q +
\eta_{2}\int (1-e^{iQ_{x}r})u(Q)d^{2}Q]$$
\noindent
It will become clear below that transverse velocity differences $\Delta v$
obey gaussian statistics and the longitudinal ones $\Delta u$
are very close to
gaussian. Then, substituting the above
expressions into $I_{p}$ and expanding
the exponent
we generate an infinite series involving various products of $u(q)$'s and
$v(q)$'s. In the case of the incompressible, statistically isotropic gaussian
velocity field we are dealing with, these products are split
into pairs:
$$<v_{i}(q)v_{j}(Q)>\propto
q^{-\frac{8}{3}}(\delta_{ij}-\frac{q_{i}q_{j}}{q^{2}})\delta(q+Q)$$
\noindent The $k_{y}$ integration is carried out over the interval
$-\infty<k<\infty$, and in the isotropic case we are dealing with, the only
non-zero terms are those involving even powers of $k_{y}$. These terms are
generated by the expansion of
$$e^{\eta_{2} \Delta u}$$
\noindent They, however, being $O(\eta_{2})$,
disappear in the limit $\eta_{2}\rightarrow 0$.
Thus:
$$I_{p}=\eta_{3}\int d^{2}kd^{2}q
k_{y}(1-e^{ik_{x}r})\frac{k_{x}k_{y}}{k^{2}}<u(q)v(k-q)
Exp(\eta_{3}\int (1-e^{iQ_{x}r})v(Q)d^{2}Q +
\eta_{2}\int (1-e^{iQ_{x}r})u(Q)d^{2}Q)>
$$
\noindent where the $O(\eta_{2})$ contribution to the exponent is temporarily
kept to make the transformation
$$\Delta u e^{\eta_{2}\Delta u}=\frac{\partial e^{\eta_{2}\Delta
u}}{\partial \eta_{2}}$$
\noindent to (46) possible. Only after that do we set $\eta_{2}=0$.
This proves that the only contribution to the
equation for the probability density function comes from the $O(\Delta u
\Delta v)$ mixing components, involved in the pressure gradients. This
relation justifies the estimate (46).
\section{Longitudinal Velocity Differences}
\noindent The remarkable fact that in the limit $\eta_{2}\rightarrow 0$
all contributions to the equation (25) contain
$\frac{\partial}{\partial \eta_{2}}$ enables separation of variables:
integrating the resulting equation over $\eta_{2}$ gives the closed equation
for $Z_{3}(\eta_{3})$. The corresponding dynamic
equation is linear, meaning that transverse
velocity fluctuations do not directly contribute to the energy transfer
between different scales. This effect is possible only in 2d where the
$O((d-2)\frac{\partial}{\partial \eta_{3}})$
enstrophy production term in (22), not containing
$\frac{\partial}{\partial \eta_{2}}$, is equal to zero. This simplification,
combined with locality of the pressure-gradient
effects, allowed us to
derive a closed-form expression for $Z_{3}$.
The role of pressure in the dynamics of transverse components of velocity field
is mainly restricted to control of the ``energy redistribution'' necessary
for generation of
isotropic and incompressible velocity field. The longitudinal field dynamics
are much more involved. The advection (pressure excluding) part of
non-linearity tends to produce large
gradients of velocity field (``shock generation''
using the Burgers equation
phenomenology), manifesting itself in creation of a
constant energy flux in the wave-number space.
Pressure is the only factor preventing the shock
formation.
Interested in the longitudinal
correlation functions we set $\eta_{3}=0$. Then, the term in (25)
\begin{equation}
\frac{\eta_{2}}{r}\frac{\partial^{2}Z}{\partial \eta_{3}^{2}}=
\frac{\eta_{2}}{r}<(\Delta v)^{2}e^{\eta_{2}\Delta u}>\approx
\frac{\eta_{2}A_{2}^{t}}{r^{\frac{1}{3}}}Z_{2}+O(\eta_{2}^{2};~\eta_{3}^{2};
~\eta_{2}^{2}\eta_{3})
\end{equation}
\noindent The last relation is accurate since substituting this into (25),
differentiating once over $\eta_{2}$ and setting
both $\eta_{3}=\eta_{2}=0$ gives:
\begin{equation}
\frac{1}{r}\frac{\partial}{\partial r}rS_{2}-\frac{A_{2}^{t}}{r^\frac{1}{3}}=
\frac{\partial I_{p}(0,0)}{\partial \eta_{2}}
\end{equation}
\noindent Since $S_{2}(r)=A_{2}r^{\frac{2}{3}}$ this equation gives:
\begin{equation}
\frac{5}{3}A_{2}-A_{2}^{t}=
r^{\frac{1}{3}}\frac{\partial I_{p}(0,0)}{\partial \eta_{2}}
\end{equation}
\noindent which, according to (12) is exact since
$\frac{\partial I_{p}(0,0)}{\partial \eta_{2}}=0$ (see below).
Let us consider some general properties of the pressure term $I_{p}$ in the
limit $\eta_{3}\rightarrow 0$. We have:
\begin{equation}
I_{p}\approx \eta_{2}<(\frac{\partial p(2)}{\partial x_{2}}-\frac{\partial
p(1)}{\partial x_{1}})
Exp(\eta_{2}\Delta u +\eta_{3}\Delta v)>
\end{equation}
\noindent
Expanding the exponent and recalling that in isotropic and
incompressible turbulence $\overline{\Delta u}=\overline{\Delta v}=0$ and
$\overline{p(x)v_{i}(x')}=0$, we conclude that
\begin{equation}
I_{p}\approx \eta_{2}<(\frac{\partial p(2)}{\partial x_{2}}-\frac{\partial
p(1)}{\partial x_{1}})
(\eta_{2}\Delta u +\eta_{3}\Delta v)^{2}+...>=O(\alpha \eta_{2}^{3}+\beta
\eta_{2}^{2}\eta_{3}+...)
\end{equation}
\noindent It is clear that the relation (48), derived above for the case of
gaussian statistics, satisfies this
general property of the flow. Thus when $\eta_{3}\rightarrow 0$,
we approximate
\begin{equation}
I_{p}\approx c\eta_{2}^{3}Z+G
\end{equation}
\noindent where $c$ is a yet undetermined constant and $G$ denotes the
contributions to $I_{p}$, properly modifying numerical coefficients in the
equation (25). The presence of the $O(\eta_{2}^{3})$ term distinguishes this
equation from the one for transverse PDF considered in the previous section.
There the assumed role of pressure was limited to the mixing of various
components of velocity field. That is why all we accounted for was $O(
\Delta v \Delta u)$ contributions to pressure. Here, in addition we
also consider $O(\eta_{2}^{3})$ contributions, responsible for prevention of
the shock formation.
The resulting equation is:
\begin{equation}
\frac{1}{r^{3}}\frac{\partial^{2}}{\partial \eta_{2}\partial r}r^{3}Z_{2}-
\frac{11}{5r^{\frac{1}{3}}}A_{2}^{t}\eta_{2}Z_{2}-3\eta_{2}^{2}Z_{2}-
c\eta_{2}^{3}Z_{2}=0
\end{equation}
\noindent The Laplace transform of this equation gives an equation for the probability density
$P(\Delta u,r)$:
\begin{equation}
cP_{UUU}-3P_{UU}+\frac{1}{r^{3}}\frac{\partial}{\partial r}r^{3}UP+
\frac{11 A_{2}^{t}}{5}P_{U}=0
\end{equation}
\noindent Seeking a solution in a scaling form (the parameter $c$ will be
determined below)
\begin{equation}
P(U,r)=\frac{1}{r^{\frac{1}{3}}}F(\frac{U}{r^{\frac{1}{3}}})
\end{equation}
\noindent we obtain
\begin{equation}
cF_{xxx}-3F_{xx}+(b-\frac{x^{2}}{3})F_{x}+\frac{8}{3}xF=0
\end{equation}
\noindent
where $b=\frac{11}{3}A_{2}$.
All but one
term in (65) change
sign when $x\rightarrow -x$. The $O(F_{xx})$ symmetry-breaking contribution
is necessary for existence of the non-zero energy flux.
Assuming for the time being, in accord with numerical and physical
experiments, that the flux is small (see relation (2)), we first neglect the
$O(F_{xx})$ contribution, find the solution and then take it into account
perturbatively. The equation is:
\begin{equation}
cF^{o}_{xxx}+(b-\frac{x^{2}}{3})F^{o}_{x}+\frac{8}{3}xF^{o}=0
\end{equation}
\noindent with solution:
\begin{equation}
F^{o}=e^{-\frac{x^{2}}{2A_{2}}}
\end{equation}
\noindent where $c=\frac{A_{2}^{2}}{3}$.
If $A_{2}>>1$, then the neglected $F_{xx}=O(1/A_{2})$ term is small.
This means
that the odd-order moments, computed with the PDF, which is a solution of
(65), must be small in a sense defined by the relation (2). At the same
time the even-order moments must be close to the gaussian ones.
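A direct substitution check of (67) into (66) can be done with \texttt{sympy}; the snippet below (a verification aid only) confirms that $F^{o}$ with the quoted values of $b$ and $c$ is an exact solution:
\begin{verbatim}
# verify that F0 = exp(-x^2/(2*A2)) solves (66) with
# c = A2^2/3 and b = 11*A2/3
import sympy as sp

x, A2 = sp.symbols('x A2', positive=True)
c = A2**2 / 3
b = sp.Rational(11, 3) * A2
F = sp.exp(-x**2 / (2 * A2))

eq66 = (c * sp.diff(F, x, 3) + (b - x**2 / 3) * sp.diff(F, x)
        + sp.Rational(8, 3) * x * F)
assert sp.simplify(eq66) == 0
\end{verbatim}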
Analytic solution of (65) is difficult. However, one can evaluate
all moments $\frac{S_{n}}{r^{\frac{n}{3}}}=A_{n}$ in terms of only one
parameter $A_{2}$:
\begin{equation}
S_{n+1}=-\frac{3}{n+10}(-\frac{A_{2}^{2}}{3}n(n-1)(n-2)S_{n-3}-3n(n-1)S_{n-2}-\frac{11}{3}A_{2}nS_{n-1})
\end{equation}
This relation gives: $A_{1}=0; A_{3}=3/2; A_{4}=3; A_{5}=12.43A_{2}; A_{6}=
15A_{2}^{3}-36; A_{7}=37.71A_{4}$ etc. These numbers can be tested in
numerical experiments. The one-loop renormalized perturbation expansions give
$A_{2}\approx 10$, while numerical simulations are consistent with
$A_{2}\approx 12$. Keeping these numbers in mind, it follows from (68)
that accurate measurements of the odd-order moments are the
only way to verify
predictions of the present theory. The deviations of the
even-order moments from the gaussian ones are too small to be detected by both
physical and numerical experiments. It can be checked that the ratios
$$s_{2n+1}=\frac{S_{2n+1}}{S_{2n}^{\frac{2n+1}{2n}}}$$
\noindent vary in the interval $0.04-0.1$ for $2<n<10$ and $A_{2}\approx 10$.
With $A_{2}\approx 12$ these numbers decrease even more.
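The recursion (68) is straightforward to iterate numerically; the following sketch (illustrative only) computes the amplitudes $A_{n}=S_{n}/r^{\frac{n}{3}}$ and the ratios $s_{2n+1}$ discussed above for a given $A_{2}$:
\begin{verbatim}
# iterate the moment recursion (68): A_0 = 1, A_1 = 0, A_2 given
def moments(A2, nmax):
    A = [1.0, 0.0, A2]
    for n in range(2, nmax):
        A.append((3.0 / (n + 10)) * (
            (A2**2 / 3.0) * n * (n - 1) * (n - 2)
                * (A[n - 3] if n >= 3 else 0.0)
            + 3.0 * n * (n - 1) * A[n - 2]
            + (11.0 / 3.0) * A2 * n * A[n - 1]))
    return A

A = moments(10.0, 22)
for n in range(2, 10):                     # ratios s_5, s_7, ...
    s = A[2*n + 1] / A[2*n] ** ((2*n + 1) / (2*n))
    print(2*n + 1, s)
\end{verbatim}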
\section{Summary and Conclusions}
\noindent The experimentally observed gaussian or very close to it statistics
of transverse velocity differences was extremely puzzling since, at
first glance,
it is incompatible with the non-trivial Kolmogorov scaling
resulting from the strong non-linearity of the problem. The most surprising and
interesting result derived in this paper is that, due to the symmetries of
the problem, the equation governing the probability density function of transverse
velocity differences has one derivative less than the one corresponding to
the longitudinal differences. This means, in turn, that transverse
components of
velocity field are governed by a non-local, linear equation, driven by a
universal, non-local, solution-dependent gaussian force. This reduction,
resembling the super-symmetry effects in field theory, is surprising if not
miraculous.
The non-local equation in the physical space, obtained above,
corresponds to the Langevin equation in the Fourier space:
\begin{equation}
v_{t}(k)+c_{\nu}P^{\frac{1}{3}}k^{\frac{2}{3}}v=f_{R}(k,t)
\end{equation}
\noindent where $c_{\nu}$ is the amplitude of the
``effective'' (turbulent) viscosity and
\begin{equation}
\overline{f_{R}(k,t)f_{R}(k',t')}\propto k^{-1}\delta(k+k')\delta(t-t')
\end{equation}
\noindent used in [9]-[10]
in the renormalization group treatments of fluid turbulence.
\noindent
The irrelevance of the
dissipation terms in two-dimensional turbulence
makes the problem much more tractable than its
three-dimensional counterpart. Still, in order to close the equations
for probability density of velocity field
one needs an expression for the pressure contributions.
The situation is even more simplified by the fact
that the large-scale-dominated single-point variables are time-dependent and
must decouple from the steady-state small-scale dynamics. That is why one can
use an assumption about locality of the pressure gradient effects leaving only
the mixing $O(\Delta u \Delta v)$ contributions to the two-point pressure
difference. It can be tested by simply noting that all other contributions
to the expression for $I_{p}$ involve one or more $U_{+}$'s, leading to
a time-dependent result. This means that they must
disappear from the steady state equations (25) and (45).
The range of
possible
models for pressure is narrowed by a few dynamic and kinematic constraints and
by the fact that the resulting equation must give a positive solution. A simple
calculation shows that the model for the pressure gradient terms, introduced
in this paper, is consistent
with the derived gaussian statistics.
The equations for the PDF of longitudinal velocity differences do not
correspond to linear dynamics. Still, the derived solution only slightly
deviates from
gaussian. This is possible due to the relative smallness of the energy flux
in two dimensions.
The results presented here seem to agree with both physical and numerical
experiments. The obtained close-to-gaussian statistics justifies various
one-loop renormalized perturbation expansions giving $A_{2}\approx
10-12$. Using this number we realize that it is extremely difficult to
experimentally detect deviations from the gaussian statistics.
Still, some fine details of the present theory, related to the pressure
gradient-velocity correlation functions can be tested numerically.
In addition, measurements of a few odd-order moments can shed some light on
validity of the present theory.
The equations and solution presented here leave one question unanswered:
are these
${\bf the}$ solutions or not? Our experience with the Burgers
and 2d Navier-Stokes
equations teaches us that it is very difficult to find a self-consistent closure
leading to a positive solution for the PDF's. Stretching this statement
a bit,
we feel that a closure satisfying dynamic constraints and leading
to a plausible
solution has a great chance of being correct.
\noindent Absence of intermittency in a steady-state
developing inertial range discovered
in two-dimensional turbulence [2]-[4] seems to be a general
phenomenon observed in drift-wave turbulence [11] and in a one-dimensional
model of a passive scalar advected by a compressible velocity field [12].
These observations support our understanding of intermittency as a phenomenon
originating from
interaction of the large and small-scale velocity fluctuations. In a
developing statistically steady
inertial range, where the integral scale is strongly time-dependent,
these interactions must be small for the small-scale steady state to exist.
At the later stages the finite size effects, destroying
time-independence of the small scale
dynamics, lead to formation of coherent structures and new dynamic phenomena
which are beyond the scope of the present theory.
\section{Acknowledgment} I am grateful to A. Polyakov, M. Vergassola,
M. Chertkov, B. Shraiman, Y. Sinai and I. Kolokolov for many
interesting and
illuminating discussions.
\noindent {\bf References}
\\
1. R. H. Kraichnan, Phys. Fluids {\bf 10}, 1417 (1967)
\\
2. L. M. Smith and V. Yakhot, Phys. Rev. Lett. {\bf 71}, 352 (1993)
\\
3. L. Smith and V. Yakhot, J. Fluid Mech. {\bf 274}, 115 (1994)
\\
4. P. Tabeling and J. Paret, Phys. Fluids {\bf 12}, 3126 (1998)
\\
5. L. D. Landau and E. M. Lifshitz, Fluid Mechanics, Pergamon Press, Oxford, 198
\\
6. A. S. Monin and A. M. Yaglom, ``Statistical Fluid Mechanics'' vol. 1, MIT Press,
Cambridge, MA (1971)
\\
7. U. Frisch, ``Turbulence'', Cambridge University Press, 1995
\\
8. A. M. Polyakov, Phys. Rev. E {\bf 52}, 6183 (1995)
\\
9. C. de Dominicis and P. C. Martin, Phys. Rev. A {\bf 19}, 419 (1979)
\\
10. V. Yakhot and S. A. Orszag, Phys. Rev. Lett. {\bf 57}, 1722 (1986)
\\
11. N. Kukharkin, S. A. Orszag and V. Yakhot, Phys. Rev. Lett. {\bf 75}, 2486
(1995)
\\
12. K. Gawedzki and M. Vergassola, cond-mat/9811399 (1998)
\\
\end{document}
\section{Introduction}
Cyber-physical systems consist of discrete (usually digital, often implemented in software) controllers interacting with a continuous physical environment.
Control is often networked, sometimes wirelessly.
Many cyber-physical systems are safety- or performance-critical, or economically vital.
We thus need to ensure that they operate as desired, which includes dependability requirements such as reliability assurances, availability levels, or response time guarantees.
Reliability and availability are stochastic timed properties:
the probability of avoiding unsafe behaviour within a certain time horizon, and the expected fraction of time that the system is ready to provide service, respectively.
The critical systems themselves are also typically subject to randomisation, for example due to random message loss in wireless communication or due to employing randomised algorithms, and they are timed systems dealing with e.g.\ transmission delays and timeouts or faults occurring unpredictably over time.
Thus, to assure their dependability by way of modelling and verification (ideally at design-time), we need stochastic timed formalisms and modelling languages supported by tools able to check stochastic timed properties.
In this overview paper accompanying my invited presentation at the 5th Workshop on Models for Formal Analysis of Real Systems (MARS 2022), I outline two such modelling languages, Modest and JANI, and one such set of tools, the Modest Toolset (in \Cref{sec:Modest}).
I then briefly summarise how Modest and the Modest Toolset have been used
to study power supply noise in a two-by-two network-on-chip system by way of a discrete-time Markov chain (DTMC) model and probabilistic model checking with the \textsf{mcsta} tool (in \Cref{sec:NoC});
to find routes through sparse constellations of nanosatellites using an abstract Markov decision process (MDP)~\cite{Bel57,How60} model analysed with a statistical model checking approach that employs scheduler sampling under distributed information as implemented in the \textsf{modes} tool (in \Cref{sec:Space});
and to optimise an attack on the Bitcoin cryptocurrency system via a Markov automata (MA)~\cite{EHZ10} model that permits \textsf{mcsta} to synthesise the strategy that minimises the expected time to success or maximises the probability of success within a certain time bound (in \Cref{sec:Bitcoin}).
\section{Modest Languages and Tools}
\label{sec:Modest}
\begin{figure}[t]
\centering
\begin{tikzpicture}[yscale=1.03,baseline={([yshift={-1.175\ht\strutbox}]current bounding box.north)}]
\tikzstyle{every node}=[font=\normalsize]
\draw[gray] (0,0) -- (3,1);
\draw[fill=white,color=white] (1.5,0.5) circle (0.05);
\draw (0,1) -- (1.5,2);
\draw (1.5,1) -- (3.0,2);
\draw (0,2) -- (0.75,3);
\draw (3,2) -- (2.25,3);
\draw (0.0,0) -- (1.5,1);
\draw (1.5,2) -- (0.75,3);
\draw (1.5,2) -- (2.25,3);
\draw (0.75,3) -- (1.5,4);
\draw (2.25,3) -- (1.5,4);
\draw[gray] (3,0) -- (3,1);
\draw (3.25,0) to[out=0,in=0] (3.05,2);
\draw (1.5,0) node [fill=white] {\textbf{DTMC}} --
(1.5,1) node [fill=white] {\textbf{MDP}} --
(1.5,2) node [fill=white] {PTA};
\draw (3,0) node [fill=white] {CTMC};
\draw (2.9,1) node [fill=white] {\textcolor{gray}{CTMDP}};
\draw (3,2) node [fill=white] {\textbf{MA}};
\draw (2.25,3) node [fill=white] {STA};
\draw (1.5,4) node [fill=white] {SHA};
\draw (0,0) node [fill=white] {LTS} --
(0,1) node [fill=white] {TA} --
(0,2) node [fill=white] {HA};
\draw (0.75,3) node [fill=white] {PHA};
\draw (4.35,2.705) node [anchor=east] {\begin{tiny}\raisebox{0.5pt}{+\,}\end{tiny}\begin{scriptsize}\textit{continuous}\end{scriptsize}};
\draw (4.31,2.455) node [anchor=east] {\begin{scriptsize}~~\,\textit{probability}\end{scriptsize}};
\draw (0,1.65) node [anchor=east] {\begin{tiny}\raisebox{0.5pt}{+\,}\end{tiny}\begin{scriptsize}\textit{continuous}\end{scriptsize}};
\draw[overlay] (0,1.4) node [anchor=east] {\begin{scriptsize}~~\,\textit{dynamics}\end{scriptsize}};
\draw (0,0.645) node [anchor=east] {\begin{tiny}\raisebox{0.5pt}{+\,}\end{tiny}\begin{scriptsize}\textit{real\phantom{\,}}\end{scriptsize}};
\draw (0,0.395) node [anchor=east] {\begin{scriptsize}\textit{time}\end{scriptsize}};
\draw (0,-0.3) node [] {\begin{scriptsize}\textit{nondeter-}\end{scriptsize}};
\draw[overlay] (0,-0.57) node [] {\begin{scriptsize}\textit{\strut ministic}\end{scriptsize}};
\draw[overlay] (0,-0.825) node [] {\begin{scriptsize}\textit{\strut choices}\end{scriptsize}};
\draw (1.5,-0.3) node [] {\begin{scriptsize}\textit{discrete}\end{scriptsize}};
\draw[overlay] (1.5,-0.57) node [] {\begin{scriptsize}\textit{\strut probabilities}\end{scriptsize}};
\draw (3,-0.3) node [] {\begin{scriptsize}\textit{exponential}\end{scriptsize}};
\draw[overlay] (3,-0.57) node [] {\begin{scriptsize}\textit{\strut residence}\end{scriptsize}};
\draw[overlay] (3,-0.825) node [] {\begin{scriptsize}\textit{\strut times}\end{scriptsize}};
\draw[] (4.5,4) node [fill=white] {\emph{Key:}};
\end{tikzpicture}
\begin{minipage}[t]{0.55\textwidth}
\renewcommand{\arraystretch}{0.95}
\begin{tabular}[t]{ll}
SHA & stochastic hybrid automata~\cite{FHHWZ11}\\
PHA & probabilistic hybrid automata~\cite{Spr00}\\
STA & stochastic timed automata~\cite{BDHK06}\\
HA & hybrid automata~\cite{ACHH92}\\
PTA & probabilistic timed automata~\cite{KNSS02}\\
MA & Markov automata\\
TA & timed automata~\cite{AD94}\\
MDP & Markov decision processes\\
CTMDP & continuous-time Markov decision processes\\
LTS & labelled transition systems\\
DTMC~ & discrete-time Markov chains\\
CTMC~ & continuous-time Markov chains\\
\end{tabular}
\end{minipage}
\caption{The family tree of automata-based quantitative formalisms}
\label{fig:ModelFamilyTree}
\end{figure}
A well-defined semantics in terms of some mathematically well-understood object is a cornerstone of formal models.
For quantitative models, we use automata-based formalisms---that represent the evolution of a system from state to state via (randomised) transitions---building on labelled transition systems (LTS, or Kripke structures) and discrete- and continuous-time Markov chains (DTMC and CTMC, respectively)~\cite{BK08}.
By combining these basic mathematical formalisms in various ways, and extending them with features such as real-time clocks and continuous variables evolving according to differential equations, we obtain further formalisms as depicted in \Cref{fig:ModelFamilyTree}.
Since writing real-life models as, say, large Markov chains would be cumbersome, we specify them using a higher-level modelling language that offers at least discrete variables with standard arithmetic and Boolean operators plus a notion of parallel composition for the natural specification of distributed and component- or actor-based systems.
\paragraph{The Modest Language.}
One such language is Modest, originally the \uline{mo}delling and \uline{de}scription language for \uline{s}tochastic \uline{t}imed systems~\cite{BDHK06}.
Its formal semantics was first defined in terms of STA and later extended to SHA~\cite{HHHK13}.
Modest is a textual modelling language; its syntax is designed to be similar to widely used programming languages like C or Java to lower the barrier of entry for domain experts.
At the same time, it is a process algebra in spirit, based on standard operators such as sequential and parallel composition, allowing the definition of and recursive calls to processes, and emphasising compositionality.
In fact, Modest consists of two largely orthogonal languages:
one to define \emph{behaviour}, which is the one based on process-algebraic ideas, and one to manipulate \emph{data} such as the values of discrete variables.
The latter provides arrays, recursive datatypes (e.g.\ allowing the definition of a linked list type via pairs of a head containing a data item and a linked list option tail), and mutually recursive functions.
These features allow for concise and natural models of complex real-life systems.
\paragraph{The JANI model interchange format.}
While Modest is a convenient modelling language for end-users, the work required to implement code that parses Modest models and transforms the parsed syntax into its symbolic semantics (a parallel composition of SHA with discrete variables) is nontrivial.
The same problem affects many other modelling languages, e.g.\ Prism's~\cite{KNP11}, too.
To ease tool development and facilitate the exchange of models between different tools, in 2016, the developers of several quantitative verification tools defined the JSON-based JANI~\cite{BDHHJT17} format.
It is not designed to be human-writable, but rather serve as a model interchange format that is generated by tools from other modelling languages, such as Modest.
Today, JANI is supported by the Modest Toolset (see below), Storm~\cite{DJKV17}, Momba~\cite{KKH21}, and several other tools.
All models in the quantitative verification benchmark set (QVBS)~\cite{HKPQR19} are available in both their original formats as well as in JANI.
The QVBS served as the foundation for the QComp 2019~\cite{HHHKKKPQRS19} and QComp 2020~\cite{BHKKPQTZ20} tool competitions.
\paragraph{The Modest Toolset.}
To support the creation of Modest models, and to compute the values of properties or check requirements specified as part of models, the Modest Toolset~\cite{HH14} provides a collection of visualisation, model transformation, model checking, and simulation tools.
The Modest Toolset has been in development since 2008; it is written in C\#, and is available as precompiled binaries for common Linux distributions, macOS, and Windows at \href{http://www.modestchecker.net/}{modestchecker.net}.
As input languages, it supports Modest and JANI; its \textsf{moconv} tool can convert between the two and apply various transformations, such as converting a suitable PTA model into its digital clocks~\cite{KNPS06} MDP.
The \textsf{mosta} tool visualises a model's symbolic semantics, helping in learning Modest and in debugging models.
The \textsf{mopy} tool converts a model into Python code implementing a first-state-next-state interface~\cite{BHKK03} that can be used to quickly prototype explicit-state verification algorithms and that is used by the author as part of the programming project of a Master's-level course on quantitative verification at the University of Twente.
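As an illustration of prototyping on top of such an interface, the sketch below performs a breadth-first state space exploration; the function names are hypothetical stand-ins, not the actual \textsf{mopy}-generated API:
\begin{verbatim}
# breadth-first exploration over a first-state-next-state interface;
# initial_states() and transitions(s) are hypothetical stand-ins
from collections import deque

def explore(initial_states, transitions):
    seen = set(initial_states())
    queue = deque(seen)
    while queue:
        s = queue.popleft()
        for action, branches in transitions(s):   # nondeterministic choices
            for prob, succ in branches:           # probabilistic branches
                if succ not in seen:
                    seen.add(succ)
                    queue.append(succ)
    return seen
\end{verbatim}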
The main implementation of probabilistic model checking (PMC)~\cite{Bai16,BAFK18} in the Modest Toolset is in \textsf{mcsta}~\cite{HH15}:
an explicit-state model checker that provides a unique disk-based approach to mitigate the state space explosion problem~\cite{HH15}.
It includes efficient model reductions such as the essential states abstraction~\cite{DJJL02}, and provides state-of-the-art algorithms for model checking MA~\cite{BHH21}.
The Modest Toolset's statistical model checker \textsf{modes}~\cite{BDHS20} complements \textsf{mcsta}'s capabilities for cases where model checking cannot be applied, such as when facing state space explosion or models with non-Markovian probability distributions like STA.
Statistical model checking (SMC)~\cite{AP18} is, in essence, Monte Carlo simulation applied to formal models and properties.
A constant-memory technique, it however incurs an explosion in runtime when faced with rare events, and does not directly support nondeterministic models such as MDP.
The \textsf{modes} tool addresses these shortcomings by providing rare event simulation~\cite{RT09} via a highly automated implementation of importance splitting~\cite{BDH19}, and by offering the lightweight scheduler sampling technique~\cite{LST14} for MDP, PTA~\cite{DHLS16,HSD17}, and (with limitations) stochastic-time models like MA and STA~\cite{DHS18}.
Other members of the Modest Toolset provide specialised analysis algorithms such as variants of the probabilistic planning algorithm LRTDP~\cite{BG03} for MDP in \textsf{modysh}~\cite{KH21} or an abstraction-based approach to safety verification of SHA in \textsf{prohver}~\cite{HHHK13}.
\section{Power Supply Noise in a Network-On-Chip System}
\label{sec:NoC}
As the complexity of distributed many-core systems advances, the network-on-chip (NoC) architecture has become the de-facto standard for on-chip communication.
A NoC is typically composed of topologically homogeneous routers operating synchronously in a decentralized manner using a predefined routing protocol.
Changes in the supply voltage---\emph{power supply noise} (PSN)---can influence the performance of the transistor devices in a NoC.
PSN is created by the simultaneous switching of logic devices, which causes a drop in the effective power supply voltage.
PSN is composed of two major components: resistive noise (related to the current drawn and the resistance of the circuit) and inductive noise (which is proportional to the rate of change of current through the inductance of the power grid).
\begin{wrapfigure}[10]{r}{7cm}
\vspace{-0.2cm}
\includegraphics[width=7cm]{noc-architecture.pdf}
\caption{Architecture of the $2 \times 2$ NoC~\cite{RLHBRCZ21}}
\label{fig:NocArch}
\end{wrapfigure}
To study PSN in NoC architectures, we modelled in Modest and analysed with \textsf{mcsta} first a single central router of a NoC~\cite{LHBSCRZ19} and later a two-by-two NoC consisting of four symmetric routers~\cite{RLHBRCZ21} as shown in \Cref{fig:NocArch}.
We focus on the latter in this section.
Our goal is to compute the probability for behavioural patterns that are likely to result in resistive resp.\ inductive noise to occur at least $n$ times within $t$ clock cycles, starting from an initial state where all buffers are empty.
We consider two different data packet (flit) generation patterns:
one where each router receives a flit into its local buffer (e.g.\ from the one core it is connected to) every other cycle, and one where flits are generated in bursts.
We assume the destination of a flit to be one of the other router's local outputs, with the actual router selected uniformly at random for each flit.
The routers use a specific round-robin style routing protocol.
Thus, with all decisions fixed to be either deterministic (flit generation times and routing choices) or random (flit destinations), and the whole NoC running on a discrete clock, this system can naturally be modelled as a DTMC.
The main challenge for model checking with \textsf{mcsta} is to avoid the state space explosion problem.
For a first concrete model, which exploited the availability of complex user-defined datatypes in Modest to represent the state of the network's routers and buffers in full detail, we were unable to perform model checking for more than $t = 4$ clock cycles.
We then manually applied a series of abstractions to achieve tractability:
predicate abstraction to replace the details in the complex datatype's values by only the relevant predicates;
a probabilistic choice abstraction that delays random assignments to discrete variables to the point where the assigned value is first tested;
and an abstraction of the buffers that includes replacing them by bounded integer variables counting the number of waiting flits only.
\begin{wrapfigure}[16]{r}{7cm}
\vspace{-0.2cm}
\includegraphics[width=7cm]{noc-cdf.pdf}
\caption{CDF for inductive noise events~\cite{RLHBRCZ21}}
\label{fig:NocCdf}
\end{wrapfigure}
The resulting model could be model-checked for up to 30 clock cycles with every-other-cycle flit generation by unfolding the clock cycle counter into the state space, and up to any number of clock cycles by using the unfolding-free modified iteration technique of~\cite{HH16}, in essence computing the entire cumulative distribution function (CDF) as shown in \Cref{fig:NocCdf}.
This is due to an interesting effect of the different flit generation patterns:
With every-other-cycle generation, the buffers slowly fill up with flits to various destinations; the full state space that includes all combinations of buffer occupancies with different flits is too large to handle today.
Restricting the state space exploration to a bounded number of clock cycles, where initially few flits are present throughout the system, results in a sequence of manageable state spaces of ever-increasing size.
With bursty flit generation, all buffers periodically return to an empty state; the period is small enough for the entire state space to fit into memory, i.e.\ buffers do not fill up far enough for the number of combinations of buffer states to grow too large, if clock cycles are managed as rewards.
We also applied SMC. In the case of every-other-cycle generation, it was limited by noise events being relatively rare; in the case of bursty generation, it could not compete in terms of runtime with the modified iteration technique, which computes the probabilities for the entire sequence of values of $t$ up to any upper bound in one go.
Similarly, our attempts to use Storm's binary decision diagram-based state space exploration did not provide scalability improvements, possibly due to the model not being as structured as we think it is, or simply due to a bad variable ordering in the model.
For further details on this first case study, we refer the interested reader to the original paper that was presented at FMICS~2021~\cite{RLHBRCZ21}.
\section{Routing in Satellite Constellations}
\label{sec:Space}
Satellite networks in low-Earth orbit are increasingly used to collect and distribute information across the globe, including access to the Internet.
For real-time applications like Internet access, this requires very large constellations (such as the Starlink constellation being deployed by SpaceX); even when using low-cost satellites built from off-the-shelf components that are not space-qualified, the entire constellation becomes extremely expensive.
A different and more sustainable approach is to relax the real-time constraint and leverage the store-carry-and-forward principle where nodes store received messages for later forwarding to other nodes in the network, once a communication window---a contact---appears.
This gives rise to a delay-tolerant network.
\begin{wrapfigure}[11]{r}{6.0cm}
\vspace{-0.4cm}
\centering
\begin{tikzpicture}[on grid,auto,align at top]
\node[] (c00) [] {\small$N_1$:};
\coordinate[right=5 of c00.east] (c50);
\node[below=1 of c00] (c01) [] {\small$N_2$:};
\coordinate[right=5 of c01.east] (c51);
\node[below=1 of c01] (c02) [] {\small$N_3$:};
\coordinate[right=5 of c02.east] (c52);
\node[below=1 of c02] (c03) [] {\small$N_4$:};
\coordinate[right=5 of c03.east] (c53);
\node[dot] (n00) [right=0.5 of c00.east] {};
\node[dot] (n01) [right=0.5 of c01.east] {};
\node[dot] (n11) [right=1.5 of c01.east] {};
\node[dot] (n12) [right=1.5 of c02.east] {};
\node[dot] (n20) [right=2.5 of c00.east] {};
\node[dot] (n22) [right=2.5 of c02.east] {};
\node[dot] (n32) [right=3.5 of c02.east] {};
\node[dot] (n33) [right=3.5 of c03.east] {};
\node[dot] (n40) [right=4.5 of c00.east] {};
\node[dot] (n43) [right=4.5 of c03.east] {};
\node[] (t1) [above=0.25 of n00,anchor=south] {\small $T_1$};
\node[] (t2) [right=1.0 of t1.south,anchor=south] {\small $T_2$};
\node[] (t3) [right=1.0 of t2.south,anchor=south] {\small $T_3$};
\node[] (t4) [right=1.0 of t3.south,anchor=south] {\small $T_4$};
\node[] (t5) [right=1.0 of t4.south,anchor=south] {\small $T_5$};
\node[overlay] (sl) [left=1.175 of t1.south,anchor=south] {\small \phantom{$T_1$}Slot:};
\path[->]
(c00) edge[dashed] node[] {} (c50)
(c01) edge[dashed] node[] {} (c51)
(c02) edge[dashed] node[] {} (c52)
(c03) edge[dashed] node[] {} (c53)
;
\path[-latex]
(n00) edge[bend left=15] node[right] {$p_1=0.9$} (n01)
(n11) edge[bend left=15] node[left] {$p_2=0.9$} (n12)
(n20) edge[bend left=10] node[right,pos=0.22] {$p_3=0.5$} (n22)
(n32) edge[bend left=15] node[left] {$p_4=0.5$} (n33)
(n40) edge[bend left=5] node[left] {$p_5=0.1$} (n43)
;
\end{tikzpicture}%
\caption{Uncertain contact plan~\cite{DFH20}}
\label{fig:ContactPlan}
\end{wrapfigure}
In satellite constellations, the orbits are known with sufficient precision to calculate the upcoming contacts over the next few days, giving rise to a \emph{contact plan}.
However, message transmissions may fail for various reasons such as unreliable (low-cost) components, contact mispredictions, or interference during the wireless communication.
If statistical data is available or the error margins of calculations are known, we can assign a success probability to each contact, giving rise to an uncertain contact plan.
We show an abstract representation of such a plan in \Cref{fig:ContactPlan}.
This plan comprises four satellites (or ground stations) $N_1$ through $N_4$ with contacts over five time slots $T_1$ through $T_5$.
The numbers annotating contacts are the transmission success probabilities.
Now, given the source and destination of a message, and a limit $n$ on the number of message copies present in the network to avoid exhausting the satellites' limited resources, we would like to compute the routing strategy that maximises the probability of message delivery within the time window covered by the contact plan.
Due to the combination of randomness (in transmission failures) with nondeterministic decisions to be optimised (which contacts to use to send how many copies) in a discrete-time setting (a sequence of contacts), MDP are the perfect match among the formalisms of \Cref{fig:ModelFamilyTree} to model this problem.
The goal is to find an optimal scheduler (i.e.\ routing strategy) for the MDP.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{satellite-toolchain.pdf}
\caption{Satellite routing scheduling toolchain for uncertain delay-tolerant networks~\cite{DFH20}}
\label{fig:SatelliteToolchain}
\end{figure}
We have tackled the problem by developing the toolchain outlined in \Cref{fig:SatelliteToolchain} that converts a concrete contact plan (with exact contact timings) into an abstract Modest MDP model.
At that point, one could apply PMC via e.g.\ \textsf{mcsta} to obtain the optimal scheduler.
However, PMC works with complete, global information:
Consider the contact plan of \Cref{fig:ContactPlan} with $n = 2$.
$N_1$ will definitely send one copy to $N_2$ in slot $T_1$, and if successful, $N_2$ will forward that copy in slot $T_2$.
In slot $T_3$, the best course of action computed by PMC for satellite $N_1$ is to send its remaining copy to $N_3$ if and only if $N_3$ did not receive the first copy.
Thus the model checker ``sees'' the state of all satellites.
Satellites, however, do not have global information about the state of all other satellites in the constellation, making the optimal strategies found by PMC potentially unimplementable.
In fact, what we need are distributed schedulers~\cite{GD07}.
Unfortunately, the model checking problem under distributed schedulers is undecidable, and even with simplifications such as restricting to memoryless schedulers, it remains practically intractable~\cite{GD09}.
Recently, an approximative model checking-based approach that is specifically tailored to the uncertain delay-tolerant networks case has become available~\cite{RFMDFD21}, which however still remains limited by state space explosion as $n$ increases.
We instead propose to adapt the lightweight scheduler sampling (LSS) approach to sample distributed schedulers~\cite{DFH20}.
In LSS, each scheduler is represented by a fixed-size (e.g.\ 32-bit) integer.
Performing an SMC analysis for each of $m$ randomly sampled such integers and keeping the maximum (minimum) estimate provides an underapproximation (overapproximation) for the maximum (minimum) probability achievable with the unknown optimal schedulers.
During an SMC analysis for scheduler $i$, when the simulator needs to decide between $k$ actions in state $s$, it concatenates the bitstring representations of $s$ and $i$, applies a hash function mapping this value to a fixed-size integer $j$, and selects the \mbox{$((j \mathbin{\,\mathrm{mod}\,} k) + 1)$-th} action.
To perform the same analysis w.r.t.\ distributed schedulers, all we need to change is the input to the hash function:
instead of the bitstring for $s$, we use that for a projection of $s$ to the variables observable by the currently active component (here: satellite).
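To make the selection mechanism concrete, the following minimal Python sketch illustrates LSS action selection for both global-information and distributed schedulers; the choice of hash function, the 32-bit scheduler size, and the \texttt{project} helper are assumptions for illustration only and do not reflect the actual \textsf{modes} implementation.
\begin{lstlisting}
import hashlib

def lss_action(bits: bytes, scheduler: int, k: int) -> int:
    # Concatenate the bitstring representation of the state with the
    # 32-bit scheduler identifier ...
    data = bits + scheduler.to_bytes(4, "little")
    # ... hash it to a fixed-size integer j (hash choice: assumption) ...
    j = int.from_bytes(hashlib.sha256(data).digest()[:4], "little")
    # ... and pick the ((j mod k) + 1)-th of the k actions (0-indexed).
    return j % k

def lss_action_distributed(state, component, scheduler: int, k: int) -> int:
    # For distributed schedulers, hash only the projection of the state
    # to the variables observable by the currently active component.
    bits = project(state, component)  # 'project' is a hypothetical helper
    return lss_action(bits, scheduler, k)
\end{lstlisting}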
We also introduce a good-for-distribution condition on models that, when satisfied, ensures that no two components may have a decision at the same time instant, making a global arbiter for breaking such ties unnecessary.
Our Modest models generated for uncertain contact plans are good for distribution by construction.
We implemented LSS for distributed schedulers as described above in \textsf{modes}, and applied this implementation to a small example contact plan as well as a realistic Walker-formation constellation.
Our results, presented at NFM 2020~\cite{DFH20}, show that LSS is able to find good and implementable routing strategies, and that restricting to distributed schedulers may actually result in better strategies than sampling from all (global-information) schedulers by virtue of restricting the sampling space.
\section{Optimally Attacking Bitcoin}
\label{sec:Bitcoin}
The Bitcoin cryptocurrency records its transactions in a blockchain to which blocks are added via the proof-of-work principle:
participants need to solve a computationally intensive problem to be able to generate or \emph{mine} a valid block.
Generally, the first new valid block mined gets appended to the chain, and a certain number of Bitcoins is awarded to the participant that found the block as a reward.
However, as a distributed system spanning the globe via the Internet, Bitcoin has to deal with asynchrony:
If multiple participants find new blocks at roughly the same time, there are different alternative forks of the Bitcoin blockchain, and a consensus must be reached on which is the valid one.
In Bitcoin, the longest available chain is considered the valid one.
As the computational power used for mining new blocks (the hash rate) changes, the Bitcoin network periodically adjusts the hardness of the problem such that the average time to find a new block (the confirmation time) is 10 minutes.
In practice, the actual confirmation time varies; it was about 12 minutes in 2017~\cite{FC18}.
This time is truly random, and the mining of new blocks can abstractly be modelled by a CTMC in which the transition from a chain with $n$ blocks to one with $n+1$ blocks has rate $\frac{1}{12}$, i.e.\ the time until the transition is taken is exponentially distributed with that rate.
If a large fraction of the hash rate (say $M$) is controlled by one malicious entity, they could feasibly implement various attacks on the Bitcoin network by secretly working on their own fork until it becomes longer than the ``public'' one, and then broadcasting the secret fork.
For example, Bitcoins could be spent twice:
once on the public fork in block $b_i$, and once on the secret fork that branches off from publicly known block $b_j$ that is before $b_i$ in the chain.
This behaviour can be integrated into an abstract CTMC model of Bitcoin to e.g.\ compute the expected time until the attack succeeds for various values of $M$.
We built such a model in Modest and studied similar properties using \textsf{mcsta} and \textsf{modes}~\cite{HH19}.
\begin{figure}[t]
\begin{lstlisting}
const real M; // fraction of hash rate controlled by malicious mining pool
const int CD; // confirmation depth required by victim
const int DB = CD; // attacker gives up when this far behind
\end{lstlisting}~\\[-30pt]
\begin{lstlisting}
action sln; // indicates that the honest pool mined a new block
action rst; // indicates that the attacker restarts from the public fork
action cnt; // indicates that the attacker continues
\end{lstlisting}~\\[-30pt]
\begin{lstlisting}
int(0..CD+1) m_len; // length of the secret fork
int(-DB..CD+1) m_diff = 0; // length of secret fork minus honest fork
\end{lstlisting}~\\[-30pt]
\begin{lstlisting}
process HonestPool()
{
rate(1/12 * (1 - M)) tau; // wait 12 / (1 - M) minutes on average
sln; // signal that a new block was found
HonestPool() // repeat
}
\end{lstlisting}~\\[-30pt]
\begin{lstlisting}
process TrustAttacker()
{
do {
:: rate((1/12) * M) {= m_len = min(CD, m_len + 1), m_diff++ =} // new secret block
:: sln {= m_diff-- =}; // public fork extended
alt { // strategy choice: restart or continue malicious fork
:: rst {= m_len = 0, m_diff = 0 =} // can always restart
:: when(m_diff > -DB) cnt // can continue if not too far behind
}
}
}
\end{lstlisting}~\\[-30pt]
\begin{lstlisting}
par {
:: HonestPool()
:: TrustAttacker()
}
\end{lstlisting}
\caption{Modest model for optimising the trust attack on Bitcoin~\cite{HH19}}
\label{fig:BitcoinModest}
\end{figure}
A more interesting and somewhat easier attack, which however does not have the individual benefit of doubly-spent coins but rather attempts to undermine the public trust in Bitcoin, is to simply try to obtain a secret fork that is longer than the official one by a certain margin, and then publish that fork.
If done repeatedly, regular users could no longer rely on the persistence of transactions that initially appeared to have become a part of the valid Bitcoin blockchain.
In this attack, every time the public fork is extended, the malicious entity may decide between (a)~continuing to work on its current secret fork and (b)~restarting its secret fork from the new public block.
This is because it is no longer necessary to purge a specific block $b_i$ from the public chain as in the double-spending attack.
Due to the presence of a nondeterministic choice to be optimised, this attack can no longer be represented in a CTMC model.
The attack on trust in Bitcoin was first analysed by Fehnker and Chaudhary~\cite{FC18} using statistical model checking with UPPAAL SMC~\cite{DLLMW11}.
As a consequence of using SMC, they had to run a separate analysis for every possible strategy determining the conditions for when to continue and when to restart, and their results came with a statistical error.
We later modelled the same scenario as the Modest MA model shown in \Cref{fig:BitcoinModest} and let \textsf{mcsta} synthesise the optimal strategy~\cite{HH19}, which it could do in a matter of a few seconds.
We found that the optimal strategy is to restart the attack if
(i)~the public chain is extended when the secret fork is still empty,
(ii)~the secret fork has one block and the public fork adds a third new block, or
(iii)~the secret fork has $\geq 2$ blocks and becomes three blocks shorter than the public one,
and to continue the attack in all other cases.
If the malicious entity controls just 20\,\% of the hash rate, which is not an uncommon situation for the Bitcoin network, then the expected time to success under this strategy is only approximately 2.5 days.
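As an illustration of this result, the following Python sketch estimates the expected time to success by Monte-Carlo simulation of the race between the two pools under the strategy above; the confirmation depth value and the exact success condition are assumptions here, so the estimate need not reproduce the precise figure computed by \textsf{mcsta}.
\begin{lstlisting}
import random

M, MEAN = 0.2, 12.0   # attacker's hash-rate fraction; minutes per block
CD = 6                # confirmation depth (assumption: typical value)

def attack_time() -> float:
    t, m_len, m_diff = 0.0, 0, 0
    while not (m_len >= CD and m_diff > 0):  # success test: assumption
        # The next block overall arrives after an Exp(1/MEAN) delay; it
        # is the attacker's with probability M (race of two exponentials).
        t += random.expovariate(1.0 / MEAN)
        if random.random() < M:
            m_len, m_diff = min(CD, m_len + 1), m_diff + 1
        else:
            m_diff -= 1
            # Restart cases (i)-(iii) of the optimal strategy above.
            if m_len == 0 or (m_len == 1 and m_diff == -2) \
                    or (m_len >= 2 and m_diff == -3):
                m_len, m_diff = 0, 0
    return t

est = sum(attack_time() for _ in range(10_000)) / 10_000
print(f"estimated expected time to success: {est / 60 / 24:.1f} days")
\end{lstlisting}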
\section{Summary}
Different case studies have different needs in terms of conceptual modelling power, modelling language features, and analysis tool capabilities.
I highlighted three examples that were modelled in the Modest language and analysed using different tools from the Modest Toolset:
First, in the case of \textbf{power supply noise in a NoC}, the simple formalism of DTMC was sufficient.
For the detailed concrete model, however, the Modest language feature of declaring and using one's own complex data types was very helpful.
PMC via \textsf{mcsta} was the analysis method of choice; however, significant effort was needed to abstract the model until it became tractable for PMC due to the state space explosion problem.
Second, for \textbf{routing in satellite constellations}, nondeterministic choices needed to be modelled, and optimised over by the analysis tool.
Here, MDP fit the problem very well with their ability to model decision-making under uncertainty.
We auto-generated Modest models from contact plans computed by domain-specific software.
Due to the need to find implementable routing strategies in the distributed-information setting of satellite constellations, we could not use PMC; instead, we adapted the LSS approach to allow SMC to handle both distributed information and nondeterminism.
Finally, to \textbf{optimally attack Bitcoin}, we showed that MA fit the problem well due to the combination of the stochastic time-to-next-block with the nondeterministic choices between continuing and restarting the secret fork.
Using PMC with \textsf{mcsta} again, we were able to compute an optimal strategy with little computational effort.
\paragraph{Acknowledgments.}
I thank my co-authors for the papers underlying the three case studies~\cite{DFH20,HH19,LHBSCRZ19,RLHBRCZ21}, without whom my presentation at MARS and this summary paper would not have been possible:
Prabal Basu, Koushik Chakraborty, Pedro R. D'Argenio, Juan A. Fraire, Holger Hermanns, Rajesh Jayashankara Shridevi, Benjamin Lewis, Riley Roberts, Sanghamitra Roy, and Zhen Zhang.
\bibliographystyle{eptcs}
\section{Introduction}
\label{sec:intro}
Fairness, non-discrimination, and unwanted bias have always been concerns in human decision making \cite{VarshneyV2017}, but are increasingly in the limelight because historical human decisions are now being used as training data for machine learning models in high-stakes applications such as employment, credit, and criminal justice \cite{WilliamsBS2018}. Without bias mitigation, models trained on such decisions perpetuate and scale human biases and are thereby unsafe and untrustworthy \cite{VarshneyA2017,HindMMNROV2018}. The last couple of years have seen a surge in papers on algorithmic fairness in the machine learning and data mining literature, with basic principles defined using detection theory, estimation theory, and information theory \cite{MenonW2018,CalmonWVRV2018}.
There are two main notions of fairness in decision making: \emph{group fairness} and \emph{individual fairness}. Group fairness, in its broadest sense, partitions a population into groups defined by \emph{protected attributes} (such as gender, caste, or religion) and seeks for some statistical measure to be equal across groups. There are many different group fairness notions involving different statistical measures, one such notion being \emph{disparate impact} \cite{Narayanan2018}. Individual fairness, in its broadest sense, seeks for similar individuals to be treated similarly. Checking for group fairness is a fairly straightforward computation of statistical metrics \cite{Zliobaitaz2017}, but checking for individual fairness is more computationally involved when there are many protected attributes with many values and scoring samples using a model is expensive \cite{GalhotraBM2017,AgarwalLNDS2018}. Unified metrics for both group and individual fairness have recently been proposed \cite{SpeicherHGGSWZ2018} based on inequality indices \cite{HurleyR2009}.
Machine learning pipelines contain three possible points of intervention to mitigate unwanted bias: the training data, the learning procedure, and the output predictions, with three corresponding classes of bias mitigation algorithms: pre-processing, in-processing, and post-processing \cite{DalessandroOL2017}. Advantages of post-processing algorithms are that they do not require access to the training process and are thus suitable for run-time environments. Moreover, post-processing algorithms operate in a black-box fashion, meaning that they do not need access to the internals of models, their derivatives, etc., and are therefore applicable to \emph{any} machine learning model (or amalgamation of models) \cite{KamiranKZ2012}.
The vast majority of bias mitigation algorithms address group fairness, but a few address individual fairness \cite{DworkHPRZ2012,DworkI2018}. Some pre-processing algorithms address both group and individual fairness \cite{ZemelWSPD2013,CalmonWVRV2017,CalmonWVRV2018}, but to the best of our knowledge, all existing post-processing algorithms are only for group fairness \cite{KamiranKZ2012,HardtPS2016,PleissRWKW2017,CanettiCDRSS2018}. Our main contribution in this paper is to propose a post-processing bias mitigation algorithm that considers \emph{both} group and individual fairness. Moreover, unlike the previous work, our proposal does not require any ground truth class labels in the validation samples while training the bias mitigation algorithm.
The general methodology of post-processing algorithms is to take a subset of samples and change their predicted labels appropriately to meet a group fairness requirement. An interesting observation about post-processing is that \emph{any} sample can be altered to achieve group fairness requirements because the metrics are expectations. The papers \cite{HardtPS2016,PleissRWKW2017} choose the samples randomly, whereas \cite{KamiranKZ2012} chooses the most uncertain samples (the ones in the reject option band \cite{Chow1970,Varshney2011}), capturing the human intuition to give the benefit of the doubt to unprivileged groups. In the method we propose herein, we choose samples that have or are likely to have individual fairness issues and in this way are able to address both group and individual fairness together.
The starting point for our proposed approach is the individual bias detector of \cite{AgarwalLNDS2018}, which finds samples whose model prediction changes when the protected attributes change, leaving all other features constant. Despite the many efficiency optimizations built into the algorithm, it remains computationally expensive. To overcome the limitation of not being able to run the detector continually, we check for individual fairness on a small set of points and generalize from them by training a classifier that is applied to new samples. The samples with likely individual bias are the ones considered for a change of predicted label. By doing so, we modify the idea of \cite{KamiranKZ2012} from focusing on uncertainty to focusing on individual bias.
Our empirical results are promising. Compared to the state-of-the-art algorithms of \cite{HardtPS2016} and \cite{KamiranKZ2012}, we have superior performance on the combination of classification accuracy, individual fairness, and group fairness in the preponderance of six different real-world classification tasks requiring non-discrimination. The results show very little reduction in classification accuracy with much improvement in individual and group fairness measures.
The remainder of the paper is organized as follows. We first provide background on individual and group fairness definitions and detectors in Sec.~\ref{sec:background}. Next, in Sec.~\ref{sec:algorithm}, we propose a new post-processing bias mitigation algorithm that accounts for both individual and group fairness. In Sec.~\ref{sec:results}, we provide empirical results on several real-world datasets including comparisons to \cite{KamiranKZ2012,HardtPS2016}. Finally, we conclude the paper in Sec.~\ref{sec:conclusion}.
\section{Individual and Group Fairness}
\label{sec:background}
In this section, we introduce notation, provide working definitions of individual and group fairness, and detail methods for detecting individual bias and mitigating group bias.
Consider a supervised classification problem with features $\mathbf{X} \in \mathcal{X}$, categorical protected attributes $\mathbf{D} \in \mathcal{D}$, and categorical labels $Y \in \mathcal{Y}$. We are given a set of training samples $\{(\mathbf{x}_1,\mathbf{d}_1,y_1), \ldots, (\mathbf{x}_n,\mathbf{d}_n,y_n)\}$ and would like to learn a classifier $\hat{y}: \mathcal{X} \times \mathcal{D} \rightarrow \mathcal{Y}$. For ease of exposition, we will only consider a scalar binary protected attribute, i.e.\ $\mathcal{D} = \{0,1\}$, and a binary classification problem, i.e.\ $\mathcal{Y} = \{0,1\}$.\footnote{In many realistic settings, these simplifications do not hold, which motivate the individual bias detector component described in Sec.~\ref{sec:algorithm:ind}.} The value $d = 1$ is set to correspond to the \emph{privileged} group (e.g.\ whites in the United States in criminal justice applications) and $d = 0$ to \emph{unprivileged} group (e.g.\ blacks). The value $y = 1$ is set to correspond to a \emph{favorable} outcome (e.g.\ receiving a loan or not being arrested) and $y = 0$ to an \emph{unfavorable} outcome. Based on the context, we may also deal with probabilistic binary classifiers with continuous output scores $\hat{y}_S \in [0,1]$ that are thresholded to $\{0,1\}$.
One definition of individual bias is as follows. Sample $i$ has individual bias if $\hat{y}(\mathbf{x}_i,d=0) \neq \hat{y}(\mathbf{x}_i,d=1)$. Let $b_i = I[\hat{y}(\mathbf{x}_i,d=0) \neq \hat{y}(\mathbf{x}_i,d=1)]$, where $I[\cdot]$ is an indicator function. The individual bias score, $b_{S, i} = \hat{y}_S(\mathbf{x}_i,d=1) - \hat{y}_S(\mathbf{x}_i,d=0)$, is a soft version of $b_i$. To compute an individual bias summary statistic, we take the average of $b_i$ across test samples.
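The following minimal Python sketch shows how these quantities can be computed for a scikit-learn-style probabilistic classifier; treating the protected attribute as a feature column of \texttt{X} and class~1 as the favorable outcome are assumptions of the sketch.
\begin{lstlisting}
import numpy as np

def individual_bias(model, X, d_col):
    # Flip the binary protected attribute stored in column d_col.
    X0, X1 = X.copy(), X.copy()
    X0[:, d_col], X1[:, d_col] = 0, 1
    # Indicator b_i and soft score b_{S,i} per sample.
    b = (model.predict(X0) != model.predict(X1)).astype(int)
    b_score = model.predict_proba(X1)[:, 1] - model.predict_proba(X0)[:, 1]
    return b, b_score   # summary statistic: b.mean()
\end{lstlisting}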
One notion of group fairness known as \emph{disparate impact} is defined as follows. There is disparate impact if
\begin{equation}
\label{eqn:disp_imp}
\frac{\mathbb{E}[\hat{y}(\mathbf{X},D) \mid D = 0]}{\mathbb{E}[\hat{y}(\mathbf{X},D) \mid D = 1]}
\end{equation}
is less than $1 - \epsilon$ or greater than $(1 - \epsilon)^{-1}$, where a common value of $\epsilon$ is 0.2.
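For concreteness, a minimal NumPy sketch of this check, assuming binary predictions and a binary protected attribute stored as arrays:
\begin{lstlisting}
import numpy as np

def disparate_impact(y_pred, d, eps=0.2):
    # Ratio of favorable-outcome rates: unprivileged over privileged.
    ratio = y_pred[d == 0].mean() / y_pred[d == 1].mean()
    # Fair if the ratio lies within [1 - eps, 1 / (1 - eps)].
    return ratio, (1 - eps) <= ratio <= 1 / (1 - eps)
\end{lstlisting}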
\subsection{Test Generation for Individual Bias Detection}
There are two distinct problems in individual bias detection: first, determining whether there are any cases of individual bias, and second, determining the individual bias status of all samples.
In our earlier work~\cite{AgarwalLNDS2018}, we presented a technique for the first problem that systematically explores the decision space of any black box classifier to generate test samples that have an enhanced chance of being biased. The method uses two kinds of search: (a) a global search which explores the decision space such that diverse areas are covered, and (b) a local search which generates test cases by intelligently perturbing the values of non-protected features of an already found individually-biased sample. The key idea is to use dynamic symbolic execution, an existing systematic test case generation technique for programs that generates search constraints by negating the constraints in a program path and uses a constraint solver to find new search paths \cite{DART}.
This algorithm is useful in solving the second of the distinct problems from a computational perspective when used on a batch of samples in settings involving a large number of attributes and attribute values.
\subsection{Post-Processing to Achieve Group Fairness}
To achieve acceptable group fairness, various post-processing methods may be applied to change the label outputs of the classifier $\hat{y}_i$ to other labels $\check{y}_i \in \mathcal{Y}$. The reject option classification (ROC) method of \cite{KamiranKZ2012} considers \emph{uncertain} samples with $|\hat{y}_S-0.5| < \theta$ (assuming $0.5$ is the classification threshold) for some margin parameter $\theta$ and assigns $\check{y}_i = 1$ for samples with $d_i = 0$ and assigns $\check{y}_i = 0$ for samples with $d_i = 1$. For \emph{certain} samples outside the so-called reject option band, $\check{y}_i = \hat{y}_i$. The $\theta$ value may be optimized to achieve the requirement on disparate impact.
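A minimal sketch of the ROC relabeling rule, assuming NumPy arrays of scores and protected attributes (the optimization of $\theta$ against the disparate impact requirement is omitted):
\begin{lstlisting}
import numpy as np

def roc_post_process(y_score, d, theta):
    y = (y_score >= 0.5).astype(int)
    band = np.abs(y_score - 0.5) < theta   # reject option band
    y[band & (d == 0)] = 1  # favorable outcome for unprivileged samples
    y[band & (d == 1)] = 0  # unfavorable outcome for privileged samples
    return y
\end{lstlisting}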
The algorithm proposed by \cite{HardtPS2016}, equalized odds post-processing (EOP), is targeted to a different group fairness measure: equalized odds rather than disparate impact. Perfect equalized odds requires the privileged and unprivileged groups to have the same false negative rate and same false positive rate. The algorithm solves an optimization problem to find probabilities with which to assign $\check{y}_i = 1 - \hat{y}_i$. There are four such probabilities for the following four combinations: $(d_i = 0, \hat{y} = 0)$, $(d_i = 0, \hat{y} = 1)$, $(d_i = 1, \hat{y} = 0)$, and $(d_i = 1, \hat{y} = 1)$. With these probabilities, the individual points whose predictions are flipped are chosen by random draws. The methods of \cite{PleissRWKW2017,CanettiCDRSS2018} are refinements of \cite{HardtPS2016} and share the same key characteristics.
\section{Proposed Algorithm}
\label{sec:algorithm}
The new fairness post-processing algorithm we propose is inspired by and not radically different from \cite{KamiranKZ2012} in form. The key observation in post-processing for group fairness metrics like disparate impact is that since they are defined as expectations, the individual samples are exchangeable. Kamiran et al.~\cite{KamiranKZ2012} elect to change values of $\hat{y}_i$ to $\check{y}_i$ in a reject option band to conform to one type of human sensibility, but the same effect on disparate impact can be achieved using the same numbers of samples from elsewhere in $\mathcal{X}$. And that is precisely what we propose: elect samples from parts of $\mathcal{X}$ that likely have individual bias. In this section, we first describe individual bias detection and then how we wrap that in a post-processing bias mitigation algorithm.
\subsection{Individual Bias Detector}
\label{sec:algorithm:ind}
Consider a classifier $\hat{y}$ already trained on a training dataset partition. We can evaluate the individual bias definition provided in Sec.~\ref{sec:background} on a validation partition that has no labels to go alongside. Some of these validation samples will have individual bias and some will not. Under an assumption of some coherence or smoothness of individual bias in $\mathcal{X}$, we can learn a classifier or detector for individual bias from this validation set that will generalize to unseen samples whose individual bias is unknown. One may use any classification or anomaly detection algorithm here that provides score outputs. We use logistic regression in the empirical results.
Formally, by perturbing the $d_j$ of validation set samples $(\mathbf{x}_j,d_j)$, $j = 1,\ldots,m$, that belong to the unprivileged group ($d_j = 0$), we obtain individual bias scores $b_{S, j}$. We construct a further dataset $\{(\mathbf{x}_1,\beta_1),\ldots,(\mathbf{x}_m,\beta_m)\}$, and use it to train an individual bias detector $\hat{b}(\cdot)$.
$\beta_j$ is 1 for the samples that have the highest individual bias scores, and 0 for the rest. This assignment is determined by a threshold $\tau$ on the individual bias scores chosen based on the disparate impact constraint on the entire validation set. This is similar to the ROC algorithm where the margin parameter is adjusted based on disparate impact requirements.
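The following sketch illustrates one way such a threshold could be chosen; the linear scan over candidate thresholds, the convention that \texttt{b\_score} is zero for privileged samples, and the use of the privileged-group predictions \texttt{y\_priv} for remediation are assumptions of the sketch.
\begin{lstlisting}
import numpy as np

def choose_tau(b_score, y_hat, y_priv, d, eps=0.2):
    # y_priv holds the predictions obtained with d forced to 1;
    # b_score is 0 for privileged samples by convention here.
    for tau in np.sort(np.unique(b_score))[::-1]:
        y = y_hat.copy()
        flag = (d == 0) & (b_score > tau)
        y[flag] = y_priv[flag]            # remediate flagged samples
        ratio = y[d == 0].mean() / y[d == 1].mean()
        if (1 - eps) <= ratio <= 1 / (1 - eps):
            return tau                    # fewest flips that suffice
    return 0.0
\end{lstlisting}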
One may argue that a trained individual bias detector is unnecessary and one should simply compute $b_i$ for all samples as they come at run-time because doing so only involves scoring using the black-box classifier model. This may be true, but with the following caveats. Firstly, in the exposition of the paper, we have assumed $d_i$ to be scalar and binary, when in many instances it is not. Therefore, computing $b_i$ may require several model evaluations which could be prohibitive, especially in the industrial usage we imagine in which each sample that is scored costs a certain amount of money to be paid by the entity deploying the model and remediating the bias. Secondly, we compute the binary $\beta_j$ values based on the group fairness constraint, which ensures that only examples with highest individual bias scores are considered for debiasing, and there is no overcompensation. This level of control is not possible if we consider all examples with $b_i=1$ to be equally biased.
\subsection{Overall Algorithm}
Once we have the individual bias detector $\hat{b}$
trained on the validation set, the bias mitigation algorithm applied at run-time to test samples is as follows. Each sample from the unprivileged group ($d_i = 0$) is tested for individual bias and, if it is likely to have individual bias, i.e., $\hat{b}_i = 1$, then this sample is assigned the outcome it would have received if it were in the privileged group, i.e., $\check{y}_i = \hat{y}(\mathbf{x}_i,1)$. To encode a human sensibility similar to ROC, all other samples are left unchanged, including samples from the privileged group.
The proposed algorithm is summarized below:
\begin{algorithm}
\caption{Individual+Group Debiasing (IGD) Post-Processing}
\label{algo:igd}
\begin{algorithmic}[1]
\STATE{Given classifier $\hat{y}$ trained on training set $\{(\mathbf{x}_i,d_i,y_i)\}$, and}
\STATE{Given validation set $\{\mathbf{x}_j \mid d_j = 0\}$, compute individual bias scores $\{b_{S, j} \mid d_j = 0 \}$.}
\FORALL{validation samples $\mathbf{x}_j$ with $d_j = 0$}
\IF{$b_{S, j} > \tau$}
\STATE{$\beta_j \leftarrow 1$}
\ELSE
\STATE{$\beta_j \leftarrow 0$}
\ENDIF
\ENDFOR
\STATE{Construct auxiliary dataset $\{(\mathbf{x}_j,\beta_j) \mid d_j = 0 \}$.}
\STATE{Train individual bias detector $\hat{b}$ on auxiliary dataset.}
\FORALL{run-time test samples $(\mathbf{x}_k,d_k)$}
\STATE{$\hat{y}_{k} \leftarrow \hat{y}(\mathbf{x}_k,d_k)$}
\IF{$d_{k} == 0$}
\STATE{$\hat{b}_k \leftarrow \hat{b}(\mathbf{x}_k)$}
\IF{$\hat{b}_k == 1$}
\STATE{$\check{y}_k \leftarrow \hat{y}(\mathbf{x}_k,1)$}
\ELSE
\STATE{$\check{y}_k \leftarrow \hat{y}_k$}
\ENDIF
\ELSE
\STATE{$\check{y}_k \leftarrow \hat{y}_k$}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
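A compact Python sketch of the two phases of Algorithm~\ref{algo:igd} is given below; \texttt{with\_d} is a hypothetical helper that returns a single-row feature array with the protected attribute set to the given value, and the variable names are assumptions for illustration.
\begin{lstlisting}
from sklearn.linear_model import LogisticRegression

# Validation phase: label the unprivileged validation samples by
# thresholding their bias scores, then fit the detector b-hat.
beta = (b_score_val > tau).astype(int)
detector = LogisticRegression().fit(X_val_unpriv, beta)

# Run-time phase for one test sample (x_k, d_k).
y_check = model.predict(with_d(x_k, d_k))[0]
if d_k == 0 and detector.predict(x_k.reshape(1, -1))[0] == 1:
    y_check = model.predict(with_d(x_k, 1))[0]  # privileged outcome
\end{lstlisting}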
\section{Empirical Results}
\label{sec:results}
We evaluate our proposed algorithm on three standard datasets: UCI Adult (an income dataset based on a 1994 US Census database; 45,222 samples; favorable outcome: income greater than \$50,000; protected attributes: sex, race), UCI Statlog German Credit (a credit scoring dataset; 1,000 samples; favorable outcome: low risk; protected attributes: sex, age), and ProPublica COMPAS (a prison recidivism dataset; 6,167 samples; favorable outcome: does not reoffend; protected attributes: sex, race). Each of the three datasets has two binary protected attributes that we consider as two different problems, yielding six problems overall.
We compare our proposed individual+group debiasing (IGD) algorithm with ROC \cite{KamiranKZ2012} and EOP \cite{HardtPS2016} using the implementations of ROC and EOP provided in the AI Fairness 360 toolkit \cite{aif360}.
We process and load each dataset using the AI Fairness 360 toolkit and randomly divide it into 60\% training, 20\% validation and 20\% testing partitions. We conduct experiments with 25 such random partitions of the datasets, allowing us to provide error bars in the empirical results that follow. Using the training partition, we fit both $\ell_2$-regularized logistic regression and random forests as black-box classifiers. For random forests, we set the number of trees to 100 and the minimum samples per leaf node to 20.
The parameters of all three bias mitigation approaches are optimized on the validation partition of the dataset. Both the ROC and the EOP approaches require ground truth class labels in the validation set, whereas the proposed IGD approach, being a pure run-time method, does not. ROC and IGD are optimized to achieve disparate impact in the range $(0.8,1.25)$, i.e., $\epsilon = 0.2$. EOP, being designed for equalized odds rather than disparate impact, cannot be optimized for ranges of disparate impact.
In the subsections that follow, we first demonstrate the efficacy of the individual bias detector used in the proposed IGD algorithm and then compare the three algorithms for classification accuracy, disparate impact, and individual fairness.
\subsection{Validation Results on Individual Bias Generalization}
\label{sec:ind_bias_gen}
We verify the generalization performance of the individual bias detector on unseen test data. Since the individual bias detector is used only on unprivileged group samples ($d = 0$), its performance measure is only computed for this subset. The ground truth labels for the bias detector are obtained by actually computing the individual bias scores ($b_{S,k}$) for all unprivileged group samples in the test data, and identifying the ground truth bias labels ($\beta_k$) based on the disparate impact constraint. These labels are compared with the labels predicted by the bias detector ($\hat{b}_k$), and the balanced classification accuracy is computed.
The performance of the bias detector is shown in Fig.~\ref{fig:bias_det_acc_lr} for all dataset and protected attribute combinations when the black-box classifier is logistic regression. All accuracy values are above 0.85, which illustrates its clear effectiveness for the purpose at hand. The detector performs similarly when the black-box classifier is random forests, with a minimum accuracy of approximately 0.80.
\begin{figure}
\centering
\includegraphics[width=3.2in]{figures/bias_det_acc_lr.png}
\caption{Balanced accuracy of the bias detector when the black box classifier is a Logistic Regression model. The bar shows the mean accuracy, and the vertical lines show the extent of $\pm 1$ standard deviation. The dotted horizontal line shows the best possible performance.}
\label{fig:bias_det_acc_lr}
\end{figure}
\subsection{Fairness Comparisons}
\label{sec:fairness_comp}
We use three measures for comparing EOP, ROC, and IGD: (a) individual bias, (b) disparate impact, and (c) balanced classification accuracy. These measures are computed using the post-processed predictions $\check{y}$. The individual bias measure is the summary statistic discussed in Sec. \ref{sec:background}, the disparate impact measure is defined in (\ref{eqn:disp_imp}), and balanced classification accuracy is the mean of true positive and true negative rates obtained for the predictions $\check{y}$ with respect to the true labels $y$. We also obtain these measures for the original (Orig.) predictions $\hat{y}$. As shown in Fig.~\ref{fig:lr_ind_bias}, Fig.~\ref{fig:lr_disp_imp}, and Fig.~\ref{fig:lr_bal_acc}, the proposed IGD approach is the only one that consistently improves both fairness measures while keeping the accuracy close to that of the original classifier. All results are shown for logistic regression as the black-box classifier, but similar results are also observed for random forests (omitted due to space constraints).
\begin{figure}[ht]
\centering
\includegraphics[width=3.2in]{figures/lr_ind_bias.png}
\caption{Individual bias of the original model and the compared post-processing algorithms. The bar shows the mean value, and the vertical lines show the extent of $\pm 1$ standard deviation. The dotted horizontal line shows the ideal fair value (0.0).}
\label{fig:lr_ind_bias}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=3.2in]{figures/lr_disp_imp.png}
\caption{Disparate impact of the original model and the compared post-processing algorithms. The bar shows the mean value, and the vertical lines show the extent of $\pm 1$ standard deviation. The dotted horizontal line shows the ideal fair value (1.0).}
\label{fig:lr_disp_imp}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=3.2in]{figures/lr_bal_acc.png}
\caption{Balanced classification accuracy of the original model and the compared post-processing algorithms. The bar shows the mean value, and the vertical lines show the extent of $\pm 1$ standard deviation. The dotted horizontal line is the best possible accuracy (1.0).}
\label{fig:lr_bal_acc}
\end{figure}
In individual bias, the proposed IGD method performs the best for the German and COMPAS datasets. The ROC method performs the best for the Adult dataset, at the expense of reducing the balanced accuracy. Sometimes the EOP and ROC methods increase the individual bias, which is never the case with IGD. The proposed IGD method also consistently improves disparate impact over the original predictions, although outperformed by the ROC method in five out of six cases. The strong performance of the ROC approach is likely because it does not also optimize for individual bias. The EOP method performs poorly on disparate impact, likely because it was designed to equalize odds, which may or may not always result in improved disparate impact \cite{FriedlerSVCHR2018}. The proposed IGD method is also the best in preserving the balanced classifier accuracy compared to the original predictions even though no ground truth labels are used in the validation partition.
\section{Conclusion}
\label{sec:conclusion}
Algorithmic fairness is an important topic for business and society, and developing new bias mitigation algorithms that address as many facets of fairness as possible is critical. In this paper, we have developed a new post-processing algorithm that targets samples with individual bias for remediation in order to improve \emph{both} individual and group fairness metrics and shown that it does so empirically on several real-world datasets without much loss in classification accuracy. From our experience, the machine learning industry is moving towards paradigms in which there will be a separation between model building and model deployment. This will include a limited ability for deployers to gain access to the internals of pre-trained models. Therefore, post-processing algorithms, especially ones that can treat a classifier as a complete black-box are necessary. In comparison to previous work, our proposed algorithm not only tackles both individual and group fairness, but also is a pure run-time approach because it does not require ground truth class labels for the validation set.
\bibliographystyle{IEEEtran}
\section{Introduction}
Event analysis from news and social networks is very useful for a wide range of social studies and real-world applications \cite{chen2020event}, such as the impact of epidemics, wars and urban violence, finance, elections, and sentiment analysis. Although machine learning methods have been recently explored to support event analysis, learning appropriate representations for event classification algorithms is a challenging task since events have different associated components, such as people, organizations, temporal and geographical information \cite{setty2018event2vec}. Traditionally, events have been represented using bag-of-words models, thereby focusing mainly on the textual information (e.g., terms and keywords) of the events \cite{allan2002topic}. However, such representation does not adequately capture more complex relationships between events and is often criticized for the lack of semantics \cite{aggarwal2018machine}.
Recent studies represent event components through journalistic ``w'' questions, such as what, when, where, and who \cite{hamborg2018giveme5w}. In this sense, events and their components can be explicitly represented using event graphs \cite{chen2020event} (Figure \ref{fig:event_network}), where edges indicate relationships between events and component vertices. Although this is a rich representation for event data, it poses new challenges for graph-based machine learning, which raises the following question: how to extract useful knowledge from event graphs and their complex relationships?
\begin{figure}[hptb]
\centering
\includegraphics[width=0.7\linewidth]{exemplo-rede-eventos.pdf}
\caption{Example of an event graph \cite{dos2020two}.}
\label{fig:event_network}
\end{figure}
Graph embeddings methods have been used to learn latent features capable of capturing complex graph relationships, mapping each vertex in a low dimensional vector space \cite{cui2018survey, goyal2018graph, zhang2020deep}. This new representation is an embedding space used as input for several other tasks, such as event classification. For example, DeepWalk \cite{perozzi2014deepwalk} and Node2Vec \cite{grover2016node2vec} are methods based on short random walks to capture vertex neighborhood structures and learn features using a strategy similar to Word2Vec. Setty and Hose (2018) \cite{setty2018event2vec} proposed the Event2Vec method, which extends Node2Vec to respect event semantics through biased random walks. Graph Convolutional Networks (GCN) \cite{kipf2017semi} and Graph Attention Networks (GAT) \cite{velickovic2018graph} have also been extended for deep representation learning to extract high-level features from graphs. Despite recent advances, existing methods fail to meet the following important requirements for event analysis:
\begin{itemize}
\item In event analysis tasks, a small amount of data can be labeled according to the user's feedback, such as the category or utility of the event. Previous graph embedding methods, such as DeepWalk and Node2Vec, are unsupervised and ineffective in integrating labeled data during the graph embedding process.
\item Determining the importance level of the event components enables the extraction of complex patterns, for example, seasonal and geographical behaviors. Existing methods assume that such importance levels should be defined as parameters for edge weights between events and their components. However, it is impracticable for users to set these parameters manually.
\item Although GCN and GAT are promising methods for semi-supervised graph embeddings, these methods are not suitable for graphs composed of different types of vertices and relationships, such as event graphs. Also, some vertices of event graphs have associated features, such as textual information. Both GAT and GCN are unfeasible to perform graph embeddings in these scenarios.
\end{itemize}
To address these limitations, we propose the GNEE (\textbf{G}AT \textbf{N}eural \textbf{E}vent \textbf{E}mbeddings), a new semi-supervised embedding method for event graphs using Graph Attention Networks (GAT) and Event Feature Regularization. Our GNEE innovates in incorporating both vertex labels and features to improve event representation learning. The key idea is to explore graph regularization to generate a textual-based representation for all vertex types, i.e., propagate semantic features extracted from event vertices. GNEE learns the final embeddings through graph neural networks with attention mechanisms. Our main contributions are two-fold:
\begin{itemize}
\item We present a graph regularization framework to propagate existing textual features from event vertices to neighboring component vertices. We compute a semantic representation of the events through BERT-based neural language models. These models allow the generation of textual embeddings considering context information and pre-trained models from a large textual corpus. During the propagation of textual embeddings from event vertices to component vertices, a fine-tuning of the BERT-based representation is performed according to the structure of the event graph. The graph regularization step ensures that all vertices get a regularized feature, even non-text component vertices representing people's names, times, and locations. For example, the features of a location vertex will have semantic content similar to the event texts that occurred at that location.
\item We propose a semi-supervised graph embedding process guided by both labeled vertices, regularized vertex features, and the graph topology. In particular, we explore a graph attention mechanism proposed by \cite{velickovic2018graph} to automatically learn different importance levels for each vertex, thereby automatically identifying when time, location, names of people, organizations, etc., are relevant for event embedding. Thus, GNEE performs graph embedding learning with an attention mechanism to obtain neural event embeddings according to the neighborhood structure of the component vertices. The expectation is that the model will identify which component event vertex are most important for event classification.
\end{itemize}
We carried out a thorough experimental evaluation on five real-world event datasets. Our GNEE was compared with two state-of-the-art semi-supervised graph embedding methods based on GAT and GCN, and with three unsupervised graph embedding methods: DeepWalk, Node2Vec, and Struc2Vec. A statistical analysis of the results reveals that the GNEE outperforms the previous GAT-based methods for neural event embeddings in classification tasks. Furthermore, GNEE proved to be competitive with existing methods based on GCN, DeepWalk, and Node2Vec.
\section{GAT Neural Event Embeddings}
Events are related to each other through a complex structure involving components such as people, organizations, locations, and particular time intervals. In this context, graphs allow identifying relationships between events and their components, which would not be possible using a representation model based only on texts, such as the traditional bag-of-words. In addition, it is also possible to enrich the representation by incorporating features and labels at each vertex, which is then used to improve the graph embedding process. However, even recent methods for semi-supervised graph embedding require that all graph vertices contain features, which is an unusual scenario in event graphs. Alternatively, such methods use the adjacency matrix itself as features, which discards essential event information. Ideally, a graph embedding method for events should (i) be semi-supervised to consider small sets of labeled events; (ii) consider existing features for event vertices, even if component vertices do not have associated features; and (iii) automatically learn the importance of the event components. Thus, we propose the GNEE method (GAT Neural Event Embeddings), which explores attention mechanisms and event features regularization to deal with these challenges.
Let $G = (V, E, W)$ be a graph where $V$ represents a set of vertices, $E$ a set of edges, and $W$ the weights between vertices and edges. We use a heterogeneous graph representation in which the vertices are composed of two types, $V = V_E \cup V_C$, where $V_E$ are event vertices and $V_C$ are component vertices. The latter represents information about people's names, organizations, locations, times, and other metadata related to the events. In our graph-based representation, the textual information for each event $v_i \in V_E$ is represented by a feature vector $\mathbf{g}_{v_i} \in \mathbb{R}^m$ in an $m$-dimensional space obtained by some text pre-processing technique, such as the BERT-based models (detailed later in this section). Moreover, the graph contains some labeled event vertices $V_L \subset V_E$ in $K$ classes $Y = \{1, \dots, K\}$, thereby forming a training set $\{(v_1, y_1), \dots, (v_n, y_n)\}$ for semi-supervised learning, with $n = |V_L|$ and $y_j = k \in Y$.
The neural event embedding can be formulated as a mapping function $h : G(V,E,W) \rightarrow \mathbb{R}^d$ from vertices to a $d$-dimensional vector space (embedding space), where $d$ is a predefined parameter. Our proposed GNEE explores both the existing features and labels of the vertices to improve the embedding learning process, as well as the graph topology. GNEE first performs a feature regularization from the event vertices to component vertices, followed by a semi-supervised learning based on graph neural attention networks.
The textual information of the event dataset is used mainly to extract the components and relationships between events. After constructing the graph, most methods discard textual information and perform graph embedding using only the graph structure. Our GNEE incorporates textual information as a feature vector in the event vertices. We propose propagating such feature vectors to component vertices according to the network topology through a graph regularization framework. For example, if a component vertex representing a location is connected to multiple events, then that vertex must receive a feature vector similar to the event feature vectors.
We use the BERT neural language model \cite{devlin2019bert} to semantically represent the event textual data. Let $e = (t_1,...,t_k)$ be an event whose textual information is a sequence of $k$ tokens. BERT explores a masked language modeling procedure, where one of the training objectives is the noisy reconstruction defined in Equation \ref{bertmlm},
\begin{equation} \label{bertmlm}
p(\bar{e} \mid \hat{e}) = \sum_{j=1}^k m_j \, p(t_j \mid c_j)
\end{equation}
\noindent where $\hat{e}$ is a corrupted token sequence of the event $e$, $\bar{e}$ denotes the masked tokens, and $m_j$ is equal to $1$ when $t_j$ is masked and $0$ otherwise. Here, $c_j$ represents context information for the token $t_j$, usually the neighboring tokens.
BERT uses a deep neural network based on the Transformers architecture to solve $p(t_j \mid c_j)$ of Equation \ref{bertmlm}. Typically, this reduces to modeling the conditional distribution of a token $t$ given a context $c$, according to Equation \ref{embeddings_bert},
\begin{equation} \label{embeddings_bert}
p(t | c) = \frac{exp(\mathbf{h}_c^{\top} \mathbf{w}_t)}{ \sum_{t'} exp( \mathbf{h}_c^{\top} \mathbf{w}_{t'} ) }
\end{equation}
\noindent where $\mathbf{h}_c$ is a context embedding and $\mathbf{w}_t$ is a word embedding of the token $t$. The term $\sum_{t'} exp( \mathbf{h}_c^{\top} \mathbf{w}_{t'} )$ is a normalization factor using all tokens $t'$ from a context $c$. Both embeddings $\mathbf{h}_c$ and $\mathbf{w}_t$ are obtained during the BERT pre-training stage from large textual corpus. In our approach, given an event $e = (t_1, ... , t_k)$, we compute the initial event semantic feature $\mathbf{g}_e$ by taking the average vector of all token embeddings $\mathbf{g}_e = \sum_{j=1}^k \frac{1}{k} \mathbf{w}_{t_j}$. Next, a vertex $v_i \in V$ representing the event $e_i$ receives the vertex features $\mathbf{g}_{v_i} = \mathbf{g}_{e_i}$. These features are used in the graph regularization process.
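A minimal sketch of this feature extraction step using the HuggingFace \texttt{transformers} library is shown below; the specific pre-trained checkpoint is an assumption, and averaging the contextual token embeddings of the last layer is one common way to realize $\mathbf{g}_e$, not necessarily the exact procedure of the released GNEE code.
\begin{lstlisting}
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # assumption
bert = AutoModel.from_pretrained("bert-base-uncased")

def event_feature(text: str) -> torch.Tensor:
    # Average the token embeddings of the event's text to obtain g_e.
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = bert(**inputs).last_hidden_state  # shape (1, k, 768)
    return out.mean(dim=1).squeeze(0)           # g_e, with m = 768
\end{lstlisting}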
The GNEE graph regularization framework has two assumptions. First, neighboring vertices must have similar feature vectors. Second, event feature vectors must remain unchanged during regularization. Equation \ref{eq:reg-gfhf} defines the objective function to be minimized for graph regularization. The first term is related to the first assumption, where two neighboring vertices with weight $w_{v_{i}, v_{j}}$ must have a low similarity difference between their feature vectors. The second term is related to the second assumption, where the event vertices must preserve their feature vectors. The term $ \lim_{\mu\to \infty}\mu$ guarantees that a small difference $(\mathbf{f}_{v_{i}}-\mathbf{g}_{v_{i}})^2$ greatly penalizes the objective function $Q(\mathbf{F})$.
\begin{equation}
Q(\mathbf{F})=\frac{1}{2}\sum_{v_{i}, v_{j}\in V} w_{v_{i}, v_{j}} (\mathbf{f}_{v_{i}}-\mathbf{f}_{v_{j}})^2 + \lim_{\mu\to \infty}\mu \sum_{v_{i}\in V_E}(\mathbf{f}_{v_{i}}-\mathbf{g}_{v_{i}})^2
\label{eq:reg-gfhf}
\end{equation}
Equation \ref{eq:reg-gfhf} is a particular case of graph regularization, which has theoretical proofs of convergence \cite{ref:Belkin2006,ref:Zhu2003}. It can be solved via minimization with quadratic programming or through iterative methods based on label propagation.
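For illustration, a minimal NumPy sketch of such an iterative solution is given below, where \texttt{W} is the weighted adjacency matrix, \texttt{G} holds the BERT-based features for event vertices (and zeros for component vertices), and \texttt{event\_mask} marks the event rows; the fixed iteration count is an assumption, since in practice one would iterate until convergence.
\begin{lstlisting}
import numpy as np

def regularize_features(W, G, event_mask, iters=100):
    deg = np.clip(W.sum(axis=1, keepdims=True), 1e-12, None)
    F = G.copy()
    for _ in range(iters):
        F = (W @ F) / deg              # average the neighbors' features
        F[event_mask] = G[event_mask]  # clamp events (mu -> infinity)
    return F
\end{lstlisting}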
After the graph regularization step, all graph vertices will have features in the same event feature space $\mathbf{F}$, i.e., the regularized vertex features. Now, the next step of the GNEE is the semi-supervised graph embedding learning with attention mechanisms. The input of this step is a set of regularized vertex features $\mathbf{F} \in \mathbb{R}^{|V| \times m}$ (from the previous step), where $m$ is the dimension of the event textual features and $|V|$ is the total number of vertices. GNEE aims to learn a new set of high-level features $\mathbf{Z} \in \mathbb{R}^{|V| \times d}$, where $d$ is the dimension of the new learned space (graph embedding space). An innovation of GNEE in relation to the existing event analysis methods is to explore a shared self-attention mechanism $att : \mathbb{R}^{d} \times \mathbb{R}^{d} \rightarrow \mathbb{R}$ proposed by \cite{velickovic2018graph}, defined in Equation \ref{att1}, where $\mathbf{A} \in \mathbb{R}^{d \times m}$ is a weight matrix, and $\mathbf{z}_i$ and $\mathbf{z}_j$ are feature vectors of the vertices $v_i$ and $v_j$, respectively.
\begin{equation} \label{att1}
a_{v_i,v_j} = att(\mathbf{A}\mathbf{z}_{v_i}, \mathbf{A}\mathbf{z}_{v_j})
\end{equation}
An important step of the graph attention networks is to consider relationships between events and components into the attention mechanism. In this case, $a_{v_i,v_j}$ is only computed for the neighboring nodes $\mathcal{N}_{v_i}$ of the vertex $v_i$, followed by a normalization via softmax function, according to Equation \ref{attsoftmax}. In this equation, $\alpha_{v_i,v_j}$ indicates the normalized importance of the $v_j$ features to vertex $v_i$ considering the $k$ neighboring vertices $\mathcal{N}_{v_i}$.
\begin{equation} \label{attsoftmax}
\alpha_{v_i,v_j}=\operatorname{softmax}\left(a_{v_i,v_j}\right)=\frac{\exp \left(a_{v_i,v_j}\right)}{\sum_{v_k \in \mathcal{N}_{v_i}} \exp \left(a_{v_i,v_k}\right)}
\end{equation}
The attention coefficients $\alpha_{v_i,v_j} \forall v_j \in \mathcal{N}_{v_i}$ are used to learn the feature vector $\mathbf{z}_{v_i}$ through a linear combination from all neighboring vertex features, as defined in Equation \ref{attsingle}, where $\sigma$ represents some non-linearity function. This process is applied to all vertices, thus obtaining the graph embedding space $\mathbf{Z}$.
\begin{equation} \label{attsingle}
\mathbf{z}_{v_i} = \sigma\left(\sum_{v_j \in \mathcal{N}_{v_i}} \alpha_{v_i,v_j} \mathbf{A} \mathbf{z}_{v_j}\right)
\end{equation}
Equation \ref{attsingle} represents the embedding calculation considering a single attention mechanism for the entire event graph. However, previous studies show that multiple attention mechanisms can learn more appropriate representations \cite{velickovic2018graph}, especially in graphs with complex structures. Thus, since an event graph is composed of different objects and relationships, our GNEE explores multiple and independent attention mechanisms. We argue that a minimum of $C$ attention mechanisms is sufficient to learn high-level event features, where $C$ represents the number of event component types. The key idea is that each attention mechanism can (hopefully) learn the importance of the component vertices of one type and their relationship to event vertices.
\begin{equation} \label{multiatt}
\mathbf{z}_{v_i}= \bigparallel_{c=1}^{C} \sigma\left(\sum_{v_j \in \mathcal{N}_{v_i}} \alpha_{v_i,v_j}^{(c)} \mathbf{A}^{(c)} \mathbf{z}_{v_j}\right)
\end{equation}
Equation \ref{multiatt} defines the GNEE multi-head attention mechanism. The $\bigparallel$ operator indicates the concatenation of the features obtained by each attention mechanism, where $\alpha_{v_i,v_j}^{(c)}$ is the attention coefficient of the $c$-th attention mechanism with the respective weight matrix $\mathbf{A}^{(c)}$.
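As an illustration of Equations \ref{att1}--\ref{multiatt}, the sketch below implements one attention head and the multi-head concatenation in plain Python/NumPy. The attention score $att$ is realized here as a single weight vector applied to the concatenated projected features (one common choice, following \cite{velickovic2018graph}); all names are illustrative and do not mirror the GNEE codebase.
\begin{verbatim}
import numpy as np

def gat_head(Z, A_w, a_vec, neighbors):
    # Z: (n, m) vertex features; A_w: (d, m) weight matrix (Equation att1).
    # a_vec: (2d,) attention weights; neighbors: dict vertex -> list of neighbors.
    H = Z @ A_w.T                                 # project features into R^d
    Z_new = np.zeros_like(H)
    for i, nbrs in neighbors.items():
        scores = np.array([a_vec @ np.concatenate([H[i], H[j]]) for j in nbrs])
        alpha = np.exp(scores - scores.max())     # softmax (Equation attsoftmax)
        alpha /= alpha.sum()
        Z_new[i] = np.tanh(alpha @ H[nbrs])       # sigma = tanh (Equation attsingle)
    return Z_new

def multi_head_gat(Z, heads, neighbors):
    # heads: list of (A_w, a_vec) pairs; concatenation realizes Equation multiatt.
    return np.concatenate([gat_head(Z, A_w, a, neighbors)
                           for A_w, a in heads], axis=1)
\end{verbatim}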
To exemplify the GNEE multi-head attention mechanism, Figure \ref{fig:event_graph_gat_example} presents an event graph containing $8$ event vertices and $6$ component vertices of $2$ different types: $4$ organization vertices and $2$ geographic vertices. The colored vertices represent labeled events. Note that if we consider only organization vertices, the events can be classified as $\{1,2,3,4\}$ and $\{5,6,7,8\}$. On the other hand, if we consider only geographic vertices, the events can be classified as $\{1,6,7,8\}$ and $\{2,3,4,5\}$. We ran GNEE with two attention matrices and two hidden layers, where each attention matrix learns two latent features. Figures \ref{fig:example_att_hin}a and \ref{fig:example_att_hin}b show the embedding space learned by each attention matrix, which properly identified how each event component acts in the classification problem (see the decision boundaries). The GNEE source code to replicate this example, as well as to extract the features of each attention mechanism, is available at \url{https://github.com/joaopedromattos/GNEE}.
\begin{figure}[htpb]
\centering
\includegraphics[width=0.7\linewidth]{event_graph_gat_example.pdf}
\caption{Example of an event graph containing 8 events (two labeled events) and 6 components of two types.}
\label{fig:event_graph_gat_example}
\end{figure}
\begin{figure}[htpb]
\centering
\includegraphics[width=\linewidth]{example_att_hin.pdf}
\caption{Latent spaces obtained by two attention matrices. Dashed lines indicate decision boundaries.}
\label{fig:example_att_hin}
\end{figure}
Finally, we highlight some GNEE capabilities concerning exploratory event analysis tasks. In GNEE, both classification confidence vectors and feature vectors (embeddings) are available for all graph vertices. Thus, we can explore how important a specific component is for a given class, as well as compute the similarity between pairs of events, pairs of components, and between events and components. In the next section, we discuss GNEE's performance in learning event graph embeddings and present an experimental comparison involving other state-of-the-art graph embedding methods.
\section{Experimental Evaluation}
\subsection{Experimental Setup and Baselines}
We carried out an experimental evaluation involving $5$ real-world event datasets \cite{Hamborg2019b}. Table \ref{dataset_overview} shows an overview of each dataset, including the number of vertices and edges, the average vertex degree, and the number of classes.
\begin{table}[htbp]
\centering
\caption{Overview of the event graphs used in the experimental evaluation.}
\begin{tabular}{l|c|c|c|c}
\hline
\hline
Event Graph & $|V|$ & $|E|$ & Avg. Degree & \#Classes \\ \hline
\hline
GoogleNews & 227 & 270 & 2.38 & 7 \\ \hline
BBC & 392 & 453 & 2.31 & 5 \\ \hline
GoldStd & 579 & 803 & 2.77 & 13 \\ \hline
CLNews & 2191 & 3208 & 2.92 & 69 \\ \hline
40ER & 249 & 344 & 2.76 & 3 \\ \hline
\hline
\end{tabular}
\label{dataset_overview}
\end{table}
Our GNEE neural network was configured with $8$ attention matrices and $8$ hidden layers, where each attention mechanism derives $8$-dimensional embeddings. After the concatenation step, the graph embedding space consists of $64$-dimensional feature vectors.
We used the DistilBERT Multilingual model from the SentenceTransformers tool\footnote{\url{https://github.com/UKPLab/sentence-transformers}} to generate features from event texts. These features were used in the event feature regularization step (Equation \ref{eq:reg-gfhf}). We compared GNEE with $5$ graph embedding methods: DeepWalk, Node2Vec, Struc2Vec, GCN, and GAT.
\begin{table*}[htpb]
\centering
\caption{F1 classification performance values obtained from the embeddings of each method}
\begin{tabular}{l|c|c|c|c|c}
\hline
\hline
& 40ER & BBC & GoldStd & GoogleNews & CLNews \\ \hline
\hline
DeepWalk & 0.629 ± 0.09 & 0.404 ± 0.05 & 0.508 ± 0.05 & 0.622 ± 0.09 & 0.507 ± 0.03 \\ \hline
GAT & 0.594 ± 0.06 & 0.377 ± 0.07 & 0.480 ± 0.04 & 0.506 ± 0.10 & 0.475 ± 0.02 \\ \hline
GCN & 0.638 ± 0.10 & 0.458 ± 0.09 & 0.548 ± 0.04 & 0.617 ± 0.09 & \textbf{0.519 ± 0.03} \\ \hline
Node2Vec & 0.630 ± 0.11 & 0.428 ± 0.06 & 0.546 ± 0.05 & 0.584 ± 0.08 & 0.508 ± 0.02 \\ \hline
Struc2Vec & 0.415 ± 0.07 & 0.186 ± 0.05 & 0.089 ± 0.02 & 0.311 ± 0.05 & 0.066 ± 0.00 \\ \hline
GNEE (ours) & \textbf{0.744 ± 0.12} & \textbf{0.634 ± 0.07} & \textbf{0.676 ± 0.03} & \textbf{0.958 ± 0.07} & 0.274 ± 0.02 \\ \hline
\hline
\end{tabular}
\label{res1}
\end{table*}
\begin{table*}[htpb]
\centering
\caption{ACC classification performance values obtained from the embeddings of each method}
\begin{tabular}{l|c|c|c|c|c}
\hline
\hline
& \multicolumn{1}{c|}{40ER} & \multicolumn{1}{c|}{BBC} & \multicolumn{1}{c|}{GoldStd} & \multicolumn{1}{c|}{GoogleNews} & \multicolumn{1}{c}{CLNews} \\ \hline
\hline
DeepWalk & 0.751 ± 0.07 & 0.415 ± 0.05 & 0.638 ± 0.05 & 0.637 ± 0.09 & 0.607 ± 0.03 \\ \hline
GAT & 0.775 ± 0.06 & 0.393 ± 0.05 & 0.609 ± 0.04 & 0.548 ± 0.10 & 0.568 ± 0.02 \\ \hline
GCN & 0.756 ± 0.10 & 0.473 ± 0.06 & 0.655 ± 0.05 & 0.628 ± 0.12 & 0.601 ± 0.02 \\ \hline
Node2Vec & 0.751 ± 0.09 & 0.439 ± 0.06 & 0.669 ± 0.05 & 0.620 ± 0.08 & \textbf{0.609 ± 0.03} \\ \hline
Struc2Vec & 0.530 ± 0.08 & 0.203 ± 0.04 & 0.117 ± 0.03 & 0.354 ± 0.08 & 0.081 ± 0.00 \\ \hline
GNEE (ours) & \textbf{0.778 ± 0.10} & \textbf{0.636 ± 0.06} & \textbf{0.795 ± 0.02} & \textbf{0.968 ± 0.05} & 0.420 ± 0.02 \\ \hline
\end{tabular}
\label{res2}
\end{table*}
In the GNEE experimental evaluation, we randomly selected $20$\% of the event vertices as labeled vertices, thereby simulating a semi-supervised learning scenario. After the graph embedding step, the remaining unlabeled events are classified using the embeddings as input to a final layer with a logistic sigmoid activation. To evaluate the experimental results, we used the F1-Macro (F1) and Accuracy (ACC) measures \cite{Manning2008}. The same semi-supervised scenario is used to evaluate the GCN and GAT methods; however, the event feature regularization step was not used for these two methods. DeepWalk, Node2Vec, and Struc2Vec learn embeddings in an unsupervised way, so the labeled events are used only in the classification step. In this case, we used a Support Vector Machine with a linear kernel.
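For reference, the evaluation protocol used for the unsupervised baselines can be sketched as follows; this is an illustrative helper using the scikit-learn API, not the exact experiment script.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score, accuracy_score

def evaluate_embeddings(Z_events, labels, labeled_frac=0.2, seed=0):
    # Z_events: (n, d) embeddings of event vertices; labels: (n,) integer array.
    # Train a linear SVM on 20% labeled events and score on the rest.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    n_lab = int(labeled_frac * len(labels))
    train, test = idx[:n_lab], idx[n_lab:]
    clf = SVC(kernel="linear").fit(Z_events[train], labels[train])
    pred = clf.predict(Z_events[test])
    return (f1_score(labels[test], pred, average="macro"),
            accuracy_score(labels[test], pred))
\end{verbatim}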
\subsection{Results and Discussion}
We analyze and discuss the experimental results considering two aspects. First, we present the F1 and ACC values achieved by GNEE in comparison with the existing methods. Second, we perform a visual comparison of the embeddings obtained by each method using a two-dimensional projection, thereby allowing us to qualitatively compare the learned graph representations.
Tables \ref{res1} and \ref{res2} show the classification performance considering F1 and ACC measures, respectively. For both measures, GNEE achieved the best performance in four out of five datasets.
In the datasets in which GNEE achieved the best performance, it showed a minimum improvement of $18$\% in the F1 measure and $3$\% in the ACC measure.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{eval1_f1.pdf}
\caption{Critical difference diagram for the F1 measure.}
\label{fig:eval1}
\end{figure}
\begin{figure*}[htpb]
\centering
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\linewidth]{gnee.pdf}
\caption{GNEE}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\linewidth]{gcn.pdf}
\caption{GCN}
\end{subfigure}
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\linewidth]{node2vec.pdf}
\caption{Node2Vec}
\end{subfigure}
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\linewidth]{deepwalk.pdf}
\caption{DeepWalk}
\end{subfigure}
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\linewidth]{gat.pdf}
\caption{GAT}
\end{subfigure}
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=\linewidth]{Struc2Vec.pdf}
\caption{Struc2Vec}
\end{subfigure}
\caption{Two-dimensional projection (t-SNE) from the embeddings of each method in the GoldStd event graph.}
\label{embstsne}
\end{figure*}
An important analysis is the comparison between GNEE and the traditional GAT, which isolates the performance improvement obtained by the event feature regularization proposed in GNEE for event graphs. In the four datasets in which GNEE obtained the best results, the event feature regularization led to improvements between 25\% and 70\% in classification performance for both the F1 and ACC measures. This result highlights the advantage of GNEE in incorporating textual event features for all vertices during graph embedding.
Figure \ref{fig:eval1} shows the critical difference diagram for the F1 measure, computed by Friedman's test with Nemenyi's post-test at a 95\% confidence level. The methods are ordered by their average ranking over multiple executions, and two methods are connected with a line if there is no statistically significant difference between them. Although GNEE obtained the first position in the ranking, there is no evidence of statistically superior performance in relation to the GCN and Node2Vec methods. However, GNEE statistically outperforms the GAT method, which supports the benefit of the event feature regularization. Node2Vec and DeepWalk present similar performance on several event graphs; however, Node2Vec allows the adjustment of the walk bias and, consequently, the selection of the best model for each event graph.
Struc2Vec was unable to learn suitable embeddings for event graphs. In fact, Struc2Vec was hampered by its representation learning bias based on the graph's structural identity, which led it to emphasize the highest-degree component vertices when learning the embedding space. We argue that Struc2Vec could be improved for event graphs by restricting the search for structural identity to events as the ``target vertices'' of the structure.
A second aspect of the experimental discussion is to visually analyze the embedding space learned by each method. We selected the GoldStd graph for this analysis and used the t-SNE algorithm to project the embeddings from 64 dimensions to a two-dimensional space, as shown in Figure \ref{embstsne}. In addition, we color each event according to its label. Note that the separation of events according to their classes is visually perceptible for GNEE, as well as for GCN, indicating a suitable learned embedding space. Node2Vec and DeepWalk present an intersection of events of different categories, which is expected since they are unsupervised methods. In this dataset, GAT was inefficient in learning embeddings in the absence of vertex features.
\section{Conclusion}
We present, discuss and evaluate state-of-the-art methods for representation learning from event graphs. Moreover, we present important requirements for event representation learning: (i) allowing semi-supervised graph embedding to consider some labeled data; (ii) automatically learning the importance of event components; and (iii) dealing with the absence of features for some vertices.
We propose the GNEE (GAT Neural Event Embeddings) method, which meets the requirements presented above and obtains competitive performance compared to existing methods. GNEE incorporates state-of-the-art techniques in graph learning to perform event feature regularization, mitigating the challenge of learning embeddings from graphs containing both events and their components. Attention mechanisms are then used to determine the importance of a vertex and its neighbors, which helps determine the importance of events and components. The GNEE source code is available at \url{https://github.com/joaopedromattos/GNEE}, together with the datasets and the source code to reproduce the experiments.
Directions for future work involve investigating how the attention mechanisms act on event graphs using an explainable AI methodology. The general idea is to incorporate recent advances in interpretable models into event representation learning from graphs.
\bibliographystyle{IEEEtran}
\section{Introduction}
The Voronoi Diagram (VD) is one of the essential structures in computational geometry, along with the convex hull and the Delaunay triangulation, which is its dual. The VD provides proximity information about the input seeds (points), something that is required by several scientific and technological applications \cite{Qi2019GPredicatesGI,articleBaTo,10.1007/978-3-642-83539-1_3}.
GPU-based techniques exist that employ a data-parallel design in order to generate the VD efficiently. One of the best-known algorithms is the Jump Flooding Algorithm (JFA) \cite{RongJFA,4276119}, which is considered one of the fastest VD building techniques. Another efficient approach is Facet-JFA \cite{10.1145/2683483.2683503}, which in some cases is faster than JFA.
The aforementioned approaches are normally used in a static context, i.e., a pixel grid and a set of seeds with fixed locations. If the particles move slowly over time, one could still apply any of these state-of-the-art approaches at each time step; however, this would incur a computational cost much higher than necessary, as each state may be only a small displacement from the previous one. Dynamic Voronoi diagrams open the possibility of taking advantage of the previous state as well as of the particles' behavior. This work focuses on this research opportunity: we first study the existing GPU rasterized techniques for computing the VD, and then propose an algorithm that solves the dynamic case with uniformly random moving particles in 2D. Lastly, the proposed method is compared in terms of GPU performance and similarity.
The remaining Sections cover background on Voronoi Diagrams (Section \ref{sec:voronoi-diagrams}) and the Jump Flooding Algorithm (Section \ref{sec:jfa}), problem statement (Section \ref{sec:problem-statement}), proposed algorithm (Section \ref{sec:proposed-algorithm}), experimental evaluation (Section \ref{sec:experimental-evaluation}) and Conclusions (Section \ref{sec:conclusions}).
\section{Background on Voronoi Diagrams}
\label{sec:voronoi-diagrams}
The Voronoi Diagram (VD) is a geometric structure that partitions the Euclidean space. The resulting structure provides proximity information: each region surrounding a seed is the set of points closer to that seed than to any other, and each frontier contains the points equidistant to the two seeds generating the adjacent regions (Figure \ref{fig:vd-example}).
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.2]{figures/voro_exmp.png}
\caption{An example Voronoi Diagram for 17 seeds in the plane.}
\label{fig:vd-example}
\end{figure}
Voronoi Diagrams define the following parameters:
\begin{itemize}
\item $X$: Metric space or Grid.
\item $S$: Set of seeds where $S = \{P_1, P_2, \cdots, P_s\}$.
\item $R_k$: Voronoi region associated to seed $P_k$.
\item $d$: Distance function.
\end{itemize}
where each region is generated by satisfying the following condition:
\begin{equation}
\label{eq:vd-definition}
R_k = \{ x \in X \mid d(x,P_k) \le d(x,P_j)\ \forall j \neq k,\ P_k,P_j\in S\}
\end{equation}
Eq. (\ref{eq:vd-definition}) states that the locations $x$ belonging to a region $R_k$ are closer to $P_k$ than to any other seed.
Voronoi diagrams can be used to simulate the structure and dynamics of cell groups \cite{10.1007/978-3-642-29280-4_21,Indermitte:118485} and crystalline compounds \cite{SBDMul,KOBAYASHI2002681}; they can also be applied to solve neighborhood problems related to building roads \cite{4058742,4459318}, among many other applications. One of the main reasons for its use is that the VD can reproduce the formation of natural structures, which are of interest in several scientific and technological fields. For the purposes of this research, we focus on a uniform distribution of seeds with uniformly random movements in 2D. This model, although simple and synthetic, still relates to some existing particle motion models under study \cite{CERDA20188, carter2018gpu}.
\section{Revisiting the Jump Flooding Algorithm (JFA)}
\label{sec:jfa}
The Jump Flooding Algorithm, or simply JFA, was proposed by Rong \& Tan in 2006 \cite{RongJFA,4276119} as a way to improve the Standard Flooding (StF) method, which was one of the best-known techniques for constructing the VD with GPU computing. The main problem of StF is that it cannot exploit enough parallelism in its first iterations, as the flood is still small. StF works by defining the seed positions as the starting points of each flood. Then, StF floods all neighbors at Chebyshev distance 1 from the existing floods, in parallel, until the grid is fully flooded. Figure \ref{fig:flood-comparison}(a) illustrates the process for one seed.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.22]{figures/flood_graph.pdf}
\caption{(a) Standard flooding algorithm, (b) Jump flooding algorithm.}
\label{fig:flood-comparison}
\end{figure}
As can be seen, StF fulfills its purpose in 5 iterations, exhibiting a small amount of parallelism at each iteration, as the neighborhood jump step is fixed at $k=1$ throughout the process. In comparison, JFA proposes a different type of flooding, jumping to the neighbors at a distance $k_i(n)$, defined as a function of the grid size ($n\times n$) and the current iteration ($i$), i.e.,
\begin{equation}
\label{eq:k_def}
k_i(n) = \frac{2^{\big\lceil\log_2(n)\big\rceil - 1}}{2^{i-1}}
\end{equation}
The process starts at $i=1$ and terminates when $k_i=1$ (inclusive). Overall, JFA performs $\log_2(k_1) + 1$ steps. As an example, for an $8 \times 8$ grid, there are three iterations with the values $\{k_1, k_2, k_3\} =\{4,2,1\}$.
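For instance, the sequence of jump lengths implied by Eq. (\ref{eq:k_def}) can be computed as follows (an illustrative Python sketch):
\begin{verbatim}
import math

def jfa_steps(n):
    # Jump lengths k_1, k_2, ..., 1 for an n x n grid (Eq. (k_def)).
    k = 2 ** (math.ceil(math.log2(n)) - 1)
    steps = []
    while k >= 1:
        steps.append(k)
        k //= 2
    return steps

# jfa_steps(8)    -> [4, 2, 1]
# jfa_steps(1000) -> [512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
\end{verbatim}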
The positions for each neighbor are defined in Table \ref{tab:neighbors}.
\begin{table}[ht!]
\caption{Neighborhood estimation of JFA 2D.}\label{tab:neighbors}
\begin{center}
\resizebox{0.85\columnwidth}{!}{
\begin{tabular}{|l|l|}
\hline
{\bfseries Neighbor} & {\bfseries Position}\\
\hline
1 & (\(x_1\),\(y_1\)) = (\(x_0\) + k, \(y_0\))\\
\hline
2 & (\(x_2\),\(y_2\)) = (\(x_0\) + k, \(y_0\) + k)\\
\hline
3 & (\(x_3\),\(y_3\)) = (\(x_0\), \(y_0\) + k)\\
\hline
4 & (\(x_4\),\(y_4\)) = (\(x_0\) - k, \(y_0\) + k)\\
\hline
5 & (\(x_5\),\(y_5\)) = (\(x_0\) - k, \(y_0\))\\
\hline
6 & (\(x_6\), \(y_6\)) = (\(x_0\) - k, \(y_0\) - k)\\
\hline
7 & (\(x_7\), \(y_7\)) = (\(x_0\), \(y_0\) - k)\\
\hline
8 & (\(x_8\), \(y_8\)) = (\(x_0\) + k, \(y_0\) - k)\\
\hline
\end{tabular}
}
\end{center}
\end{table}
The table shows that the neighbor locations follow a Moore-like neighborhood but with distance $k_i$; at every iteration of JFA this neighborhood has a shorter distance, due to the reduction of $k$, which reflects the way JFA propagates through the entire domain, as shown in Figure \ref{fig:flood-comparison}(b). The performance advantage of JFA comes from the fact that it is capable of doing more parallel work at each iteration, leading to fewer iterations than StF (Figure \ref{fig:flood-comparison}). It is also worth mentioning that if a pixel attempts to propagate to another one that has already been claimed, the distance function is used as the criterion to decide which flood carries the closest seed.
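The propagation rule just described can be sketched as a sequential reference implementation; the GPU version executes the outer double loop in parallel, one thread per pixel, and all names here are illustrative:
\begin{verbatim}
def dist2(p, q):
    # Squared Euclidean distance (monotonic, so no square root is needed).
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def jfa_pass(vd, seeds, k):
    # One flood pass with jump length k; vd[y][x] holds the claiming
    # seed index, or -1 if the pixel is still unclaimed.
    n = len(vd)
    offsets = [(dx, dy) for dx in (-k, 0, k) for dy in (-k, 0, k)
               if (dx, dy) != (0, 0)]               # Moore neighborhood
    out = [row[:] for row in vd]
    for y in range(n):
        for x in range(n):
            s = vd[y][x]
            if s < 0:
                continue
            for dx, dy in offsets:
                qx, qy = x + dx, y + dy
                if not (0 <= qx < n and 0 <= qy < n):
                    continue
                t = out[qy][qx]
                if t < 0 or dist2(seeds[s], (qx, qy)) < dist2(seeds[t], (qx, qy)):
                    out[qy][qx] = s                 # closer seed wins the pixel
    return out
\end{verbatim}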
JFA has proven to be efficient but is not free of visual errors, as stated by Rong \& Tan \cite{4276119}. Due to the distance function, it is possible to claim a pixel for a region without claiming another one in its neighborhood at Chebyshev distance 1. Fortunately, this problem can be handled by adding one or two extra rounds of the JFA algorithm (also known as JFA+1 and JFA+2).
When switching to moving particles, running JFA from scratch at each time step would no longer be the most efficient method, as discussed next.
\section{Problem Statement: Dynamic Moving Particles}
\label{sec:problem-statement}
The standard JFA is typically considered in a static context, i.e., there is a defined grid with a fixed seed set and the VD is computed once. However, in a dynamic context where seeds move over time, such as in particle simulations, $VD_{t-1}$ and $VD_t$ can end up being very similar. In such cases, a direct application of JFA at each time step would not be the most efficient approach, as it would neither take advantage of what was computed at the previous time step, nor consider the movement behavior of particles to see whether the $k$ values may admit an upper bound smaller than in the standard JFA.
Taking advantage of these properties could save some iterations of the JFA, which would translate into a performance acceleration.
In this work we consider the case study where particles exhibit a uniform random movement, for which we propose the \textit{dynamic JFA} (dJFA).
\section{A New Dynamic Jump Flooding Algorithm}
\label{sec:proposed-algorithm}
We propose the dynamic Jump Flooding Algorithm (dJFA), a modified version of the standard JFA that reuses the previous state $VD_{t-1}$ and redefines the $k_i$ parameter as $\delta_i$ based on other considerations. We also consider different types of neighborhood (Moore vs. Von Neumann), as well as different distance functions (Euclidean vs. Manhattan).
\subsection{Defining the dynamic $\delta_i$ parameter}
We recall that JFA defines $k_i$ in terms of the grid size and the current iteration (see Eq. (\ref{eq:k_def})), which leads to a total of $\log_2(k_1)+1$ sequential steps. Here, we aim to redefine $k_i$ as a smaller value in order to produce a smaller number of sequential steps. This new dynamic $k_i$, now named $\delta_i$, takes advantage of the fact that if all particles are uniformly distributed and move randomly with a uniform distribution, then the $\delta_1$ value does not need to begin as large as in Eq. (\ref{eq:k_def}). Moreover, if the density of seeds is high (and uniform, by the distribution assumption), then it is possible to reduce the total number of generational steps. In the worst case, if the density is too low, dJFA performs as fast as JFA. Considering that the seeds are randomly distributed with a uniform distribution, one parameter of interest is the average polygon length, which, given the assumptions on the seeds, can be approximated by
\begin{align}
\label{eq:L_def}
L_{avg} & \sim \sqrt{\frac{n\cdot n}{s}}.
\end{align}
Figure \ref{fig:ref-distribution} shows the distribution of the polygon (region) lengths for an example set of seeds following a random uniform distribution.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.6]{figures/dist_2.pdf}
\caption{The polygon length histogram resembles a normal distribution.}
\label{fig:ref-distribution}
\end{figure}
The average polygon length approximates a normal distribution, which is a convenient starting point for defining the $\delta_i$ parameter and its limits. Considering that $L$ follows a normal distribution, it is necessary to have a high confidence level, such as $99\%$. One approach is to consider $2\times L_{avg}$ because, as seen in Figure \ref{fig:ref-distribution}, it is enough to cover almost all the lengths, similar to a $99\%$ confidence level.
The second step in defining $\delta_i$ is to consider the moving seeds. In the assumed dynamic model, seeds move up to $d_{max}$ discrete units in any direction, randomly chosen with a uniform distribution. Therefore, at any time step, the maximum length of a region $R_k$ is bounded by the maximum of $2\times L_{avg}$ and $d_{max}$. This leads to a $\delta_i$ defined as:
\begin{equation}
\label{eq:k_d_def}
\delta_i = \frac{2^{\big\lceil\log_2(\max(2L_{avg}, d_{max}))\big\rceil}}{2^{i-1}}
\end{equation}
The ceiling function is applied to the logarithm because truncated values could lead to an incomplete computation of the VD. Having $\delta_i$ defined, dJFA now performs a total of $\log_2(\delta_1)+1$ generational steps. An extra step may be included in order to cover border cases related to the limitations of JFA, or to the $1\%$ uncertainty in the distance coverage of $L_{avg}$, which in a few cases may leave the computation incomplete.
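A direct transcription of Eqs. (\ref{eq:L_def}) and (\ref{eq:k_d_def}) reads as follows (illustrative sketch; names are ours):
\begin{verbatim}
import math

def compute_delta1(n, s, d_max):
    # Initial jump length delta_1 for an n x n grid with s seeds (Eq. (k_d_def)).
    L_avg = math.sqrt(n * n / s)   # average region length, Eq. (L_def)
    return 2 ** math.ceil(math.log2(max(2 * L_avg, d_max)))
\end{verbatim}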
The expected behavior of dJFA is that as the seed set $S$ grows, $\delta_1$ decreases, implying fewer steps to be performed. On the other hand, if $S$ is small, then dJFA behaves very much like JFA in terms of performance. Finally, if the seed movements are greater than $L_{avg}$, $d_{max}$ is used as the parameter for computing $\delta$, although this is less likely in simulations with smoothly moving particles.
\subsection{Combining Moore and Von Neumann Neighborhoods}
In terms of neighborhood, we considered the use of the Von Neumann neighborhood instead of the Moore one (Figure \ref{fig:neighborhood}).
\begin{figure}[ht!]
\centering
\includegraphics[scale= 0.5]{figures/neigh_graph.pdf}
\caption{Neighborhoods involved in dJFA.}
\label{fig:neighborhood}
\end{figure}
The motivation to use the Von Neumann neighborhood is that it requires exploring half the neighbors compared to Moore, which can speed up the computation, although at the cost of generating a less precise VD. Experimental results confirmed that Von Neumann alone generates an incorrect VD even for JFA, as shown in Figure \ref{fig:bad-vd}, where several regions are concave or even exhibit a saw-tooth border.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.10]{figures/rebvd.jpg}
\caption{VD built with JFA using only the Von Neumann neighborhood; note the anomalies in several regions.}
\label{fig:bad-vd}
\end{figure}
As a way to find an intermediate point between performance and correctness, we propose to combine both neighborhoods: the first one or two iterations of dJFA use the Von Neumann neighborhood, and the remaining iterations use the Moore one. The rationale is that in the first iterations many exploration points fall outside the domain or do not hold their definitive values, so a significant amount of work is potentially wasted.
\subsection{Euclidean vs Manhattan distances}
The default distance metric is the Euclidean one, but it is also possible to consider the Manhattan distance, which has the advantage of requiring neither square roots nor squared values. We propose two versions of dJFA: (i) dJFAe, the Euclidean distance version, and (ii) dJFAm, the Manhattan version. dJFAm is expected to be faster than dJFAe but less precise.
\subsection{dJFA Algorithm Overview}
Algorithm \ref{alg:dJFA} presents the main steps of dJFA operating within a generic simulation application with $A$ simulation steps.
\begin{algorithm}
\label{alg:dJFA}
\caption{dJFA}
\KwData{VD, S, A}
\KwResult{VD}
$step \gets 1$\;
\While{$step \le A$}{
SimulateParticles(S)\;
$\delta \gets ComputeDelta(\text{VD},S)$\;
\While{$\delta \ge 1$}{
\CommentSty{\#Von Neumann|Moore neighborhood\\}
\textbf{Par}\For{p in VD}{
\For{q in neighborhood}{
$s_p \gets VD[p]$\;
$s_q \gets VD[q]$\;
\If{$d(s_p,q) < d(s_q,q)$}{
$VD[q] = s_p$\;
}
}
}
$\delta = \delta/2$\;
}
$step=step+1$\;
}
\end{algorithm}
The algorithm receives the Voronoi Diagram (VD) grid, the seed set $S$, and the number of application simulation steps $A$. The outer \texttt{while} loop iterates over the simulation; inside each simulation iteration, a whole dJFA pass occurs. First, $\delta$ is computed, and then $\log_2(\delta_1)+1$ waves of computation occur. At each wave, parallel GPU threads are launched, mapped one-to-one to the VD pixels. Each thread explores its whole neighborhood: Von Neumann for the first two waves, Moore for the rest. At each neighbor, the thread checks whether its propagating seed is closer than the one already assigned; if it is, the neighbor location is updated with the thread's propagating seed.
\section{Experimental Evaluation}
\label{sec:experimental-evaluation}
\subsection{Experimental Setup}
The experimental evaluation used one GPU from the Patag\'on Supercomputer \cite{patagon-uach}. Its hardware specifications are listed in Table \ref{tb:specs}.
\begin{table}[!h]
\caption{Hardware used for tests.}
\label{tb:specs}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|l|}
\hline
System & Patagón Supercomputer - DGXA100 node\\
\hline
CPU & $2\times$ AMD Rome 7742 64 cores\\
\hline
GPU & $8\times$ NVIDIA A100, 40GB VRAM\\
\hline
RAM & 1TB DDR4\\
\hline
\end{tabular}
}
\end{table}
The benchmark consists of simulating different numbers of moving seeds on different grid sizes, for $A=100$ application steps, using the uniform random movement model. Figure \ref{fig:iterations-vd} shows the different VDs obtained throughout a simulation, using a small seed count for illustration purposes.
\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.3]{figures/iters_v3.png}
\caption{A 100 step simulation of 50 seeds on a grid of $1000\times 1000$ pixels. VD samples are obtained at different time steps.}
\label{fig:iterations-vd}
\end{figure*}
Three metrics are obtained in the simulation benchmark:
\begin{enumerate}
\item \textbf{Similarity}: defined as the percentage of pixels on which the dJFA and JFA resulting grids agree (see the sketch after this list), i.e.,
\begin{equation}
\text{Similarity} = 100 \times \frac{\text{matching pixels}}{\text{total pixels}}
\end{equation}
\item \textbf{Time}: The cumulative time in seconds spent in computing the VD algorithm, ignoring the time spent in moving particles. Times are denoted as $T_{JFA}, T_{dJFAe}, T_{dJFAm}$.
\item \textbf{Speedup}: The acceleration factor defined as
\begin{equation}
\text{Speedup} = \frac{T_{JFA}}{T_{dJFA}}.
\end{equation}
A second speedup is also considered, $T_{dJFAe}/T_{dJFAm}$, for measuring the acceleration from using the Manhattan distance instead of the Euclidean one.
\end{enumerate}
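As a reference, the similarity metric amounts to the following one-liner (illustrative sketch):
\begin{verbatim}
import numpy as np

def similarity(vd_a, vd_b):
    # Percentage of pixels claimed by the same seed in both diagrams.
    return 100.0 * np.mean(np.asarray(vd_a) == np.asarray(vd_b))
\end{verbatim}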
\subsection{Experimental Results}
Figure \ref{fig:similarity-dJFA} shows the similarity of both dJFAe and dJFAm with respect to a complete JFA execution. From the plots, one can note that dJFAe is notoriously more precise than dJFAm, reaching nearly $100\%$ similarity, while dJFAm reaches only $88\%$ to $92\%$. It is worth noticing that although the similarity of dJFAe decreases as there are more seeds, it stabilizes at the end. In the case of dJFAm, one positive aspect is that as the domain becomes denser, its similarity increases, each time at a higher rate, making it potentially useful for fully saturated inputs.
\begin{figure*}[!htb]
\centering
\includegraphics[scale=0.63]{figures/sim_graph_1.pdf}
\includegraphics[scale=0.63]{figures/sim_graph_2.pdf}
\caption{Similarity of dJFAe (left) and dJFAm (right) with respect to JFA.}
\label{fig:similarity-dJFA}
\end{figure*}
The execution times are presented in Figure \ref{fig:times-dJFA}.
\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.63]{figures/time_graph_1.pdf}
\includegraphics[scale=0.63]{figures/time_graph_2.pdf}
\caption{Execution times in seconds for JFA (dashed) and dJFAe/dJFAm (solid).}
\label{fig:times-dJFA}
\end{figure*}
For all $n$ values, both versions of dJFA took less time than JFA to complete the simulation steps, with dJFAm being the faster one. A staircase pattern can be noticed, where time remains constant until certain values of $n$ are met and then drops significantly ($\log$ scale). This pattern is related to the $\delta_i$ definition: certain values of $n$, in combination with the number of seeds, make the $\log$ function move to the previous integer value, reducing $\delta_i$ and thus the number of iterations, thereby increasing performance.
Speedup results are shown in Figure \ref{fig:speedup}.
\begin{figure*}[!htb]
\centering
\includegraphics[scale=0.43]{figures/sp_graph_1.pdf}
\includegraphics[scale=0.43]{figures/sp_graph_2.pdf}
\includegraphics[scale=0.43]{figures/sp_graph_3.pdf}
\caption{Left, speedup of dJFAe (solid) against JFA (dashed). Middle, speedup of dJFAm against JFA. Right, speedup of dJFAm with respect to dJFAe.}
\label{fig:speedup}
\end{figure*}
In all tests the speedup is favorable, manifesting the staircase pattern and reaching $\sim 4.5\times$ and $\sim 5\times$ for dJFAe and dJFAm, respectively. When comparing dJFAe with dJFAm, we note that for smaller grids both approaches have relatively similar performance, with dJFAm being slightly faster. For larger $n$, dJFAm is faster than dJFAe, with up to $1.2\times$ speedup at its peak. This is expected, as the Manhattan distance has a lower cost. There is an unexpected decrease in speedup at the end of the plot, meaning that for very dense grids the difference between dJFAe and dJFAm is less relevant. This aspect may require further experimentation and research.
Summarizing all tests, both variants of dJFA achieve better performance than JFA, with a high degree of similarity depending on the distance function, and with the highest speedups registered in the high-density scenarios.
\section{Conclusions and Future Work}
\label{sec:conclusions}
This work proposed the dynamic Jump Flooding Algorithm (dJFA), an adaptation of the well-known JFA to dynamic moving particles following a uniform distribution. Results show that the proposed method performs faster than standard JFA. Moreover, dJFA progressively increases its speedup as the domain gets denser with more seeds. With regard to similarity, dJFA achieved close to $100\%$ similarity compared to JFA when using the Euclidean distance metric, and over $\sim 88\%$ similarity when using the Manhattan distance. This produces two flavors of dJFA: a more precise version with the Euclidean distance, or a faster but less precise one with the Manhattan distance.
The results also revealed aspects in which further work can be done. This work did not consider the removal or insertion of seeds during simulation, which is a feature that many applications require. Implementing such a feature presents the base challenge of first supporting dynamic arrays on the GPU, which is currently a problem under research with some preliminary progress made. Another extension, even more relevant, is to generalize dJFA to any type of seed movement. This poses a major challenge for the definition of $\delta_i$, as it could not assume any distribution as this work did. Some ideas include using Dynamic Parallelism or adapting the Ray Tracing cores of recent GPUs in order to explore the particles dynamically, allowing any initial distribution and movement, even the formation of clusters. An alternative way to tackle this general problem is, instead of changing JFA, to generate a low-resolution version of the grid, do all computation in this reduced space, and then reconstruct. Some aspects of this idea have already been developed in Facet-JFA \cite{10.1145/2683483.2683503}, where low density is exploited to improve the VD computation time. We believe it is possible to take this idea one step further and benefit from recent advances in artificial intelligence, by using Deep Learning Super Sampling (DLSS) to reconstruct the low-resolution results back to the original resolution, which is itself accelerated by tensor cores. The use and combination of tensor cores with ray tracing cores presents a novel opportunity to keep exploring new algorithms for computing dynamic Voronoi Diagrams.
\bibliographystyle{plain}
\section{Introduction}
Bohr's quantum jump is accompanied by the absorption or emission of a photon in the case of a radiative resonant energy transfer. In a quantum system there are continuous transitions, making it difficult to monitor or control them. To monitor individual transitions, Dehmelt \cite{dehmelt,deh} proposed a scheme wherein, in addition to the two levels of the system, there is also a third metastable state which is much longer lived. The system undergoes transitions between the ground and upper levels but, at a certain random moment, it might transit to the metastable state and remain shelved there, until it finally returns to the ground state. This idea has been beautifully implemented by Minev et al. \cite{minev}, where instead of an atom, an artificial atom has been created by hybridizing transmons. Here the role of the metastable state is played by a dark state which is driven weakly and is not being observed. The excited state of the two-level system is continuously monitored by a dispersive coupling to the cavity. When a shift in frequency occurs which differs from what is expected from Rabi oscillations, a jump into the dark state is registered. This physical situation may also be realized by considering a two-level system whose excited state is measured repeatedly, thus interrupting the Rabi oscillations. The quantum Zeno effect (QZE) is a result of repeated projective, instantaneous measurements on a system, whereby the evolution is frozen \cite{ms}. Projective measurements assume performing instantaneous ideal measurements where the response time of the measuring apparatus is much shorter than other relevant time scales \cite{ks}. The intrinsic quantum fluctuations of the detector bring stochasticity to the quantum evolution, leading to a description in terms of quantum trajectories. Going beyond projective measurements, the quantum Zeno regime has a rich structure as a function of the frequency of measurement. The ``cascades of transition'' brought out recently \cite{parveen} appear as ``phases of the QZE'' when a quantum system is subjected to weak continuous measurements.
For practical reasons and in order to address a larger class of phenomena, we need to consider a wider class of measurements where only partial information about an observable may be extracted. In contrast with projective measurements, there exist many measurement models where measured values of an observable of interest possess some uncertainty, howsoever small \cite{jacobs-steck}. One such method is subsumed in what is known as diffusive measurement. The two outcomes (say, 0 and 1) are discerned by the detection system as probability distributions about possible values of some physical quantity (current, intensity, etc.). These occur in a certain range, modelled by the standard deviation of the distribution to which the values belong. These Poisson distributed uncorrelated outcomes, by the law of large numbers \cite{kac}, lead to a Gaussian distribution for measured quantities. As we shall see in our discussion on diffusive measurements, the appropriate measurement operators are constructed on this basis.
Action has a very special place in physics, Ehrenfest showed that the quantization of energy levels is connected to the adiabatic invariance of classical action \cite{ehrenfest}. Since a change in the value of adiabatic invariant is in terms of jumps \cite{crawford,jain2005}, action changes discontinuously upon repeated measurements. Interestingly, in Sec. 3.2.2, we bring out the validity of this idea.
If a quantum system is interacting with an environment, its evolution can be described as a solution of a stochastic master equation where the environment is being monitored. The solution consists of a multitude of paths having different weights, constituting the quantum trajectories \cite{carmichael}. A quantum trajectory describes the conditional state of knowledge of a system, given its measurement record. This evolution can only be considered `deterministic' when the measurement record is fixed, independently of discrete or continuous measurements. We calculate the most optimal path and jump events in a phase space representation. Working with individual trajectories and jumps is expected to help correct errors in a quantum system, allowing us to monitor and manipulate the jumps mid-flight \cite{minev}. The quantum Zeno effect, which is at the heart of this adaptation of Dehmelt's idea in superconducting qubits, has been shown to possess a nontrivial ``cascade of transitions'' \cite{parveen}. In this work, we employ the action principle developed in \cite{jordan,jordan2} to systematically address the quantum Zeno effect and quantum jumps by resorting to clear depictions in phase space. In the phase space representation, it turns out that the transition points are, in fact, saddle points. Interestingly, the pairs of expanding and contracting directions in phase space are inverted for the two points. There also appears a pair of separatrices. Altogether, the phase space is divided by these special structures, guiding the phase flow and the corresponding transitions and jumps in a rather intricate and instructive way.
\section{The CDJ formalism for quantum measurements}\label{CDJ formalism}
Let us say that we have performed a series of measurements on a two-level system for a total time $T$, entailing a measurement record $\{r_k\}_{k=0}^{n-1}$ \cite{jordan,jordan2}. Each readout is obtained between times $t_k$ and $t_{k+1}=t_k+\delta t$. Define a series of quantum states $\{\textbf{q}(t_k)\}_{k=0}^n$ at the times $\{t_k\}_{k=0}^n$, written as a $d$-dimensional parametrized vector $\textbf{q}$, whose components are the coefficients of the expansion of the density operator $\rho$ in some orthogonal operator basis, such as the Pauli $\sigma$ basis of a two-state system. For such a system, $\textbf{q}=\{x,y,z\}$ are the coordinates on the Bloch sphere.
The quantum trajectory can be computed via an update equation of the form, $\textbf{q}(t_{k+1})={\mathcal O}[\textbf{q}(t_{k}),r_k]$, which includes the back-action from the measurement readout and the unitary evolution from system Hamiltonian. In the Markovian approach, we can write the Joint Probability Distribution Function (JPDF) of all measurement outcomes and state trajectories, which gives all the statistical information about a system. The JPDF can be written as:
\begin{alignat}{1}\label{eq:jpdf}
\mathcal{P}_\zeta&\equiv P(\{\textbf{q}(t_{k}))\}_1^n,\{r_k\}_0^{n-1}|\textbf{q}_0,\zeta),\nonumber\\
&=B_\zeta\prod_{k=0}^{n-1}P(\textbf{q}(t_{k+1})|\textbf{q}(t_{k}),r_k)P(r_k|\textbf{q}(t_{k})),
\end{alignat}
where $P(r_k|\textbf{q}(t_{k}))$ is the conditional probability distribution for the measurement outcome $r_k$, given the state of the system before the measurement is $\textbf{q}(t_{k})$. Moreover, $P(\textbf{q}(t_{k+1})|\textbf{q}(t_{k}),r_k)=\delta^d(\textbf{q}(t_{k+1})-{\mathcal O}[\textbf{q}(t_{k}),r_k])$ is the deterministic conditional probability distribution for the quantum state after the measurement, conditioned on the state at the previous time step and measurement readout. $B_\zeta=B_\zeta[\{\textbf{q}(t_{k})\},\{r_k\}]$ is a function accounting for subensembles, i.e., where an initial or final or both states have been chosen.
For an arbitrary functional, $\mathcal{A}=\mathcal{A}[\{\textbf{q}(t_{k})\},\{r_k\}]$, its expectation value is given by the functional integral, $\langle\mathcal{A}\rangle_\zeta=\int d[\textbf{q}(t_{k})]_1^n d[r_k]_0^{n-1}\mathcal{P}_\zeta \mathcal{A}$, where $\int d[\textbf{q}(t_{k})]_1^n\equiv\int d\textbf{q}(t_1)\dots d\textbf{q}(t_n)$. Direct integration of $\mathcal{A}$ will be tedious even for a single qubit measurement problem. Following \cite{jordan}, we write the JPDF as a path integral with a suitably defined action functional.
We write $\delta^d(\textbf{q}(t_{k+1})-{\mathcal O}[\textbf{q}(t_{k}),r_k])$ for $k=0$ to $n-1$ in the Fourier integral form:
\begin{equation}
\delta(q)=\frac{1}{2\pi\iota}\int_{-\iota\infty}^{\iota\infty}e^{-pq}dp,
\end{equation}
for each component of $\textbf{q}$ and rewrite other terms in an exponential form. The conjugate variables for $\delta$-functions are denoted by $p(t_{k})$, $k=0$ to $n-1$. The final form of JPDF is then
\begin{equation}
\mathcal{P}_\zeta=\mathcal{N}\int d[\textbf{p}(t_{k})]\exp(\mathcal{S})=\mathcal{N}\int\mathcal{D}\textbf{p}\exp(\mathcal{S}),
\end{equation}
in the limit $\delta t\to 0$, where for functional integrals, $\int\mathcal{D}\textbf{p}\equiv\lim_{\delta t\to 0}\int d[\textbf{p}(t_{k})]$ and $\mathcal{N}$ is the normalization factor.
The action is then given by,
\begin{equation}\label{eq:action}
\mathcal{S}(\textbf{p},\textbf{q},r)=\mathcal{B}_\zeta+\int_0^T dt\,\{-\textbf{p}\cdot(\dot{\textbf{q}}-\mathcal{L}[\textbf{q},r])+\mathcal{F}[\textbf{q},r]\}
\end{equation}
where we have introduced $\dot{\textbf{q}}\delta t=\mathcal{L}[\textbf{q},r]\delta t$ as the continuous time version of the state-update equation $\textbf{q}(t_{k+1})={\mathcal O}[\textbf{q}(t_{k}),r_k]$. This update equation comes from the state transformation equation,
\begin{equation}\label{eq:density}
\hat{\rho}(t+\delta t)=\frac{\mathcal{M}\mathcal{U}\hat{\rho}(t)\mathcal{U}^\dagger \mathcal{M}^\dagger} {Tr[\mathcal{M}\mathcal{U}\hat{\rho}(t)\mathcal{U}^\dagger\mathcal{M}^\dagger]},
\end{equation}
where $\mathcal{M}$ is the evolution operator resulting in measurement back-action. An expansion in powers of $\delta t$ entails $P(r_k|\textbf{q}(t_{k}))\propto\exp\{\delta t\mathcal{F}[\textbf{q}(t_{k}),r_k]+\mathcal{O}(\delta t^2)\}$; we define $\mathcal{F}[\textbf{q}(t_{k}),r_k]=\ln{P(r_k|\textbf{q}(t_{k}))}$. The Hamiltonian is,
\begin{alignat}{1}\label{eq:ham}
\mathcal{H}(\textbf{p},\textbf{q},r)&=\textbf{p}\cdot\mathcal{L}[\textbf{q},r]+\mathcal{F}[\textbf{q},r]-\textbf{p}\cdot(\textbf{q}-\textbf{q}_I)\delta (t)-\textbf{p}\cdot(\textbf{q}-\textbf{q}_F)\delta(t-T)
\end{alignat}
in the continuum limit, obtained by taking $\delta t \to 0$, $n\to\infty$, and setting $t_0=0$, $t_n=T$, for a subensemble with initial and final boundary conditions on the states, $\textbf{q}(t_0)=\textbf{q}_I$ and $\textbf{q}(t_n)=\textbf{q}_F$.
We can find the largest contribution to the path integral by extremizing the action. Taking the first variation of the action over all variables and setting $\delta S$ to $0$ in \eqref{eq:action}, appealing to the principle of extremal action, we have,
\begin{alignat}{1}\label{eq:optimal}
-\dot{\textbf{q}}+\mathcal{L}[\textbf{q},r]&=0,\nonumber\\
\dot{\textbf{p}}+\frac{\delta}{\delta\textbf{q}}(\textbf{p}\cdot\mathcal{L}[\textbf{q},r])+\frac{\delta}{\delta\textbf{q}}\mathcal{F}[\textbf{q},r]&=0,\nonumber\\
\frac{\delta}{\delta r}(\textbf{p}\cdot\mathcal{L}[\textbf{q},r])+\frac{\delta}{\delta r}\mathcal{F}[\textbf{q},r]&=0.
\end{alignat}
The solutions of Eqs. \eqref{eq:optimal} (obtained from $\delta S=0$ by varying $S$ with respect to $\textbf{p}$, $\textbf{q}$, and $r$, respectively) give the most likely path, denoted by $\bar{\textbf{q}}$, $\bar{\textbf{p}}$, $\bar{r}$, for which $\mathcal{H}[\bar{\textbf{q}},\bar{\textbf{p}},\bar{r}]$ is a constant of motion. The optimal path can be a local maximum, a local minimum, or a saddle point in the constrained probability space, depending on the second variation of the action. When the optimal path is a local maximum, we call it the most likely path or the most probable path.
\section{Quantum Zeno effect in a single qubit-detector system}\label{section 3}
Let us consider a qubit undergoing coherent oscillations between the states $|0\rangle$ and $|1\rangle$ according to the Hamiltonian $H_{(\rm s)}=(1/2)(2\Omega_s)\sigma_{x_{(\rm s)}}$ ($\Omega_s>0$) \cite{minev,parveen}, where $2\Omega_s$ is the Rabi frequency of the system. As shown in Fig. \ref{zeno_system}, the state $\ket{1}$ of the system is continuously monitored by a detector (ancilla), which is subjected to measurements at successive intervals of $\delta t$. The detector is another two-level system, initially prepared in the eigenstate $\ket{0}_{(\rm d)}$ of $\sigma_{z_{(\rm d)}}$ with eigenvalue $1$. We measure $\sigma_{y_{(\rm d)}}$ of the ancilla, after which it is reset to $\ket{0}_{(\rm d)}$. The interaction of the detector with the system is subsumed in the Hamiltonian,
\begin{equation}
H_{(\rm s-\rm d)}=\frac{J}{2}(\mathbb{I}-\sigma_z)_{(\rm s)}\otimes\sigma_{y_{(\rm d)}},
\end{equation}
where the subscripts $(\rm s)$ and $(\rm d)$ stand for system (qubit) and detector respectively. $J$ is the coupling strength between the detector and the system and the possible measurement outcome, $r$, can be $0$ or $1$. Hence, the system is evolving under the combined effect of its Hamiltonian and the coupling to the detector which is given by the unitary evolution due to the total Hamiltonian, $H=H_{(s)}+H_{(s-d)}$.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.4\textwidth]{zeno_system1.JPG}
\caption{\small{A qubit is performing coherent oscillations between the state $|0\rangle$ and $|1\rangle$ with Rabi frequency, $2\Omega_s$. The state $|1\rangle$ is measured by a detector, another two-level system, with interaction strength $J$, which is initially prepared in $|0\rangle_d$ state and is reset after each measurement.}}
\label{zeno_system}
\end{center}
\end{figure}
The combined total system-detector Hamiltonian is then,
\begin{equation}
H=H_{(\rm s)}\otimes{\mathbb{I}}_{(\rm d)}+H_{(\rm s-d)}
\end{equation}
It may be recalled that the Hamiltonian describing Rabi oscillations is time-periodic, owing to the time-dependent external field. The evolution is thus describable in terms of a Floquet operator over a period of the drive. This evolution is interrupted by the measurements every $\delta t$ in time. The total evolution operator can be written in a factorized form, owing to the results in \cite{karner,karner1}. For the detector found in the state $|r\rangle_{\rm (d)}$, and in the scaling limit for continuous measurement, $\delta t\to 0$ with $J^2\delta t\to\alpha=\rm{constant}$ \cite{parveen}, the state of the system is
\begin{equation}
\ket{\psi(t+\delta t)}=M^{(r)}U_{(\rm s)}\ket{\psi(t)}
\end{equation}
where $U_{(\rm s)}=e^{-\iota H_{(\rm s)}\delta t}$ describes the unitary evolution of the system and $M^{(r)}$ is the measurement operator which is given by
\begin{equation}
M^{(r)}=~_{\rm (d)}\bra{r}e^{-\iota H_{(\rm s-\rm d)}\delta t}\ket{0}_{(\rm d)}.
\end{equation}
Using this, the Kraus operators assume the form \cite{krauss,jordan3},
\begin{align}\label{eq:M}
M^{(0)}=\begin{bmatrix}
1 & 0\\
0 & \cos{J\delta t}
\end{bmatrix}, \quad
M^{(1)} =\begin{bmatrix}
0 & 0\\
0 & \sin{J\delta t}
\end{bmatrix}.
\end{align}
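As a quick consistency check, the operators in Eq. \eqref{eq:M} form a complete set, $M^{(0)\dagger}M^{(0)}+M^{(1)\dagger}M^{(1)}=\mathbb{I}$, so that the probabilities of the two outcomes sum to one. A minimal numerical verification (an illustrative Python sketch, not part of the original analysis):
\begin{verbatim}
import numpy as np

def kraus_complete(J, dt):
    # Check M0^dag M0 + M1^dag M1 = I for the operators of Eq. (M).
    M0 = np.array([[1.0, 0.0], [0.0, np.cos(J * dt)]])
    M1 = np.array([[0.0, 0.0], [0.0, np.sin(J * dt)]])
    return np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2))
\end{verbatim}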
For the post-selected dynamics, for $r=0$, we only look at the subensemble evolution,
\begin{equation}
\ket{\psi(t+\delta t)}=M^{(0)}U_{(\rm s)}\ket{\psi(t)}.
\end{equation}
After evolution, the density matrix of the system will be \cite{jordan,jordan2},
\begin{alignat}{1}\label{rhotplusdt}
\rho(t+\delta t)&=\frac{M^{(0)}U_{(\rm s)}\ket{\psi(t)}\bra{\psi(t)}U_{(\rm s)}^\dagger M^{(0)\dagger}}{Tr[M^{(0)}U_{(\rm s)}\ket{\psi(t)}\bra{\psi(t)}U_{(\rm s)}^\dagger M^{(0)\dagger}]}\nonumber\\
&=\frac{M^{(0)}U_{(\rm s)}\rho(t)U_{(\rm s)}^\dagger M^{(0)\dagger}}{Tr[M^{(0)}U_{(\rm s)}\rho(t)U_{(\rm s)}^\dagger M^{(0)\dagger}]},
\end{alignat}
where the denominator ensures normalization. Assuming the general form of the density matrix for the initial system,
\begin{equation}\label{rhot}
\rho(t)=\frac{1}{2}\begin{bmatrix}
1+z & x-\iota y\\
x+\iota y & 1-z
\end{bmatrix},
\end{equation}
\eqref{rhotplusdt} gives
\begin{equation}\label{eq:rho_tdt}
\rho(t+\delta t)=\frac{1}{2}\begin{bmatrix}
1+z+\frac{1}{2}(\alpha(1-z^2)+4\Omega_s y)\delta t & x-\iota y-\frac{1}{2}z((x-\iota y)\alpha-4\iota\Omega_s)\delta t\\
x+\iota y-\frac{1}{2}z((x+\iota y)\alpha+4\iota\Omega_s)\delta t & 1-z-\frac{1}{2}(\alpha(1-z^2)+4\Omega_s y)\delta t
\end{bmatrix}.
\end{equation}
Equating both sides for the updated coordinates, we get,
\begin{alignat}{1}\label{eq:update}
x(t+\delta t)&=x(t)-2\Omega_s\lambda x(t)z(t)\delta t,\\\nonumber
y(t+\delta t)&=y(t)-2\Omega_s z(t)(1+\lambda y(t))\delta t,\\\nonumber
z(t+\delta t)&=z(t)+2\Omega_s (\lambda(1-z(t)^2)+y(t))\delta t,
\end{alignat}
where $\lambda=\frac{\alpha}{4\Omega_s}$. If the initial $x$-coordinate is $0$, it remains $0$, and the update takes place entirely in the $y$-$z$ plane. The equations can further be written as
\begin{alignat}{1}
\dot{x}(t)&=0,\\\nonumber
\dot{y}(t)&=-2\Omega_s z(t)(1+\lambda y(t)),\\\nonumber
\dot{z}(t)&=2\Omega_s (\lambda(1-z(t)^2)+y(t)).
\end{alignat}
The equations may be rewritten in terms of an angle variable. Writing $y=\sin\theta$ and $z=\cos\theta$, the update equation for $\theta$ becomes:
\begin{equation}\label{eq:update_dot}
\dot{\theta}(t)=-2\Omega_s (1+\lambda\sin{\theta(t)}).
\end{equation}
which is the same as that found in \cite{parveen}. As explained in Sec. 2, the functional $\mathcal{F}[\textbf{q},r]$ is the coefficient of the linear-order term in the expansion of $\ln Tr[M^{(0)\dagger} M^{(0)}\rho]$. This comes out to be
\begin{equation}
\mathcal{F}[\textbf{q},r]=-\frac{\alpha}{2}(1-\cos\theta).
\end{equation}
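The deterministic drift of Eq. \eqref{eq:update_dot} already exhibits the Zeno transition: for $\lambda<1$, $\dot{\theta}$ never vanishes and the qubit keeps completing transitions, while for $\lambda\ge 1$ the flow freezes at the fixed point $\sin\theta=-1/\lambda$. A minimal Euler-integration sketch of this equation (illustrative, not from the original analysis):
\begin{verbatim}
import numpy as np

def theta_trajectory(lam, omega_s=0.5, t_max=30.0, dt=1e-3):
    # Integrate theta' = -2*Omega_s*(1 + lam*sin(theta)), theta(0) = 0.
    steps = int(t_max / dt)
    theta = np.empty(steps)
    theta[0] = 0.0
    for i in range(1, steps):
        theta[i] = theta[i - 1] \
            - 2.0 * omega_s * (1.0 + lam * np.sin(theta[i - 1])) * dt
    return theta  # for lam >= 1 the trajectory stalls at sin(theta) = -1/lam
\end{verbatim}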
The Hamiltonian can be written according to \eqref{eq:ham},
\begin{alignat}{1}\label{eq:finalham}
\mathcal{H}=-2\Omega_s[p_\theta(1+\lambda\sin{\theta})+\lambda(1-\cos{\theta})]
\end{alignat}
where $p_\theta$ is the canonical conjugate variable to $\theta$. The corresponding Hamilton's equations are
\begin{alignat}{1}\label{eq:canonical}
\dot{\theta}(t)&=\frac{\partial \mathcal{H}}{\partial p_\theta}=-2\Omega_s(1+\lambda\sin{\theta}),\nonumber\\
\dot{p}_\theta(t)&=-\frac{\partial \mathcal{H}}{\partial \theta}=2\Omega_s\lambda( p_\theta\cos\theta+\sin\theta).
\end{alignat}
These equations describe the phase space flow and contain a wealth of information. We now turn to a description and depiction of the flow for different values of $\lambda$. At the outset, it is important to remember that lessons from the quantum Zeno effect instruct us about the effect of repeated measurements on the transition probability. In turn, this implies a relation to the ``time taken for transition'', admittedly a term used here to guide our intuition on the basis of statistics rather than to clock the time. We turn to an elaboration of the dynamics of the system in the regimes separated by the value $\lambda=1$, guided by the nullclines.
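A simple way to visualize this flow is to integrate Eqs. \eqref{eq:canonical} numerically and check that $\mathcal{H}$ of Eq. \eqref{eq:finalham} stays (approximately) constant along each trajectory. An illustrative sketch, using a plain Euler step:
\begin{verbatim}
import numpy as np

def hamiltonian_flow(theta0, p0, lam, omega_s=0.5, t_max=10.0, dt=1e-4):
    # Euler integration of Hamilton's equations (eq:canonical);
    # H = -2*Omega_s*(p*(1 + lam*sin th) + lam*(1 - cos th)) is conserved
    # along the exact flow (the Euler drift stays small for small dt).
    th, p = theta0, p0
    traj = [(th, p)]
    for _ in range(int(t_max / dt)):
        th_dot = -2.0 * omega_s * (1.0 + lam * np.sin(th))
        p_dot = 2.0 * omega_s * lam * (p * np.cos(th) + np.sin(th))
        th, p = th + th_dot * dt, p + p_dot * dt
        traj.append((th, p))
    return traj
\end{verbatim}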
The connection with Minev's experiment \cite{minev} vis-\`a-vis the Dehmelt shelving scheme and the quantum Zeno effect is brought out in Case II. We shall see that even though in Case I the duration of the Rabi oscillation period is prolonged, shelving occurs only in Case II, with the appearance of a hyperbolic point (an attractor in $\theta$ on the Bloch sphere). The ``third level'' of the standard discussion \cite{deh,ludlow} appears as the state corresponding to $\theta_1$.
\subsection{Case I: \texorpdfstring{$\lambda < 1$}{Lg}}
Transitions would occur at the Rabi frequency when the parameter $\lambda$ is zero. If we interrogate the system and perform a measurement before an oscillation is completed, we would expect to have enhanced the duration of the oscillation; we need to show this, however. It is only in the limit of a very large number of successive measurements, spaced infinitesimally in time, that the transition is completely arrested, leading to the quantum Zeno effect established by Misra and Sudarshan \cite{ms,peres}. We find it illustrative as well as instructive to draw conclusions by studying the canonical sections in phase space.
\subsubsection{Portraits of qubit evolution in phase space}
We consider curves in the $(\theta, p_{\theta})$ phase plane corresponding to the evolution of the qubit from $\theta=0$ (state $|0\rangle$) to $\theta=-\pi$ (state $|1\rangle$) on the Bloch sphere. For $\lambda$ equal to zero, Fig. \ref{fig:Phase_space<1}(a) shows that $p_{\theta}$ is a constant of the motion and the frequency is $2\Omega_s$, the Rabi frequency. For non-zero $\lambda$, the straight lines of Fig. \ref{fig:Phase_space<1}(b) become unstable in such a way that there is a pair of $p_{\theta}$ values about which there is attraction (repulsion) for positive (negative) $\theta$.
From \eqref{eq:finalham}, for $-\mathcal{H}/(2\Omega_s) = {\mathcal E}$, we have
\begin{equation}\label{eq:ptheta}
p_{\theta}(\theta ; \lambda , \mathcal E) = \frac{\mathcal E - \lambda (1 - \cos \theta)}{1 + \lambda \sin \theta}.
\end{equation}
\begin{figure*}[htbp!]
\centering
\subfloat[]{\includegraphics[width=0.42\textwidth]{fig2a.png}}\hspace{5mm}
\subfloat[]{\includegraphics[width=0.42\textwidth]{fig2b.png}}\\
\caption{Phase space plots after extremizing the action showing invariant curves (most optimal paths) where the Hamiltonian, ${\mathcal H}(\theta, p_{\theta})$ = constant \eqref{eq:finalham}. (a) For $\lambda=0$, we notice that $p_{\theta}$ is a constant of the motion; (b) for $\lambda=0.5$, while energy is constant, $p_{\theta}$ is no longer a constant ($\Omega_s=0.5$). It is evident that $\theta$ is continuously evolving towards $-\pi$, when $\lambda$ is between 0 and 1.}
\label{fig:Phase_space<1}
\end{figure*}
\subsubsection{Action integral}
We calculate the action for the phase space curves traced by the dynamics dictated by the Hamiltonian, \eqref{eq:finalham}. The stochastic action integral for the system is given by
\begin{alignat}{1}\label{eq:area_la<1}
\mathcal A (\lambda ) &= \int_{t_i}^{t_f}(-p_\theta \dot{\theta}+\mathcal{H})dt\\
&=\int_{\theta_i}^{\theta_f}\mathcal{F}\frac{dt}{d\theta}d\theta\\
&= \frac{2\lambda}{\sqrt{1-\lambda^2}}\bigg(\tan^{-1}\bigg[\frac{\lambda+\tan{\frac{\theta_f}{2}}}{\sqrt{1-\lambda^2}}\bigg]-\tan^{-1}\bigg[\frac{\lambda+\tan{\frac{\theta_i}{2}}}{\sqrt{1-\lambda^2}}\bigg]\bigg)-\ln\bigg[\frac{1+\lambda\sin{\theta_f}}{1+\lambda\sin{\theta_i}}\bigg].
\end{alignat}
This is seen to change sign about $\lambda = {\mathcal E}$. Fig. \ref{fig:Phase_space<1} (b) shows the curves with negative (positive) areas in the part where $p_{\theta}$ is positive (negative).
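The closed form can be cross-checked numerically: using $\dot\theta$ from \eqref{eq:canonical}, the integrand $\mathcal{F}\,dt/d\theta$ reduces to $\lambda(1-\cos\theta)/(1+\lambda\sin\theta)$, whose quadrature reproduces \eqref{eq:area_la<1}. A minimal sketch, with assumed values of $\lambda$, $\theta_i$ and $\theta_f$:
\begin{verbatim}
# Sketch: closed-form action versus direct quadrature of the integrand
# lam*(1 - cos t)/(1 + lam*sin t); parameter values are assumed.
import numpy as np
from scipy.integrate import quad

def action_closed(lam, ti, tf):
    s = np.sqrt(1.0 - lam**2)
    at = lambda t: np.arctan((lam + np.tan(t / 2.0)) / s)
    return (2.0 * lam / s) * (at(tf) - at(ti)) \
        - np.log((1.0 + lam * np.sin(tf)) / (1.0 + lam * np.sin(ti)))

lam, ti, tf = 0.5, 0.0, -np.pi / 2
num, _ = quad(lambda t: lam * (1 - np.cos(t)) / (1 + lam * np.sin(t)),
              ti, tf)
print(num, action_closed(lam, ti, tf))  # the two values should agree
\end{verbatim}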
\subsubsection{Time of transition}\label{time 0f transition}
As discussed earlier, the transition probability is intimately related to the time of transition from $\theta=0$ to $\theta=-\pi$, which becomes interesting in the context of any discussion of the Zeno effect. In this case, it is simply calculated using \eqref{eq:canonical}:
\begin{equation}\label{eq:timeperiod_la<1}
T_{\lambda<1}=\frac{1}{2\Omega_s}\left(\frac{\pi}{\sqrt{1-\lambda^2}}+\frac{2\tan^{-1}\left[\frac{\lambda}{\sqrt{1-\lambda^2}}\right]}{\sqrt{1-\lambda^2}}\right).
\end{equation}
For $\lambda=0$, this comes out to be $T_{\lambda<1}=\frac{1}{2}(\frac{2\pi}{2\Omega_s})$ which, as expected, is exactly half the time period of the Rabi oscillations. The result is independent of the energy $\mathcal E$, indicating that the time of transition is the same on all constant-energy curves in the phase space. Although this might seem surprising, it is easily understood from the fact that both the action and the momentum are linear in ${\mathcal E}$.
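This expression can be verified by direct quadrature of $dt=d\theta/\dot\theta$ using \eqref{eq:canonical}; a minimal sketch with an assumed $\Omega_s$:
\begin{verbatim}
# Sketch: transit time from theta=0 to theta=-pi, by quadrature of
# dt = d(theta)/theta_dot, versus the closed form (Omega_s assumed).
import numpy as np
from scipy.integrate import quad

def T_closed(lam, w=0.5):
    s = np.sqrt(1.0 - lam**2)
    return (np.pi / s + 2.0 * np.arctan(lam / s) / s) / (2.0 * w)

def T_numeric(lam, w=0.5):
    val, _ = quad(lambda t: 1.0 / (2 * w * (1 + lam * np.sin(t))),
                  -np.pi, 0.0)
    return val

print(T_closed(0.4), T_numeric(0.4))  # the two values should agree
\end{verbatim}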
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig3.jpg}
\caption{Plot shows the frequency of transitions (in units of GHz) from $\theta=0$ to $\theta=-\pi$ as a function of $\lambda$ for $\lambda<1$. The frequency decreases with increasing $\lambda$, indicating that the system makes fewer oscillations as the detection frequency increases, and comes to rest at $\lambda = 1$, marking the onset of the Zeno regime.}
\label{time_la<1}
\end{center}
\end{figure}
The time for making a transition from state $|0\rangle$ to $|1\rangle$ is longer than for the Rabi oscillations at $\lambda=0$. Fig. \ref{time_la<1} brings out the variation of the transition frequency as the detection frequency increases: it takes longer and longer for the oscillations to complete (Fig. \ref{sphere_projective}(a)). Eventually, at $\lambda = 1$, a kind of resonance condition is satisfied, where the Rabi period is twice the time between successive detections; here, the frequency of oscillations vanishes. The system remains in the initial state, and the quantum Zeno effect sets in. A similar conclusion is drawn in \cite{parveen}.
\subsection{Case II: \texorpdfstring{$\lambda > 1$}{Lg}}
This is clearly the Zeno regime, where a cascade of stages has been shown to occur \cite{parveen}. Inspired by this, and by the beautiful and exciting experiment \cite{minev} on the possibility of controlling quantum jumps, we present our exploration of this phenomenon in phase space, as explained above. In this description, via the action integral, it is possible to make estimates of the transition times. The cascades also show up in the phase space description. As above, we begin with the critical points about which the phase space curves are organized.
\subsubsection{Critical points and their stability}\label{Critical points}
The critical points, or nullclines, are obtained by setting the right-hand sides of \eqref{eq:update_dot} to zero. We would also like to show the points of equilibrium for this system on the Bloch sphere. The critical points $\theta_1$ and $\theta_2$ for $\lambda > 1$ turn out to be
\begin{alignat}{1}\label{eq:critical_th}
\theta_1=-\sin^{-1}\frac{1}{\lambda}, \quad \theta_2=\sin^{-1}\frac{1}{\lambda}-\pi.
\end{alignat}
From \eqref{eq:canonical}, $\dot{p}_\theta=0$ gives the critical points in $p_\theta$:
\begin{alignat}{1}\label{eq:critical_p}
{p_\theta}_1=\frac{1}{\sqrt{\lambda^2-1}}, \quad {p_\theta}_2=-\frac{1}{\sqrt{\lambda^2-1}}.
\end{alignat}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.5\textwidth]{la_1.2_phasespace.png}
\caption{The phase space vector plot for $\lambda=1.2$, shown for energy $\mathcal E=2$ with $\Omega_s=0.5$. The plot displays the critical points at $\theta_1=-\sin^{-1}\frac{1}{\lambda}$ and $\theta_2=\sin^{-1}\frac{1}{\lambda}-\pi$, so that $P_1\equiv(-0.985,1.507)$ and $P_2\equiv(-2.156,-1.507)$. At $\theta_1$ ($\theta_2$), the system is stable (unstable), whereas at $p_{\theta_1}$ ($p_{\theta_2}$), it is unstable (stable). Thus, the phase portrait has two saddle points.}
\label{fig:Phase_space_>1}
\end{center}
\end{figure}
To ascertain the stability of these points, we linearize the equations by introducing small perturbations $\epsilon_i$ and retaining terms up to linear order. Thus, writing $\theta=\theta_1+\epsilon_1$ and $\theta=\theta_2+\epsilon_2$, we obtain
\begin{alignat}{1}\label{eq:stable_th}
\epsilon_1(t)&=\epsilon_1(0)\exp(-2\Omega_s t\sqrt{\lambda^2-1}), \quad \epsilon_2(t)=\epsilon_2(0)\exp(2\Omega_s t\sqrt{\lambda^2-1}),
\end{alignat}
which shows that $\theta_1$ is a stable point while $\theta_2$ is an unstable point.
A similar analysis can be carried out for $p_\theta$. Writing $p_\theta={p_\theta}_1+\delta_1$ with ${p_\theta}_1=\frac{1}{\sqrt{\lambda^2-1}}$ and $p_\theta={p_\theta}_2+\delta_2$ with ${p_\theta}_2=-\frac{1}{\sqrt{\lambda^2-1}}$, we obtain
\begin{alignat}{1}\label{eq:stable_p}
\delta_1(t)&=\delta_1(0)\exp(2\Omega_s t\sqrt{\lambda^2-1}), \quad \delta_2(t)=\delta_2(0)\exp(-2\Omega_s t\sqrt{\lambda^2-1}),
\end{alignat}
which shows that the dynamics is unstable about ${p_{\theta}}_1$ and stable about ${p_{\theta}}_2$, opposite in sense to the fixed points of $\theta$. Hence the points $(\theta_1,p_{\theta_1})$ and $(\theta_2,p_{\theta_2})$ are saddle points.
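The saddle character can be confirmed numerically from the Jacobian of the flow \eqref{eq:canonical}; a minimal sketch with assumed parameter values:
\begin{verbatim}
# Sketch: eigenvalues of the Jacobian of (theta_dot, p_dot) at the two
# critical points; one positive and one negative eigenvalue => saddles.
import numpy as np

OMEGA_S, LAM = 0.5, 1.5   # assumed values

def jacobian(theta, p, lam=LAM, w=OMEGA_S):
    return np.array([
        [-2 * w * lam * np.cos(theta), 0.0],
        [2 * w * lam * (-p * np.sin(theta) + np.cos(theta)),
         2 * w * lam * np.cos(theta)]])

t1 = -np.arcsin(1.0 / LAM); p1 = 1.0 / np.sqrt(LAM**2 - 1.0)
t2 = np.arcsin(1.0 / LAM) - np.pi; p2 = -p1
for t, p in [(t1, p1), (t2, p2)]:
    print(np.linalg.eigvals(jacobian(t, p)))  # +/- 2*Omega_s*sqrt(lam^2-1)
\end{verbatim}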
The phase space trajectory of the system is plotted in Fig.~\ref{fig:Phase_space_>1}, which shows that the system evolves from $\theta=0$ (state $|0\rangle$) to $\theta_1=-\sin ^ {-1} \frac{1}{\lambda}$ instead of making the transition from $|0\rangle$ to $|1\rangle$. It stays at $\theta_1$, which acts as a long-lived metastable state. On increasing the interaction parameter (or the detection frequency), as $\lambda\to\infty$, $p_\theta\to\tan\frac{\theta}{2}$; in this limit the system freezes in the state $|0\rangle$, manifesting the Zeno effect, which is desirable for quantum error correction and the manipulation of qubits.
On differentiating \eqref{eq:ptheta} w.r.t. $\lambda$ and evaluating at the critical points, we obtain a relation between $\lambda$ and $\mathcal E$, thereby giving two critical values for $\mathcal E$:
\begin{align}\label{lambdavsE}
\mathcal E&=\lambda\pm\sqrt{\lambda^2-1}.
\end{align}
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.7\textwidth]{separatrix1.png}
\caption{Phase space plot obtained after extremization of the stochastic action, showing optimal paths in the Zeno regime ($\lambda>1$). From \eqref{lambdavsE}, for $\lambda=1.5$, the plot shows energy values in the three ranges $\mathcal E<\mathcal E_{C1}$, $\mathcal E_{C1}\leq\mathcal E\leq\mathcal E_{C2}$ and $\mathcal E>\mathcal E_{C2}$ ($\Omega_s=0.5$), where $\mathcal E_{C1}$ and $\mathcal E_{C2}$ are the critical values of $\mathcal E$ corresponding to a fixed $\lambda$ ($>1$). We have two separatrices, which cannot be crossed, regardless of the energy. The points $P_1\equiv(-0.729,0.894)$ and $P_2\equiv(-2.411,-0.894)$ are the critical points corresponding to $\lambda=1.5$. The small-dashed (bold) curve corresponds to the energy value $\mathcal{E}_{C1}$ ($\mathcal{E}_{C2}$), and the curve with longer dashes to an energy between these two (here $\mathcal{E}=1.5$). If the system is prepared in the state $\ket0$ or in an equal superposition of $\ket0$ and $\ket1$, it will evolve to $\theta_1$ and be localized at that point.}
\label{fig:separatrix}
\end{center}
\end{figure}
At these points, we obtain two separatrices corresponding to the two critical values of $\mathcal E$, as seen in Fig. \ref{fig:separatrix}. Clearly, such situations are possible only for $\mathcal E>0$, as $\lambda$ is a positive quantity. As seen in Fig. \ref{fig:separatrix}, one critical point is a saddle with its unstable (manifold) direction along $p_{\theta}$ and a stable transverse (manifold) direction almost along $\theta$; the other is a saddle with the directions interchanged. Localization or stabilization of the qubit in the neighbourhood of one saddle, and destabilization of the qubit in the neighbourhood of the other, drives the dynamics - a projection of which is seen on the Bloch sphere. Fig. \ref{fig:separatrix} shows the seven regions into which the phase flow is divided and organized by the saddles and separatrices. These curves portray the most likely behaviour of states. A detailed description of the possible paths may be found in the rather insightful work by Chantasri and Jordan \cite{jordan2}, where qubit stabilization in the case of qubit measurement with linear feedback is also studied; in their case, the critical points are stable in $\theta$. In our case, in the quantum Zeno regime, stabilization around $\theta_1$ and destabilization around $\theta_2$ is clearly seen; these are special states other than $|0\rangle$ and $|1\rangle$. However, quantum fluctuations and tunneling across the separatrices enable the system to continue evolving. The tunneling probability is proportional to $\exp [- \tau S_I/\hbar]$, where $S_I$ is the imaginary part of the action as the separatrix is crossed and $\tau$ is the inverse of the characteristic stability exponent along the unstable direction \cite{alonso,ssj}, found by the linearization process above.
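A sketch for visualizing the separatrices, using the critical energies \eqref{lambdavsE} and the curves \eqref{eq:ptheta} (the value $\lambda=1.5$ and the masking threshold are assumed):
\begin{verbatim}
# Sketch: separatrices and neighboring constant-energy curves, lam > 1.
import numpy as np
import matplotlib.pyplot as plt

lam = 1.5
E_c1 = lam - np.sqrt(lam**2 - 1.0)   # ~0.382
E_c2 = lam + np.sqrt(lam**2 - 1.0)   # ~2.618

theta = np.linspace(-np.pi + 1e-3, -1e-3, 2000)
for E in [0.2, E_c1, 1.5, E_c2, 3.5]:
    p = (E - lam * (1.0 - np.cos(theta))) / (1.0 + lam * np.sin(theta))
    p[np.abs(p) > 5.0] = np.nan  # mask the poles at 1 + lam*sin(theta) = 0
    plt.plot(theta, p, label=f"E={E:.3f}")
plt.xlabel("theta"); plt.ylabel("p_theta"); plt.legend(); plt.show()
\end{verbatim}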
\begin{figure*}[htbp!]
\centering
\subfloat[]{\includegraphics[width=0.45\textwidth]{Blochsphere_opp.png}}
\subfloat[]{\includegraphics[width=0.45\textwidth]{Blochsphere_same.png}}
\caption{To see the discontinuity in the action, we evaluate the action as a function of an arbitrary angle $\epsilon$, with $\mu=\lambda>1$, for the paths $\theta_i=\theta_1+\epsilon\to\theta_f=\theta_2$ and $\theta_i=\theta_1-\epsilon\to\theta_f=\theta_2$ in (a), and for the paths $\theta_i=\theta_2\to\theta_f=\theta_1-\epsilon$ and $\theta_i=\theta_1+\epsilon\to\theta_f=\theta_2$ in (b), for $\lambda=1.5$ and $\epsilon\to0$.}
\label{fig:blochsphere}
\end{figure*}
\begin{figure*}[htbp!]
\centering
\subfloat[]{\includegraphics[width=0.48\textwidth]{areasame1.png}}\hspace{2mm}
\subfloat[]{\includegraphics[width=0.5\textwidth]{areaopp1.png}}
\caption{(a) For an arbitrary constant $\epsilon\to0$ and $\lambda=1.5$, the action evaluated from $\theta_i\to\theta_1+\epsilon$ to $\theta_f\to\theta_2$ (blue) and from $\theta_i\to\theta_1-\epsilon$ to $\theta_f\to\theta_2$ (orange), for the paths shown in Fig. \ref{fig:blochsphere}(a) and (b), respectively. These paths comply with the direction of stability near the points $\theta_1$ and $\theta_2$, as in the vector plot of Fig. \ref{fig:Phase_space_>1}, and hence run in opposite directions but yield actions of the same sign. In (b), the action for both paths is calculated in the same direction, which is opposite to the sense of stability on one side of the stable point; hence the two actions have opposite signs.}
\label{Area}
\end{figure*}
\subsubsection{Action and quantum jumps}
We show here that, associated with a quantum jump, there appears a discontinuity in the action.
For $\lambda>1$, we denote it by $\mu$, so that $\sqrt{1 - \lambda^2} = i\sqrt{\mu^2 - 1}$. Employing the identity \cite{ablowitz},
\begin{equation}
\tan^{-1} \xi = \frac{1}{2i}\log \frac{i - \xi}{i + \xi},
\end{equation}
we obtain the action upon integration from an initial point $\theta_i$ to $\theta_f$:
\begin{alignat}{1}\label{eq:area_la>1}
\mathcal A (\mu ) &=-\frac{\mu}{\sqrt{\mu^2-1}}\log\left[\bigg(\frac{\sqrt{\mu^2-1}+\mu+\tan\frac{\theta_f}{2}}{\sqrt{\mu^2-1}-\mu-\tan\frac{\theta_f}{2}}\bigg)\bigg(\frac{\sqrt{\mu^2-1}-\mu-\tan\frac{\theta_i}{2}}{\sqrt{\mu^2-1}+\mu+\tan\frac{\theta_i}{2}}\bigg)\right]-\log\left[\frac{1+\mu\sin\theta_f}{1+\mu\sin\theta_i}\right]
\end{alignat}
for $\mu = \lambda > 1$.
Consider the Bloch sphere diagrams in Fig. \ref{fig:blochsphere}(a) and (b). If we plot the action with respect to an arbitrary angle $\epsilon\to0$, from the initial Bloch angle $\theta_i=\theta_1-\epsilon$ to $\theta_f=\theta_2$ (clockwise) and from $\theta_i=\theta_1+\epsilon$ to $\theta_f=\theta_2$ (anti-clockwise) - two paths in opposite directions - we find that both actions have positive values. If, instead, we calculate the action in the same (anti-clockwise) direction, from $\theta_i=\theta_2$ to $\theta_f=\theta_1-\epsilon$ and from $\theta_i=\theta_1+\epsilon$ to $\theta_f=\theta_2$, we find that the two actions have opposite signs, as shown in Fig. \ref{Area}. This displays a discontinuity in the area, and thus in the action, at the stable point $\theta_1$. This discontinuity in the action may be interpreted as a quantum jump.
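This sign structure can be reproduced by direct quadrature of the integrand $\lambda(1-\cos\theta)/(1+\lambda\sin\theta)$ on the two sides of the pole at $\theta_1$; a minimal sketch with a small but finite $\epsilon$ (assumed values):
\begin{verbatim}
# Sketch: actions on either side of theta_1, computed in the same
# (anti-clockwise) direction, have opposite signs (lam, eps assumed).
import numpy as np
from scipy.integrate import quad

lam, eps = 1.5, 1e-2
f = lambda t: lam * (1.0 - np.cos(t)) / (1.0 + lam * np.sin(t))
t1 = -np.arcsin(1.0 / lam)           # stable critical angle
t2 = np.arcsin(1.0 / lam) - np.pi    # unstable critical angle

inside, _ = quad(f, t2 + eps, t1 - eps)               # between the poles
outside, _ = quad(f, t1 + eps, t2 + 2 * np.pi - eps)  # the long way round
print(inside, outside)   # opposite signs: the jump in action at theta_1
\end{verbatim}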
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=0.7\textwidth]{freq_la_gt_1.jpeg}
\caption{Plot shows the frequency of transitions from $\theta=0$ to $\theta=-\pi$ as a function of $\lambda$. Here, $\omega_1$ (dashed black) is the frequency of transitions from $\theta=0\to\theta_1$, $\omega_{12}$ (orange) from $\theta_2\to\theta_1$, and $\omega_2$ (light green) from $\theta_2\to-\pi$. The frequencies $\omega_1$ and $\omega_2$ decrease at the same rate with increasing $\lambda$ (hence their plots overlap), indicating that the system makes fewer oscillations as the detection frequency increases. However, $\omega_{12}$ increases with increasing $\lambda$, which shows that the system transits at a faster rate from $\theta_2\to\theta_1$.}
\label{time_la>1}
\end{center}
\end{figure}
\begin{figure*}[htbp!]
\centering
\subfloat[]{\includegraphics[width=0.46\textwidth]{fig9a.jpg}}
\subfloat[]{\includegraphics[width=0.43\textwidth]{fig9b.jpg}}\\
\caption{(a) For $\lambda = 0.423 (< 1)$, prolonged Rabi oscillations are shown here in accordance with \eqref{eq:timeperiod_la<1}; (b) For $\lambda = 1.5 (> 1)$, in the Zeno regime, the trajectory is divided into three distinct arcs due to the presence of stable and unstable points. }
\label{sphere_projective}
\end{figure*}
We plot the probability density, i.e., the exponential of the extremized action \eqref{eq:area_la<1}, \eqref{eq:area_la>1}, with respect to $z_f$ (Fig. \ref{fig:expS}), to see the leading term of $P(z_f|z_i)$, starting from $\theta_i=0$, i.e., $z_i=1$. For a large value of $\lambda$ ($=2.5$), the final state is mostly around the stable states. As $\lambda$ decreases, the curve broadens and the most probable final states move toward $z_f=-1$.
\begin{figure}[htbp!]
\begin{center}
{\includegraphics[width=0.6\textwidth]{prob_distribution.png}}
\caption{The extremization of the action in both regimes of $\lambda$ indicates that for $\lambda=0$ the probability density $P(z_f|z_i)$ remains constant. For very small values of $\lambda$, i.e., low detector frequencies ($\lambda=0.05, 0.5$), the qubit traverses the path from the initial state $z_i=1$ ($\theta_i=0$) to the final state $z_f=-1$ ($\theta_f=-\pi$). However, when $\lambda$ is increased beyond $1$ ($\lambda=1.5, 2.5$), the qubit starts from the initial state and is arrested at $z_1$, corresponding to the critical point of \eqref{eq:critical_th}, which is a function of $\lambda$.}
\label{fig:expS}
\end{center}
\end{figure}
The plotted probability densities are normalized so that the area under each curve is $1$ for all values of $\lambda$. For $\lambda=0$, the probability is constant for all values of $z_f$, reflecting the uninterrupted Rabi oscillations of the qubit. As $\lambda$ increases, the probability of reaching the final state $z_f=-1$ (i.e., $\theta=-\pi$) decreases, corresponding to the prolonged Rabi oscillations of the qubit. For $\lambda > 1$, the probability density is almost zero for all $z_f$ except at the critical points of \eqref{eq:critical_th}, where it has peaks. This shows that the qubit is arrested at these stable points, exhibiting the quantum Zeno effect.
\subsubsection{Time of transition}
For $\lambda>1$, the time of transition cannot be calculated over the full range of $\theta$, as there are points where the phase space curves exhibit discontinuities. However, we can divide the interval into three parts: $0\to\theta_1+\epsilon$, $\theta_2+\epsilon\to\theta_1-\epsilon$ and $\theta_2-\epsilon\to-\pi$, corresponding to the frequencies $\omega_1$, $\omega_{12}$ and $\omega_2$, respectively, where $\epsilon$ is an arbitrary small number. The frequencies $\omega_1$ and $\omega_2$ decrease with increasing $\lambda$, indicating that the system makes fewer oscillations as the detection frequency increases, and comes to rest for $\lambda\gg1$, where the system freezes completely. In contrast, $\omega_{12}$ increases with increasing $\lambda$, which shows that if the system is anywhere between $\theta_1$ and $\theta_2$, it transits at a faster rate from $\theta_2\to\theta_1$; see Figs. \ref{time_la>1} and \ref{sphere_projective}(b).
\section{Diffusive measurement}\label{Diffusive}
We consider the system discussed in Sec. \ref{section 3}, albeit with a different measurement model. The state $\ket{1}$ of the qubit is continuously monitored by a detector (ancilla) which has a characteristic time $\tau$; the strength of the interaction between the qubit and the detector is $J=\sqrt{\frac{\alpha}{\delta t}}$, where $\alpha$ is a constant \cite{parveen}. The detector is another two-level system, initially prepared in the state $\ket{0}_{(\rm d)}$ of $\sigma_{{z}_{(\rm d)}}$, and reset prior to each measurement of the observable $\sigma_{y_{(\rm d)}}$. For a total measurement time $T=n \delta t$, divided into $n$ intervals such that a measurement readout is obtained in each interval $\delta t$, weak coupling implies $\tau \gg T$ \cite{jordan2} and $\tau \gg\delta t$ \cite{ashhab}, so that the detector's operation during each interval is independent.
The total system-detector Hamiltonian is
\begin{equation}\label{eq:ham_diffusive}
H=\Omega_s\sigma_{{x}_{(\rm s)}}\otimes\mathbb{I}_{(\rm d)}+\frac{J}{2}(\mathbb{I}-\sigma_z)_{(\rm s)}\otimes\sigma_{y_{(\rm d)}}.
\end{equation}
The combined system evolves via the evolution operator $\mathcal{U}$, which can be decomposed into operators corresponding to the Schr\"{o}dinger evolution of the system and the interaction between system and detector, followed by measurement. The evolution of the density matrix can be written as: \begin{equation}\label{eq:update_diff}
\rho(t+dt)=\frac{\mathcal{U}\rho(t)\mathcal{U}^\dagger}{Tr[\mathcal{U}\rho(t)\mathcal{U}^\dagger]}.
\end{equation}
Let us first consider the free evolution of a closed system which, for a time-independent Hamiltonian, is given by
\begin{equation}
\rho(t+dt)=U\rho(t)U^\dagger,
\end{equation}
where $U(t) = e^{-\iota H dt}$. The weak measurement of the system, i.e., a projective measurement of the ancilla, changes the state from $\rho(t)\otimes\ket{\phi}\bra{\phi}$ to $\rho(t+dt,i)\otimes\ket{i}\bra{i}$ \cite{martin}:
\begin{equation}
\rho(t+dt,i)\otimes\ket{i}\bra{i}=\frac{({\mathcal I}_{(\rm s)}\otimes \ket{i}\bra{i})U(\rho(t)\otimes\ket{\phi}\bra{\phi})U^\dagger({\mathcal I}_{(\rm s)}\otimes\ket{i}\bra{i})}{Tr[({\mathcal I}_{(\rm s)}\otimes \ket{i}\bra{i})U(\rho(t)\otimes\ket{\phi}\bra{\phi})U^\dagger({\mathcal I}_{(\rm s)}\otimes\ket{i}\bra{i})]},
\end{equation}
where $\ket{i}\bra{i}$ is the projection operator onto the $i$th eigenspace of the observable subjected to measurement, and $\ket{\phi}$ is the initial state of the ancilla.
We perform a projective measurement on the ancilla and trace over the ancilla to obtain the final state of the system:
\begin{alignat}{1}\label{eq:rho_martin}
\rho(t+dt,i)&=\frac{({\mathcal I}_{(\rm s)}\otimes \langle i|)U(\rho(t)\otimes|\phi\rangle\langle\phi|)U^\dagger({\mathcal I}_{(\rm s )}\otimes|i\rangle)}{Tr[({\mathcal I}_{(\rm s)}\otimes \langle i|)U(\rho(t)\otimes|\phi\rangle\langle\phi|)U^\dagger({\mathcal I}_{(\rm s)}\otimes|i\rangle)]}\nonumber\\
&=\frac{{\mathcal M}_i\rho(t){\mathcal M}_i^\dagger}{Tr[{\mathcal M}_i\rho(t){\mathcal M}_i^\dagger]},
\end{alignat}
where the ${\mathcal M}_i$'s are Kraus operators satisfying $\sum_i{\mathcal M}_i^\dagger{\mathcal M}_i={\mathcal I}$. In \eqref{eq:rho_martin}, ${\mathcal M}_i=({\mathcal I}_{(\rm s)}\otimes\langle i|)U({\mathcal I}_{(\rm s)}\otimes |\phi\rangle)$ is the Kraus operator acting on the system, given by
\begin{alignat}{1}
{\mathcal M}_i & = ({\mathcal I}_{(\rm s)}\otimes\bra{i}) U({\mathcal I}_{(\rm s)}\otimes\ket{\phi})\nonumber\\
&=({\mathcal I}_{(\rm s)}\otimes\bra{i}) \exp{\left[-\iota \Omega_s\sigma_{{x}_{(\rm s)}}\otimes\mathbb{I}_{(\rm d)}\delta t-\iota J\bigg(\frac{{\mathcal I}_{(\rm s)}-\sigma_{{z}_{(\rm s)}}}{2}\otimes\sigma_{{y}_{(\rm d)}}\bigg)\delta t\right]}({\mathcal I}_{(\rm s)}\otimes\ket{\phi})\nonumber\\
&=({\mathcal I}_{(\rm s)}\otimes\langle i|)\bigg[{\mathcal I}_{(\rm s)}\otimes {\mathcal I}_{(\rm d)}-\iota\Omega_s\sigma_{{x}_{(\rm s)}}\otimes\mathbb{I}_{(\rm d)}\delta t-\iota J\delta t \frac{{\mathcal I}_{(\rm s)}-\sigma_{{z}_{(\rm s)}}}{2}\otimes\sigma_{{y}_{(\rm d)}}\nonumber\\
&-(J\delta t)^2\frac{{\mathcal I}_{(\rm s)}-\sigma_{{z}_{(\rm s)}}}{4}\otimes {\mathcal I}_{(\rm d)}\bigg]({\mathcal I}_{(\rm s)}\otimes|\phi\rangle)+\mathcal{O}(\delta t^3).
\end{alignat}
Projecting the ancilla onto the eigenstates of $\sigma_y$, i.e., $|i\rangle=|\pm y\rangle$, leads to the following form of the Kraus operators:
\begin{alignat}{1}
{\mathcal M}_{\pm}& =\frac{1}{\sqrt{2}}\bigg({\mathcal I}_{(\rm s)} \mp \iota J\delta t \frac{{\mathcal I}_{(\rm s)}-\sigma_{{z}_{(\rm s)}}}{2}-\iota \Omega_s\sigma_{{x}_{(\rm s)}}\delta t- \frac{(J\delta t)^2}{2}\frac{{\mathcal I}_{(\rm s)}-\sigma_{{z}_{(\rm s)}}}{4}\bigg)\nonumber\\
&=\frac{1}{\sqrt{2}}\bigg({\mathcal I}_{(\rm s)} \mp \iota\sqrt{\alpha \delta t} \frac{{\mathcal I}_{(\rm s)}-\sigma_{{z}_{(\rm s)}}}{2}-\iota\Omega_s \sigma_{{x}_{(\rm s)}}\delta t- \alpha \delta t\frac{{\mathcal I}_{(\rm s)}-\sigma_{{z}_{(\rm s)}}}{4}\bigg).
\end{alignat}
Due to the inherent stochasticity of the detector, the randomness of the measurement outcomes finds its description in stochastic calculus. We define a random variable $W(t)$ such that
\begin{alignat}{1}
W(t+\delta t) = W(t)\pm\sqrt{\delta t}
\end{alignat}
with $W(0)=0$. In the continuum limit, $\delta t\to 0$, $W(t)$ describes a Wiener process. The Wiener increment $dW$ is a zero-mean Gaussian-distributed random variable with variance $\delta t$. We can write the updated state (dropping the subscript $({\rm s})$, as the Kraus operator acts on the system only) as \cite{martin}:
\begin{alignat}{1}\label{update1}
\ket{\bar{\psi}(t+\delta t)}&=\sqrt{P(dW)}{\mathcal M}_{dW}|\psi(t)\rangle\nonumber\\
&=\sqrt{P(dW)}\bigg({\mathcal I} -\iota \sqrt{\alpha}dW \frac{({\mathcal I}-\sigma_z)}{2}-\iota\Omega_s \sigma_x \delta t- \alpha \delta t\frac{({\mathcal I}-\sigma_{z})}{4}\bigg)\ket{\psi(t)}.
\end{alignat}
This is a linearised stochastic Schr\"{o}dinger equation, where $|\bar{\psi}(t+\delta t)\rangle$ is an unnormalized state. To write the equation with the correct statistics, we incorporate an It\^{o} random variable with the same statistics as those of an actual measurement. The probabilities of the two possible outcomes, corresponding to the eigenstates of $\sigma_y$, are:
\begin{alignat}{1}\label{eq:probability}
P(\pm)&=\langle\psi(t)|{\mathcal M}_{\pm}^\dagger {\mathcal M}_{\pm}\ket{\psi(t)} = \frac{1}{2}.
\end{alignat}
For writing the equation of motion, we replace the random variable $dW$ (with $dW^2=\delta t$) by an It\^{o} random variable $\delta r$. Averaging with the probability distribution \eqref{eq:probability} gives:
\begin{alignat}{1}
\langle \delta r \rangle &= \sqrt{\delta t}\,[P(+)-P(-)]= 0,\\
{\rm var}(\delta r)& = \langle (\delta r)^2\rangle-\langle \delta r\rangle^2=\delta t +{\mathcal O}(\delta t^2).
\end{alignat}
Suppose we sum the values of $\delta r$ over many time steps, keeping $\delta t$ small enough that $\langle \delta r \rangle$ remains constant over a large number of time steps. By the central limit theorem, the sum follows a Gaussian distribution with zero mean and variance $\delta t$. In the limit $\delta t\to 0$, following \cite{lewalle}, we can write:
\begin{equation}
r=\frac{dW}{dt}\sqrt{\tau}.
\end{equation}
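A minimal sketch of how such a readout record can be generated from the binary outcomes (the seed, $\tau$ and $\delta t$ are assumed):
\begin{verbatim}
# Sketch: readout r = sqrt(tau)*dW/dt built from +/- sqrt(dt) outcomes.
import numpy as np

rng = np.random.default_rng(1)
dt, tau, n = 1e-4, 1.0, 10_000
dW = rng.choice([-1.0, 1.0], size=n) * np.sqrt(dt)  # unbiased outcomes
r = np.sqrt(tau) * dW / dt                          # readout samples
print(r.mean(), r.var() * dt / tau)  # mean ~ 0, scaled variance ~ 1
\end{verbatim}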
The Kraus operator is
\begin{equation}
{\mathcal M}_{dW}= \sqrt{P(dW)} \exp \bigg(-\iota\sqrt{\alpha}\frac{r dt}{\sqrt{\tau}} \frac{({\mathcal I}-\sigma_{z})}{2}-\iota\Omega_s \sigma_x dt-\alpha\frac{({\mathcal I}-\sigma_{z})}{4} dt\bigg).
\end{equation}
For a large number of steps, $N \to \infty$, the prefactor $\sqrt{P(dW)}$ tends to a Gaussian, which leads to the following Kraus operator,
\begin{alignat}{1}
{\mathcal M}_{r}&= \bigg(\frac{dt \, e^{-r^2 dt/\tau}}{2\pi \sqrt{\tau}}\bigg)^{1/4} \exp \bigg(-\iota\Omega_s\sigma_x dt-\iota \sqrt{\frac{\alpha}{\tau}}\frac{({\mathcal I}-\sigma_{z})}{2} r dt-\alpha\frac{({\mathcal I}-\sigma_{z})}{4} dt\bigg).
\end{alignat}
Using \eqref{update1}, we can find the updated state and the final density matrix of the system:
\begin{alignat}{1}
\rho(t+dt)&=|\psi(t+dt)\rangle\langle\psi(t+dt)|\nonumber\\
&=\frac{{\mathcal M}_r\rho (t){\mathcal M}_r^\dagger}{Tr[{\mathcal M}_r \rho (t){\mathcal M}_r^\dagger ]}\nonumber\\
&=\frac{\rho(t)+\iota\Omega_s[\rho(t),\sigma_x]dt-\frac{\iota r}{2}\sqrt{\frac{\alpha}{\tau}}[{\mathcal I}-\sigma_z,\rho(t)]dt-\frac{\alpha}{4}\{\rho(t),{\mathcal I}-\sigma_z\}dt}{1-\frac{r^2}{2\tau}dt-\frac{\alpha}{2}(1-z)dt}.
\end{alignat}
Thus, we arrive at the stochastic master equation:
\begin{alignat}{1}
\frac{d\rho}{dt}&=\iota\Omega_s[\rho(t),\sigma_x]-\frac{\iota r}{2}\sqrt{\frac{\alpha}{\tau}}[{\mathcal I}-\sigma_z,\rho(t)]-\frac{\alpha}{4}\{\rho(t),{\mathcal I}-\sigma_z\}+\bigg(\frac{r^2}{2\tau}+\frac{\alpha}{2}(1-z)\bigg)\rho(t),
\end{alignat}
with $\{ \ldots \}$ denoting the anti-commutator.
\begin{figure*}[htbp!]
\centering
\subfloat[]{\includegraphics[width=0.46\textwidth]{x_0_y_0.4_la_0.000001.png}}
\subfloat[]{\includegraphics[width=0.46\textwidth]{x_.7_y_.2_z_la_0.000001.png}}
\caption{The evolution of the Bloch coordinates without measurement, i.e., $\lambda=0$, for two generic initial conditions: (a) $\{x,y,z\} \equiv \{0,0.4,0.916\}$, $\{p_x,p_y,p_z\}\equiv \{0.5,0.3,0.2\}$, and (b) $\{x,y,z\} \equiv \{0.7,0.2,0.685\}$, $\{p_x,p_y,p_z\}\equiv \{0.2,0.6,0.5\}$. In both cases, there are periodic Rabi oscillations in the $y$- and $z$-coordinates, whereas the $x$-coordinate remains constant.}
\label{fig:la=0}
\end{figure*}
Upon comparison of the matrix elements, we arrive at the update equations for the Bloch coordinates:
\begin{alignat}{1}
\dot{x}(t)&=-\frac{\alpha x z}{2}+r\sqrt{\frac{\alpha}{\tau}} y, \nonumber\\
\dot{y}(t)&=-\frac{\alpha y z}{2}-r\sqrt{\frac{\alpha}{\tau}} x-2\Omega_s z, \nonumber\\
\dot{z}(t)&=\frac{\alpha(1-z^2)}{2}+2\Omega_s y.
\end{alignat}
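These equations can be integrated with an Euler-Maruyama step, using $r\,dt=\sqrt{\tau}\,dW$; a minimal sketch with assumed parameters (cf. Figs. \ref{fig:la=0} and \ref{fig:zeno}):
\begin{verbatim}
# Sketch: stochastic Euler integration of the Bloch update equations,
# with r*dt = sqrt(tau)*dW; omega, alpha, dt and the seed are assumed.
import numpy as np

rng = np.random.default_rng(0)
omega, alpha, dt, n = 0.5, 1.5, 1e-3, 50_000
x, y, z = 0.0, 0.4, np.sqrt(1.0 - 0.16)   # initial point on the sphere

for _ in range(n):
    k = np.sqrt(alpha) * rng.normal(0.0, np.sqrt(dt))  # r*sqrt(a/tau)*dt
    x, y, z = (x + (-alpha * x * z / 2) * dt + k * y,
               y + (-alpha * y * z / 2 - 2 * omega * z) * dt - k * x,
               z + (alpha * (1 - z * z) / 2 + 2 * omega * y) * dt)
    s = (x * x + y * y + z * z) ** 0.5    # correct the Euler drift
    x, y, z = x / s, y / s, z / s
print(x, y, z)   # strong measurement tends to pin the state (Zeno)
\end{verbatim}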
The functional $\mathcal{F}$, as described in Sec. \ref{CDJ formalism}, is $-\frac{\alpha}{2}(\frac{r^2}{\alpha\tau}+1-z)$. The action ${\mathcal S}$ and the stochastic Hamiltonian ${\mathcal H}$ are given by \begin{alignat}{1}
{\mathcal S}& = \int_{0}^{T} dt (-p_x\dot x-p_y\dot y-p_z\dot z-{\mathcal H}),\nonumber\\
{\mathcal H}&=p_x\bigg(-\frac{\alpha x z}{2}+r\sqrt{\frac{\alpha}{\tau}} y\bigg) +p_y\bigg(-\frac{\alpha y z}{2}-r\sqrt{\frac{\alpha}{\tau}} x-2\Omega_s z\bigg)\nonumber\\ &+p_z\bigg(\frac{\alpha}{2}(1-z^2)+2\Omega_s y\bigg)-\frac{\alpha}{2}\bigg(\frac{r^2}{\alpha\tau}+1-z\bigg).
\end{alignat}
\begin{figure*}[htbp!]
\centering
\subfloat[]{\includegraphics[width=0.46\textwidth]{x_0_y_0.4_la_0.5.png}}
\subfloat[]{\includegraphics[width=0.43\textwidth]{x_0_y_0.4_z_la_1.5.png}}\\
\subfloat[]{\includegraphics[width=0.46\textwidth]{x_.7_y_.2_z_la_0.5.png}}
\subfloat[]{\includegraphics[width=0.43\textwidth]{x_.7_y_.2_z_la_1.5.png}}\\
\caption{For the same initial conditions as in Fig. \ref{fig:la=0}, (a) and (c) correspond to $\lambda=0.5$, and (b) and (d) to $\lambda=1.5$. We see that in the Zeno regime ($\lambda=1.5$) the coordinates freeze at the critical point $\{x,y,z\}\equiv\{0,-0.666,0.745\}$ (in spherical coordinates, $\theta_1=-\sin^{-1}\left(\frac{1}{1.5}\right)=-0.729$ and $\theta_2=\sin^{-1}\left(\frac{1}{1.5}\right)-\pi=-2.411$, with $\phi_1=\phi_2=\frac{\pi}{2}$, corresponding to the stable and unstable points of Sec. \ref{Critical points} and Fig. \ref{fig:separatrix}).}
\label{fig:zeno}
\end{figure*}
Extremization of action gives the following coupled differential equations and a constraint on $r$:
\begin{alignat}{1}\label{eq:xyzupdate}
\dot{x}(t)&=-\frac{\alpha x z}{2}+r\sqrt{\frac{\alpha}{\tau}} y,\nonumber\\
\dot{y}(t)&=-\frac{\alpha y z}{2}-r\sqrt{\frac{\alpha}{\tau}} x-2\Omega_s z,\nonumber\\
\dot{z}(t)&=\frac{\alpha}{2}(1-z^2)+2\Omega_s y, \nonumber\\
\dot{p_x}(t)&=\frac{\alpha z p_x}{2}+r\sqrt{\frac{\alpha }{\tau}}p_y,\nonumber\\
\dot{p_y}(t)&=-r\sqrt{\frac{\alpha }{\tau}}p_x+\frac{\alpha z p_y}{2}-2\Omega_s p_z, \nonumber\\
\dot{p_z}(t)&=\frac{\alpha x p_x}{2}+\frac{\alpha y p_y}{2}+2\Omega_s p_y + \alpha z p_z-\frac{\alpha}{2}, \nonumber\\
r&=\sqrt{\alpha\tau}(yp_x-xp_y).
\end{alignat}
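A sketch integrating \eqref{eq:xyzupdate} together with the readout constraint, for assumed parameter values; in the Zeno regime the Bloch coordinates tend toward the frozen point (cf. Fig. \ref{fig:zeno}):
\begin{verbatim}
# Sketch: the extremal equations with r = sqrt(alpha*tau)*(y*px - x*py).
import numpy as np
from scipy.integrate import solve_ivp

omega, alpha, tau = 0.5, 1.5, 1.0   # assumed values

def rhs(t, u):
    x, y, z, px, py, pz = u
    c = np.sqrt(alpha * tau) * (y * px - x * py) * np.sqrt(alpha / tau)
    return [-alpha * x * z / 2 + c * y,
            -alpha * y * z / 2 - c * x - 2 * omega * z,
            alpha * (1 - z * z) / 2 + 2 * omega * y,
            alpha * z * px / 2 + c * py,
            -c * px + alpha * z * py / 2 - 2 * omega * pz,
            alpha * (x * px + y * py) / 2 + 2 * omega * py
            + alpha * z * pz - alpha / 2]

u0 = [0.0, 0.4, np.sqrt(1.0 - 0.16), 0.5, 0.3, 0.2]
sol = solve_ivp(rhs, (0.0, 40.0), u0, rtol=1e-8, atol=1e-10)
print(sol.y[:3, -1])   # Bloch coordinates at the final time
\end{verbatim}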
In terms of $\theta$ and $\phi$, these critical points correspond to $\theta_1=-\sin^{-1}\left(\frac{1}{1.5}\right)=-0.729$ and $\theta_2=\sin^{-1}\left(\frac{1}{1.5}\right)-\pi=-2.411$, with $\phi_1=\phi_2=\frac{\pi}{2}$. The evolution of the density matrix turns out to be stochastic, a consequence of the Gaussian-distributed measurement outcomes \cite{Gisint}. The update of the Bloch coordinates with time is shown in Figs. \ref{fig:la=0} and \ref{fig:zeno} for different measurement frequencies and different initial conditions. Without measurement, the system exhibits Rabi oscillations (Fig. \ref{fig:la=0}). The measurements influence the dynamics of the Bloch coordinates, as shown in Fig. \ref{fig:zeno}. When the measurement frequency is higher than the Rabi frequency, the qubit is frozen in a state (Fig. \ref{fig:zeno}(b), (d)), manifesting the quantum Zeno effect. The phase space point corresponding to this state is the stable point of \eqref{eq:xyzupdate}, indeed equivalent to the stable point $\theta_1$ of Sec. \ref{section 3}.
\section{Concluding Remarks}
In this work, we have employed the action formalism developed by Jordan and coworkers \cite{jordan}, which facilitates a study of the evolution of the density matrix in time, in terms of the evolution of the Bloch coordinates and their canonically conjugate momenta. The system considered by us has been a subject of close study due to its potential relation with quantum error correction. Unlike in \cite{jordan}, where the measurement readout is performed by a quantum point contact (QPC), we consider another two-level system, namely an ancilla, entangled with our qubit, thus performing a partial measurement rather than a direct one. The dynamics of the system is controlled by the frequency with which repeated measurements are performed, leading us to a connection with the QZE. For the qubit system considered here, it is shown that the Rabi oscillations are prolonged in time. When the ratio $\lambda > 1$, it has been shown recently that a cascade of stages appears after the quantum Zeno effect sets in \cite{parveen}. There appear two critical points in $\theta$ - one stable and the other unstable. The system navigates to the stable point in the sense that an ensemble of identically prepared systems evolves towards it. Then, according to the earlier works \cite{parveen}, after some time (the duration being statistical), the system makes a transition to the unstable point, from where it reaches the state $|0\rangle$, owing to inherent quantum fluctuations and tunneling. The mechanism of this rather important dynamics is correctly and completely captured in a phase space description, as we need to consider the evolution of both $\theta$ and $p_{\theta}$ to understand the stages. In fact, as shown here, there appear two hyperbolic points in phase space, $P_1(\theta_1, p_{\theta_{1}})$ and $P_2(\theta_2, p_{\theta_{2}})$ - the important difference being that $\theta$ ($p_{\theta}$) is a stable (unstable) direction around $P_1$, and exactly the opposite is the case for $P_2$ (Fig. \ref{fig:separatrix}); at $P_1$ and $P_2$, the stable and unstable directions are interchanged. Thus, the complete explanation of the stages and the transition is as follows. While the system is attracted towards $P_1$ along $\theta$, the instability along $p_{\theta}$ allows the system to reach $P_2$, from where the system evolves away due to the instability along the $\theta$ direction. This provides a clear picture of the different stages in the QZE, as the phase space description is complete, whereas a description in terms of just the angle or the coordinates would be a reduced one. We would also like to point out that the saddle points seen in Fig. \ref{fig:separatrix} are time-reversed partners, insofar as $(\theta_1, p_{\theta_1})$ is transformed into the other point via $(\pi - \theta_1, -p_{\theta_1})$ (mod $2\pi$). So, under time reversal, the other saddle point plays the role of the state to which the system is attracted.
Even in the case of diffusive measurement, the system evolves towards the same critical points as in Sec. \ref{section 3}, thus exhibiting the quantum Zeno effect. The system demonstrates the lengthening of the Rabi period for detector frequencies lower than the Rabi frequency, and the freezing of the coordinates at the stable points for frequencies higher than the Rabi frequency, reminiscent of the ``critical slowing down'' discussed in \cite{parveen}. We conclude by noting that the stochastic action principle provides a most insightful analysis of the quantum Zeno effect and the related qubit dynamics in phase space and on the Bloch sphere.
\noindent
{\bf Acknowledgement}
We would like to thank Dr. Parveen Kumar for stimulating and instructive discussions.
\newpage
\section{Introduction}
\vspace*{-0.05in}
The Low Latency Fault Tolerance (LLFT) system provides fault tolerance
for distributed applications, using the leader-follower replication
technique. LLFT provides application-transparent replication, with
strong replica consistency, for applications that involve multiple
interacting processes or threads. LLFT supports client-server
applications where both client and server processes are replicated,
and three-tier applications where middle-tier processes are
replicated. LLFT provides fault tolerance for distributed
applications deployed over a local-area network, as in a data center,
rather than over a wide-area network, such as the Internet.
With LLFT, the processes of the application are replicated, and the
replicas of a process form a process group. Within a process group,
one replica is designated as the primary replica, and the other
replicas are the backup replicas. The primary in a process group
multicasts messages to a destination process group over a virtual
connection, as shown in Figure~\ref{conn}. The primary in the
destination process group orders the messages, performs the
operations, and produces ordering information for non-deterministic
operations, which it supplies to the backups in the destination group.
The LLFT system provides fault tolerance for the distributed
applications, with the following properties.
{\bf Strong Replica Consistency.} The LLFT system replicates the
processes of an application, and maintains strong replica consistency
within a primary component. The application continues to run
without loss of processing or messages, and without disruption to its
state. If a fault occurs, LLFT provides reconfiguration and recovery
while maintaining virtual synchrony \cite{BR:ISIS,Moser}, including transfer of
state from an existing replica to a new replica and synchronization of
the operation of the new replica with the existing replicas. To
maintain strong replica consistency within a primary component,
LLFT sanitizes (masks) non-deterministic operations, including
multi-threading, time-related operations and socket communication.
{\bf Low Latency.} The LLFT system achieves low latency message
delivery during normal operation, and low latency reconfiguration and
recovery when a fault occurs. That is, it provides fault tolerance to
the applications with minimal overhead in the response times seen by
the clients. LLFT achieves low latency by design, in that the primary
makes the decisions on the order in which operations are performed and
the ordering information is reflected to its backups. Moreover, the
replicated applications interact with each other directly, without an
intermediate daemon process and without additional context switches.
{\bf Transparency and Ease of Use.} The LLFT system provides fault
tolerance that is transparent to the application. The application is
unaware that it is replicated, and is unaware of faults. Applications
programmed using TCP socket APIs, or middleware such as Java RMI, can
be replicated without modifications to the applications. The
application programs require no extra code for fault tolerance, and
the application programmers require no special skills in fault
tolerance programming. The application program is identical to that
of a non-fault-tolerant, unreplicated application.
\begin{figure*}[t]
\begin{center}
\leavevmode
\epsfxsize=5.6in
\epsfbox{llft-group-view2.eps}
\vspace*{-0.1in}
\caption{Process groups interacting over virtual connections.}
\label{conn}
\vspace{-0.2in}
\end{center}
\end{figure*}
The novel contributions of this work lie in the design of the
components of the LLFT system.
{\bf Low Latency Messaging Protocol.} The Low Latency Messaging
Protocol provides reliable, totally ordered message delivery by
communicating message ordering information from the primary replica to
the backup replicas in a group. It ensures that, in the event of a
fault, a backup has, or can obtain, the messages and the order
information that it needs to reproduce the actions of the primary.
The replicated applications interact with each other directly, via a
group-to-group multicast.
{\bf Leader-Determined Membership Protocol.} The Leader-Determined
Membership Protocol ensures that the members of a process group have a
consistent view of the membership set and of the primary replica in
the group. It effects a membership change and a consistent view more
quickly than other membership protocols, by selecting a new primary
deterministically, based on the precedences and ranks (defined below)
of the backups in the group and by avoiding the need for a multi-round
consensus algorithm.
{\bf Virtual Determinizer Framework.} The Virtual Determinizer
Framework renders the replicas of an application virtually
deterministic by recording the order and results of each
non-deterministic operation at the primary, and by guaranteeing that
the backups obtain the same results in the same order as the primary. The
Virtual Determinizer Framework has been instantiated for major sources
of non-determinism, including multi-threading, clock-related
operations and socket communication.
\vspace*{-0.1in}
\section{Basic Concepts}
\label{BasicConcepts}
\vspace*{-0.1in}
\subsection{System Model}
\vspace*{-0.05in}
LLFT operates in an asynchronous distributed system that comprises one
or more applications running on multiple processors and communicating
over a local-area network, such as an Ethernet. Clients that run
outside the local-area network are supported via a gateway.
An application consists of one or more processes, possibly
multi-threaded with shared data, that interact with each other and
also with file systems and database systems.
A process that is non-faulty completes a computation, but there is no
bound on the time required to complete the computation. The processes
communicate via messages using an unreliable, unordered message
delivery service, such as UDP multicast, with no bound on the time
required to communicate a message. With asynchronous messaging, no
protocol can guarantee reliable message delivery within a time bound.
Thus, we adopt the (theoretical) assumption of {\it eventual reliable
communication}, {\it i.e.}, if a message is transmitted repeatedly,
it is eventually received, as do other researchers \cite{Guerraoui}.
\vspace*{-0.1in}
\subsection{Fault Model}
\vspace*{-0.05in}
The LLFT system replicates application processes to protect the
application against various types of faults, in particular:
\begin{itemize}
\item {\it\bf Crash fault} - A process does not produce
any further results.
\item {\it\bf Timing fault} - A process does not produce
a result within a given time constraint.
\end{itemize}
LLFT does not handle Byzantine faults. LLFT allows processes to
recover but, when a process recovers, it is regarded as a new process
with a new identity ({\tt birthId}). LLFT also handles communication
network faults, including message loss, selective communication loss,
and partitioning faults. Healing of partitioning faults is guaranteed
by the eventual reliable communication assumption.
To achieve liveness and termination of the algorithms, LLFT uses
unreliable fault detectors based on timeouts. The fault detectors are
necessarily unreliable, and the timeouts are a measure of how
unreliable the fault detectors are. Crash faults are detected as
timing faults by the LLFT fault detectors.
LLFT uses a leader-follower algorithm to establish the total order of
messages, to render operations virtually deterministic, and to
establish a consistent group membership. It does not use a consensus
algorithm based on unreliable fault detectors \cite{Chandra:UnrelFD}
to circumvent the impossibility result \cite{FLP:IMPOS}. Moreover,
LLFT does not assume that a majority of the processes in a group is
non-faulty, as do other works such as
\cite{Chandra:UnrelFD,Paxos,Liskov}. Rather, LLFT adopts the
(theoretical) assumption of {\it sufficient replication}, {\it i.e.},
in each primary view, there exists at least one replica that does not
become faulty. The price of this relaxation is that LLFT must be able
to detect and heal partitions of the group membership, which it does
using a precedence mechanism (described below). The risk of multiple
concurrent memberships, and thus the need to detect and heal
partitions, can be avoided only in systems in which a majority vote is
used not only for membership, but also for every value that is
communicated, as is done in aircraft flight control systems
\cite{SIFT}.
LLFT ensures that only one component of a partition, which we refer to
as the {\it primary component}, survives in the infinite sequence of
consecutive primary views of a group. Within that primary component,
LLFT maintains {\it virtual synchrony} \cite{BR:ISIS,Moser} which
means that, if the primary fails, the new primary must advance to the
state of the old primary, in particular the state known to the remote
groups of its connections, before it failed.
\vspace*{-0.1in}
\subsection{Process Groups}
\vspace*{-0.05in}
The replicas of a process form a {\it process group (virtual
process)}. Each process group has a unique identifier ({\it group
id}), which is supplied by the user. We equate a process group with its
group id. This group id is mapped by LLFT to a virtual port on which
the group sends and receives messages, as discussed below.
Each process group has a {\it group membership} that consists of the
replicas of the process. Typically, different members of a process
group run on different processors. One of the replicas in the process
group is the {\it primary replica}, and the other members are the {\it
backup replicas}. Each membership change that introduces a new
primary replica constitutes a new {\it primary view}, which has a {\it
primary view number}. Each member of a process group must know the
primary replica in its group. There is no need for the members of a
sending process group to know which member of a destination process
group is the primary replica.
\vspace*{-0.1in}
\subsection{Virtual Connections}
\vspace*{-0.05in}
The LLFT system introduces the novel, elegant idea of a virtual
connection, which is a natural extension of the point-to-point
connection of TCP.
A {\it virtual connection} is a connection between two endpoints,
where each endpoint is a process group, over which messages are
communicated between the two group endpoints. A virtual connection is
a full-duplex, many-to-many communication channel between the two
endpoints. A sender uses UDP multicast to send messages to a
destination group over the virtual connection.
A {\it virtual port (group id)} identifies the source (destination)
group from (to) which the messages are sent (delivered) over the
virtual connection. All members of a group listen on the same virtual
port, and members of different groups listen on different virtual
ports. The groups need to know the virtual ports (group ids) of the
groups with which they are communicating, just as with TCP.
Typically, a process group is an endpoint of more than one virtual
connection, as shown in Figure \ref{conn}, where there are multiple
process groups, representing multiple applications running on
different processors and interacting over the network, but there might
be only two process groups and one virtual connection.
\vspace*{-0.1in}
\subsection{Replication Types}
\vspace*{-0.05in}
The LLFT system supports two types of leader-follower replication, namely:
\begin{itemize}
\item {\it\bf Semi-active replication} - The primary
orders the messages it receives, performs the
operations, and provides ordering information for
non-deterministic operations to the backups.
A backup receives and logs incoming messages, performs the
operations according to the ordering information supplied by the
primary, and logs outgoing messages, but does not send
outgoing messages.
\item {\it\bf Semi-passive replication} - The primary
orders the messages it receives, performs the
operations, and provides ordering information for
non-deterministic operations to the backups.
In addition, the primary communicates state updates to the
backups, as well as file and database updates. A backup receives
and logs incoming messages, and updates its state, but does not
perform the operations and does not produce outgoing
messages.
\end{itemize}
Semi-passive replication uses fewer processing resources than
semi-active replication, but it incurs greater latency for
reconfiguration and recovery, if the primary fails.
To maintain strong replica consistency within a primary component, it
is necessary to sanitize (mask) non-deterministic operations not only
for semi-active replication but also for semi-passive replication.
For example, consider requests from two clients that are processed
concurrently using semi-passive replication. Processing the request
from the first client updates a data item. Processing the request
from the second client then updates the same data item. The request
from the second client completes, and the primary sends its update to
the backups and its reply to the second client. The primary then
fails before it can send its update to the backups and its reply to
the first client. The processing of the request from the first client
is repeated at the new primary. However, the update to the data item
has already been performed and recorded at the backup, before it
became the new primary, when the reply was sent to the second client.
The update must not be repeated if the correct result is to be
obtained.
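The hazard can be illustrated with a toy sketch (hypothetical Python, not LLFT code): if each state update is recorded together with an operation identifier, a replayed operation after failover is suppressed rather than applied twice.
\begin{verbatim}
# Hypothetical sketch: suppressing replayed updates after failover.
class Replica:
    def __init__(self):
        self.data = 0
        self.applied = set()   # ids of operations already applied

    def apply(self, op_id, delta):
        if op_id in self.applied:   # replayed by the new primary: skip
            return
        self.data += delta
        self.applied.add(op_id)

r = Replica()
r.apply(1, 5)   # update recorded at the backup before the failure
r.apply(1, 5)   # the same operation replayed after failover: no effect
assert r.data == 5
\end{verbatim}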
\vspace*{-0.1in}
\subsection{Correctness Properties}
\vspace*{-0.05in}
The safety and liveness properties for LLFT, based on the above model,
are stated below. Traditionally, safety and liveness properties are
strictly separated. However, in a system that might incur a
communication partitioning fault with subsequent recovery from that
partitioning, safety is necessarily dependent on liveness. While the
system is partitioned, even complete knowledge of all processes does
not suffice to determine which of the two or more competing branches
is a transient side branch that will be terminated when the partition
is healed. Thus, the safety properties for LLFT are defined in terms
of an infinite sequence of consecutive primary views, as assured by
the liveness properties. The proofs of correctness can be found in
the Appendix. A discussion of the assumptions of the model, relative
to these properties, is included below.
\subsubsection{{\bf Safety Properties}}
For each process group:
\begin{itemize}
\item There exists at most one infinite sequence of consecutive
primary views for the group. Each of those primary views has a
unique primary view number and a single primary replica.
\item There exists at most one infinite sequence of operations
in an infinite sequence of consecutive primary views for the group.
\item For semi-active replication, the sequence of operations of a
replica, in a membership of the infinite sequence of consecutive
primary views, is a consecutive subsequence of the infinite sequence
of operations for the group.
\item For semi-passive replication, the sequence of states of a
replica, in a membership of the infinite sequence of consecutive
primary views, is a consecutive subsequence of the infinite sequence
of states for the group.
\end{itemize}
\subsubsection{{\bf Liveness Properties}}
For each process group:
\begin{itemize}
\item There exists at least one infinite sequence of consecutive
primary views with consecutive primary view numbers for the group.
\item There exists at least one infinite sequence of operations in
each infinite sequence of consecutive primary views for the group.
\end{itemize}
LLFT imposes implicit bounds on the computation time and the
communication time in the form of tunable timeout parameters of the
fault detectors. Due to the asynchrony of the system, those bounds
might be violated, which might lead to a replica being mistakenly
removed from the membership.
With the assumption of eventual reliable communication ({\it i.e.}, if
a message is transmitted repeatedly, it is eventually received), a
replica that is mistakenly removed from the membership eventually
receives a {\tt PrimaryChange} or {\tt RemoveBackup} message that
informs it that it has been removed. The replica then applies for
readmission to the membership using the {\tt ProposeBackup} message.
Without the assumption of eventual reliable communication, a
mistakenly removed replica might not receive those messages and, thus,
not apply for readmission.
The tuning of the timeout parameters for the fault detectors, relative
to the distribution of the times for computation and communication,
can be viewed as a probabilistic optimization problem. The latency to
determine a new primary and a new membership after detection of a
genuine fault (true positive) is matched against the latency caused by
detection of a mistaken fault (false positive), to minimize the
overall latency of the system.
With the assumption of sufficient replication ({\it i.e.}, each group
contains enough replicas such that in each primary view there exists a
replica that does not become faulty), the sequence of operations of a
group is infinite. Without the assumption of sufficient replication,
the sequence of operations of a group might be finite.
\vspace*{-0.1in}
\section{Low Latency Messaging Protocol}
\vspace*{-0.05in}
The LLFT Messaging Protocol converts the unreliable, unordered
message delivery service of UDP multicast into a reliable, totally ordered
message delivery service between two group endpoints, just as TCP
converts the unreliable message delivery service of IP unicast into a
reliable, totally ordered message delivery service between two
individual endpoints.
The Messaging Protocol provides the following services for
the application messages:
\begin{itemize}
\item {\it\bf Reliable delivery} - All of the members in a group
receive each message that is multicast to the group on a connection.
\item {\it\bf Total ordering} - All of the members in a group deliver
the messages to the application in the same sequence.
\item {\it\bf Buffer management} - When a message no longer needs to
be retransmitted (because all of the intended destinations have
received the message), the source and the destinations remove the
message from their buffers.
\end{itemize}
The Messaging Protocol provides reliable, totally ordered message
delivery, while maintaining virtual synchrony \cite{BR:ISIS,Moser} in the
event of a fault. It incorporates flow control mechanisms similar to
those used by TCP. Flow control is needed to ensure that processing in
the primary receiver, and in the backups, can keep up with the primary
sender, and that buffer space does not become exhausted.
\renewcommand{\tabcolsep}{0.05in}
\renewcommand{\baselinestretch}{0.8}
\begin{figure}\hbox{
\begin{tabular}{ll}
&\parbox[t]{2.5in}{\footnotesize\bf \hspace*{0.5in}Message Types}\\[0.01in]
{\footnotesize\tt\bf Request}&
\parbox[t]{2.5in}{\footnotesize A message that carries application
payload and that is sent by a primary client.}\\[0.01in]
{\footnotesize\tt\bf Reply}&
\parbox[t]{2.5in}{\footnotesize A message that carries application
payload and that is sent by a primary server.}\\[0.01in]
{\footnotesize\tt\bf FirstAck}&
\parbox[t]{2.5in}{\footnotesize A control message that is sent by a primary
to acknowledge the receipt of a {\tt Request} or {\tt Reply} message.}\\[0.01in]
{\footnotesize\tt\bf SecondAck}&
\parbox[t]{2.5in}{\footnotesize A control message sent by a backup
to acknowledge the receipt of a {\tt FirstAck} message.}\\[0.01in]
{\footnotesize\tt\bf Nack}&
\parbox[t]{2.5in}{\footnotesize A control message sent by a backup to its
primary, or by a primary to the primary that originated a
missing {\tt Request} or {\tt Reply} message, which then retransmits
the message.}\\[0.01in]
{\footnotesize\tt\bf Heartbeat}&
\parbox[t]{2.5in}{\footnotesize A control message sent by the primary
and the backups to facilitate fault detection and also to share
timestamp watermark information, which is used for buffer management.}\\[0.01in]
{\footnotesize\tt\bf KeepAlive}&
\parbox[t]{2.5in}{\footnotesize A control message sent by
interacting groups over an inter-group connection, to indicate
the liveliness of the connection, after the connection has been
idle for more than a predetermined amount of time.}\\ \\ \\ \\ \\
\end{tabular}
\hspace*{0.0in}
\begin{tabular}{ll}
&\parbox[t]{2.8in}{\footnotesize\bf \hspace*{0.5in}Message Header Fields}\\[0.01in]
{\footnotesize\tt\bf messageType}&
\parbox[t]{2.8in}{\footnotesize The type of message ({\tt Request}, {\tt Reply}, {\tt FirstAck}, {\tt SecondAck}, {\tt Nack}, {\tt Heartbeat}, {\tt KeepAlive}).}\\[0.01in]
{\footnotesize\tt\bf sourceGroupId}&
\parbox[t]{2.8in}{\footnotesize The identifier of the source group of the message.}\\[0.01in]
{\footnotesize\tt\bf destGroupId}&
\parbox[t]{2.8in}{\footnotesize The identifier of the destination group of the message.}\\[0.01in]
{\footnotesize\tt\bf connSeqNum}&
\parbox[t]{2.8in}{\footnotesize A connection sequence number used to
identify the connection on which the message is sent.}\\[0.01in]
{\footnotesize\tt\bf primaryViewNum}&
\parbox[t]{2.8in}{\footnotesize The primary view number, a sequence
number that represents the number of membership changes that involve a
change in the primary.}\\[0.01in]
{\footnotesize\tt\bf precedence}&
\parbox[t]{2.8in}{\footnotesize The precedence of the primary.}\\[0.01in]
{\footnotesize\tt\bf msgSeqNum}&
\parbox[t]{2.8in}{\footnotesize The message sequence number, which is
non-zero if and only if the message is a {\tt Request} or {\tt Reply}
message multicast by the primary.}\\[0.01in]
{\footnotesize\tt\bf ackViewNum}&
\parbox[t]{2.8in}{\footnotesize The primary view number of the message with
the message sequence number in the {\tt ack} field.}\\[0.01in]
{\footnotesize\tt\bf ack}&
\parbox[t]{2.8in}{\footnotesize A message sequence number, which
is non-zero if and only if the message is a {\tt Request} or
{\tt Reply} message, and the primary has received all
messages on the connection with sequence numbers less than or equal
to this sequence number.}\\[0.01in]
{\footnotesize\tt\bf back}&
\parbox[t]{2.8in}{\footnotesize A timestamp watermark used for buffer
management to indicate that all members of a group have
received all messages with timestamps less than this
watermark.}\\[0.01in]
{\footnotesize\tt\bf timestamp}&
\parbox[t]{2.8in}{\footnotesize A timestamp derived from a Lamport logical
clock at the source of the message.}\\ \\ \\
\end{tabular}}
\vspace*{-0.2in}
\caption{The message types and the message header fields used by the Messaging
Protocol.}
\label{MessageTypesMessageHeaders}
\end{figure}
\renewcommand{\baselinestretch}{1.5}
\vspace*{-0.1in}
\subsection{Data Structures}
\vspace*{-0.05in}
\subsubsection{{\bf Message Types}}
The types of messages used by the Messaging Protocol are shown on the
left of Figure~\ref{MessageTypesMessageHeaders} and are illustrated in
Figure \ref{reliable-messaging}. A {\tt Request} or {\tt Reply}
message is not necessarily a synchronous blocking request or reply, as
is commonly used in client/server communication; a {\tt Request} or
{\tt Reply} message can be an asynchronous one-way message. A
retransmitted {\tt Request} or {\tt Reply} message uses the same
message type as the original message.
\subsubsection{{\bf Message Header Fields}}
The fields of a message header are shown on the right of
Figure~\ref{MessageTypesMessageHeaders}. The quadruple ({\tt
sourceGroupId}, {\tt destGroupId}, {\tt connSeqNum}, {\tt role})
uniquely identifies a connection, where {\tt role} is client or
server.
A message with a non-zero message sequence number {\tt msgSeqNum},
{\it i.e.}, a {\tt Request} or {\tt Reply} message multicast by the
primary, is inserted into the sent list at the sender and the received
list at the destination. An {\tt ack} acknowledges not only the
acknowledged message but also all prior messages from the remote
primary of the connection, and allows more rapid retransmission and
delivery of missing messages.
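As an illustration of the cumulative {\tt ack} semantics, the
following C++ sketch (our own, with illustrative names; it is not
part of the LLFT implementation) shows how a sender might mark the
entries in its sent list when an {\tt ack} arrives, so that
retransmission of the covered messages stops:
{\footnotesize
\begin{verbatim}
#include <cstdint>
#include <list>

struct SentEntry {
    uint64_t msgSeqNum;  // assigned by the primary
    bool     acked;      // covered by a cumulative ack
    // ... payload and retransmission state elided ...
};

// Every message with a sequence number <= ack is
// acknowledged, so retransmission of it stops.  Buffer
// release itself is driven by the timestamp watermarks,
// not by acks.
void applyCumulativeAck(std::list<SentEntry>& sentList,
                        uint64_t ack) {
    for (SentEntry& e : sentList) {
        if (e.msgSeqNum > ack)
            break;           // list is ordered by msgSeqNum
        e.acked = true;      // no further retransmission
    }
}
\end{verbatim}
}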
In a message multicast by the primary on a connection, {\tt back}
contains the primary's {\tt myGroupWatermark}, {\it i.e.}, the minimum
timestamp watermark of the group, which is the minimum of the
primary's own {\tt myTimestampWatermark} and all of the
backups' {\tt myTimestampWatermark}s. In a control message sent by a
backup to its primary, {\tt back} contains the backup's
{\tt myTimestampWatermark}, {\it i.e.}, the minimum timestamp of
messages that the backup received on all of its connections. The {\tt
timestamp}, from which the {\tt back} watermarks are derived, is used
for buffer management and not for message ordering.
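As a minimal sketch of the watermark computation (the function name
is ours), the group watermark is simply the minimum of the primary's
own timestamp watermark and the watermarks most recently reported by
the backups:
{\footnotesize
\begin{verbatim}
#include <algorithm>
#include <cstdint>
#include <vector>

// Group-wide timestamp watermark: the minimum of the
// primary's own watermark and the watermarks reported by
// the backups (carried in the 'back' field of their
// control messages).
uint64_t groupWatermark(
        uint64_t primaryWatermark,
        const std::vector<uint64_t>& backupWatermarks) {
    uint64_t wm = primaryWatermark;
    for (uint64_t b : backupWatermarks)
        wm = std::min(wm, b);
    return wm;  // sent in the 'back' field of multicasts
}
\end{verbatim}
}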
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=3.3in
\epsfbox{reliable-messaging4.eps}
\vspace{-0.15in}
\caption{Message exchange between client and server groups.}
\label{reliable-messaging}
\vspace{-0.15in}
\end{center}
\end{figure}
\subsubsection{{\bf Variables}}
To achieve reliable, totally ordered message delivery, for each
connection, the Messaging Protocol uses the variables shown on the
left of Figure~\ref{Variables}.
The message sequence number is used by a member of the destination
group to ensure that it has received all messages from the sending
group on the connection.
When a member acknowledges the message with message sequence number {\tt
receivedUpToMsn}, it indicates that it has received all messages,
sent on the connection, with message sequence numbers less than or equal to
this sequence number.
For buffer management, the Messaging Protocol uses Lamport timestamps
and timestamp watermarks to determine when all intended destinations
have received a message. In particular, it uses the global variables
shown at the right of Figure~\ref{Variables}. A message carries {\tt
myGroupWatermark} in the {\tt back} field of its header, as a form
of acknowledgment, so that the remote group can safely discard, from
its buffers, messages sent to the group that carry timestamps less
than this group watermark. The primary and the backups use the same
timestamp value for a message, which is essential for timestamp
watermark-based buffer management.
\renewcommand{\tabcolsep}{0.05in}
\renewcommand{\baselinestretch}{0.8}
\begin{figure}\hbox{
\begin{tabular}{ll}
&\parbox[t]{2.6in}{\footnotesize\bf \hspace*{0.6in}For Each Connection}\\[0.01in]
{\footnotesize\tt\bf msgSeqCount}&
\parbox[t]{2.6in}{\footnotesize A variable used to assign a
message sequence number to each application message sent on the
connection.}\\[0.01in]
{\footnotesize\tt\bf receivedUpToMsn}&
\parbox[t]{2.6in}{\footnotesize A variable used to store the
sequence number of the last message received on the connection
without a gap.}\\[0.01in]
{\footnotesize\tt\bf sent list}&
\parbox[t]{2.6in}{\footnotesize A linked list that stores the application
messages originated by a member.}\\[0.01in]
{\footnotesize\tt\bf received list}&
\parbox[t]{2.6in}{\footnotesize A linked list that stores incoming
application messages received by a member.}\\[0.01in]
{\footnotesize\tt\bf nack list}&
\parbox[t]{2.6in}{\footnotesize A linked list that stores entries for
the missing messages expected by this receiver and due to be
negatively acknowledged.}\\ \\
\end{tabular}
\hspace*{0.0in}
\begin{tabular}{ll}
&\parbox[t]{2.15in}{\footnotesize\bf \hspace*{0.5in}For Buffer Management}\\[0.01in]
{\footnotesize\tt\bf myTimestamp}&
\parbox[t]{2.15in}{\footnotesize A Lamport clock used to timestamp the
messages that are transmitted.}\\[0.01in]
{\footnotesize\tt\bf myTimestampWatermark}&
\parbox[t]{2.15in}{\footnotesize A timestamp such that this member has
received all messages up to this timestamp for all connections in
which it is involved.}\\[0.01in]
{\footnotesize\tt\bf myGroupWatermark}&
\parbox[t]{2.15in}{\footnotesize The minimum of the timestamp
watermarks of the members of the group.}\\ \\ \\ \\ \\ \\
\end{tabular}}
\vspace*{-0.05in}
\caption{Variables used for each connection and global variables used for buffer management.}
\label{Variables}
\end{figure}
\renewcommand{\baselinestretch}{1.5}
Figure \ref{data-structures} shows the pseudocode for the {\tt
OrderInfo} struct (lines 1-7) and the {\tt MsgOrder} struct (lines
8-15). The opaque field (line 12) stores different {\tt OrderInfo}
entries for different message types. For a normal read, and a
successful nonblocking write, it stores the offset with respect to the
lower bound {\tt m\_msgSeqNum} so that the upper bound can be
calculated, which allows merging of the {\tt OrderInfo} entries for
consecutively sent/delivered messages from the same connection into a
single {\tt OrderInfo} entry. For a nonblocking write, it stores the
number of times that transmission of the same message has been
attempted but has failed. If other {\tt OrderInfo} entries have been
created in between, there might be multiple {\tt OrderInfo} entries
for the same message. There is no {\tt OrderInfo} entry for a
blocking write.
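To illustrate how the opaque field enables merging (a sketch under
our own encoding assumptions, not the actual LLFT layout), an entry
can record a run of consecutive message sequence numbers as a lower
bound plus an offset:
{\footnotesize
\begin{verbatim}
#include <cstdint>

// 'opaque' stores the offset of the upper bound from
// m_msgSeqNum, so the entry covers the whole run
// [m_msgSeqNum, m_msgSeqNum + opaque].
struct MsgOrderSketch {
    uint64_t m_msgSeqNum;  // lower bound of the run
    uint16_t m_opaque;     // offset to the upper bound
};

// Returns true if 'nextSeqNum' extends the run by one,
// so no new OrderInfo entry is needed.  (Overflow of the
// 16-bit offset is not checked in this sketch.)
bool tryMerge(MsgOrderSketch& cur, uint64_t nextSeqNum) {
    if (nextSeqNum == cur.m_msgSeqNum + cur.m_opaque + 1) {
        ++cur.m_opaque;
        return true;
    }
    return false;  // caller creates a new entry
}
\end{verbatim}
}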
\vspace*{-0.1in}
\subsection{Reliable Message Delivery}
\vspace*{-0.05in}
Reliable message delivery is described below in terms of a {\tt
Request} from a client group $C$ to a server group $S$. The same
considerations apply for a {\tt Reply} from a server group $S$ to a
client group $C$. The pseudocode for the Messaging Protocol is given
in Figure \ref{messaging-alg}.
The primary in group $C$ multicasts messages originated by the
application to a destination group over a virtual connection. A
backup in group $C$ creates and logs (but does not multicast) messages
originated by the application. Restricting the actions of the backup
in this way reduces the amount of network traffic.
When the primary in group $C$ first multicasts an application message
to group $S$ on a connection, it stores the message in a {\tt sent
list} for the connection (lines 15-17). The primary retransmits a
message in the {\tt sent list} if it does not receive an
acknowledgment for the message sufficiently promptly (as determined
by a timeout) (lines 45-46).
The primary in group $S$ includes, in the header ({\tt ack} field) of
each application message it multicasts to group $C$ on a connection,
the message sequence number of the last application message it
received without a gap from the primary in group $C$ on that
connection (line 10). If the primary in group $S$ does not have a
message to multicast on the connection sufficiently promptly (as
determined by a timeout), it multicasts a {\tt FirstAck} message
containing an {\tt ack} for the last application message it received
without a gap (lines 47-48).
The primary (and a backup) in group $S$ checks the {\tt precedence}
field of a message it receives to determine whether the message
originated in a competing membership whose primary has higher
precedence. If so, it multicasts the message on the intra-group
connection to ensure that all other members have also received the
message, resets its state and rejoins the group (lines 18-20).
The primary (and a backup) in group $S$ adds the application messages
it receives on the connection to a {\tt received list} for the
connection (lines 25, 31), and updates the {\tt receivedUpToMsn}
variable (last message received without a gap) (line 32). If the
replica detects a gap in the message sequence numbers (lines 22-25),
it creates placeholders for the missing messages, and adds
corresponding entries to a {\tt nack list}. If the replica receives a
retransmitted message, and a placeholder for the message exists, it
replaces the placeholder with the message and, otherwise, discards the
retransmitted message (lines 26-30).
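The following C++ sketch (with our own names; removal of nack list
entries when a placeholder is filled is elided) illustrates the gap
detection and placeholder logic described above:
{\footnotesize
\begin{verbatim}
#include <cstdint>
#include <map>
#include <vector>

// A null payload in 'received' is a placeholder for a
// missing message that is due to be nacked.
struct ConnectionReceiver {
    std::map<uint64_t, const char*> received;
    std::vector<uint64_t> nackList;
    uint64_t receivedUpToMsn = 0;  // last msn with no gap

    void onMessage(uint64_t msn, const char* payload) {
        if (msn <= receivedUpToMsn)
            return;                       // duplicate
        auto it = received.find(msn);
        if (it != received.end()) {
            if (it->second == nullptr)
                it->second = payload;     // fill placeholder
            // else: duplicate retransmission, discard
        } else {
            // Placeholders (and nacks) for any gap below msn.
            for (uint64_t m = receivedUpToMsn + 1;
                 m < msn; ++m)
                if (received.emplace(m, nullptr).second)
                    nackList.push_back(m);
            received[msn] = payload;
        }
        // Advance the no-gap watermark over filled entries.
        for (auto jt = received.find(receivedUpToMsn + 1);
             jt != received.end() && jt->second != nullptr;
             jt = received.find(receivedUpToMsn + 1))
            ++receivedUpToMsn;
    }
};
\end{verbatim}
}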
\begin{figure*}[t]
\vspace*{-0.5in}
\begin{center}
\footnotesize
\hbox{
\parbox{3.4in}{
\begin{tabbing}
\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\kill
\>{\tt struct OrderInfo} \\[-0.1in]
\>\{ \\[-0.1in]
1 \>\>{\tt OrderType m\_orderType} \\[-0.1in]
2 \>\>{\tt SeqNumType m\_orderSeqNum} \\[-0.1in]
3 \>\>{\tt union} \\[-0.1in]
\>\>\{ \\[-0.1in]
4 \>\>\>{\tt MsgOrder m\_msgO} \\[-0.1in]
5 \>\>\>{\tt MutexOrder m\_mtxO} \\[-0.1in]
6 \>\>\>{\tt TimeOrder m\_timeO} \\[-0.1in]
7 \>\>\>{\tt SocketOrder m\_socO} \\[-0.1in]
\>\>\} \\[-0.1in]
\>\}
\end{tabbing}
}
\hspace*{1.5in}
\parbox{3.4in}{
\begin{tabbing}
\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\kill
\>{\tt struct MsgOrder} \\[-0.1in]
\>\{ \\[-0.1in]
8 \>\>{\tt ViewNumType m\_primaryViewNum} \\[-0.1in]
9 \>\>{\tt MsgType m\_msgType} \\[-0.1in]
10 \>\>{\tt ConnSeqNumType m\_connSeqNum} \\[-0.1in]
11 \>\>{\tt short m\_sockFd} \\[-0.1in]
12 \>\>{\tt unsigned short m\_opaque} \\[-0.1in]
13 \>\>{\tt GrpIdType m\_remoteGrpId} \\[-0.1in]
14 \>\>{\tt SeqNumType m\_msgSeqNum} \\[-0.1in]
15 \>\>{\tt SeqNumType m\_orderSeqNum} \\[-0.1in]
\>\} \\[-0.1in]
\end{tabbing}
}}
\vspace*{-0.15in}
\caption{Pseudocode for the {\tt OrderInfo} struct and the {\tt MsgOrder} struct.}
\label{data-structures}
\end{center}
\end{figure*}
A backup in group $C$ acknowledges a {\tt FirstAck} message it
receives with a {\tt SecondAck} message (lines 60-68). The backup
sends a {\tt SecondAck} message in response to receiving a {\tt
FirstAck} message only if the backup application has generated the
message that the {\tt FirstAck} message acknowledges. If there is no
backup in group $C$, the primary in group $C$ carries out this
responsibility. The primary in group $S$ stops retransmitting a {\tt
FirstAck} message on receiving the corresponding {\tt SecondAck}
message (lines 70-71).
If a primary in group $C$ receives too many {\tt FirstAck} messages
from the primary in group $S$, acknowledging the last message the
primary in group $C$ sent, then the primary in group $S$ has not
received a {\tt SecondAck} from the backups in group $C$.
Consequently, the primary in group $C$ invokes the intra-group flow
control mechanisms to slow down, so that the backups in group $C$ can
catch up (lines 65-66).
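A minimal sketch of this check (our names; the threshold value is
illustrative) counts {\tt FirstAck} messages that repeatedly
acknowledge the same last message:
{\footnotesize
\begin{verbatim}
#include <cstdint>

constexpr int MAX_ACK = 5;  // illustrative threshold

// Repeated FirstAcks for the same message mean that the
// backups have not yet generated that message, so the
// remote primary never received a SecondAck.
struct FirstAckMonitor {
    uint64_t lastAckedMsn = 0;
    int      numAcked     = 0;

    // Returns true if intra-group flow control should be
    // invoked so that the backups can catch up.
    bool onFirstAck(uint64_t ackedMsn) {
        if (ackedMsn == lastAckedMsn)
            return ++numAcked > MAX_ACK;
        lastAckedMsn = ackedMsn;  // progress was made
        numAcked = 1;
        return false;
    }
};
\end{verbatim}
}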
If the primary in group $S$ does not have a message to multicast on an
inter-group connection sufficiently promptly (as determined by a
timeout) after it has stopped retransmitting the {\tt FirstAck}
message due to receiving the {\tt SecondAck} message, it multicasts a
{\tt KeepAlive} message on the connection to indicate the liveness of
the connection (lines 54-55).
If the primary (a backup) in group $S$ determines that it has not received
a message from the primary in group $C$ on a connection, it multicasts
a {\tt Nack} message on the remote (local) connection (lines 49-52). Such a
determination occurs if:
\begin{itemize}
\item The primary or a backup in group $S$ sees a gap in the message
sequence numbers of the messages it received (line 24), or
\item A backup in group $S$ receives a {\tt SecondAck} message that
contains an {\tt ack} for a message that the backup has not
received (line 72), or
\item A backup in group $S$ receives a message from the primary in
group $S$ that orders a message that the backup has not received.
\end{itemize}
The primary and each backup in group $S$ periodically exchange {\tt
Heartbeat} messages on the intra-group connection (lines 56-59), so
that the primary knows that the backup has not failed (and vice versa)
and the buffer management mechanisms work properly.
\begin{figure*}[pt]
\vspace*{-0.5in}
\begin{center}
\footnotesize
\hbox{
\parbox{3.4in}{
\begin{tabbing}
\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\kill
\>{\bf On sending an application message {\tt M}}\\[-0.1in]
\>begin\\[-0.1in]
1 \>\>{\tt msn} $\leftarrow$ msg seq number assigned to {\tt M} \\[-0.1in]
2 \>\>{\tt ack} $\leftarrow$ {\tt msn} of the last msg received without a gap \\[-0.1in]
3 \>\>{\tt wm} $\leftarrow$ group-wide timestamp watermark \\[-0.1in]
4 \>\>{\tt ts} $\leftarrow$ timestamp assigned to {\tt M} \\[-0.1in]
5 \>\>{\tt so} $\leftarrow$ source ordering information \\[-0.1in]
6 \>\>{\tt ro} $\leftarrow$ remote ordering information \\[-0.1in]
7 \>\>if {\tt amPrimary()} then \\[-0.1in]
8 \>\>\>record send message order \\[-0.1in]
9 \>\>{\tt M.setMsgSeqNum(msn)} \\[-0.1in]
10 \>\>{\tt M.setAckField(ack)} \\[-0.1in]
11 \>\>{\tt M.setBackField(wm)} \\[-0.1in]
12 \>\>{\tt M.setTimestamp(ts)} \\[-0.1in]
13 \>\>{\tt M.setPrecedence(precedence)} \\[-0.1in]
\>\> // precedence of primary \\[-0.1in]
14 \>\>{\tt M.piggybackOrderInfo(so,ro)} \\[-0.1in]
15 \>\>if {\tt amPrimary()} then \\[-0.1in]
16 \>\>\>multicast {\tt M} to destination group \\[-0.1in]
17 \>\>append {\tt M} to sent list for retransmission \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving an application message {\tt M}} \\[-0.1in]
\>begin \\[-0.1in]
18 \>\>if {\tt M.precedence} $>$ precedence of primary then \\[-0.1in]
\>\>\{ \\[-0.1in]
19 \>\>\>multicast {\tt M} on the intra-group connection \\[-0.1in]
20 \>\>\>reset state and rejoin the group \\[-0.1in]
\>\>\} \\[-0.1in]
\>\>else \\[-0.1in]
\>\>\{ \\[-0.1in]
21 \>\>\>{\tt msn} $\leftarrow$ next expected msg seq number \\[-0.1in]
22 \>\>\>if {\tt msn} $<$ {\tt M.getMsgSeqNum()} then \\[-0.1in]
\>\>\>\{ \\[-0.1in]
23 \>\>\>\>create placeholders for missing messages \\[-0.1in]
24 \>\>\>\>append a {\tt Nack} to nack list \\[-0.1in]
25 \>\>\>\>append {\tt M} to received list \\[-0.1in]
\>\>\>\} \\[-0.1in]
26 \>\>\>else if {\tt msn} $>$ {\tt M.getMsgSeqNum()} then \\[-0.1in]
\>\>\>\{ \\[-0.1in]
27 \>\>\>\>if {\tt M} was missing then \\[-0.1in]
\>\>\>\>\{ \\[-0.1in]
28 \>\>\>\>\>replace the placeholder with {\tt M} \\[-0.1in]
29 \>\>\>\>\>remove the {\tt Nack} from nack list \\[-0.1in]
\>\>\>\>\} \\[-0.1in]
\>\>\>\>else \\[-0.1in]
30 \>\>\>\>\>discard retransmitted {\tt M} \\[-0.1in]
\>\>\>\} \\[-0.1in]
\>\>\>else \\[-0.1in]
31 \>\>\>\>append {\tt M} to received list \\[-0.1in]
32 \>\>\>update {\tt receivedUpToMsn} \\[-0.1in]
33 \>\>\>handle piggybacked ordering information \\[-0.1in]
\>\>\} \\[-0.1in]
\> end \\[-0.1in] \\[-0.1in]
\>{\bf On delivering an application message {\tt M}} \\[-0.1in]
\>begin \\[-0.1in]
34 \>\>{\tt M} $\leftarrow$ first message in received list \\[-0.1in]
35 \>\>if not a placeholder for {\tt M} then \\[-0.1in]
\>\>\{ \\[-0.1in]
36 \>\>\>if {\tt amPrimary()} then \\[-0.1in]
37 \>\>\>\> deliver {\tt M} and create a received message order \\[-0.1in]
38 \>\>\>if {\tt amBackup()} then \\[-0.1in]
\>\>\>\{ \\[-0.1in]
39 \>\>\>\> find first msg order \\[-0.1in]
40 \>\>\>\>if found and msg order orders {\tt M} then \\[-0.1in]
41 \>\>\>\>\>deliver {\tt M} \\[-0.1in]
\>\>\>\} \\[-0.1in]
42 \>\>\>if {\tt M} is delivered then \\[-0.1in]
43 \>\>\>\>move {\tt M} from received list to delivered list \\[-0.1in]
\>\>\} \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in] \\[-0.1in]
\end{tabbing}
}
\hspace*{0.7in}
\parbox{3.4in}{
\begin{tabbing}
\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\kill
\>{\bf On periodic processing} \\[-0.1in]
\>begin \\[-0.1in]
44 \>\>{\tt M\_sent} $\leftarrow$ message in the sent list \\[-0.1in]
45 \>\>if {\tt amPrimary()} and {\tt M\_sent} not acked then \\[-0.1in]
46 \>\>\>retransmit {\tt M\_sent} \\[-0.1in]
47 \>\>if {\tt amPrimary()} and {\tt SecondAck} not received then \\[-0.1in]
48 \>\>\>retransmit a {\tt FirstAck} for last msg received without a gap \\[-0.1in]
49 \>\>if {\tt amPrimary()} and nack list not empty then \\[-0.1in]
50 \>\>\>retransmit {\tt Nack} to the remote group \\[-0.1in]
51 \>\>if {\tt amBackup()} and nack list not empty then \\[-0.1in]
52 \>\>\>retransmit {\tt Nack} to the local group \\[-0.1in]
53 \>\>deliver the first ordered message, if any, to the application \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On expiration of the {\tt KeepAlive} timer for a connection} \\[-0.1in]
\>begin \\[-0.1in]
54 \>\>\>multicast a {\tt KeepAlive} message on the connection \\[-0.1in]
55 \>\>\>reset the {\tt KeepAlive} timer for the connection \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On expiration of the Heartbeat timer} \\[-0.1in]
\>begin \\[-0.1in]
56 \>\>if {\tt amPrimary()} then \\[-0.1in]
57 \>\>\> multicast a {\tt Heartbeat} message to the backups \\[-0.1in]
\>\>else // backup \\[-0.1in]
58 \>\>\>transmit a {\tt Heartbeat} message to the primary \\[-0.1in]
59 \>\>reset the {\tt Heartbeat} timer \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving a FirstAck message {\tt M\_FirstAck}} \\[-0.1in]
\>begin \\[-0.1in]
60 \>\>{\tt M} $\leftarrow$ message in the sent list \\[-0.1in]
61 \>\>{\tt num\_acked} $\leftarrow$ number of {\tt FirstAck} received for {\tt M} \\[-0.1in]
62 \>\>{\tt MAX\_ACK} $\leftarrow$ max number of {\tt FirstAck} for {\tt M} \\[-0.1in]
63 \>\>find message {\tt M} corresponding to {\tt M\_FirstAck} \\[-0.1in]
64 \>\>if {\tt M} is found then \\[-0.1in]
\>\> \{ \\[-0.1in]
65 \>\>\>if {\tt num\_acked} $>$ {\tt MAX\_ACK} then \\[-0.1in]
66 \>\>\>\>invoke intra-group flow control \\[-0.1in]
\>\>\>else \\[-0.1in]
67 \>\>\>\>if {\tt amBackup()} or {\tt amTheOnlyPrimary()} then \\[-0.1in]
68 \>\>\>\>\>multicast a {\tt SecondAck} for {\tt M} \\[-0.1in]
\>\> \} \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\> {\bf On receiving a SecondAck message {\tt M\_SecondAck}} \\[-0.1in]
\> begin \\[-0.1in]
69 \>\>find message {\tt M} corresponding to {\tt M\_SecondAck} \\[-0.1in]
70 \>\>if {\tt M} is found then \\[-0.1in]
71 \>\>\>{\tt M.stopRetransmitFirstAck()} \\[-0.1in]
\>\> else \\[-0.1in]
72 \>\>\>append a {\tt Nack} to the nack list \\[-0.1in]
\> end \\[-0.1in] \\[-0.1in]
\> {\bf On receiving a Nack message {\tt M\_Nack}} \\[-0.1in]
\> begin \\[-0.1in]
73 \>\>find message {\tt M}, received or sent, corresponding to {\tt M\_Nack} \\[-0.1in]
74 \>\>if {\tt M} is found then \\[-0.1in]
75 \>\>\>retransmit {\tt M} \\[-0.1in]
\> end \\[-0.1in] \\[-0.1in]
\> {\bf On garbage collecting an application message {\tt M}} \\[-0.1in]
\> begin \\[-0.1in]
76 \>\>{\tt ts} $\leftarrow$ {\tt M.getTimestamp()} \\[-0.1in]
77 \>\>{\tt myGrpWM} $\leftarrow$ group-wide watermark \\[-0.1in]
78 \>\>{\tt remoteGrpWM} $\leftarrow$ remote group watermark \\[-0.1in]
79 \>\>if {\tt M} is on sent list then \\[-0.1in]
80 \>\>\>if {\tt ts} $<=$ {\tt remoteGrpWM} then \\[-0.1in]
81 \>\>\>\>remove {\tt M} from sent list and delete {\tt M} \\[-0.1in]
82 \>\>if {\tt M} is on delivered list then \\[-0.1in]
83 \>\>\>if {\tt ts} $<=$ {\tt myGrpWM} then \\[-0.1in]
84 \>\>\>\>remove {\tt M} from received list and delete {\tt M} \\[-0.1in]
\> end \\[-0.1in]
\end{tabbing}
}}
\vspace*{-0.15in}
\caption{Pseudocode for the Messaging Protocol.}
\label{messaging-alg}
\end{center}
\end{figure*}
\vspace*{-0.1in}
\subsection{Total Ordering within Groups}
\vspace*{-0.05in}
The primary in a group communicates ordering information to the
backups in its group, so that they obtain the same results in the same
order as the primary and, thus, maintain strong replica consistency.
In particular, the primary in group $C$ piggybacks, on each message it
originates and sends on a connection, the ordering information for the
messages it has sent and received on that connection since the last
message it sent on it (along with ordering information for other
types of operations, as described in Section
\ref{determinizer-sec}). A backup in group $C$ does not receive the
ordering information directly from the primary in group $C$. Instead,
the primary in group $S$ reflects back the ordering information to
group $C$ in the next message it multicasts to group $C$. The primary
in group $C$ includes the ordering information in each message it
sends until it receives that information reflected back to it.
Similarly, the primary in group $C$ reflects back to group $S$ the
ordering information it receives in messages from the primary in group
$S$.
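A sketch of this reflection rule (with our own names) keeps each
ordering-information entry pending until the remote primary reflects
it back:
{\footnotesize
\begin{verbatim}
#include <cstdint>
#include <deque>

struct OrderEntry {
    uint64_t orderSeqNum;  // payload elided
};

// The primary piggybacks every pending entry on each
// message it sends on the connection, until the entry is
// reflected back by the remote primary.
struct Reflector {
    std::deque<OrderEntry> pending;

    const std::deque<OrderEntry>& toPiggyback() const {
        return pending;  // attach to an outgoing message
    }
    void onReflected(uint64_t reflectedUpTo) {
        while (!pending.empty() &&
               pending.front().orderSeqNum <= reflectedUpTo)
            pending.pop_front();  // seen by the backups
    }
};
\end{verbatim}
}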
\vspace*{-0.1in}
\subsection{Buffer Management}
\vspace*{-0.05in}
A replica must retain each message that it originates and that it
receives, until it knows that it will no longer need the message,
either to retransmit the message in response to a negative
acknowledgment or, to process the message if the primary fails and it
becomes the new primary.
In LLFT, timestamps and timestamp watermarks are used for buffer
management. Each replica in a group maintains a timestamp {\tt
myGroupWatermark}. Each message carries in the {\tt back} field of
the message header the group watermark of the sending
group. Each replica in a group maintains, for each connection, a {\tt
remoteGroupWatermark}, to store the latest group watermark received
from the remote group of that connection.
As shown in Figure \ref{messaging-alg} (lines 76-84), a replica that
sends a message garbage collects the message if the {\tt timestamp} in
the message header is less than or equal to the {\tt
remoteGroupWatermark} for the connection on which the message is
sent. A replica that receives and delivers a message garbage collects
the message if the {\tt timestamp} in the message header is less than
or equal to the {\tt myGroupWatermark} for that replica's group.
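These two tests reduce to simple predicates, sketched below in C++
(our names):
{\footnotesize
\begin{verbatim}
#include <cstdint>

// A sent message can be discarded once the remote group's
// watermark has passed its timestamp; a delivered message,
// once the local group's watermark has.
inline bool canDiscardSent(uint64_t msgTimestamp,
                           uint64_t remoteGroupWatermark) {
    return msgTimestamp <= remoteGroupWatermark;
}

inline bool canDiscardDelivered(uint64_t msgTimestamp,
                                uint64_t myGroupWatermark) {
    return msgTimestamp <= myGroupWatermark;
}
\end{verbatim}
}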
\vspace*{-0.1in}
\section{Leader-Determined Membership Protocol}
\vspace*{-0.05in}
Traditionally, the formation of a membership has been based on
two-phase commit, with a majority of correct processes, to achieve
consensus and to avoid the split-brain problem in which two or more
competing memberships are formed \cite{PaxosACMTrans,Paxos,Liskov}.
Unfortunately, in the presence of unreliable communication, it is
difficult or expensive to eliminate the risk of competing memberships.
If a communication problem occurs, some of the members might form a
new membership, while other members continue to operate with the
existing membership. It is possible to avoid such a situation only if
every value that is communicated is subjected to a majority vote of
the membership, which is what is done in aircraft flight control
systems \cite{SIFT}. Under conditions of unreliable communication, it
is undesirable to degenerate into multiple competing memberships,
possibly singleton memberships, and it is also undesirable to fail to
form a membership. The objective must be to form the best possible
membership (a heuristic criterion), to detect and heal partitions that
form, and to reestablish a consistent state following recovery from a
partition \cite{NetworkPartitioning}.
The LLFT Membership Protocol addresses the problem of maintaining a
consistent view of the membership at the primary and the backups. It
ensures that they have the same membership set, the same primary view
number, and the same primary, by handling changes at the primary and
the backups. The primary, in turn, determines the addition (removal)
of the backups to (from) the group, as well as their precedences and
ranks (defined below). By making a deterministic choice of the
primary, the Membership Protocol is faster than a multi-round
consensus algorithm \cite{Chandra:UnrelFD}, which is particularly
important when normal processing is suspended by primary failure.
The {\it precedence} of a member of a group is determined by the order
in which the member joins the group. If a member fails and
subsequently restarts, it is considered a new member, and is assigned
a new precedence. The precedences increase monotonically so that, in
the infinite sequence of consecutive primary views for a group, no two
members have the same precedence and a member that joins later has a
higher precedence. When a primary adds a new backup to the
membership, it assigns the next precedence in sequence to that backup.
The precedences of the members determine the order of succession to
become the new primary, if the primary fails.
The {\it rank} of the primary member of a group is $1$, and the ranks
of the backup members are $2, 3, \ldots$ When a proposed new primary
assigns ranks to the backups of a new membership or when a primary
adds a new backup to the membership, it assigns those ranks in the
order of their precedences. The ranks determine the timeouts for
detection of faults in the primary or the backups.
The rank of a member can change when another member is removed from
the group, whereas the precedence of a member is assigned when it
joins the group and does not change while it is a member. The ranks
of the members are consecutive, whereas the precedences need not be,
due to removal of a member from the group.
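A small C++ sketch (our names) captures the relationship between
precedences and ranks; note that the primary is always the member
with the lowest precedence in the current membership:
{\footnotesize
\begin{verbatim}
#include <algorithm>
#include <cstdint>
#include <vector>

struct Member { uint64_t precedence; int rank; };

struct Group {
    uint64_t nextPrecedence = 1;  // monotonic, never reused
    std::vector<Member> members;

    void addMember() {            // performed by the primary
        members.push_back({nextPrecedence++, 0});
        assignRanks();
    }

    // Ranks are consecutive from 1 and are reassigned, in
    // precedence order, on every membership change.
    void assignRanks() {
        std::sort(members.begin(), members.end(),
                  [](const Member& a, const Member& b) {
                      return a.precedence < b.precedence;
                  });
        int rank = 1;
        for (Member& m : members) m.rank = rank++;
    }
};
\end{verbatim}
}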
To avoid the situation in which two backups both claim to be the next
new primary, the fault detection timeouts for the backups increase
with increasing rank. The backup with rank 3 operates a fault
detector to determine both that the primary is faulty and that the
backup with rank 2 is faulty (because the backup with rank 2 did not
itself detect the fault in the primary and take over). Thus, the
fault detector operated by the backup with rank 3 has a longer
timeout than the fault detector operated by the backup with rank 2,
{\it etc.}
For efficiency reasons, the fault detector timeouts must be chosen
carefully. Timeouts that are too long cause unnecessary delays after a
fault, whereas timeouts that are too short cause membership churn and
readmission of members to the group. For example, the timeout of the
fault detector operated by the backup with rank $2$ might be 10ms, the
timeout of the fault detector operated by the backup with rank $3$
might be 30ms, {\it etc}. Thus, the fault detector timeout of the
backup with rank $3$ allows for 10ms of inaction by the primary, an
additional 10ms of inaction by the backup with rank $2$, and an
additional 10ms for skew between the timeouts of the backups. Given
its longer timeout, the backup with rank $3$ rarely times out.
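One plausible timeout schedule consistent with this example (the
formula is ours, not prescribed by LLFT) is:
{\footnotesize
\begin{verbatim}
// 10ms for rank 2, 30ms for rank 3, 50ms for rank 4, ...
// Each higher rank allows for the inaction of every
// lower-ranked backup plus skew between the timeouts.
constexpr int BASE_MS = 10;  // primary-inaction allowance
constexpr int SKEW_MS = 10;  // per-step skew allowance

int faultDetectorTimeoutMs(int rank) {  // rank >= 2
    return BASE_MS + (rank - 2) * (BASE_MS + SKEW_MS);
}
\end{verbatim}
}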
However, it might still happen that two backups both propose to become
the new primary. In such a case, the backup with the lower precedence
gives up and the backup with the higher precedence continues. For
example, if the backup with rank 2 and the backup with rank 3 both
propose to become the new primary, because the backup with rank 3 has
higher precedence, it overrides the backup with rank 2.
Only membership changes that correspond to a change of the primary
constitute a new view, which we refer to as a {\it primary view
change}. Each new primary view has a {\it primary view number}.
When the primary view changes, the proposed new primary adjusts the
members' ranks and resets the message sequence number to one on each
of its connections.
It is important for the backups to change the primary view at the same
virtual synchrony point as the primary. To this end, the new primary
produces an ordering information entry for the primary view change and
multicasts that entry to the backups, just like the other ordering
information. A backup changes to the new primary view when it has
performed all of the operations that were ordered before the virtual
synchrony point, as described below.
Both the primary and the backups in a group need to know when there is
a change in the primary of the group, because at that point their
ranks change and the message sequence numbers are reset to one. They
also need to know about the addition (removal) of a backup, because
such an event can result in a change in their ranks. Moreover, to
achieve reliable message delivery, both the primary and the backups
need to know when there is a change in the primary of a remote
group. However, they do not need to know about membership changes due
to addition (removal) of a backup to (from) the remote group.
\vspace*{-0.1in}
\subsection{Data Structures}
\vspace*{-0.05in}
\subsubsection{{\bf Message Types}}
The types of messages used by the Membership Protocol are described in
Figure \ref{MembershipMessages} and are illustrated in Figure
\ref{reliable-membership}.
\renewcommand{\tabcolsep}{0.05in}
\renewcommand{\baselinestretch}{0.8}
\begin{figure}\hbox{
\begin{tabular}{ll}
&\parbox[t]{2.5in}{\footnotesize\bf \hspace*{0.15in}Message Types for Primary Change}\\[0.01in]
{\footnotesize\tt\bf ProposePrimary}&
\parbox[t]{2.5in}{\footnotesize A message multicast by a self-appointed
new primary to request a change of the primary.}\\[0.01in]
{\footnotesize\tt\bf NewPrimaryView}&
\parbox[t]{2.5in}{\footnotesize A message multicast by the new
primary on each of its connections (other than the intra-group
connection) to report the new primary view and to collect
information regarding the old primary.}\\[0.01in] \\[0.01in] \\[0.01in]
\end{tabular}
\hspace*{0.0in}
\begin{tabular}{ll}
&\parbox[t]{2.6in}{\footnotesize\bf \hspace*{0.3in}Message Types for Backup Change}\\[0.01in]
{\footnotesize\tt\bf ProposeBackup}&
\parbox[t]{2.6in}{\footnotesize A message multicast by a new replica
that wants to join the group.}\\[0.01in]
{\footnotesize\tt\bf AcceptBackup}&
\parbox[t]{2.6in}{\footnotesize A message multicast by the primary to add
a backup to the group.}\\[0.01in]
{\footnotesize\tt\bf RemoveBackup}&
\parbox[t]{2.6in}{\footnotesize A message multicast by the primary to
remove a backup from the group.}\\[0.01in]
{\footnotesize\tt\bf State}&
\parbox[t]{2.6in}{\footnotesize A message sent by the primary to a
backup, containing the checkpointed state of the primary.} \\[0.01in]
\end{tabular}}
\caption{The types of messages used by the Membership Protocol for change of the primary and addition/removal of a backup.}
\label{MembershipMessages}
\end{figure}
\renewcommand{\baselinestretch}{1.5}
The {\tt ProposePrimary}, {\tt ProposeBackup}, {\tt AcceptBackup} and
{\tt RemoveBackup} messages are multicast on the intra-group
connection.
The {\tt ProposePrimary}, {\tt AcceptBackup} and {\tt RemoveBackup}
messages include the old membership in the payload, and require an
explicit acknowledgment from each backup. For the primary, these
acknowledgment messages serve as ``commit'' messages. The primary
(including the self-appointed new primary) must retransmit these
messages until all of the backups in the membership (as determined by
the primary) have acknowledged them. The reason is that all of the
members in the group must have a consistent view of the membership and
the ranks of the members.
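A sketch of this commit rule (our names) tracks the acknowledgments
and excludes unresponsive backups on retransmission:
{\footnotesize
\begin{verbatim}
#include <cstdint>
#include <set>

// The membership change commits once every backup in the
// membership, as determined by the primary, has acked it.
struct MembershipCommit {
    std::set<uint64_t> membership;  // backup precedences
    std::set<uint64_t> acked;

    void onAck(uint64_t precedence) {
        acked.insert(precedence);
    }
    bool committed() const {
        for (uint64_t p : membership)
            if (!acked.count(p)) return false;
        return true;
    }
    // From the retransmission handler: exclude backups
    // that repeatedly fail to ack, then resend with the
    // reduced membership.
    void onRetransmitLimit() {
        for (auto it = membership.begin();
             it != membership.end(); ) {
            if (acked.count(*it)) ++it;
            else it = membership.erase(it);
        }
    }
};
\end{verbatim}
}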
\vspace*{-0.1in}
\subsection{Change of the Primary}
\vspace*{-0.05in}
The change of the primary in a group is handled in two phases, as
described below. The pseudocode for the Membership Protocol for
change of the primary is shown in Figure \ref{alg-primary-change}. In
the rules below, $V_i$ denotes the primary view with primary view
number $i$ which corresponds to {\tt myPvn} in the pseudocode, and $p$
denotes the precedence of the primary which corresponds to {\tt
myPrecedence} in the pseudocode.
\subsubsection{{\bf Determining the New Membership}}
In the first (election) phase, the new primary is determined. The new
primary determines which backups are included in the new membership
and their precedences and ranks. More specifically, the first phase
operates as follows:
\begin{itemize}
\item If a backup with precedence $p$ does not receive a {\tt
Heartbeat} message from the primary of view $V_i$ within a given
time period (and thus determines that the primary is faulty) and it
has not received a {\tt ProposePrimary} message for view $V_i$ from
a backup with precedence $< p$, the backup multicasts a {\tt
ProposePrimary} message on the intra-group connection, denouncing
the old primary and appointing itself as the new primary of view
$V_{i+1}$.
\begin{itemize}
\item The backup excludes from the membership the old primary and the backups
with precedences $< p$ (line 4). It excludes such a backup because that backup
did not send a {\tt ProposePrimary} message quickly enough to
become the new primary and, thus, is declared to be faulty.
\item The backup includes, in the {\tt ProposePrimary} message, the
group identifier, the proposed new membership, its current primary
view number $i$ and its precedence $p$ (line 5).
\end{itemize}
\item If a backup with precedence $q$ receives a {\tt ProposePrimary}
message for a new primary view $V_{i+1}$, from a proposed new
primary with precedence $p$, and the backup is included in the
proposed new membership (which implies that $q > p$), and
\begin{itemize}
\item The backup has not generated a {\tt ProposePrimary} message for
view $V_{i+1}$, and
\item The backup has not acknowledged a {\tt ProposePrimary} message
from a backup with precedence $> p$ for view $V_{i+1}$
\end{itemize}
then the backup with precedence $q$ accepts the proposed new
membership and acknowledges the {\tt ProposePrimary} message (lines
21-24).
\item If a backup with precedence $q$ receives a {\tt ProposePrimary}
  message for new primary view $V_{i+1}$, or a subsequent view, from a
  proposed new primary with precedence $p$, and the backup is not
  included in the proposed new membership, and
\begin{itemize}
\item The backup has not generated a {\tt
ProposePrimary} message for view $V_{i+1}$ and $q > p$, and
\item The backup with precedence $q$ has not received a {\tt
ProposePrimary} message for view $V_{i+1}$ from a backup with precedence $> p$
\end{itemize}
then the backup resets its state and rejoins the group (line 25).
\item When the proposed new primary has received acknowledgments for
its {\tt ProposePrimary} message from all members in the proposed
new membership, it concludes the first phase and proceeds to the
second phase (lines 14-16).
\end{itemize}
Note that the sets of conditions in the second and third bullets above
are neither complementary nor collectively exhaustive. If a backup
receives a {\tt ProposePrimary} message that does not satisfy either
of those sets of conditions, it ignores that {\tt
ProposePrimary} message. The mechanisms for change of the primary determine
the new membership of the group using only one round of message
exchange ({\tt ProposePrimary} and corresponding acknowledgments). In
a tradeoff for simplicity and timeliness, the mechanisms do not attempt
to form a new membership with the maximum possible number of members.
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=3.1in
\vspace{-0.2in}
\epsfbox{reliable-membership3.eps}
\caption{Message exchange when a primary view change occurs.}
\vspace{-0.15in}
\label{reliable-membership}
\end{center}
\end{figure}
\begin{figure*}[t]
\vspace*{-0.5in}
\begin{center}
\footnotesize
\hbox{
\parbox{3.4in}{
\begin{tabbing}
\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\kill
\>{\bf On expiration of the fault detection timeout for the primary} \\[-0.1in]
\>(at a backup) \\[-0.1in]
\>begin \\[-0.1in]
1 \>\>{\tt myPvn} $\leftarrow$ current primary view number \\[-0.1in]
2 \>\>{\tt myPrecedence} $\leftarrow$ precedence assigned to this member \\[-0.1in]
3 \>\>if not received a {\tt ProposePrimary} message {\tt M} such that \\[-0.1in]
\>\>\>\>{\tt M.pvn} $>=$ {\tt myPvn} and {\tt M.precedence} $<$ {\tt myPrecedence} then \\[-0.1in]
\>\>\{ \\[-0.1in]
4 \>\>\>exclude all members with lower precedences than {\tt myPrecedence} \\[-0.1in]
5 \>\>\>multicast a {\tt ProposePrimary} message {\tt Mp} with the group id, \\[-0.1in]
\>\>\>\>\>the new membership, {\tt myPvn} and {\tt myPrecedence} \\[-0.1in]
6 \>\>\>start retransmission timer \\[-0.1in]
7 \>\>\>{\tt retransmissionCount} $\leftarrow$ 0 \\[-0.1in]
\>\>\} \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On expiration of the retransmission timer} (at the backup \\[-0.1in]
\>that sent the {\tt ProposePrimary} message {\tt Mp}) \\[-0.1in]
\>begin \\[-0.1in]
8 \>\>if {\tt retransmissionCount} $>$ {\tt MAX\_COUNT} then \\[-0.1in]
\>\>\{ \\[-0.1in]
9 \>\>\>exclude members that have not yet acknowledged my \\[-0.1in]
\>\>\>\>\>{\tt ProposePrimary} message {\tt Mp} \\[-0.1in]
10 \>\>\>transmit a {\tt ProposePrimary} {\tt Mp} with the latest membership \\[-0.1in]
11 \>\>\>{\tt retransmissionCount} $\leftarrow$ 0 \\[-0.1in]
\>\>\} \\[-0.1in]
12 \>\>restart retransmission timer \\[-0.1in]
13 \>\>{\tt retransmissionCount}++ \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving an ack for the {\tt ProposePrimary} message {\tt Mp}} \\[-0.1in]
\>(at the backup that sent {\tt Mp}) \\[-0.1in]
\>begin \\[-0.1in]
14 \>\>if received acks from all backups in membership then \\[-0.1in]
\>\>\{ \\[-0.1in]
15 \>\>\>cancel retransmission timer for {\tt ProposePrimary} message {\tt Mp} \\[-0.1in]
16 \>\>\>start recovery protocol \\[-0.1in]
\>\>\} \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving a {\tt ProposePrimary} message {\tt Mp}} (at a backup \\[-0.1in]
\>that did not send {\tt Mp}) \\[-0.1in]
\>// {\tt primaryPrecedence} initially set to precedence of \\[-0.1in]
\>// primary in current view \\[-0.1in]
\>begin \\[-0.1in]
17 \>\>{\tt myPvn} $\leftarrow$ current primary view number \\[-0.1in]\\[-0.1in]
\end{tabbing}
}
\hspace*{0.0in}
\parbox{3.4in}{
\begin{tabbing}
\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\kill
18 \>\>if {\tt Mp.pvn} $>=$ {\tt myPvn} then \\[-0.1in]
19 \>\>\>if I am in the membership then \\[-0.1in]
\>\>\>\{ \\[-0.1in]
20 \>\>\>\>if {\tt Mp.precedence} $>$ {\tt primaryPrecedence} then \\[-0.1in]
\>\>\>\>\{ \\[-0.1in]
21\>\>\>\>\>{\tt primaryPrecedence} $\leftarrow$ {\tt Mp.precedence} \\[-0.1in]
22\>\>\>\>\>send acknowledgment for {\tt ProposePrimary} message \\[-0.1in]
23\>\>\>\>\>update membership and ranks \\[-0.1in]
24\>\>\>\>\>start fault detection timer for the new primary \\[-0.1in]
\>\>\>\>\} \\[-0.1in]
\>\>\>\} \\[-0.1in]
\>\>\>else \\[-0.1in]
25 \>\>\>\>reset state and rejoin the group \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On recovering from a primary change} (at the new primary) \\[-0.1in]
\>begin \\[-0.1in]
26 \>\>{\tt myPvn}$++$ \\[-0.1in]
27 \>\>{\tt primaryPrecedence} $\leftarrow$ {\tt myPrecedence} \\[-0.1in]
28 \>\>for each connection do \\[-0.1in]
29 \>\>\>multicast a {\tt NewPrimaryView} message \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving an ack for the {\tt NewPrimaryView} message} \\[-0.1in]
\>{\tt Mv} (at the new primary) \\[-0.1in]
\>begin \\[-0.1in]
30 \>\>nack all missing messages until received them \\[-0.1in]
31 \>\>retrieve order info held at remote groups from application msg \\[-0.1in]
\>\>\>\>or {\tt KeepAlive} msg \\[-0.1in]
32 \>\>if received all missing messages and reproduced \\[-0.1in]
\>\>\>\>all messages sent by the old primary then \\[-0.1in]
\>\>\{ \\[-0.1in]
33 \>\>\>reset {\tt msgSeqNum} to 0 on each connection \\[-0.1in]
34 \>\>\>adjust ranks of backups \\[-0.1in]
35 \>\>\>record an order for the primary view change \\[-0.1in]
\>\>\} \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving a {\tt NewPrimaryView} message {\tt Mv}} (at the primary \\[-0.1in]
\>of a remote group) \\[-0.1in]
\>begin \\[-0.1in]
36 \>\>{\tt recvUpToMsn} $\leftarrow$ seq num of last msg received without a gap \\[-0.1in]
37 \>\>{\tt lastSentMsn} $\leftarrow$ seq num of last msg sent \\[-0.1in]
38 \>\>discard all messages received after {\tt recvUpToMsn} \\[-0.1in]
39 \>\>{\tt expectedPvn} $\leftarrow$ {\tt Mv.pvn} \\[-0.1in]
40 \>\>{\tt expectedMsn} $\leftarrow$ 0 \\[-0.1in]
41 \>\>acknowledge {\tt Mv} with ({\tt recvUpToMsn}, {\tt lastSentMsn}) \\[-0.1in]
\>end \\[-0.1in]
\end{tabbing}
}}
\vspace*{-0.2in}
\caption{Pseudocode for the Membership Protocol to
handle the change of the primary.}
\label{alg-primary-change}
\end{center}
\end{figure*}
\subsubsection{{\bf Recovering from the Membership Change}}
In the second phase, the new primary queries the remote group of each
of its inter-group connections regarding the old primary's state, and
determines a virtual synchrony point. The new primary needs to know
the last message sent by the old primary and delivered to each remote
group on a connection and, in particular, the ordering information
piggybacked onto the last message. To advance to the state of the old
primary known to the remote groups before the old primary failed, the
new primary must follow the ordering information. More specifically,
\begin{itemize}
\item The new primary collects information for the virtual synchrony
point by multicasting a {\tt NewPrimaryView} message on each of its
inter-group connections (lines 28-29). The {\tt NewPrimaryView}
message contains the most recent ordering information known to the
new primary for the connection.
\item On receiving the {\tt NewPrimaryView} message, the primary of
  the remote group discards all messages received after the last
  message delivered from the old primary's group (line 38). The
primary of the remote group acknowledges the {\tt NewPrimaryView}
message by providing information regarding the last message
delivered from, and the last message sent to, the old primary's
group (line 41). The primary of the remote group sends back the
ordering information to the new primary either in a new application
message, or in a {\tt KeepAlive} message if it does not have an
application message to send.
\item On receiving an acknowledgment from the primary of the remote
group, the new primary determines whether it has missed
any messages from that primary. The new primary then sends {\tt Nacks}
for all missing messages until it has received them (line 30). The
new primary retrieves the ordering information piggybacked on
application messages or {\tt KeepAlive} messages from the primary of
the remote group.
\item When the new primary has executed all of the operations
according to the ordering information determined by the old primary,
it concludes the second phase by resetting the message sequence
numbers to one, adjusting the backups' ranks, and generating an
ordering information entry declaring the start of a new primary view
(lines 33-35). The backups switch to the new primary view when they
receive and process the ordering information.
\end{itemize}
\vspace*{-0.1in}
\subsection{Change of a Backup}
\vspace*{-0.05in}
The change of a backup is either the addition of a backup to the
group, or the removal of a backup from the group. The pseudocode for
the Membership Protocol for the addition (removal) of a backup is
shown in Figure \ref{alg-backup-change}. The pseudocode for joining a
process group (lines 1-14) includes the case where a process is the
first member of the group and, thus, is the primary.
\begin{figure*}[t]
\vspace*{-0.5in}
\begin{center}
\footnotesize
\hbox{
\parbox{3.4in}{
\begin{tabbing}
\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\kill
\>{\bf On joining a process group} (at a new backup) \\[-0.1in]
\>begin \\[-0.1in]
1 \>\>start logging \\[-0.1in]
2 \>\>{\tt hostId} $\leftarrow$ host id of the joining process \\[-0.1in]
3 \>\>{\tt pid} $\leftarrow$ process id of the joining process \\[-0.1in]
4 \>\>{\tt ts} $\leftarrow$ local start up time of the joining process \\[-0.1in]
5 \>\>{\tt myBirthId} $\leftarrow$ (hostId, pid, ts) \\[-0.1in]
6 \>\>{\tt Mp} $\leftarrow$ {\tt ProposeBackup} message with {\tt myBirthId} \\[-0.1in]
7 \>\>multicast {\tt ProposeBackup} message {\tt Mp} \\[-0.1in]
8 \>\>start retransmission timer \\[-0.1in]
9 \>\>{\tt retransmissionCount} $\leftarrow$ 0 \\[-0.1in]
\> end \\[-0.1in] \\[-0.1in]
\>{\bf On expiration of the retransmission timer for the} \\[-0.1in]
\>{\bf {\tt ProposeBackup} message {\tt Mp}} (at the new backup) \\[-0.1in]
\>begin \\[-0.1in]
10 \>\>if {\tt retransmissionCount} $>$ {\tt MAX\_COUNT} then \\[-0.1in]
11 \>\>\>become the first member of the group and thus the primary \\[-0.1in]
\>\>else \\[-0.1in]
\>\>\{ \\[-0.1in]
12 \>\>\>retransmit {\tt ProposeBackup} message {\tt Mp} \\[-0.1in]
13 \>\>\>{\tt retransmissionCount}$++$ \\[-0.1in]
14 \>\>\>restart retransmission timer \\[-0.1in]
\>\>\} \\[-0.1in]
\> end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving an {\tt AcceptBackup} message {\tt Ma}} (at the new backup) \\[-0.1in]
\>begin \\[-0.1in]
15 \>\>if {\tt Ma.birthId} == {\tt myBirthId} then \\[-0.1in]
\>\>\{ \\[-0.1in]
16 \>\>\>accept membership, precedence, rank \\[-0.1in]
17 \>\>\>cancel retransmission timer for {\tt ProposeBackup} message {\tt Mp} \\[-0.1in]
18 \>\>\>acknowledge {\tt AcceptBackup} message {\tt Ma} indicating \\[-0.1in]
\>\>\>\>\>the need for a state transfer \\[-0.1in]
19 \>\>\>wait for a {\tt State} message \\[-0.1in]
\>\>\} \\[-0.1in]
\> end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving a {\tt State} message} (at the new backup) \\[-0.1in]
\>begin \\[-0.1in]
20 \>\>restore state \\[-0.1in]
21 \>\>replay messages from the log \\[-0.1in]
\> end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving a {\tt ProposeBackup} message {\tt Mp}} (at the primary) \\[-0.1in]
\>begin \\[-0.1in]
22 \>\>if {\tt ProposeBackup} message {\tt Mp} is not a duplicate then \\[-0.1in]
\>\>\{ \\[-0.1in]
23 \>\>\>assign precedence and rank \\[-0.1in]
24 \>\>\>add the backup to the membership \\[-0.1in]
25 \>\>\>execute commit protocol code \\[-0.1in]
26 \>\>\>start fault detection timer for the new backup \\[-0.1in]
\>\>\} \\[-0.1in]
\>end
\end{tabbing}
}
\hspace*{0.1in}
\parbox{3.4in}{
\begin{tabbing}
\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\kill
\>{\bf On expiration of the fault detection timer for a backup} \\[-0.1in]
\>(at the primary) \\[-0.1in]
\>begin \\[-0.1in]
27 \>\>remove the backup from the membership \\[-0.1in]
28 \>\>adjust ranks of the other backups \\[-0.1in]
29 \>\>execute commit protocol code \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On committing a new membership} (at the primary) \\[-0.1in]
\>begin \\[-0.1in]
30 \>\>multicast a membership change message \\[-0.1in]
\>\>// {\tt AcceptBackup} for adding a backup \\[-0.1in]
\>\>// {\tt RemoveBackup} for removing a backup \\[-0.1in]
31 \>\>start retransmission timer \\[-0.1in]
32 \>\>{\tt retransmissionCount} $\leftarrow$ 0 \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On expiration of the retransmission timer for the membership} \\[-0.1in]
\>{\bf change message} (at the primary) \\[-0.1in]
\>begin \\[-0.1in]
33 \>\>if {\tt retransmissionCount} $>$ {\tt MAX\_COUNT} then \\[-0.1in]
\>\>\{ \\[-0.1in]
34 \>\>\>exclude members that have not yet acknowledged membership \\[-0.1in]
\>\>\>\>\>change message \\[-0.1in]
35 \>\>\>retransmit membership change message \\[-0.1in]
\>\>\>\>\>with latest membership \\[-0.1in]
36 \>\>\>{\tt retransmissionCount} $\leftarrow$ 0 \\[-0.1in]
\>\>\} \\[-0.1in]
37 \>\>restart retransmission timer \\[-0.1in]
38 \>\>{\tt retransmissionCount}$++$ \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving an ack for the membership change message} \\[-0.1in]
\>(at the primary) \\[-0.1in]
\>begin \\[-0.1in]
39 \>\>if received acks for the membership change message from \\[-0.1in]
\>\>\>\>all backups in membership then \\[-0.1in]
\>\>\{ \\[-0.1in]
40 \>\>\>cancel retransmission timer for the membership change message \\[-0.1in]
41 \>\>\>get checkpoint of the state \\[-0.1in]
42 \>\>\>send {\tt State} message to the backup \\[-0.1in]
\>\>\} \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in]
\>{\bf On receiving an {\tt AcceptBackup} / {\tt RemoveBackup}} \\[-0.1in]
\>{\bf message} (at an existing backup) \\[-0.1in]
\>begin \\[-0.1in]
43 \>\>if I am in the membership then \\[-0.1in]
\>\>\{ \\[-0.1in]
44 \>\>\>update the membership and ranks \\[-0.1in]
45 \>\>\>send acknowledgment to primary \\[-0.1in]
\>\>\} \\[-0.1in]
\>\>else \\[-0.1in]
46 \>\>\>reset state and rejoin the group \\[-0.1in]
\>end
\end{tabbing}
}}
\vspace*{-0.15in}
\caption{Pseudocode for the Membership Protocol to
handle the addition and removal of a backup.}
\label{alg-backup-change}
\end{center}
\end{figure*}
\subsubsection{{\bf Addition of a Backup}}
A new process begins to log messages as soon as it starts up (line
1). The {\tt myBirthId} of a process (line 5) is a unique identifier,
similar to a birth certificate. It is used to distinguish a process
that wishes to join the membership and that does not yet have a
precedence. The precedence is a unique identifier for a process only
after it becomes a member, and denotes the order in which it became a
member. The process multicasts a {\tt ProposeBackup} message on the
intra-group connection (line 7). The primary assigns the precedence
and the rank of the new backup (line 23) and then multicasts an {\tt
AcceptBackup} message (line 30), containing the new membership, on
the intra-group connection. A backup that receives an {\tt
AcceptBackup} message, that includes itself in the membership,
accepts the new membership, and responds with an acknowledgment (lines
15-18).
The primary checkpoints its state when it has received
acknowledgments for the new membership from all of the backups in the
group (lines 39-41). The point at which the checkpoint is taken
represents the virtual synchrony point for adding the new backup. The
primary transmits the checkpoint to the new backup in a {\tt State}
message (line 42). The new backup then sets its state by applying the
checkpoint, and replaying the messages from the log (lines 20-21),
after deleting obsolete messages.
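A sketch of the new backup's initialization (our names; we assume,
for illustration only, that the checkpoint carries a timestamp up to
which logged messages are obsolete) is:
{\footnotesize
\begin{verbatim}
#include <cstdint>
#include <deque>
#include <string>

struct LoggedMsg { uint64_t ts; std::string payload; };

struct NewBackup {
    std::deque<LoggedMsg> log;  // logged since start-up

    void onState(const std::string& checkpoint,
                 uint64_t checkpointTs) {
        restore(checkpoint);
        // Delete messages the checkpoint already reflects.
        while (!log.empty() &&
               log.front().ts <= checkpointTs)
            log.pop_front();
        for (const LoggedMsg& m : log)
            process(m);          // replay the remainder
        log.clear();
    }
    void restore(const std::string& checkpoint);
    void process(const LoggedMsg& m);
};
\end{verbatim}
}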
\subsubsection{{\bf Removal of a Backup}}
The primary modifies the ranks of the backups in the group (line 28)
and then multicasts a {\tt RemoveBackup} message (line 30), containing
the new membership, on the intra-group connection. When a backup
receives a {\tt RemoveBackup} message that includes itself in the
membership, the backup accepts the new membership and responds with an
acknowledgment (lines 43-45). When a backup receives a {\tt
RemoveBackup} message that does not include itself in the
membership, the backup resets its state and multicasts a {\tt
ProposeBackup} message requesting to be readmitted to the membership
(line 46).
For both addition and removal of a backup, the primary multicasts the
new membership to all of the backups in the membership (line 30), and
asynchronously collects acknowledgments from all of them. It commits
the membership change when it has collected acknowledgments from all
of the backups in the membership (line 39). If a backup does not
provide an acknowledgment promptly, the primary removes the backup
from the membership (line 34).
\vspace*{-0.1in}
\section{Virtual Determinizer Framework}
\label{determinizer-sec}
\vspace*{-0.05in}
A reliable, totally ordered message delivery protocol ensures consistent
replication only if the application is deterministic (or is rendered
deterministic). However, modern applications are typically
non-deterministic in a number of ways. To maintain strong replica
consistency, it is necessary to sanitize or mask such sources of
non-determinism, {\it i.e.}, to render the application {\it virtually
deterministic}.
The LLFT Virtual Determinizer Framework introduces a novel generic
algorithm for sanitizing the sources of non-determinism in an
application in a transparent manner. We describe the generic
algorithm below, after describing the threading model.
\vspace*{-0.1in}
\subsection{Threading Model}
\vspace*{-0.05in}
The state of an application process is determined by data that are
shared among different threads, and by thread-specific local data managed
and changed by each thread.
Each thread within a process has a unique thread identifier. A data
item that is shared by multiple threads is protected by a mutex. The
threads and mutexes can be created and deleted dynamically.
Each replica in a process group runs the same set of threads. A
thread interacts with other threads, processes, and its runtime
environment through system/library calls. Non-determinism can arise
from different orderings of, and different results from, such calls at
different replicas in the group.
If the operations on the shared and local data in the different
replicas are controlled in such a way that (1) at every replica, the
updates on a given data item occur in the same order with the same
changes, and (2) at every replica, each thread updates the data items
in the same order with the same changes, then the replicas remain
consistent.
Figure \ref{determinizer} on the left gives example pseudocode for a
thread that shows how such calls might change the state of an
application. The pseudocode uses three types of system/library calls:
\begin{itemize}
\item Calls that try to acquire a mutex (line 18). The
  {\tt pthread\_mutex\_trylock()} operation is similar to a
  nonblocking read in that, if the mutex is currently held by another
  thread, the call returns immediately with a specific error code, so
  that the calling thread is not blocked. If the thread of one replica
  successfully claims the mutex, while the corresponding thread of
  another replica fails, the two replicas perform different operations
  (lines 19-22), causing divergence of their states, because one
  replica changes the shared data {\tt SD1} (line 20) while the other
  replica changes the thread-local data {\tt LD5} (line 22); an
  interposition sketch for this case follows the list.
\item Calls that retrieve local clock values (lines 1, 13). These
calls change thread-local data ({\tt LD1})
directly (lines 2, 14). If different replicas obtain different clock values, the
replicas might arrive at different decisions (line 15) as to whether
a timeout occurred. If one replica times out while the
other does not, the states of the replicas will diverge because
of the difference in thread-local data {\tt LD4} (line 16).
\item Calls that read (write) from (to) a socket asynchronously (lines
3, 7, 12). If, for the same read operation, one replica
successfully reads a message while the other does not, the states of
the two replicas will differ in the thread-local data {\tt LD2} (line 5) and
potentially {\tt LD3} (lines 9, 11). The consequence of different
results for a nonblocking write call is similar.
\end{itemize}
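As a concrete interposition sketch for the mutex case above (our
own; {\tt recordOutcome()} and {\tt nextOutcome()} are assumed
stand-ins for the generic record/replay mechanism described in the
next subsection, not real LLFT functions):
{\footnotesize
\begin{verbatim}
#include <pthread.h>

extern bool amPrimary();
extern void recordOutcome(int r); // primary: record result
extern int  nextOutcome();        // backup: wait for the
                                  // ordered result

int determinized_trylock(pthread_mutex_t* m) {
    if (amPrimary()) {
        int r = pthread_mutex_trylock(m); // racy outcome
        recordOutcome(r);  // piggybacked to the backups
        return r;
    }
    int r = nextOutcome(); // the primary's recorded result
    if (r == 0)
        pthread_mutex_lock(m); // safe to block: the
                               // ordering guarantees that
                               // the mutex becomes free
    return r;                  // 0 or EBUSY, as at primary
}
\end{verbatim}
}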
\begin{figure*}[t]
\vspace*{-0.5in}
\begin{center}
\footnotesize
\hbox{
\parbox{3.4in}{
\begin{tabbing}
\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\kill
1 \>\>{\bf get current time} \\[-0.1in]
2 \>\>// update thread-local data {\tt LD1} \\[-0.1in]
3 \>\>{\bf do a nonblocking read from socket fd} \\[-0.1in]
4 \>\>if picked up a message then \\[-0.1in]
\>\>\{ \\[-0.1in]
5 \>\>\>// update thread-local data {\tt LD2} \\[-0.1in]
6 \>\>\>handle the message \\[-0.1in]
7 \>\>\>{\bf do a nonblocking write to socket fd} \\[-0.1in]
8 \>\>\>if failed to write the response then \\[-0.1in]
\>\>\>\{ \\[-0.1in]
9 \>\>\>\>// update thread-local data {\tt LD3} \\[-0.1in]
10 \>\>\>\>append to a queued message, if any \\[-0.1in]
\>\>\>\} \\[-0.1in]
\>\>\} \\[-0.1in]
\>\>else \\[-0.1in]
\>\>\{ \\[-0.1in]
11 \>\>\>// update thread-local data {\tt LD3} \\[-0.1in]
12 \>\>\>{\bf flush queued message, if any, to socket fd} \\[-0.1in]
\>\>\} \\[-0.1in]
13 \>\>{\bf get current time} \\[-0.1in]
14 \>\>// update thread-local data {\tt LD1} \\[-0.1in]
15 \>\>if timed out then \\[-0.1in]
\>\>\{ \\[-0.1in]
16 \>\>\>// update thread-local data {\tt LD4} \\[-0.1in]
17 \>\>\>call timeout handling routine \\[-0.1in]
\>\>\} \\[-0.1in]
18 \>\>{\bf try to claim mutex {\tt Mtx}} \\[-0.1in]
19 \>\>if claimed mutex {\tt Mtx} then \\[-0.1in]
\>\>\{ \\[-0.1in]
20 \>\>\>change shared data {\tt SD1} \\[-0.1in]
21 \>\>\>release mutex {\tt Mtx} \\[-0.1in]
\>\>\} \\[-0.1in]
\>\>else \\[-0.1in]
22 \>\>\>update thread-local data {\tt LD5}
\end{tabbing}
}
\hspace*{0.75in}
\parbox{3.4in}{
\begin{tabbing}
\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\hspace*{0.1in}\=\kill
\>{\bf On returning from a call} (at the primary) \\[-0.1in]
\>begin \\[-0.1in]
23 \>\>{\tt T} $\leftarrow$ thread identifier \\[-0.1in]
24 \>\>{\tt O} $\leftarrow$ operation identifier \\[-0.1in]
25 \>\>{\tt N} $\leftarrow$ operation count \\[-0.1in]
26 \>\>{\tt D} $\leftarrow$ operation metadata \\[-0.1in]
27 \>\>{\tt OrderInfo} $\leftarrow$ global queue to store order info \\[-0.1in]
28 \>\>append an entry {\tt (T, O, N, D)} to {\tt OrderInfo} \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in] \\[-0.1in]
\>{\bf On receiving an order info entry {\tt (T, O, N, D)}} \\[-0.1in]
\>(at a backup) \\[-0.1in]
\>begin \\[-0.1in]
29 \>\>if {\tt O.OrderInfo} does not exist then \\[-0.1in]
30 \>\>\>create {\tt O.OrderInfo} \\[-0.1in]
31 \>\>append {\tt (T, N, D)} to {\tt O.OrderInfo} \\[-0.1in]
32 \>\>{\tt T1} $\leftarrow$ thread of the first entry in {\tt O.OrderInfo} \\[-0.1in]
33 \>\>wake up {\tt T1} if it is blocked \\[-0.1in]
\>end \\[-0.1in] \\[-0.1in] \\[-0.1in]
\>{\bf On intercepting a call} (at a backup) \\[-0.1in]
\>begin \\[-0.1in]
34 \>\>{\tt T1} $\leftarrow$ identifier of the thread performing the call \\[-0.1in]
35 \>\>{\tt O1} $\leftarrow$ operation identifier of the call \\[-0.1in]
36 \>\>{\tt N1} $\leftarrow$ count for {\tt O1} for thread {\tt T1} \\[-0.1in]
37 \>\>get first entry {\tt (T, N, D)} of {\tt O1.OrderInfo} \\[-0.1in]
38 \>\>while {\tt (T, N, D)} not available or {\tt T1 != T} or {\tt N1 != N} do \\[-0.1in]
39 \>\>\>suspend {\tt T1} \\[-0.1in]
40 \>\>consume {\tt (T, N, D)} and remove it from {\tt O1.OrderInfo} \\[-0.1in]
41 \>\>return \\[-0.1in]
\>end \\[-0.1in]
\end{tabbing}
}}
\vspace*{-0.15in}
\caption{On the left, pseudocode for a thread. The system/library calls that
might change the state, or lead to a state change, are highlighted
in bold. On the right, pseudocode for the Virtual Determinizer to render
the application virtually deterministic.}
\label{determinizer}
\vspace*{-0.0in}
\end{center}
\end{figure*}
\vspace*{-0.1in}
\subsection{Generic Algorithm}
\vspace*{-0.05in}
The generic algorithm, shown in Figure \ref{determinizer} on the right, records
the ordering information and the return value information of non-deterministic
system/library calls at the primary, to ensure that the backups obtain
the same results in the same order. For each non-deterministic
operation, the algorithm records the following information:
\begin{itemize}
\item {\tt\bf Thread identifier} - The identifier of the thread that is
carrying out the operation.
\item {\tt\bf Operation identifier} - An identifier that represents one
or more data items that might change during the operation or on
completion of the operation.
\item {\tt\bf Operation count} - The number of operations
carried out by a thread for the given operation identifier.
\item {\tt\bf Operation metadata} - The data returned from the
system/library call. This metadata includes the {\tt out} parameters
(if any), the return value of the call, and the error code (if
necessary).
\end{itemize}
At the primary, the algorithm maintains a queue, the {\tt OrderInfo}
queue of four-tuples {\tt (T, O, N, D)}, where thread {\tt T} has
executed a call with operation identifier {\tt O} and with metadata
recorded in {\tt D}, and this is the {\tt N}th time in its execution
sequence that thread {\tt T} has executed such a non-deterministic
call. The {\tt OrderInfo} queue spans different threads and different
operations.
At the primary, the algorithm appends an entry {\tt (T, O, N, D)} to
the {\tt OrderInfo} queue on return of the operation {\tt
O} (lines 23-28). The entries are transmitted to the backups
reliably and in order, using the piggybacking mechanism of the
Messaging Protocol.
At a backup, for each operation {\tt O}, the algorithm maintains an
{\tt O.OrderInfo} queue of three-tuples {\tt (T, N, D)}, in the order
in which the primary created them. When the backup receives the first
entry {\tt (T, O, N, D)} for operation {\tt O}, it creates the
{\tt O.OrderInfo} queue (lines 29-30). After the entry is appended to
the queue, the algorithm awakens the first application thread in the
{\tt O.OrderInfo} queue if it is blocked (lines 31-33).
At a backup, when thread {\tt T} tries to execute operation {\tt
O} as its {\tt N}th execution in the sequence, if {\tt (T, N, D)}
is not the first entry in the {\tt O.OrderInfo} queue, the algorithm
suspends the calling thread {\tt T} (lines 34-39). It resumes a
thread {\tt T} that was suspended in the order in which {\tt (T, N,
D)} occurs in the {\tt O.OrderInfo} queue, rather than the order in
which the thread was suspended or an order determined by the operating
system scheduler. It removes an entry {\tt (T, N, D)} from the {\tt
O.OrderInfo} queue immediately before it returns control to the
calling thread {\tt T} after its {\tt N}th execution in the sequence
(lines 40-41). The algorithm requires the ordering of all related
operations, {\it e.g.}, both claims and releases of mutexes.
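As a concrete (and deliberately simplified) illustration of the
backup-side bookkeeping, the following C sketch maintains one
{\tt O.OrderInfo} queue and implements the deliver and suspend/consume
logic of lines 29-41 of the figure. The fixed-size metadata field, the
linked-list representation, and the omission of the per-operation table
and of error handling are simplifications for the sketch.
{\footnotesize
\begin{verbatim}
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* (T, N, D) as recorded by the primary for one operation O. */
struct entry {
    int tid;            /* thread identifier T */
    long count;         /* per-thread operation count N */
    char meta[64];      /* operation metadata D (fixed size for the sketch) */
    struct entry *next;
};

/* One O.OrderInfo queue; initialize m and cv with
   PTHREAD_MUTEX_INITIALIZER and PTHREAD_COND_INITIALIZER.  A real
   implementation keeps a table of these keyed by operation id O. */
struct op_queue {
    struct entry *head, *tail;
    pthread_mutex_t m;
    pthread_cond_t cv;
};

/* At a backup, on receiving an entry from the primary (lines 29-33). */
void deliver_entry(struct op_queue *oq, int tid, long count, const char *meta)
{
    struct entry *e = malloc(sizeof *e);   /* allocation check omitted */
    e->tid = tid; e->count = count; e->next = NULL;
    strncpy(e->meta, meta, sizeof e->meta - 1);
    e->meta[sizeof e->meta - 1] = '\0';
    pthread_mutex_lock(&oq->m);
    if (oq->tail) oq->tail->next = e; else oq->head = e;
    oq->tail = e;
    pthread_cond_broadcast(&oq->cv);       /* wake the first waiter */
    pthread_mutex_unlock(&oq->m);
}

/* At a backup, when thread `tid` intercepts its `count`-th call for this
   operation (lines 34-41); blocks until it is this thread's turn. */
void consume_entry(struct op_queue *oq, int tid, long count, char *meta_out)
{
    pthread_mutex_lock(&oq->m);
    while (!oq->head || oq->head->tid != tid || oq->head->count != count)
        pthread_cond_wait(&oq->cv, &oq->m);
    struct entry *e = oq->head;
    oq->head = e->next;
    if (!oq->head) oq->tail = NULL;
    strcpy(meta_out, e->meta);
    pthread_cond_broadcast(&oq->cv);       /* let the next thread re-check */
    pthread_mutex_unlock(&oq->m);
    free(e);
}
\end{verbatim}
}
The broadcast on the condition variable corresponds to waking the first
blocked thread in the figure: every waiter re-checks whether the head
entry now names it.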
We have instantiated the generic algorithm of the Virtual Determinizer
Framework for several major types of non-determinism, including
multi-threading, time-related operations and socket communication, as
discussed below.
\vspace*{-0.1in}
\subsection{Multi-Threading}
\vspace*{-0.05in}
The Consistent Multi-Threading Service (CMTS)
creates mutex ordering information at the primary, where the {\tt
operation identifier} is the mutex {\tt Mtx}. For the normal mutex claim call
({\tt pthread\_mutex\_lock()} library call), the {\tt operation
metadata} can be empty if the call is successful. However, if the
normal mutex claim call returns an error code, and for the nonblocking mutex
claim call ({\tt pthread\_mutex\_trylock()} library call), the {\tt
operation metadata} is the return value.
At a backup, to process a mutex ordering information entry, the CMTS
examines the metadata. If the metadata contains an error code, the CMTS
returns control to the calling thread with an identical error status,
without performing the call. Otherwise, it delegates the mutex claim
operation to the original library call provided by the operating
system. If the mutex is not currently held by another thread, the
calling thread acquires the mutex immediately. Otherwise, the calling
thread is suspended and subsequently resumed by the operating system
when the thread that owns the mutex releases it.
The CMTS allows concurrency of threads that do not simultaneously
acquire the same mutex. Thus, it achieves the maximum possible degree
of concurrency, while maintaining strong replica consistency.
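The interpositioning of the nonblocking mutex claim call at the primary
might be sketched in C as follows. The hook {\tt record\_order()} is
hypothetical (the paper does not expose its internal interfaces), and
the operation count is simplified to a single per-thread counter.
{\footnotesize
\begin{verbatim}
#define _GNU_SOURCE
#include <dlfcn.h>
#include <pthread.h>

/* Hypothetical LLFT hook: appends a (T, O, N, D) entry to the
   OrderInfo queue at the primary. */
extern void record_order(pthread_t t, void *op, long n, int result);

int pthread_mutex_trylock(pthread_mutex_t *mtx)
{
    static int (*real)(pthread_mutex_t *);
    if (!real)
        real = (int (*)(pthread_mutex_t *))
                   dlsym(RTLD_NEXT, "pthread_mutex_trylock");
    int rc = real(mtx);                /* delegate to the real library */
    static __thread long count;        /* simplified per-thread count N */
    /* The operation identifier O is the mutex itself; the metadata D is
       the return code, so a backup can replay an EBUSY result without
       touching the real mutex.  Re-entrancy concerns are ignored here. */
    record_order(pthread_self(), (void *)mtx, ++count, rc);
    return rc;
}
\end{verbatim}
}
Loaded at a backup, the corresponding wrapper would instead consult the
received ordering information: it returns a recorded error code
directly, or delegates to the real call, exactly as described above.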
\vspace*{-0.1in}
\subsection{Time-Related Operations}
\vspace*{-0.05in}
The Consistent Time Service (CTS) ensures that clock readings at
different replicas are consistent. For time-related system calls,
such as {\tt gettimeofday()} and {\tt time()}, the CTS creates time
ordering information at the primary, where the {\tt operation
identifier} is the time source and the {\tt operation metadata} is
the clock value, or an error code if the call fails.
In addition to consistency for each clock reading, the CTS ensures
monotonicity of the clock as seen by the replicas in a group, even if
the primary fails \cite{cts}. With the CTS, the replicas see a {\it
virtual group clock} that resembles the real-time clock. Each
replica maintains an offset to record the difference between its local
physical clock and the virtual group clock. The offset of the primary
is 0. Each backup updates its offset for each clock reading.
If the primary fails, one of the backups becomes the new primary. The
new primary must not include its local physical clock value in the
time ordering information it sends to the backups, because doing so
might roll backward, or roll forward, the virtual group
clock. Instead, the new primary adds the recorded offset to its local
physical clock value, and includes that value in the time ordering
information it sends to the backups.
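The offset maintenance described above can be sketched in C as follows;
the function names and the microsecond representation are illustrative
only.
{\footnotesize
\begin{verbatim}
#include <sys/time.h>

static long long offset_usec;   /* local physical vs. virtual group clock */

static long long local_usec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000000LL + tv.tv_usec;
}

/* At a backup: adopt the primary's clock value v (from the time
   ordering information) and remember the offset. */
long long backup_clock_reading(long long v_from_primary)
{
    offset_usec = v_from_primary - local_usec();  /* updated per reading */
    return v_from_primary;          /* the value the application sees */
}

/* At the new primary after a failover: publish local clock + offset,
   so that the virtual group clock rolls neither backward nor forward. */
long long new_primary_clock_reading(void)
{
    return local_usec() + offset_usec;
}
\end{verbatim}
}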
\vspace*{-0.1in}
\subsection{Socket Communication}
\vspace*{-0.05in}
An application might use a nonblocking read to receive messages from
the network asynchronously. If no message is received, the
nonblocking read call returns a specific error code. On such an error
return, the application might switch to some other task and change to
a different state. Thus, the event of failing to receive a message
must be ordered. Similarly, an application might use a nonblocking
write to send data asynchronously. If the message is not sent
successfully, the application might take on a different task and
change to a different state. Thus, the event of failing to send a
message must be ordered.
On return from a read/write system call on a socket at the primary,
the Consistent Socket Communication Service (CSCS) produces a socket
ordering information entry for that operation. The {\tt operation
identifier} is the socket file descriptor. The {\tt operation
metadata} is an identifier for the message being read/written, if
the read/write succeeds, or an error code, if it fails.
It is quite common to combine socket read/write system calls with
select/poll system calls. Typically, the application performs a
read/write system call only if the select/poll system call indicates
that the corresponding socket is readable/writable. The select/poll
system call offers a timeout parameter for the user to specify how
long the operating system can take to return from the call.
The CSCS produces a socket ordering information entry on returning
from a select/poll system call. The {\tt operation identifier} is the
socket file descriptor. The {\tt operation metadata} contains the
number of events, the read/write/error mask, and the amount of time
left before the timeout (used on Linux) if the call returns
successfully, or an error code, if it fails.
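The interpositioning pattern is the same as in the mutex sketch above;
only the recorded metadata differs. A minimal C sketch for
{\tt select()}, again with a hypothetical {\tt record\_select\_order}
hook, follows.
{\footnotesize
\begin{verbatim}
#define _GNU_SOURCE
#include <dlfcn.h>
#include <sys/select.h>

/* Hypothetical hook, as in the mutex sketch. */
extern void record_select_order(int nready, const fd_set *rd,
                                const fd_set *wr,
                                const struct timeval *left);

int select(int nfds, fd_set *rd, fd_set *wr, fd_set *ex,
           struct timeval *timeout)
{
    static int (*real)(int, fd_set *, fd_set *, fd_set *,
                       struct timeval *);
    if (!real)
        real = (int (*)(int, fd_set *, fd_set *, fd_set *,
                        struct timeval *))dlsym(RTLD_NEXT, "select");
    int rc = real(nfds, rd, wr, ex, timeout);
    /* Metadata: the number of ready descriptors, the read/write masks,
       and (on Linux) the time left before the timeout, which select()
       writes back into *timeout; a negative rc signals an error. */
    record_select_order(rc, rd, wr, timeout);
    return rc;
}
\end{verbatim}
}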
\vspace*{-0.1in}
\section{Implementation and Performance}
\vspace*{-0.05in}
The LLFT system has been implemented in the C++ programming language
for the Linux operating system. The library interpositioning technique
is used to capture and control the application's interactions with its
runtime environment. Application state is checkpointed and restored
using facilities provided by a memory-mapped checkpoint library
derived from \cite{dieter::chkpt}. The implementation of LLFT is
compiled into a shared library. The library is inserted into the
application address space at startup time using the {\tt LD\_PRELOAD}
facility provided by the operating system. LLFT is transparent to the
application being replicated, and does not require recompilation or
relinking of the application program.
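The load-time hook of such an interposition library can be sketched as
follows; the library and binary names in the usage comment are
illustrative.
{\footnotesize
\begin{verbatim}
#include <stdio.h>

/* Runs when the dynamic linker maps the interposition library into the
   application's address space, before main() executes. */
__attribute__((constructor))
static void llft_init(void)
{
    /* Initialize checkpointing, the Messaging Protocol, and the
       Virtual Determinizer here (details omitted in this sketch). */
    fprintf(stderr, "llft: interposition library loaded\n");
}

/* Usage (library and binary names are illustrative only):
     $ LD_PRELOAD=./libllft.so ./server
   No recompilation or relinking of ./server is required. */
\end{verbatim}
}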
The experimental testbed consists of 14 HP blade servers, each
equipped with two 2GHz Intel Xeon processors, running the Ubuntu 9.04
operating system, on a 1Gbps Ethernet. A two-tier client/server
application was used to benchmark the LLFT implementation. The
performance evaluation focuses on three areas: (1) performance of the
Messaging Protocol during normal fault-free operation, (2) overhead of
the Virtual Determinizer Framework, and (3) performance of the
Membership Protocol during fault recovery.
\begin{figure*}[t]
\begin{center}
\leavevmode
\hbox{\parbox{3.65in}{
\epsfxsize=3.6in
\epsfbox{latency.eps}
\vspace{-0.15in}
\caption{End-to-end latency vs. message size. }
\label{latency}
}
\hspace{0.05in}
\parbox{3.65in}{
\epsfxsize=3.6in
\epsfbox{thrput.eps}
\vspace{-0.15in}
\caption{Throughput vs. number of concurrent clients.}
\label{thrput}
}}
\vspace{-0.25in}
\end{center}
\end{figure*}
\vspace*{-0.1in}
\subsection{Messaging Protocol}
\vspace*{-0.05in}
First, we consider the performance of the Messaging Protocol during
normal fault-free operation. We characterize the end-to-end latency in
the presence of a single client for various invocation patterns: (1)
short requests and short replies, (2) various size requests and short
replies, and (3) short requests and various size replies. The
end-to-end latency for pattern (2) is virtually indistinguishable
from that for pattern (3) for the same message size (for requests and
replies). Hence, the measurement results shown in Figure~\ref{latency}
refer only to message size. The figure shows the end-to-end latency
without replication using TCP as a baseline for comparison and with
3-way replication using LLFT to understand the overhead that LLFT
incurs. As can be seen in the figure, the Messaging Protocol incurs
very moderate overhead, ranging from about 15\% for large messages to
about 55\% for small messages. The overhead of the Messaging Protocol
is determined primarily by the piggybacking of ordering
information. For large messages, which require fragmentation in user
space, the Messaging Protocol incurs additional context switches,
although the relative overhead is smaller.
We also measured the throughput, without replication using TCP and
with 3-way replication using LLFT, in the presence of various numbers
of concurrent clients. Each client continually issues 1KB requests
without any think time, and the server responds with 1KB replies. The
measurement results are summarized in Figure~\ref{thrput}. It can be
seen that, although the throughput reduction under replication is
moderate under light loads, it is more prominent under heavy
loads.
We also characterized the fault scalability of the Messaging
Protocol. As shown in Figure~\ref{scalabilityperf}, the performance
does not degrade noticeably as the number of replicas is increased (so
that larger numbers of concurrent faults can be tolerated). These
results are as expected because the primary can deliver a message as
soon as it is ordered within a connection without having to
communicate with the backups.
\begin{figure*}[t]
\begin{center}
\leavevmode
\hbox{\parbox{3.65in}{
\epsfxsize=3.6in
\epsfbox{scalability.eps}
\vspace{-0.15in}
\caption{End-to-end latency vs. number of replicas in a group.}
\label{scalabilityperf}
}
\hspace{0.15in}
\parbox{3.65in}{
\epsfxsize=3.6in
\epsfbox{ndlatency3.eps}
\vspace{-0.15in}
\caption{\hspace*{-0.05in}End-to-end latency vs. number of non-deterministic operations.}
\label{ndlatency}
}}
\vspace{-0.25in}
\end{center}
\end{figure*}
\vspace*{-0.1in}
\subsection{Virtual Determinizer Framework}
\vspace*{-0.05in}
To evaluate the performance of the Virtual Determinizer Framework, we
injected non-deterministic operations into our benchmark
application. For each run, we varied the number of non-deterministic
operations per call, while keeping the request/reply message size
fixed at 1KB.
The measurement results for the end-to-end latency shown in
Figure~\ref{ndlatency} are obtained by introducing clock-related
non-deterministic operations ({\it i.e.}, {\tt gettimeofday()}) into the
application. Other types of non-deterministic operations produce a
similar profile. In general, the end-to-end latency increases linearly
as the number of non-deterministic operations per call increases. On
average, each additional non-deterministic operation adds about 8
microseconds overhead to the end-to-end latency. This
overhead is primarily due to the piggybacking of ordering information.
\vspace*{-0.1in}
\subsection{Membership Protocol}
\vspace*{-0.05in}
To evaluate the performance of the Membership Protocol during recovery
from primary failure, we considered (1) the primary view change
latency, and (2) the recovery latency, {\it i.e.}, the primary view
change latency plus the virtual synchrony latency. The failover
latency when the primary fails is determined by the fault detection
time and the recovery latency. In a system that does not incur
lengthy communication delays, the first backup can detect the failure
of the primary in about 30 milliseconds, based on the parameters used
in our experiments.
Figure~\ref{primaryviewchangelatency} summarizes the measurement
results for the primary view change latency, which are obtained when
no client is running, to highlight the primary view change latency
itself. As can be seen in the figure, the latency increases with the
number of replicas. Interestingly, when the number of replicas is two
(which the industry regards as the typical case and which
majority-based membership algorithms do not handle), the primary
view change latency is less than 50 microseconds, which is
significantly less than the latency with more replicas. In this case,
when the primary crashes, only one replica is left. That replica can
promote itself to be the new primary without the need to wait for
acknowledgments from other replicas.
Figure~\ref{recoverylatency} summarizes the measurement results for
the recovery latency, {\it i.e.}, the primary view change latency plus
the virtual synchrony latency. The figure shows the measured recovery
latency in the presence of various numbers of concurrent clients, for
3-way and 2-way replication. As expected, the recovery latency
increases with the number of concurrent clients in both cases. If the
availability requirement allows 2-way replication (which is typical
industry practice), the recovery is faster by about 200 microseconds.
\begin{figure*}[t]
\begin{center}
\leavevmode
\hbox{\parbox{3.65in}{
\epsfxsize=3.6in
\epsfbox{membership1.eps}
\vspace{-0.15in}
\caption{Primary view change latency.}
\label{primaryviewchangelatency}
}
\hspace{0.05in}
\parbox{3.65in}{
\epsfxsize=3.6in
\epsfbox{membership2.eps}
\vspace{-0.15in}
\caption{Recovery latency.}
\label{recoverylatency}
}}
\vspace{-0.25in}
\end{center}
\end{figure*}
\vspace*{-0.1in}
\section{Related Work}
\vspace*{-0.05in}
The LLFT system is a software-based approach to fault tolerance. Such
an approach was first used in SIFT \cite{SIFT} and became the favored
approach in later fault-tolerant systems such as the Delta-4
\cite{Powell:Delta4}, TFT \cite{Bressoud:TFT}, Hypervisor
\cite{hypervisor} and Viewstamped Replication \cite{Liskov} systems.
More recent efforts on software fault tolerance systems have focused on
rendering CORBA, Java RMI and Web Services applications fault
tolerant~\cite{birman04,Cukier:AQUA,Felber:Tapos,Baldoni:IRL,tempest-dsn08,moser:ftws,NMM:CSSE,ws-rep,ftweb,WenbingIPDPS2002,zhao:ftws}.
However, supporting a particular messaging protocol API limits the
applicability of those systems.
The LLFT system provides fault tolerance transparently to both the
applications and the operating system, like the TFT
\cite{Bressoud:TFT}, Hypervisor \cite{hypervisor} and TARGON/32
\cite{ftunix} systems. However, those systems differ from LLFT in the
way in which they achieve transparency. The TARGON/32 system uses a
special bus design that ensures atomic transmission of a message sent
by a primary to both its destination group and its own backups. The
TFT system requires application object code editing. The Hypervisor
system requires a hardware instruction counter. LLFT uses the more
flexible library interpositioning technique. The user can dynamically
insert (remove) the LLFT software into (from) a running application
process using operating system facilities.
The LLFT system uses a leader-follower replication technique similar
to that used in the Delta-4 \cite{Powell:Delta4} and Viewstamped
Replication \cite{Liskov} systems. Delta-4 uses a separate message to
transmit ordering information from the primary to the backups. Thus,
the primary must wait until all of the backups have explicitly
acknowledged the ordering notification before it sends a message,
which can reduce system performance. In contrast, LLFT uses
piggybacking mechanisms and the Virtual Determinizer Framework to
maintain strong replica consistency and virtual synchrony
\cite{BR:ISIS,Moser}, if a fault occurs. In the Viewstamped
Replication system, the primary generates a new timestamp each time it
needs to communicate information to the backups. Unlike LLFT, the
Viewstamped Replication system is based on atomic transactions, which
it combines with a view change algorithm.
Atomic multicast protocols that deliver messages reliably and in the
same total order, such as Isis \cite{BR:ISIS}, Amoeba
\cite{KT:AMOEBA}, and Totem \cite{MMABL:cacm}, have been used to
maintain strong replica consistency in fault-tolerant distributed
systems. However, those protocols introduce delays in either sending
or delivering a message. The LLFT Messaging Protocol does not incur
such delays, because the primary makes the decisions on the order in
which the operations are performed and the ordering information is
reflected to the backups in its group. The recent LCR total order
broadcast protocol \cite{Guerraoui2}, which uses logical clocks and a
ring topology, optimizes for high throughput in cluster environments,
rather than low latency as does LLFT. LCR is comparable to the Totem
single-ring protocol \cite{TotemSRP}, which likewise optimizes for
high throughput in local-area networks, rather than to LLFT.
Paxos \cite{PaxosACMTrans,Paxos} is a consensus algorithm for
asynchronous distributed systems that is commonly used for leader
election; it uses a two-phase strategy in which a majority of the
members must vote for the leader. The requirement of a majority
ensures that only one leader will be elected. Paxos assumes a known
existing membership,
and does not change that membership dynamically as members become
faulty and recover. Paxos can achieve consensus in two rounds, if
communication is reliable and processes respond promptly. There exist
versions of Paxos in which a dedicated proposer selects the new leader
and initiates the election, reducing the number of message delays
required to confirm the new leader, provided that the proposer is not
faulty. There also exist extensions of Paxos in which the leader can
change the membership dynamically, to remove faulty members or to add
new members. LLFT also employs such ideas.
Membership protocols for group communication systems, such as Transis
\cite{ADKM:MEMB} and Totem \cite{MMABL:cacm}, employ fault detectors,
based on timeouts, to reach distributed agreement on as large a
membership as possible, devolving to smaller memberships, if
necessary. Those protocols are relatively costly in the number of
messages exchanged, and in the delays incurred. To avoid such costs,
LLFT uses a novel Leader-Determined Membership Protocol that does not
involve distributed agreement. Rather, it achieves consistent group
membership among the members of a group by having the primary
determine the membership, which it communicates to the backups
in the group.
Membership protocols for group communication systems, such as Isis
\cite{BR:ISIS} and Totem \cite{MMABL:cacm}, use the term {\it view
change} to represent a change in the membership of a group. In
particular, each successive membership, which involves addition
(removal) of a member, constitutes a new view. In LLFT, only
membership changes that correspond to a change of the primary
constitute a new view, which we refer to as a {\it primary view
change}. In typical group communication systems, the membership is
known more widely than by only the members in the group. In LLFT,
only the primary and the backups in the group need to know the
membership of the group.
The LLFT system includes a novel, generic Virtual Determinizer
Framework to capture, transmit and execute ordering information for
non-deterministic operations. The non-deterministic operations
handled by LLFT overlap those considered in other systems such as
Delta-4 \cite{Powell:Delta4}, TFT \cite{Bressoud:TFT}, Hypervisor
\cite{hypervisor} and TARGON/32 \cite{ftunix}. LLFT addresses
non-determinism caused by multi-threading and socket communication,
which those works do not discuss. LLFT does not yet handle
non-determinism introduced by operating system signals and interrupts,
which those works do consider.
Basile {\it et al.} \cite{iyer:tpds06}, Jimenez-Peris and Arevalo
\cite{Peris:srds}, and Narasimhan {\it et al.} \cite{NMM:SRDS} have
addressed the need to sanitize non-deterministic operations, to
achieve strong replica consistency for active replication, rather than
for leader-follower (semi-active or semi-passive) replication. The
LLFT mechanisms that are used to order mutex claims/releases are
closely related to those of the Loose Synchronization Algorithm (LSA)
and Preemptive Deterministic Scheduling Algorithm (PDS) of
\cite{iyer:tpds06}. However, LSA does not address the strong replica
consistency issues introduced by the {\tt pthread\_mutex\_trylock()}
library call, and PDS is suitable for only a specific threading model.
Defago {\it et al.} \cite{Schiper1, Schiper2} have investigated
semi-passive replication in conjunction with a consensus algorithm.
In their model, the primary server produces its results as a single
action, including the reply to the client and the state update for the
backups. Their model admits non-deterministic operations, but not
concurrent processing, with shared data, of requests from multiple
clients. In our more general model, multiple processes, possibly with
multiple threads, interact with each other and also with file and
database systems. Moreover, requests from multiple clients can be
processed concurrently and can access shared data. In their
semi-passive replication, every server replica sends a reply to the
client whereas, in LLFT, the backups do not send replies to the
client, which reduces the amount of network traffic. Their
semi-passive replication uses a rotating coordinator and a variation
of consensus, whereas LLFT uses a leader-determined membership
protocol that is not based on consensus.
Brito {\it et al.} \cite{FetzerFelberICDCS,FetzerFelberSRDS} have
addressed the issues of minimizing latency and multi-threading in
fault-tolerant distributed stream processing systems. The system that
they developed supports active replication, instead of the semi-active
or semi-passive replication that LLFT supports. Their system employs
novel speculation mechanisms based on software transactional memory
to achieve lower latency and higher concurrency.
Zou and Jahanian \cite{Jahanian} have adopted the primary-backup
replication approach for real-time fault-tolerant distributed systems.
Their replication service addresses temporal consistency, introduces
the notion of phase variance, and ensures consistency
deterministically if the underlying communication mechanism provides
deterministic message delivery, and probabilistically if no such
support exists. LLFT itself provides deterministic message delivery
and ensures strong replica consistency.
\vspace*{-0.1in}
\section{Conclusions and Future Work}
\vspace*{-0.05in}
The Low Latency Fault Tolerance (LLFT) system provides fault tolerance
for distributed applications deployed over a local-area network, as in
a data center. Applications programmed using TCP socket APIs, or
middleware such as Java RMI, can be replicated with strong replica
consistency using LLFT, without any modifications to the applications.
Performance measurements show that LLFT achieves low latency message
delivery under normal conditions and low latency reconfiguration and
recovery when a fault occurs. The genericity, application
transparency, and low latency of LLFT make it appropriate for a wide
variety of distributed applications.
Future work includes the sanitization of other sources of
non-determinism (such as operating system signals and
interrupts) and performance optimization. It also includes the
development of more complex applications for LLFT (in particular, file
systems and database systems), and the development of replication
management tools.
\vspace*{-0.1in}
\section{Introduction} \label{sec:introduction}
In 1966, Schmidt \cite{Schmidt1} introduced a two-player game, known thereafter as
Schmidt's game. Schmidt invented the game primarily as a tool for
studying certain sets which arise in
number theory and Diophantine approximation theory. Schmidt's game, and other similar games,
have since become an important tool in number theory, dynamics and related areas.
Schmidt's game (defined precisely in Subsection \ref{Schmidtsgamedef}) and related games are real games, that is, games in which each player plays
a ``real'' (an element of a {\em Polish space}: a completely metrizable and separable space).
Questions regarding which player, if any, has a winning strategy in various games
have been systematically studied over the last century. Games in which one of the players has a winning strategy are said to be \emph{determined}.
The existence of winning strategies often has implications both in set theory and in applications to other areas.
In fact, the assumption that certain classes of games are determined can have far-reaching structural consequences.
One such assumption is the axiom of determinacy, $\mathsf{AD}$, which is the statement that all integer games are determined.
The axiom of determinacy for real games, $\ad_{\R}$, would immediately imply the determinacy of Schmidt's game, but it is significantly stronger
than $\mathsf{AD}$ (see Subsection \ref{ss:games} for a more thorough discussion).
A natural question is what form of determinacy axiom is necessary to obtain the determinacy of
Schmidt's game. In particular, can one obtain the determinacy of this game from
$\mathsf{AD}$, or does one need the full strength of $\ad_{\R}$?
Consider the case of the Banach-Mazur game on a Polish space $(X,d)$ with target set $T \subseteq X$.
Here the players I and II at each turn $n$ play a real which codes
a closed ball $B(x_n,\rho_n)=\{ y\in X\colon d(x_n,y)\leq \rho_n\}$. The only ``rule''
of the game is that the players must play a decreasing sequence of closed balls (that is,
the first player to violate this rule loses). If both players follow the rule,
then II wins iff $\bigcap_n B(x_n,\rho_n) \cap T \neq \emptyset$. Although this is a real game,
this game is determined for any $T \subseteq X$ just from $\mathsf{AD}$.
This follows from the easy fact that the Banach-Mazur game is equivalent to the integer game
in which both players play closed balls with ``rational centers'' (i.e., from
a fixed countable dense set) and rational radii.
For Schmidt's game on a Polish space $(X,d)$ with target set $T\subseteq X$, we have
in addition fixed parameters $\alpha,\beta\in (0,1)$. In this game I's first move is
a closed ball $B(x_0,\rho_0)$ as in the Banach-Mazur game. In subsequent moves, the
players play a decreasing sequence of closed balls as in the Banach-Mazur game, but with
a restriction of the radii. Namely, II must shrink the previous radius by a factor of $\alpha$,
and I must shrink the previous radius by $\beta$. So, at move ${2n}$, I plays a closed ball
of radius $\rho_{2n}=(\alpha \beta)^n \rho_0$, and at move ${2n+1}$, II plays a closed ball of radius
$\rho_{2n+1}=\alpha (\alpha \beta)^n \rho_0$. As with the Banach-Mazur game, if both players follow these rules,
then II wins iff $x \in T$ where $\{ x\}=\bigcap_n B(x_n,\rho_n)$.
We call this game the $(\alpha,\beta)$ Schmidt's game for $T$.
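To fix ideas, for the illustrative choice $\alpha=\tfrac12$, $\beta=\tfrac13$,
and $\rho_0=1$ (these particular values play no special role), the radius
schedule is
\[
\rho_0=1,\quad \rho_1=\tfrac12,\quad \rho_2=\tfrac16,\quad
\rho_3=\tfrac1{12},\quad \rho_4=\tfrac1{36},\quad \dots,
\]
in agreement with the formulas $\rho_{2n}=(\alpha\beta)^n\rho_0$ and
$\rho_{2n+1}=\alpha(\alpha\beta)^n\rho_0$.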
A variation of Schmidt's game, first introduced by Akhunzhanov in \cite{Akh},
has an additional rule that the initial radius $\rho_0=\rho$ of I's first move is fixed in advance. We call this the
$(\alpha,\beta,\rho)$ Schmidt's game for $T$.
In all practical applications of the game
we are aware of, the difference between these two versions is immaterial.
However, in general, these games are not literally
equivalent, as the following simple example demonstrates.
\begin{example}
Consider $\mathbb{R}$ with the usual metric and let the target set for
II be $T=(-\infty, -1] \cup [1, \infty) \cup \mathbb{Q}$. Notice that
this set is dense. It is easy to see that if $\rho \geq 2$ and
$\alpha \leq \frac14$ then for any $\beta$, II wins the $(\alpha,
\beta, \rho)$-game, simply by maximizing the distance from the
center of her first move to the origin. But if I is allowed to
choose any starting radius and $\beta < \frac12$, then he is allowed
to play, for instance, $(0, \frac12)$, and then on subsequent moves,
simply avoid each rational one at a time, so that in fact I wins
the $(\alpha, \beta)$-game.
\end{example}
In the case of Schmidt's game (either variation) it is not immediately clear
that the game is equivalent to an integer game, and thus it is not clear that $\mathsf{AD}$
suffices for the determinacy of these games. Our main results have implications regarding the determinacy
of Schmidt's game.
Another class of games which is similar in spirit to Schmidt's game are the so-called Banach games
whose determinacy has been investigated by Becker and Freiling \cite{Becker,Freiling} (with an important
result being obtained by Martin). Work of these authors has shown that the determinacy of these games
follows from (and is, in fact, equivalent to) $\mathsf{AD}$. Methods similar to those used by Becker, Freiling, and Martin
are instrumental in the proofs of our results as well.
In \S\ref{sec:background} we introduce notation and give some relevant background in the theory of games, descriptive set theory, and the
history of Schmidt's game in particular.
In \S\ref{sec:mr} we prove our main results, including those regarding the determinacy of Schmidt's game.
We prove general results, Theorems~\ref{hrthm}, \ref{detthm}, which give some conditions under which
certain real games are determined under $\mathsf{AD}$ alone. Roughly speaking, these results state that
``intersection'' games which admit strategies which are simple enough to be ``coded by a real,'' in a sense to be
made precise, are determined from $\mathsf{AD}$. Schmidt's game, Banach-Mazur games, and other similar games
are intersection games. The simple strategy condition, however, depends on the specific game.
For Schmidt's $(\alpha,\beta,\rho)$ game on $\R$, we show the simple strategy condition is met,
and so this game is determined
from $\mathsf{AD}$. Moreover, for the
$(\alpha,\beta)$ Schmidt's game on $\R$, $\mathsf{AD}$ implies that either player I has a winning strategy
or else for every $\rho$, II has a winning strategy in the $(\alpha,\beta,\rho)$ game
(this does not immediately give a strategy for II in the $(\alpha,\beta)$ game from $\mathsf{AD}$,
as we are unable in the second case to choose, as a function of $\rho$,
a winning strategy for II in the $(\alpha,\beta,\rho)$ game).
For $\R^n$, $n \geq 2$, the simple strategy condition is not met. In fact, for $n \geq 3$
we show that the determinacy of Schmidt's $(\alpha,\beta,\rho)$ games does not follow from $\mathsf{AD}$.
For $n=2$, we do not know if
$\mathsf{AD}$ suffices to get the determinacy of Schmidt's game.
In \S\ref{sec:or} we prove two other results related to the determinacy of Schmidt's game in particular.
First, we show assuming $\mathsf{AD}$ that in any Polish space $(X,d)$, any $p \in (0,1)$, and any
$T \subseteq X$, there is at most one value of $(\alpha,\beta)\in (0,1)^2$ with $\alpha\beta=p$
such that the $(\alpha,\beta)$ Schmidt's game for $T$ is not determined.
Second, we show assuming $\mathsf{AD}$ that for a general Polish space $(X,d)$ and any target set $T\subseteq X$,
the ``non-tangent'' version of Schmidt's $(\alpha,\beta,\rho)$
game is determined. This game is just like Schmidt's game except we require
each player to play a ``non-tangent ball,'' that is, $d(x_n,x_{n+1}) < \rho_n-\rho_{n+1}$. These results help to illuminate the
obstacles in analyzing the determinacy of Schmidt's game.
Finally in \S\ref{sec:questions} we list several open questions which are left unanswered by
our results. We feel that the results and questions of the current paper show an interesting
interplay between determinacy axioms and the combinatorics of Schmidt's game.
\section{Background} \label{sec:background}
In this section we fix the notation we use to describe the games we will be considering,
both for general games and specifically for Schmidt's game. We recall some facts about the forms of determinacy we will
be considering, some necessary background in descriptive set theory to state and prove our theorems, and we explain some of the history and significance of Schmidt's game.
Throughout we let $\omega=\N=\{ 0,1,2,\dots\}$ denote the set of natural numbers.
We let $\R$ denote the set of real numbers (here we mean the elements of the standard real
line, not the Baire space $\omega^\omega$ as is frequently customary in
descriptive set theory).
\subsection{Games} \label{ss:games}
Let $X$ be a non-empty set. Let $X^{<\omega}$ and $X^\omega$ denote respectively the set of
finite and infinite sequences from $X$. For $s \in X^{<\omega}$ we let
$|s|$ denote the length of $s$. If $s,t \in X^{<\omega}$ we write
$s \leq t$ if $s$ is an initial segment of $t$, that is, $t\restriction |s|=s$.
If $s,t \in X^{<\omega}$, we let $s {}^\smallfrown t$ denote the concatenation of $s$ and $t$.
We call $R\subseteq X^{<\omega}$ a {\em tree on $X$} if it is closed under initial segments,
that is, if $t \in R$ and $s\leq t$, then $s \in R$. We can view $R$ as the set of
{\em rules} for a game. That is, each player must move at each turn so that the
finite sequence produced stays in $R$ (the first player to violate this ``rule''
loses the game). If $\vec{x}=(x_0,x_1,\dots) \in X^\omega$, we say $\vec{x}$
has followed the rules if $\vec{x}\restriction n \in R$ for all $n$. We let $[R]$
denote the set of all $\vec x \in X^{\omega}$ such that $\vec x \restriction n \in R$ for all $n$
(i.e., $\vec x$ has followed the rules). We also refer to $[R]$ as the set of {\em branches}
through $R$.
We likewise say
$s \in X^{<\omega}$ has followed the rules just to mean $s \in R$.
Fix a set $B \subseteq X^\omega$, which we call the {\em target set},
and let $R \subseteq X^{<\omega}$ be a rule set (i.e., a tree on $X$).
The game $G(B,R)$ on the set $X$ is defined as follows. I and II alternate
playing elements $x_i \in X$. So, I plays $x_0,x_2,\dots$, while II
plays $x_1,x_3,\dots$. This produces the {\em run} of the game
$\vec x=(x_0,x_1,\dots)$. The first player, if any, to violate the rules $R$
loses the run $\vec x$ of the game. If both players follow the rules
(i.e., $\vec x\in [R]$), then we declare I to have won the run iff $\vec x \in B$
(otherwise we say II has won the run).
Oftentimes, in defining a game the set of rules $R$ is defined implicitly
by giving requirements on each players' moves.
If there are no rules, i.e., $R=X^{<\omega}$, then we write
$G(B)$ for $G(B,R)$.
Also, it is frequently convenient
to define the game by describing the payoff set for II instead of I.
This, of course, is formally just replacing $B$ with $X^\omega-B$.
A {\em strategy} for I in a game on the set $X$ is a function
$\sigma \colon \bigcup_{n\in \omega} X^{2n} \to X$. A strategy for II
is a function $\tau \colon \bigcup_{n \in \omega} X^{2n+1}\to X$.
We say $\sigma$ follows the rule set $R$ if, whenever $s \in R$ has
even length, then $s {}^\smallfrown \sigma(s)\in R$. We likewise define
the notion of a strategy $\tau$ for II to follow the rules.
We say $\vec x \in X^{\omega}$ follows
the strategy $\sigma$ for I if for all $n \in \omega$,
$x_{2n}=\sigma (\vec x\restriction 2n)$, and similarly define
the notion of $\vec x$ following the strategy $\tau$ for
II. We also extend this terminology in the obvious way
to say an $s \in X^{<\omega}$ has followed $\sigma$ (or $\tau$).
Finally, we say a strategy $\sigma$ for I is a {\em winning strategy}
for I in the game $G(B,R)$ if $\sigma$ follows the rules $R$
and for all $\vec x\in [R]$ which follows $\sigma$ we have $\vec x\in B$,
that is, player I has won the run $\vec x$. We likewise define the notion
of $\tau$ being a winning strategy for II.
If $\sigma$ is a strategy for I, and $\vec z=(x_1,x_3,\dots)$ is a sequence of moves
for II, we write $\sigma* \vec{z} $ to denote the corresponding
run $\vec x=(x_0,x_1,x_2,x_3,\dots)$ where $x_{2n}=\sigma( \vec x\restriction 2n)$.
We likewise define $\tau * \vec z$ for $\tau$ a strategy for II and
$\vec z=(x_0,x_2,\dots)$ a sequence of moves for I. If $\sigma, \tau$
are strategies for I and II respectively, then we let $\sigma*\tau$
denote the run $\vec x=(x_0,x_1,\dots)$ where $x_{2n}=\sigma( \vec x\restriction 2n)$
and $x_{2n+1}=\tau( \vec x\restriction 2n+1)$ for all $n$.
We say the game $G(B,R)$ on $X$ is {\em determined} if one of the players
has a winning strategy. The {\em axiom of determinacy} for games on
$X$, denoted $\mathsf{AD}_X$ is the assertion that all games on the set $X$
are determined. Axioms of this kind were first introduced by Mycielski
and Steinhaus. We let $\mathsf{AD}$ denote $\mathsf{AD}_\omega$, that is, the assertion
all two-player integer games are determined. Also important for the current paper
is the axiom $\ad_{\R}$, the assertion that all real games are determined.
Both $\mathsf{AD}$ and $\ad_{\R}$ play an important role in modern descriptive set theory.
Although both axioms contradict the axiom of choice, $\mathsf{AC}$, and thus are not
adopted as axioms for the true universe $V$ of set theory, they
play a critical role in developing the theory of natural models such as
$L(\mathbb{R})$ containing ``definable'' sets of reals. It is known that $\ad_{\R}$
is a much stronger assertion than $\mathsf{AD}$ (see Theorem 4.4 of \cite{solovay}).
Sitting between $\mathsf{AD}$ and $\ad_{\R}$ is the determinacy of another class of games called
{\em $\hr$} games, in which one of the players plays reals and the other
plays integers. The proof of one of our theorems will require the use of
$\hr$ games. The axiom $\ad_{\frac{1}{2}\R}$ that all $\hr$ games are determined is known
to be equivalent to $\ad_{\R}$ ($\ad_{\frac{1}{2}\R}$ immediately implies $\text{Unif}$, see Theorem~\ref{martinwoodin} below). However, $\mathsf{AD}$ suffices to obtain the determinacy
of $\hr$ games with Suslin, co-Suslin payoff (a result of Woodin, see \cite{kechrishr}).
We define these terms more precisely in \S\ref{sec:mr}. As in \cite{Becker},
this fact will play an important role in one of our theorems.
One of the central results in the theory of games is the theorem of Martin \cite{Martin_determinacy}
that all Borel games on any set $X$ are determined in $\mathsf{ZFC}$.
By ``Borel'' here we are referring to the topology on $X^\omega$ given by the
product of the discrete topologies on $X$.
In fact, in just $\mathsf{ZF}$
we have that all Borel games (on any set $X$) are {\em quasi-determined}
(see \cite{Moschovakis} for the definition of quasi-strategy and proof of the extension of Martin's result
to quasi-strategies in $\mathsf{ZF}$, which is due to Hurkens and Neeman).
\begin{theorem}[Martin, Hurkens and Neeman for quasi-strategies]
\label{theoremboreldeterminacy}
Let $X$ be a nonempty set, let $B\subseteq X^\omega$ be a Borel set, and let $R\subseteq X^{<\omega}$
be a rule set (a tree).
Then the game $G(B,R)$ is determined (assuming $\mathsf{ZFC}$, or quasi-determined just assuming $\mathsf{ZF}$).
\end{theorem}
As we mentioned above, $\mathsf{AD}$ contradicts $\mathsf{AC}$. In fact, games played for particular types of
``pathological'' sets constructed using $\mathsf{AC}$ are frequently not determined.
For example, the following result is well-known (e.g. \cite[p. 137, paragraph 8]{Kechris}):
\begin{proposition}
\label{propositiongalestewartundetermined}
Let $B \subseteq \omega^\omega$ be a Bernstein set (i.e., neither the set nor its complement
contains a perfect set). Then the game $G(B)$ is not determined.
\end{proposition}
\subsection{Determinacy and Pointclasses} \label{detpc}
We briefly review some of the terminology and results related to the
determinacy of games and some associated notions concerning pointclasses which
we will need for the proofs of some of our results.
We have introduced above the axioms $\mathsf{AD}$, $\ad_{\frac{1}{2}\R}$, and $\ad_{\R}$ which assert the determinacy
of integer games, half-real games, and real games respectively. We trivially have
$\ad_{\R} \Rightarrow \ad_{\frac{1}{2}\R} \Rightarrow \mathsf{AD}$. All three of these axioms contradict $\mathsf{AC}$,
the axiom of choice. They are consistent, however, with $\mathsf{DC}$, the axiom of
dependent choice, which asserts that if $T$ is a non-empty {\em pruned} tree
(i.e., if $(x_0,\dots,x_n)\in T$ then $\exists x_{n+1}\ (x_0,\dots,x_n,x_{n+1})\in T$)
then there is a branch $f$ through $T$ (i.e., $\forall n\ (f(0),\dots,f(n))\in T$).
$\mathsf{DC}$ is a slight strengthening of the axiom of countable choice. On the one hand,
$\mathsf{DC}$ holds in the minimal model $L(\mathbb{R})$ of $\mathsf{AD}$, while on the other hand even
$\ad_{\R}$ does not imply $\mathsf{DC}$. Throughout this paper, our background theory is $\mathsf{ZF}+\mathsf{DC}$.
The axiom $\ad_{\R}$ is strictly stronger than $\mathsf{AD}$ (see \cite{solovay}), and in fact
it is known that $\ad_{\R}$ is equivalent to $\mathsf{AD}+\text{Unif}$, where $\text{Unif}$
is the axiom that every $R\subseteq \R\times \R$ has a {\em uniformization},
that is, a function $f \colon\text{dom}(R)\to \R$ such that $(x,f(x))\in R$
for all $x \in \text{dom}(R)$. This equivalence will be important for
our argument in Theorem~\ref{thm:r3} that $\mathsf{AD}$ does not suffice for the
determinacy of Schmidt's game in $\R^n$ for $n \geq 3$.
The notion of uniformization is closely connected with the descriptive set
theoretic notion of a {\em scale}. If a set $R\subseteq X\times Y$ (where
$X$, $Y$ are Polish spaces) has a scale, then it has a uniformization. The only property of scales which we use is the existence of uniformizations, so we will not give the definition, which is rather technical, here.
A (boldface) {\em pointclass} $\boldsymbol{\Gamma}$ is a collection
of subsets of Polish spaces closed under continuous preimages, that is, if
$f \colon X\to Y$ is continuous and $A\subseteq Y$ is in $\boldsymbol{\Gamma}$, then
$f^{-1}(A)$ is also in $\boldsymbol{\Gamma}$. We say $\boldsymbol{\Gamma}$ is selfdual if $\boldsymbol{\Gamma}=\check{\boldsymbol{\Gamma}}$ where
$\check{\boldsymbol{\Gamma}}=\{ X-A\colon A\in \boldsymbol{\Gamma}\}$ is the dual pointclass of $\boldsymbol{\Gamma}$. We say
$\boldsymbol{\Gamma}$ is non-selfdual if $\boldsymbol{\Gamma}\neq \check{\boldsymbol{\Gamma}}$. A set $U \subseteq \ww \times X$
is {\em universal} for the $\boldsymbol{\Gamma}$ subsets of $X$ if $U\in \boldsymbol{\Gamma}$ and for every
$A\subseteq X$ with $A\in \boldsymbol{\Gamma}$ there is an $x \in \ww$ with $A=U_x=\{ y\colon (x,y)\in U\}$.
It is a consequence of $\mathsf{AD}$ that every non-selfdual pointclass has a universal set.
For $\kappa$ an ordinal number we say a set
$A \subseteq \ww$ is $\kappa$-Suslin if there is a tree $T$ on $\omega \times \kappa$
such that $A=p[T]$, where $p[T]=\{ x \in \ww \colon \exists f \in \kappa^\omega\
(x,f)\in [T]\}$ denotes the projection of the body of the tree $T$. We say $A$
is Suslin if it is $\kappa$-Suslin for some $\kappa$. We say $A$ is
co-Suslin if $\ww \setminus A$ is Suslin. For a general Polish space $X$,
we say $A \subseteq X$ is Suslin if for some continuous surjection
$\varphi \colon \ww \to X$ we have that $\varphi^{-1}(A)$ is Suslin
(this does not depend on the choice of $\varphi$). Scales are essentially
the same thing as Suslin representations; in particular, a set $A\subseteq Y$
is Suslin iff it has a scale, and thus relations which are Suslin have uniformizations.
If $\boldsymbol{\Gamma}$ is a pointclass, then we say a set $A$ is {\em projective over} $\boldsymbol{\Gamma}$
if it is in the smallest pointclass $\boldsymbol{\Gamma}'$ containing $\boldsymbol{\Gamma}$ and closed under
complements and existential and universal quantification over $\R$.
Assuming $\mathsf{AD}$, if $\boldsymbol{\Gamma}$ is contained in the class of Suslin, co-Suslin sets, then every set projective over
$\boldsymbol{\Gamma}$ is also Suslin and co-Suslin. For this result,
more background
on these general concepts, as well as the precise definitions of scale and the scale property,
the reader can refer to \cite{Moschovakis}.
Results of Martin and Woodin (see \cite{MartinWoodin} and \cite{Martin_ctb})
show that assuming $\mathsf{AD}+\mathsf{DC}$, the axioms
$\ad_{\R}$, $\text{Unif}$, and scales are all equivalent. More precisely we have the following.
\begin{theorem} [Martin, Woodin] \label{martinwoodin}
Assume $\mathsf{ZF}+\mathsf{AD}+\mathsf{DC}$. Then the following are equivalent:
\begin{enumerate}
\item
$\ad_{\R}$
\item
$\text{Unif}$
\item
Every $A\subseteq \R$ has a scale.
\end{enumerate}
\end{theorem}
Scales and Suslin representations are also important as it follows from $\mathsf{AD}$
that ordinal games where the payoff set is Suslin and co-Suslin (the notion of
Suslin extends naturally to sets $A \subseteq \lambda^\omega$ for $\lambda$ an ordinal number)
are determined (one proof of this is due to Moschovakis, Theorem~2.2 of \cite{Moschovakis_od}, another
due to Steel can be found in the proof of Theorem~2 of
\cite{Steel}). We will not need this result for the current paper.
A strengthening of $\mathsf{AD}$, due to Woodin, is the axiom $\ad^+$. This axiom has been very useful
as it allows the development of a structural theory which has been used to obtain a number of
results. It is not currently known if $\ad^+$ is strictly stronger than $\mathsf{AD}$, but
it holds in all the natural models of $\mathsf{AD}$ obtained from large cardinal axioms
(it holds, in particular, in the model $L(\mathbb{R})$, so $\ad^+$ is strictly weaker than $\ad_{\R}$).
In our Theorem~\ref{thm:r3} we in fact show that $\ad^+$ does not suffice
to get the determinacy of Schmidt's $(\alpha,\beta,\rho)$ game in $\R^n$ for $n \geq 3$.
\subsection{Schmidt's game} \label{Schmidtsgamedef}
As mentioned in the introduction, Schmidt
invented the game primarily as a tool for
studying certain sets which arise in
number theory and Diophantine approximation theory. These sets are
often exceptional with respect to both measure and category, i.e., Lebesgue null and meager.
One of the most significant examples is the following.
Let $\mathbb{Q}$ denote the set of rational numbers. A real number $x$ is said
to be \emph{badly approximable} if there exists a positive constant
$c=c(x)$ such that $\left|x-\frac{p}{q}\right|>\frac{c}{q^2}$ for
all $\frac{p}{q}\in \mathbb{Q}$.
We denote the set of badly approximable numbers by {\bf BA}.
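As a standard illustration (not specific to the results of this paper),
one can verify directly that $\sqrt2\in{\bf BA}$. For any rational
$\frac pq$ with $0<\frac pq\le 2$,
\[
\left|\sqrt2-\frac pq\right|
=\frac{\left|2q^2-p^2\right|}{q^2\left(\sqrt2+\frac pq\right)}
>\frac{1}{4q^2},
\]
since $2q^2-p^2$ is a nonzero integer and $\sqrt2+\frac pq<4$; for
$\frac pq>2$ or $\frac pq\le 0$, the distance already exceeds
$2-\sqrt2>\frac14\ge\frac1{4q^2}$. Thus $c=\frac14$ witnesses
$\sqrt2\in{\bf BA}$. Classically, $x\in{\bf BA}$ if and only if the
partial quotients of the continued fraction expansion of $x$ are bounded.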
This set plays a major role in Diophantine approximation theory, and is well
known to be both Lebesgue null and meager.
Nonetheless, using his game, Schmidt was
able to prove the following remarkable result:
\begin{theorem}[Schmidt \cite{Schmidt1}]
Let $(f_n)_{n=1}^{\infty}$ be a sequence of $\CC^1$ diffeomorphisms of $\R$. Then the Hausdorff dimension of the set $\bigcap_{n=1}^{\infty}f^{-1}_n({\bf{BA}})$ is $1$. In particular, $\bigcap_{n=1}^{\infty}f^{-1}_n({\bf{BA}})$ is uncountable.
\end{theorem}
Yet another example of the strength of the game is the following.
Let $b\geq 2$ be an integer. A real number $x$ is said to be normal to base $b$ if, for every
$n\in\mathbb{N}$, every block of $n$ digits from $\{0, 1,\dots , b-1\}$ occurs in the base-$b$ expansion of
$x$ with asymptotic frequency $1/b^n$. It is readily seen that the set of numbers normal to no base is both Lebesgue null and meager. Nevertheless, Schmidt used his game to prove:
\begin{theorem}[Schmidt \cite{Schmidt1}]
The Hausdorff dimension of the set of numbers normal to no base is $1$.
\end{theorem}
\subsubsection{The game's description}
For the $(\alpha,\beta)$ Schmidt's game on the complete metric space $(X, d)$ with target set $T \subseteq X$, I and II each
play pairs $(x_i, \rho_i)$ in $Y=X \times \R^{>0}$. The set $R \subseteq Y^{<\omega}$
of rules is defined by the conditions that $\rho_{i+1}+d(x_i,x_{i+1})\le \rho_{i}$
and $\rho_{i+1}= \begin{cases} \alpha \rho_i & \text{ if } i \text{ is even }\\
\beta \rho_i & \text{ if } i \text{ is odd }\end{cases}$. The rules guarantee that
the closed balls $B(x_i,\rho_i)=\{ x \in X \colon d(x,x_i)\leq \rho_i\}$
are nested. Since the $\rho_i\to 0$, there is a unique point $z \in X$
such that $\{ z\}=\bigcap_i B(x_i,\rho_i)$. For $\vec x\in [R]$, a run of the game
following the rules, we let $f(\vec x)$ be this corresponding point $z$.
The payoff set $B\subseteq Y^\omega$ for player I is $\{ \vec x \in Y^\omega \cap [R]\colon
f(\vec x) \notin T\}$. Formally, when we refer to the $(\alpha,\beta)$ Schmidt's game with
target set $T$, we are referring to the game $G(B,R)$ with these sets $B$ and $R$
just described. The formal definition of Schmidt's $(\alpha,\beta,\rho)$
game with target set $T$ is defined in the obvious analogous manner.
\section{Main Results} \label{sec:mr}
We next prove a general result which states that certain real games are equivalent to $\frac{1}{2}\R$
games. The essential point is that real games which are intersection games (i.e.,
games where the payoff only depends on the intersection of sets coded by the moves
the players make) with the property that if one of the players has a winning strategy in the real game,
then that player has a strategy ``coded by a real'' (in a precise sense defined below),
then the game is equivalent to a $\frac{1}{2}\R$ game. In \cite{Becker} a result attributed to Martin
is presented which showed that the determinacy of a certain class of real games, called Banach games,
follows from $\ad_{\frac{1}{2}\R}$, the axiom which asserts the determinacy of $\frac{1}{2}\R$ games (that is,
games in which one player plays reals, and the other plays integers). In Theorem~\ref{hrthm}
we use ideas similar to Martin's to prove a general result which applies to
intersection games satisfying a ``simple strategy'' hypothesis. Since
many games with applications to number theory and dynamics are intersection games, it seems that in practice
the simple strategy hypothesis is the more significant requirement.
\begin{definition} \label{simple_one_round}
Let $\boldsymbol{\Gamma}$ be a pointclass. A simple one-round $\boldsymbol{\Gamma}$ strategy $s$ for the Polish space $X$ is
a sequence $s=(A_n, y_n)_{n \in \omega}$ where $y_n \in X$, $A_n \in \boldsymbol{\Gamma}$, and the $A_n$ are a partition
of $X$.
A simple $\boldsymbol{\Gamma}$ strategy $\tau$ for player II is a collection $\{ s_u \}_{u \in \omega^{<\omega}}$
of simple one-round $\boldsymbol{\Gamma}$ strategies $s_u$. A simple $\boldsymbol{\Gamma}$ strategy $\sigma$ for player I
is a pair $\sigma=(\bar{y}, \tau)$ where $\bar{y}\in X$ is the first move and
$\tau$ is a simple $\boldsymbol{\Gamma}$ strategy for player II.
\end{definition}
The idea for a simple one-round strategy is that if the opponent moves in the set
$A_n$, then the strategy will respond with $y_n$. Thus there is only ``countably much''
information in the strategy; it is coded by a real in a simple manner.
If $s=(A_n,y_n)$ is a simple one-round strategy,
we will write $s(n)=y_n$ and also $s(x)=y_n$ for any $x \in A_n$.
A general
simple strategy produces after each round a new simple one-round strategy to follow
in the next round. For example, suppose $\sigma$ is a simple strategy for I.
$\sigma$ gives a first move $x_0=\bar{y}$ and a simple one-round strategy
$s_\emptyset$. If II plays $x_1$, then $x_2=\sigma(x_0,x_1)=s_\emptyset(x_1)$ is
the unique $y_{n_0}$ such that $x_1 \in A_{n_0}$, where $s_\emptyset=(A_n,y_n)$.
If II then plays $x_3$, then $\sigma$ responds with $s_{n_0}(x_3)$. The play by
$\sigma$ continues in this manner. Formally, a general simple strategy is
a sequence $(s_u)_{u \in \omega^{<\omega}}$ of simple one-round strategies,
indexed by $u \in \omega^{<\omega}$.
If $\boldsymbol{\Gamma}$ is a pointclass with a universal set $U\subseteq \ww\times X$,
then we may use $U$ to code simple one-round $\boldsymbol{\Gamma}$ strategies. Namely,
the simple one-round $\boldsymbol{\Gamma}$ strategy $s=(A_n,y_n)$ is coded by $z \in \ww$
if $z$ codes a sequence $(z)_n \in \ww$ and $U_{(z)_{2n}} =A_n$ and
$(z)_{2n+1}$ codes the response $y_n\in X$ in some reasonable manner
(e.g., via a continuous surjection from $\ww$ to $X$, the exact details are
unimportant).
\begin{remark}
For the remainder of this section, $X$ and $Y$ will denote Polish spaces.
\end{remark}
\begin{definition}
Let $R \subseteq X^{<\omega}$ be a tree on $X$ which we identify as a {\em set of rules}
for a game on $X$. We say a simple one-round
$\boldsymbol{\Gamma}$ strategy $s$ {\em follows the rules} $R$ at position $p \in R$ if
for any $x \in X$, if $p {}^\smallfrown x \in R$, then $p {}^\smallfrown x {}^\smallfrown s(x)\in R$.
\end{definition}
\begin{definition}
Let $R \subseteq X^{<\omega}$ be a set of rules for a real game.
Suppose $p \in X^{<\omega}$ is a position in $R$.
Suppose $f \colon X \to X$ is such that for all $x \in X$, if $p{}^\smallfrown x\in R$,
then $p{}^\smallfrown x {}^\smallfrown f(x) \in R$ (i.e., $f$ is a one-round strategy which follows the rules at $p$).
A {\em simplification} of $f$ at $p$ is a simple one-round strategy $s=(A_n,y_n)$ such that
\begin{enumerate}
\item
For every $x$ in any $A_n$, if $p {}^\smallfrown x \in R$, then $p {}^\smallfrown x {}^\smallfrown y_n \in R$.
\item
For every $n$, if there is an $x \in A_n$ such that $p{}^\smallfrown x \in R$,
then there is an $x' \in A_n$ with $p{}^\smallfrown x'\in R$ and $f(x')=y_n$.
\end{enumerate}
We say $s$ is a $\boldsymbol{\Gamma}$-simplification of $f$ if all of the sets $A_n$
are in $\boldsymbol{\Gamma}$.
\end{definition}
\begin{definition}
We say a tree $R\subseteq X^{<\omega}$ is {\em positional} if for all $p,q \in R$
of the same length
and $x\in X$, if $p {}^\smallfrown x$, $q {}^\smallfrown x$ are both in $R$
then for all $r \in X^{<\omega}$, $p{}^\smallfrown x{}^\smallfrown r \in R$ iff
$q {}^\smallfrown x {}^\smallfrown r \in R$.
\end{definition}
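For example, the rule set of Schmidt's $(\alpha,\beta,\rho)$ game considered below is
positional: whether a ball is a legal next move depends only on the previously played
ball and on the round number, which is determined by the length of the position.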
\begin{theorem}[$\mathsf{ZF}+\mathsf{DC}$] \label{hrthm}
Let $\boldsymbol{\Gamma}$ be a pointclass which has a universal set and is contained within the Suslin, co-Suslin sets.
Suppose $B\subseteq X^\omega$ and $R \subseteq X^{<\omega}$ is a positional tree,
and suppose both $B$ and $R$ are in $\boldsymbol{\Gamma}$.
Let $G=G(B,R)$ be the real game on $X$ with payoff $B$ and rules $R$. Suppose the following two conditions
on $G$ hold:
\begin{enumerate}
\item (intersection condition)
For any $\vec{x},\vec{y}\in [R]$, if $x(2k)=y(2k)$ for all $k$, then
$\vec{x}\in B$ iff $\vec{y}\in B$.
\item (simple one-round strategy condition)
If $p\in R$ has odd length, and $f\colon X \to X$ is a rule following
one-round strategy at $p$, then there is a $\boldsymbol{\Gamma}$-simplification of $f$ at $p$.
\end{enumerate}
Then $G$ is equivalent to a Suslin, co-Suslin $\frac{1}{2}\R$ game $G^*$ in the sense that if I (or II)
has a winning strategy in $G^*$, then I (or II) has a winning strategy in $G$.
\end{theorem}
\begin{proof}
Consider the game $G^*$ where I plays pairs $(x_{2k},s_{2k})$ and II plays
integers $n_{2k+1}$. The rules $R^*$ of $G^*$ are that I must play at each round
a real coding $s_{2k}$ which is a simple one-round $\boldsymbol{\Gamma}$ strategy which follows the rules $R$
relative to a position $p{}^\smallfrown x_{2k}$ for any $p$ of length $2k$
(this does not depend on the particular choice of $p$ as $R$ is positional).
I must also play such that $x_{2k}= s_{2k-2}(n_{2k-1})$. II must play
each $n_{2k+1}$ so that there is a legal move $x_{2k+1} \in A^{s_{2k}}_{n_{2k+1}}$
with $p{}^\smallfrown x_{2k} {}^\smallfrown x_{2k+1} \in R$ (for any $p$ of length $2k$).
If I and II have followed the rules, to produce $x_{2k}, s_{2k}$ and $n_{2k+1}$, the payoff
condition for $G^*$ is as follows. Since II has followed the rules,
there is a sequence $x_{2k+1}$ such that the play $(x_0,x_1,\dots)\in [R]$.
I then wins the run of $G^*$ iff $(x_0,x_1,\dots)\in B$. Note that by the intersection
condition, this is independent of the particular choice of the $x_{2k+1}$.
From the definition, $G^*$ is a Suslin, co-Suslin game.
We show that $G^*$ is equivalent to $G$. Suppose first that I wins $G^*$ by
$\sigma^*$. Then $\sigma^*$ easily gives a strategy $\Sigma$ for $G$. For example,
let $\sigma^*(\emptyset)= (x_0, s_0)$. Then $\Sigma(\emptyset)=x_0$.
If II plays $x_1$, then let $n_1$ be such that $x_1 \in A^{s_0}_{n_1}$. Then
$\Sigma(x_0,x_1)=s_0(n_1)$. Continuing in this manner defines $\Sigma$.
If $(x_0,x_1,\dots)$ is a run of $\Sigma$, then there is a corresponding
run $((x_0,s_0), n_1, \dots)$ of $\sigma^*$. As each $s_{2k}$ follows the rules
$R$, then as long as II's moves follow the rules $R$, I's moves by $\Sigma$
also follow the rules $R$. If II has followed the rules $R$ in the run of $G$,
then the run $((x_0,s_0), n_1, \dots)$ of $\sigma^*$ has followed the rules for $G^*$
(II has followed the rules of $G^*$ since for each $n_{2k+1}$,
$x_{2k+1}$ witnesses that $n_{2k+1}$ is a legal move). Since $\sigma^*$ is
winning for $G^*$, the sequence $(x_0,x'_1,x_2,x'_3,\dots)\in B\cap [R]$ for some
$x'_{2k+1}$. By the intersection condition, $(x_0,x_1,x_2,x_3,\dots)\in B$.
Assume now that II has a winning strategy $\tau'$ in $G^*$. We first note that
there is a winning strategy $\tau^*$ for II in $G^*$ such that $\tau^*$ is projective
over $\boldsymbol{\Gamma}$. To see this, first note that the payoff set for $G^*$ is projective
over $\boldsymbol{\Gamma}$ as both $B$ and $R$ are in $\boldsymbol{\Gamma}$. Also, there is a scaled pointclass
$\boldsymbol{\Gamma}'$, projective over $\boldsymbol{\Gamma}$, which contains the payoff set for II in $G^*$.
By a result of Woodin in \cite{kechrishr} (since II is playing the integer moves in $G^*$)
there is a winning strategy $\tau^*$ which is projective over $\boldsymbol{\Gamma}'$, and thus projective over $\boldsymbol{\Gamma}$.
For the rest of the proof we fix a winning strategy $\tau^*$ for II in $G^*$
which is projective over $\boldsymbol{\Gamma}$.
We define a strategy
$\Sigma$ for II in $G$. Consider the first round of $G$. Suppose I moves with
$x_0$ in $G$. We may assume that $(x_0)\in R$.
\begin{claim}
There is an $x_1$ with $(x_0,x_1)\in R$ such that for all $x_2$ with
$(x_0,x_1,x_2)\in R$, there is a simple one-round $\boldsymbol{\Gamma}$ strategy $s_0$
which follows the rules $R$ from position $x_0$ (so $(x_0,s_0)$ is a legal move for I
in $G^*$) such that if $n_1=\tau^*(x_0,s_0)$ then $x_1\in A^{s_0}_{n_1}$
and $x_2=s_0(x_1)$.
\end{claim}
\begin{subproof}
Suppose not, then for every $x_1$ with $(x_0,x_1)\in R$ there is an $x_2$ with
$(x_0,x_1,x_2)\in R$ which witnesses the failure of the claim.
Define the relation $S(x_1,x_2)$ to hold iff $(x_0,x_1)\notin R$ or
$(x_0,x_1,x_2)\in R$ and the claim fails, that is, for every simple
one-round $\boldsymbol{\Gamma}$ strategy $s$ which follows $R$, if we let $n_1=\tau^*(x_0,s)$,
then either $x_1\notin A^{s}_{n_1}$ or $x_2\neq s(x_1)$.
Since $\tau^*$, $B$, $R$ are projective over $\boldsymbol{\Gamma}$, so is the relation $S$.
By assumption, $\text{dom}(S)=X$. Since $S$ is projective over $\boldsymbol{\Gamma}$, it is within the scaled
pointclasses, and thus there is a uniformization $f$ for $S$. Note that $f$ follows the
rules $R$. By the simple one-round strategy hypothesis of Theorem~\ref{hrthm},
there is a $\boldsymbol{\Gamma}$-simplification $s_0$ of $f$. Let $n_1=
\tau^*(x_0,s_0)$. Since $\tau^*$ follows the rules $R^*$ for II, there is an $x_1 \in A^{s_0}_{n_1}$
such that $(x_0,x_1)\in R$. Since $s_0$ is a simplification of $f$,
there is an $x'_1$ with $(x_0,x'_1)\in R$ and $f(x'_1)=s_0(n_1)$. Let $x_2=
f(x'_1)$. From the definition of $S$ we have that $(x_0,x'_1,x_2)\in R$.
Since $S(x'_1,x_2)$ holds, there does not exist an $s$ (following the rules)
such that $x'_1 \in A^{s}_{n_1}$
and $x_2=s(x'_1)$, where $n_1=\tau^*(x_0,s)$. But on the other hand, the $s_0$ we have produced
does have this property. This proves the claim.
\end{subproof}
Now that we have proved this claim, we can define the strategy $\Sigma$.
We would like to have $\Sigma(x_0)$ be any $x_1$ as in the claim. Now since the relation
$A(x_0,x_1)$ which says that $x_1$ satisfies the claim relative to $x_0$ is projective
over $\boldsymbol{\Gamma}$, we can uniformize it to produce the first round $x_1(x_0)$ of the strategy $\Sigma$.
Suppose I now moves $x_2$ in $G$. For each such $x_2$ such that $(x_0,x_1,x_2)\in R$,
there is a rule-following simple one-round $\boldsymbol{\Gamma}$ strategy $s_0$ as in the claim
for $x_1$ and $x_2$. The relation $A'(x_0,x_2,s_0)$, which says that $s_0$ satisfies the claim for
$x_1=x_1(x_0)$
and $x_2$, is projective over $\boldsymbol{\Gamma}$ and so has a uniformization $g(x_0,x_2)$.
In the $G^*$ game we have I play $(x_0, g(x_0,x_2))$. Note that $n_1=\tau^*(x_0,s_0)$
is such that $x_1 \in A^{s_0}_{n_1}$, and $x_2=s_0(x_1)$.
This completes the definition of the first round of $\Sigma$, and the proof
that a one-round play according to $\Sigma$ has a one-round simulation according
to $\tau^*$, which will guarantee that $\Sigma$ wins.
The definition of $\Sigma$ for the general round is defined in exactly the same way,
using $\mathsf{DC}$ to continue.
The above argument also shows that a run of $G$ following $\Sigma$
has a corresponding run of $G^*$ following $\tau^*$. If I has followed the rules
of $G$, then I has followed the rules of $G^*$ in the associated run. Since $\tau^*$
is winning for II in $G^*$, there is no sequence $x'_{2k+1}$
of moves for II such that $(x_0,x'_1,x_2,x'_3,\dots)\in B\cap [R]$. In particular,
$(x_0,x_1,x_2,x_3,\dots) \notin B$ (since $(x_0,x_1,\dots)\in [R]$). Thus, II
has won the run of $G$ following $\Sigma$.
\end{proof}
If $G$ is a real game on the Polish space $X$
with rule set $R$, we say that $G$ is an {\em intersection game}
if it satisfies the intersection condition of Theorem~\ref{hrthm}.
This is equivalent to saying that there is a
function $f\colon X^\omega \to Y$ for some Polish space $Y$
such that $f(\vec x)=f(\vec y)$ if $x(2k)=y(2k)$
for all $k$, and the payoff set for $G$ is of the form $f^{-1}(T)$ for some $T\subseteq Y$.
In many examples, the rules $R$ require the players to play decreasing closed sets with diameters
going to $0$ in some Polish space, and the function $f$ is simply giving the unique point
of intersection of these sets. If we have a fixed rule set $R$ and a fixed
function $f$, the {\em class of games} $G_{R,f}$ associated to $R$ and $f$ is the collection
of games with rules $R$ and payoffs of the form $f^{-1}(T)$ for $T\subseteq Y$.
Thus, we allow the payoff set $T$ to vary, but the set of rules $R$ and the ``intersection function'' $f$
are fixed. In practice, $R$ and $f$ are usually simple, such as Borel relations/functions.
\begin{theorem}[$\mathsf{AD}$] \label{detthm}
Suppose $\boldsymbol{\Gamma}$ is a non-selfdual pointclass within the Suslin, co-Suslin sets
and $G_{R,f}$ is a class of intersection games on the Polish space $X$ with $R$, $f \in \boldsymbol{\Gamma}$,
and $R$ is positional (as above $f \colon X^\omega\to Y$, where $Y$ is a Polish space).
Suppose that for every $T\subseteq Y$ which is Suslin and co-Suslin, if player
I or II has a winning strategy in $G_{R,f}(T)$, then that player has a
winning simple $\boldsymbol{\Gamma}$-strategy. Then for every $T\subseteq Y$, the game
$G_{R,f}(T)$ is determined.
\end{theorem}
\begin{proof}
Fix the rule set $R$ and function $f$ in $\boldsymbol{\Gamma}$.
Let $T \subseteq Y$, we show the real game
$G_{R,f}(T)$ is determined. Following Becker, we consider the integer game $G$ where I and II play out
reals $x$ and $y$ which code trees (indexed by $\omega^{<\omega}$) of simple one-round $\boldsymbol{\Gamma}$ strategies.
The winning condition for II is as follows.
If exactly one of $x$, $y$ fails to be a simple $\boldsymbol{\Gamma}$-strategy, then that player loses.
If both fail to code simple $\boldsymbol{\Gamma}$-strategies, then II wins. If
$x$ codes a simple $\boldsymbol{\Gamma}$-strategy $\sigma_x$ and $y$ codes a simple $\boldsymbol{\Gamma}$-strategy $\tau_y$,
then II wins iff $\sigma_x*\tau_y \notin G_{R,f}(T)$, where $\sigma*\tau$ denotes the unique sequence of reals
obtained by playing $\sigma$ and $\tau$ against each other. From $\mathsf{AD}$, the game $G$ is determined.
Without loss of generality we may assume that II has a winning strategy $w$ for $G$.
Let $S_1\subseteq \ww$ be the set of $z$ such that $z$ codes a simple $\boldsymbol{\Gamma}$-strategy for player I
which follows the rules $R$.
Likewise, $S_2$ is the set of $z$ coding rule following $\boldsymbol{\Gamma}$-strategies $\tau_z$ for II.
Note that $S_1$, $S_2$ are projective over $\boldsymbol{\Gamma}$.
Let
\[
A=\{ \vec{y} \in X^\omega \colon \exists z \in S_1\ \vec y= \sigma_z * \tau_{w(z)} \}.
\]
Since $w$ is a winning strategy for II in $G$, $A\subseteq
X^\omega\setminus G_{R,f}(T)$, so $f(A) \subseteq Y \setminus T$.
Note that $A$ is projective over $\boldsymbol{\Gamma}$ by the complexity assumption on
$R$ and the fact that $S_1$ is also projective over $\boldsymbol{\Gamma}$. We claim
that it suffices to show that II wins the real game $G_{R,f}(Y
\setminus f(A))$. This is because if II wins $G_{R,f}(Y
\setminus f(A))$ with run $\vec y$, i.e. $\vec y \not \in G_{R,f}(Y
\setminus f(A))$, then $f(\vec y) \in f(A) \subseteq Y \setminus T$,
so $\vec y \not \in G_{R, f}(T)$, thus $\vec y$ is a winning run for II in $G_{R, f}(T)$.
We see that $Y \setminus f(A)$ is projective over $\boldsymbol{\Gamma}$, and thus
$G_{R, f}(Y \setminus f(A))$ is equivalent to a Suslin, co-Suslin
$\frac{1}{2}\R$ game by Theorem \ref{hrthm} which is determined (see \cite{kechrishr}), and
so $G_{R, f}(Y \setminus f(A))$ is determined. Now it suffices to
show that I doesn't have a winning strategy in $G_{R, f}(Y \setminus
f(A))$.
Suppose I had a winning strategy for $G_{R, f}(Y \setminus f(A))$. By
hypothesis, I has a winning simple $\boldsymbol{\Gamma}$-strategy coded by some $z \in
\ww$. Let $\vec y= \sigma_z* \tau_{w(z)}$ (note that $z \in S_1$ and
so $w(z)\in S_2$). Since $\sigma_z$ is a winning strategy for I in
$G_{R, f}(Y \setminus f(A))$, we have $f(\vec y) \in Y \setminus
f(A)$. On the other hand, from the definition of $A$ from $w$ we have
that $f(\vec y) \in f(A)$, a contradiction.
\end{proof}
We next apply Theorem~\ref{detthm} to deduce the determinacy of Schmidt's $(\alpha,\beta,\rho)$
games in $\R$ from $\mathsf{AD}$.
\begin{theorem} [$\mathsf{AD}$]\label{thm:schmidtdet}
For any $\alpha,\beta \in (0,1)$, any $\rho \in \R_{>0}$, and any $T\subseteq \R$,
the $(\alpha,\beta,\rho)$ Schmidt's game with target set $T$ is determined.
\end{theorem}
\begin{proof}
Let $\boldsymbol{\Gamma}$ be the pointclass $\boldsymbol{\Pi}^1_1$ of co-analytic sets. Let $R$ be the tree
described by the rules of the $(\alpha,\beta,\rho)$ Schmidt's game. $R$
is clearly a closed set and is positional. The function $f$ of Theorem~\ref{detthm}
is given by $\{ f((x_i,\rho_i)_i) \}= \bigcap_i B(x_i,\rho_i)$. This
clearly satisfies the intersection condition, that is, $G_{R,f}$
is a class of intersection games. Also, $f$ is continuous, so $f\in \boldsymbol{\Gamma}$.
It remains to verify the simple strategy condition of Theorem~\ref{detthm}.
The argument is essentially symmetric in the players, so we consider the case
of player II. In fact we show that for any $T\subseteq \R$, if II
has a winning strategy for the $(\alpha,\beta,\rho)$ Schmidt's game, then II
has a simple Borel strategy. Fix a winning strategy $\Sigma$
for II in this (real) game. Consider $\Sigma$ restricted to the
first round of the game. For every $z_0 \in \R$, there is a half-open
interval $I_{z_0}$ of the form $[z_0,z_0+\epsilon)$ or $(z_0-\epsilon,z_0]$
such that for any $x_0\in I_{z_0}$, we have that $((x_0,\rho_0),
\Sigma(z_0,\rho_0))\in R$. That is, for any $x_0 \in I_{z_0}$ we have that $\Sigma$'s
response to $(z_0,\rho_0)$ is still a legal response to the play $(x_0,\rho_0)$.
Consider the collection $\mathcal{C}$ of all intervals $I=[z,z+\epsilon)$ or $I=(z-\epsilon,z]$
having this property. So, $\mathcal{C}$ is a cover of $\R$ by half-open intervals.
There is a countable subcollection $\mathcal{C}' \subseteq \mathcal{C}$ which covers $\R$.
To see this, first get a countable $\mathcal{C}_0\subseteq \mathcal{C}$ such that
$\cup \mathcal{C}_0 \supseteq \bigcup_{I\in \mathcal{C}} \text{int}(I)$.
The set $\R\setminus\bigcup_{I\in \mathcal{C}} \text{int}(I)$ must be countable: every point of it
is a closed endpoint of some member of $\mathcal{C}$, and the open intervals $(z,z+\epsilon)$
attached to the right-type witnesses $[z,z+\epsilon)$ are pairwise disjoint (similarly for the
left-type ones), so there are only countably many such points. Thus
adding countably many sets of $\mathcal{C}$ to $\mathcal{C}_0$ will get $\mathcal{C}'$ as desired.
Let $\mathcal{C}'= \{ I_{z_n} \}_{n \in \omega}$. The first round of the
simple Borel strategy $\tau$ is given by $(A_n,y_n)$ where
$A_n =\{ (x_0,\rho_0)\colon x_0 \in I_{z_n}\setminus \bigcup_{m<n} I_{z_m} \}$ and
$y_n=\Sigma( z_n,\rho_0)$. Clearly $(A_n,y_n)$ is a simple one-round Borel strategy
which follows the rules $R$ of the $(\alpha,\beta,\rho)$ Schmidt's game.
This defines the first round of $\tau$.
Using $\mathsf{DC}$, we continue inductively to define each subsequent round
of $\tau$ in a similar manner.
To see that $\tau$ is a winning strategy for II, simply note that for any run
of $\tau$ following the rules there is a run of $\Sigma$ producing the same
point of intersection.
\end{proof}
This theorem immediately implies the following corollary about Schmidt's original $(\alpha, \beta)$ game.
\begin{corollary}[$\mathsf{AD}$]
For any $\alpha,\beta \in (0,1)$, and any $T\subseteq \R$,
exactly one of the following holds.
\begin{enumerate}
\item Player I has a winning strategy in Schmidt's $(\alpha,\beta)$ game.
\item For every $\rho \in \R_{>0}$, player II has a winning strategy in Schmidt's $(\alpha,\beta, \rho)$ game.
\end{enumerate}
\end{corollary}
In contrast to these results, the situation is dramatically different for $\R^n$, $n \geq 3$.
\begin{theorem} \label{thm:r3}
$\mathsf{AD}^+$ does not imply that the $(\alpha, \beta, \rho)$ Schmidt's games for $T \subseteq \R^n$, $n \geq 3$,
are determined.
\end{theorem}
\begin{proof}
We will show that the determinacy of these games in $\R^3$ implies that
all relations $R \subseteq \mathbb{R} \times \mathbb{R}$ can be
uniformized. It is known that $\mathsf{AD}^+$ does not suffice to imply this. The proof for larger $n$ is identical.
Let $R \subseteq \mathbb{R} \times [0, 2\pi)$ be such that $\forall x \in
\mathbb{R}~ \exists \theta \in [0, 2\pi)~ (x, \theta) \in R$.
Let $r=\rho - 2\rho\alpha(1-\beta) \sum_{n=0}^\infty (\alpha \beta)^n$.
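Summing the geometric series, $r$ has the closed form
\[
r=\rho-\frac{2\rho\alpha(1-\beta)}{1-\alpha\beta}=\rho\,\frac{1-2\alpha+\alpha\beta}{1-\alpha\beta}\,.
\]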
Let the target set for player II be $T=\{ (x, r\cos\theta, r\sin\theta) \colon (x, \theta) \in R\} \cup \{
(x, y, z) \colon y^2+z^2>r^2\}$. The value $r$ is the distance from the
$x$-axis that is obtained if I makes a first move $B((x_0,0,0),\rho)$ centered on the $x$-axis,
and at each subsequent turn II moves to maximize the distance from the
$x$-axis and I moves to minimize it (note that these moves all have centers
having the same $x$-coordinate $x_0$). The target set $T$ codes
the relation $R$ to be uniformized along the boundary of the cylinder
of radius $r$ centered along the $x$-axis.
We claim that I cannot win the $(\alpha, \beta, \rho)$ Schmidt's game for $T$.
First note that if I plays his center not on the $x$-axis,
then II can easily win in finitely many moves by simply playing to
maximize distance to the $x$-axis (this will win the game by the
definition of $r$). So suppose I plays $(x, 0, 0)$ as the center of
his first move. Fix $\theta$ so that $R(x, \theta)$ holds. Then II
can win by always playing tangent towards the direction $(0,
\cos\theta, \sin\theta)$ maximizing distance to the $x$-axis. If I
resists and minimizes distance to the $x$-axis, then the limit point
will be in $\{ (x, r\cos\theta, r\sin\theta) \colon (x, \theta) \in R\}$.
If I ever deviates from this, then again II can win after
finitely many moves by maximizing distance to the $x$-axis.
This shows that I does not have a winning strategy, so by the
assumption that these games are determined, II has a winning
strategy $\tau$. By similar arguments to those above, $\tau$ must
maximize distance from the $x$-axis in response to optimal play by
I. But one can take advantage of this to easily define a
uniformization $f$ of $R$ from $\tau$ by the following:
\[f(x) = \theta \Longleftrightarrow \tau\bigg(B\Big(\left(x, 0,0 \right), \rho\Big)\bigg)
= B\Big(\left(x, (\rho-\alpha\rho)\cos\theta, (\rho-\alpha\rho)\sin\theta\right), \alpha \rho\Big).\]
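Here $B\big((x, (\rho-\alpha\rho)\cos\theta, (\rho-\alpha\rho)\sin\theta), \alpha\rho\big)$ is
precisely the ball of radius $\alpha\rho$ internally tangent to I's first move in the
direction $(0,\cos\theta,\sin\theta)$, i.e., the unique distance-maximizing response, so
$f$ is well-defined and uniformizes $R$.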
\end{proof}
\section{Further Results regarding Schmidt's game} \label{sec:or}
In \S\ref{sec:mr} we showed that $\mathsf{AD}$ suffices to get the determinacy of the $(\alpha,\beta,\rho)$
Schmidt's game for any target set $T\subseteq \R$, but that for $T\subseteq \R^n$, $n \geq 3$,
$\mathsf{AD}$ (or $\mathsf{AD}^+$) is not sufficient. The proof for the positive result in $\R$ used a reduction
of Schmidt's $(\alpha,\beta,\rho)$ game to a certain $\frac{1}{2}\R$ game. The fact that $\mathsf{AD}$ does not suffice
for $T\subseteq \R^n$, $n \geq 3$, shows that in general the $(\alpha,\beta,\rho)$ Schmidt's game is not
equivalent to an integer game (for $T\subseteq \R$ it still seems possible that the game is equivalent to
an integer game).
A natural question is to what extent we can reduce Schmidt's game to an integer game.
In this section we prove two results concerning this question.
In the proof of Theorem~\ref{thm:r3} it is important that the value $r=r(\alpha,\beta)$ was calibrated to the
particular values of $\alpha$, $\beta$. In other words, if we change the values of $\alpha$, $\beta$ to $\alpha',\beta'$,
using the same target set, so that $r(\alpha',\beta')\neq r(\alpha,\beta)$, then the game is easily determined.
In Theorem~\ref{hyperbolathm} we prove a general result related to this phenomenon. Namely,
we show, assuming $\mathsf{AD}$, that for $T$ (in any Polish space) and each value of $p \in (0,1)$ there is at most
one pair $(\alpha, \beta)$ with $\alpha\beta=p$ such that the $(\alpha,\beta)$ Schmidt's game with target
set $T$ is not determined. Thus the values of $\alpha,\beta$ must be tuned precisely to have a possibility
of the game being not determined from $\mathsf{AD}$.
The proof of Theorem~\ref{thm:r3} also uses critically the ability of each player to play a ball tangent
to the previous ball. In Theorem~\ref{thm:tangent} below, we make this precise by showing
that the modification of Schmidt's $(\alpha, \beta, \rho)$ game where the players are required to make non-tangent moves is
determined from $\mathsf{AD}$ alone. Thus, the ability of the players to play tangent at each move is a key
obstacle in reducing Schmidt's game to an integer game.
While in the Banach-Mazur game the rational modification of the game is
fairly straightforward (the allowed moves for the players are
just representatives of balls with centers from some fixed countable
dense subset of $X$ and the radii are positive rationals), in Schmidt's
game there is a slight difference, again due to the restriction on the
players' radii.
\begin{definition}
For a Polish space $(X, d)$ and a fixed countable dense
subset $D \subseteq X$ we define the \emph{rational Schmidt} $(\alpha,
\beta)$ game by modifying Schmidt's
$(\alpha, \beta)$-game by restricting the set of allowed moves for
both players to balls $B(x_i,\rho_i)$ where $x_i \in D$ and
$\rho_i \in \left( \bigcup_{n, m \in
\mathbb{N}} \alpha^n\beta^m\mathbb{Q}_{>0}\right)$.
\end{definition}
\begin{theorem}
\label{lemma1}
Let $(X, d)$ be a Polish space. Let $0<\alpha<\alpha'<1$,
$0<\beta'<\beta<1$, and $\alpha\beta=\alpha'\beta'$. Let $D$ be a
countable dense subset of $X$.
\begin{enumerate}
\item If II wins the rational Schmidt's $(\alpha', \beta')$ game for target set $T$
then II wins Schmidt's $(\alpha, \beta)$ game for $T$.
\item If I wins the rational Schmidt's $(\alpha, \beta)$ game for target set $T$
then I wins Schmidt's $(\alpha', \beta')$ game for $T$.
\end{enumerate}
\end{theorem}
\begin{proof}
We will prove the first statement; the proof of the second is similar.
Fix the target set $T\subseteq X$. Let $\tau$ be a winning strategy for II in the
rational Schmidt's $(\alpha', \beta')$ game. We will construct a
strategy for II in Schmidt's $(\alpha, \beta)$ game by using
$\tau$.
Suppose I plays $(x_0, \rho_0)$ as his first move in the $(\alpha,\beta)$ game. Let $\rho=\rho_0$
to simplify notation. Let
$\rho' \in \left( \bigcup_{n, m \in \mathbb{N}}
\alpha^n\beta^m\mathbb{Q}_{>0}\right)$ with
\begin{equation}
\label{rhoprime}
\rho
\frac{\alpha}{\alpha'}\frac{1-\beta}{1-\beta'}<\rho'< \rho\frac{1-\alpha}{1-\alpha'}
\end{equation}
This is possible since
$\frac{\alpha}{\alpha'}\frac{1-\beta}{1-\beta'}<1$ and
$\frac{1-\alpha}{1-\alpha'}>1$ and $\bigcup_{n, m \in
\mathbb{N}} \alpha^n\beta^m\mathbb{Q}_{>0}$ is dense in $\R_{>0}$.
Let $\epsilon_n\df\min\left\{(\alpha\beta)^n(\rho(1-\alpha)-\rho'(1-\alpha')),
(\alpha\beta)^{n-1}(\alpha'\rho'(1-\beta')-\alpha\rho(1-\beta))\right\}$.
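Spelling out the two entries of this minimum, their positivity is exactly the content of
inequality (\ref{rhoprime}):
\[
\rho(1-\alpha)-\rho'(1-\alpha')>0 \iff \rho'<\rho\,\frac{1-\alpha}{1-\alpha'}\,,
\qquad
\alpha'\rho'(1-\beta')-\alpha\rho(1-\beta)>0 \iff \rho'>\rho\,\frac{\alpha}{\alpha'}\,\frac{1-\beta}{1-\beta'}\,.
\]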
Hence $\epsilon_n >0$ whenever inequality (\ref{rhoprime}) holds. Now let
$(x_1', \alpha'\rho')=\tau(x_0', \rho')$ where $x_0' \in D \cap B(x_0, \epsilon_0)$.
Let $x_1=x_1'$. By the
definition of $\epsilon_0$ and (\ref{rhoprime}), $B(x_1, \alpha\rho)\subseteq B(x_0,
\rho)$, thus $(x_1, \alpha\rho)$ is a valid response to $(x_0, \rho)$
in Schmidt's $(\alpha, \beta)$ game.
Now given a partial play with centers $\left\{x_k : k \leq
2n\right\}$, continue by induction to generate $x_{2n+1}$ by
considering $(x_{2n+1}', (\alpha'\beta')^n \alpha'\rho')=
\tau\left( \left\{(x_k', r_k) \colon k \leq
2n\right\}\right)$ where
for each $1\leq k\leq n$, $x_{2k-1}'$ is given by $\tau$ and
$x_{2k}'\in D\cap B(x_{2k}, \epsilon_k)$. Again by
the definition of $\epsilon_n$ and (\ref{rhoprime}), $B(x_{2n+1},
(\alpha\beta)^n \alpha\rho) \subseteq B(x_{2n}, (\alpha\beta)^n\rho)$.
We have defined a strategy for II in Schmidt's $(\alpha, \beta)$ game
which has the property that if a run is compatible with this
strategy with centers $\left\{x_k \colon k \in \omega\right\}$ then there
is a corresponding run compatible with $\tau$ with centers
$\left\{x_k' \colon k \in \omega\right\}$ such that for all $k$,
$x_{2k+1}=x_{2k+1}'$, so that $ \lim_{n \rightarrow
\infty} x_n' = \lim_{n \rightarrow \infty} x_n$ and so since
$\tau$ is a winning strategy in the rational Schmidt's $(\alpha',
\beta')$ game, $\lim_{n \rightarrow \infty} x_n \in
T$. So the strategy we have constructed is winning in Schmidt's
$(\alpha, \beta)$ game.
\end{proof}
As a consequence we have the following theorem.
\begin{theorem}[$\mathsf{AD}$]
\label{hyperbolathm} Let $(X, d)$ be a Polish space. Let $T \subseteq
X$. Let $p \in (0, 1)$, then there is at most one point $(\alpha,
\beta) \in (0, 1)^2$ with $\alpha\beta=p$ at which Schmidt's $(\alpha,
\beta)$ game for $T$ is not determined.
\end{theorem}
\begin{proof}
Suppose that Schmidt's $(\alpha, \beta)$ game for $T$ is not determined,
where $\alpha\beta=p$. Let $\alpha_1 <\alpha <\alpha_2$ and $\beta_1 >
\beta > \beta_2$ with $\alpha_1 \beta_1 = \alpha \beta=\alpha_2
\beta_2$. Note that by Theorem~\ref{lemma1} part (1), II cannot have
a winning strategy in the rational Schmidt's $(\alpha_2, \beta_2)$ game,
since II does not have a winning strategy in Schmidt's
$(\alpha, \beta)$ game by assumption. This means that I must
have a winning strategy in the rational Schmidt's $(\alpha_2, \beta_2)$ game
for any such $(\alpha_2, \beta_2)$ (by $\mathsf{AD}$) and thus by
Theorem~\ref{lemma1} part (2), I wins Schmidt's $(\gamma, \delta)$ game
for any $(\gamma, \delta) \in (0, 1)^2$ with $\gamma\delta=p$ and
$\alpha<\gamma$. By a symmetric argument, I has no winning strategy
in the rational Schmidt's $(\alpha_1, \beta_1)$ game, so II must
have a winning strategy in Schmidt's $(\gamma, \delta)$ game for
any $(\gamma, \delta) \in (0, 1)^2$ with $\gamma\delta=p$ and
$\gamma<\alpha$.
\end{proof}
We next consider the variation of Schmidt's game where we restrict the
players to making non-tangent moves. We consider a general Polish space $(X,d)$.
\begin{definition}
We say the ball $B(x_{n+1}, \rho_{n+1})$ is \emph{tangent} to the ball $B(x_n, \rho_n)$ if
$\rho_{n+1} + d(x_n, x_{n+1}) = \rho_n$.
\end{definition}
In the {\em non-tangent} Schmidt's $(\alpha,\beta,\rho)$ game with target set $T \subseteq X$,
a rule of the game is that each player must play a nested ball of the appropriate radius, as in Schmidt's
game, but that ball must not be tangent to the previous ball. Note that the non-tangent variation
of Schmidt's game is still an intersection game, and the rule set $R$ is still Borel.
We will show that the ``simple strategy'' condition of Theorem~\ref{detthm} is also
satisfied, and so the non-tangent Schmidt's game is determined from $\mathsf{AD}$. The proof of this theorem is
similar to that of Theorem~\ref{thm:schmidtdet}. It is clear that the rules of this game are positional, so it will suffice to check the other hypotheses of Theorem~\ref{detthm}.
\begin{theorem}[$\mathsf{AD}$]\label{thm:tangent}
Let $(X,d)$ be a Polish space, and let $\alpha,\beta \in (0,1)$, $\rho \in \R_{>0}$, and $T\subseteq X$. Then
the non-tangent $(\alpha,\beta,\rho)$ Schmidt's game with target set $T$ is determined.
\end{theorem}
\begin{proof}
We will show that if I (or II) has a winning strategy in the non-tangent $(\alpha,\beta,\rho)$ Schmidt
game, then I (or II) has a simple Borel winning strategy (in the sense of
Definition~\ref{simple_one_round}), thus by Theorem~\ref{detthm}, the result follows.
Without loss of generality, say II has a winning strategy $\Sigma$ in the non-tangent
$(\alpha,\beta,\rho)$ Schmidt's game. We will define a simple Borel strategy $\tau$ for II from $\Sigma$.
Suppose I makes first move $B(x_0,\rho)$,
and $\Sigma$ responds with $B(x_1,\alpha\rho)$, which is not tangent to $B(x_0,\rho)$.
Let $\epsilon= \rho(1-\alpha)-d(x_0,x_1)>0$. If $d(x'_0,x_0) <\epsilon$
and I instead plays $B(x'_0,\rho)$, then $B(x_1,\alpha\rho)$ is still a valid response
for II. In other words, for each $x_0$, there is an open ball $U$ about $x_0$ of some radius, for which
any $x'_0 \in U$ has the property that the response by $\Sigma$ to $(x_0, \rho)$ is also a legal
response to $(x'_0, \rho)$. Let $\mathcal{C}$ be the collection of all such open balls $U$.
Then $\mathcal{C}$ is an open cover of $X$, and since $X$ is Polish, it is Lindel\"of, and thus
$\mathcal{C}$ has a countable subcover $\mathcal{C}' = \setof{U_{z_n}}_{n \in \omega}$. The first round of the
simple Borel strategy $\tau$ is given by $(A_n,y_n)$ where
$A_n =\{ (x_0,\rho)\colon x_0 \in U_{z_n}\setminus \bigcup_{m<n} U_{z_m} \}$ and
$y_n=\Sigma(z_n,\rho)$. Clearly $(A_n,y_n)$ is a simple one-round Borel strategy
which follows the rules $R$ of the non-tangent $(\alpha,\beta,\rho)$ Schmidt's game.
This defines the first round of $\tau$.
Using $\mathsf{DC}$, we continue inductively to define each subsequent round
of $\tau$ in a similar manner.
To see that $\tau$ is a winning strategy for II, simply note that for any run
of $\tau$ following the rules there is a run of $\Sigma$ producing the same
point of intersection.
\end{proof}
\section{Questions} \label{sec:questions}
In Theorem~\ref{thm:schmidtdet} we showed that $\mathsf{AD}$ suffices to prove the determinacy
of Schmidt's $(\alpha,\beta,\rho)$ game on $\R$. In Theorem~\ref{thm:r3}
we showed that $\ad^+$ does not suffice to prove the determinacy of
Schmidt's $(\alpha,\beta,\rho)$ game on $\R^n$ for $n \geq 3$. In view of these results
several natural questions arise.
First, for $n=2$ our arguments do not seem to resolve the question of the strength
of Schmidt game determinacy in either case of the $(\alpha,\beta,\rho)$ or the
$(\alpha,\beta)$ game. The proof of Theorem~\ref{thm:schmidtdet} does not immediately
apply as $\R^2$ does not have the ``Lindel\"{o}f-like'' property we used for $\R$.
On the other hand, the proof of Theorem~\ref{thm:r3} also does not seem to apply
as we don't seem to have enough freedom in $\R^2$ to code an arbitrary
instance of uniformization as we did in $\R^3$.
In fact, the method of proof of Theorem~\ref{thm:schmidtdet} of using ``simple strategies''
cannot show the determinacy of Schmidt games in $\R^2$ from $\mathsf{AD}$. This is because
while we cannot seem to code an arbitrary uniformization problem into the game,
we can code the characteristic function of an arbitrary set $A\subseteq \R$
in a way similar to the proof of Theorem~\ref{thm:r3}. We could then choose
a set $A$ not projective over the pointclass $\boldsymbol{\Gamma}$ (as in the statement of
Theorem~\ref{detthm}). Then the ``simple strategy'' hypothesis of
Theorem~\ref{detthm} will fail for this instance of the game.
So we ask:
\begin{question} \label{qa}
Does $\mathsf{AD}$ suffice to get the determinacy of either the Schmidt's $(\alpha,\beta,\rho)$
or $(\alpha,\beta)$ games on $\R^2$?
\end{question}
Although the distinction between Schmidt's $(\alpha,\beta,\rho)$ game and
Schmidt's $(\alpha,\beta)$ game seems immaterial in practical applications,
our main theorems apply to the $(\alpha,\beta,\rho)$ games only. So we ask:
\begin{question} \label{qb}
Does $\mathsf{AD}$ suffice to prove the determinacy of Schmidt's $(\alpha,\beta)$
game on $\R^n$?
\end{question}
Also interesting is the converse question of whether the determinacy of
Schmidt's game (either variation) implies determinacy axioms. In
\cite{Freiling} it is shown that the determinacy of Banach games
(which are similar in spirit to Schmidt games) implies $\mathsf{AD}$.
Here we do not have a corresponding result for $\R^n$. We note though
that if $\alpha=\beta=\frac{1}{2}$ and $\rho=\frac{1}{2}$, then the
determinacy of Schmidt's $(\alpha, \beta, \rho)$ game on $X=\ww$ with the standard metric
$d(x,y)= \frac{1}{2^{n+1}}$ where $n$ is least so that $x(n)\neq y(n)$,
gives $\mathsf{AD}$. So we ask:
\begin{question} \label{qc}
Does the determinacy of Schmidt's $(\alpha,\beta,\rho)$ (or $(\alpha,\beta)$)
game on $\R^n$ imply $\mathsf{AD}$? If $n \geq 3$, does Schmidt determinacy
imply $\ad_{\R}$?
\end{question}
A related line of questioning is to ask what hypotheses are needed to get the determinacy of
Schmidt's game for restricted classes of target sets. For example, while the determinacy of
the Banach-Mazur game for $\boldsymbol{\Sigma}^1_1$ (that is, analytic) target sets is a theorem of just $\mathsf{ZF}$, the corresponding
situation for Schmidt's game is not clear. So we ask:
\begin{question} \label{qd}
Does $\mathsf{ZF}+\mathsf{DC}$ suffice to prove the determinacy of Schmidt's game in $\R^n$
for $\boldsymbol{\Sigma}^1_1$ target sets?
\end{question}
In view of the results of this paper, it is possible that the answer to Question~\ref{qd}
depends on $n$. We can extend the class of target sets from the analytic sets
to the more general class of Suslin, co-Suslin sets. So we ask:
\begin{question} \label{qe}
Does $\mathsf{AD}$ suffice to prove the determinacy of Schmidt's game in $\R^n$
for Suslin, co-Suslin target sets?
\end{question}
Again, it is possible the answer to Question~\ref{qe} depends on $n$.
Finally, it is reasonable to ask the same questions of this paper for other
real games which also have practical application to number theory and related
areas. Important examples include McMullen's ``strong'' and ``absolute'' variations of
Schmidt's game \cite{McMullen_absolute_winning}. These are also clearly intersection games,
so the question is whether the simple strategy hypothesis of
Theorem~\ref{detthm} applies.
\bibliographystyle{amsplain}
\section*{Acknowledgments}
We thank S.~Singh and J.~Ritt for supplying the mouse gastrointestinal samples. We thank K.~Calabro for helping develop the Monte Carlo simulation code. We thank all the members of the Biomicroscopy Lab for their helpful conversations and careful review of this manuscript. This work was supported by an NIH grant R01-EB010059.
\section*{Author Contributions}
T.N.F., K.K.C. and J.M. conceived and developed the technique. T.N.F. built the setup and acquired the data. T.N.F. and J.M. wrote the manuscript. J.M. supervised the project.
\section*{Competing Financial Interests}
The authors declare no competing financial interests.
\section*{ONLINE METHODS}
\subsection*{Hardware setup}
White light from two LEDs (Luxeon Star MR-WC310-20s) was coupled into optical fibers (Thorlabs BFL48-1000; 0.48~NA; $1000\um$ core) using aspheric condenser lenses (Thorlabs ACL5040-A). Illumination light was launched by the fibers into the sample ($25\mW$ per channel at the fiber output), where it was redirected through the focal plane by multiple scattering and collected by a micro-objective (Mauna-Kea Technologies; $2.6\mm$ diameter; $1\times$ or $2.5\times$ magnification; $60\um$ working distance; water-immersion; 0.8~NA) coupled to a coherent imaging fiber bundle (30,000~cores; $600\um$ active area). The separation distance between the fiber and the micro-objective probe was approximately $1.8\mm$. The proximal face of the fiber bundle was imaged with standard microscope optics (Olympus Plan $10\times$ 0.48~NA air objective, Linos AC $f=200\mm$ tube lens; $4f$ configuration) and recorded with a digital camera (PCO Pixelfly USB; 14-bit; $2\times2$ binning; 35~fps; $1-5\ms$ exposure time per illumination direction). The camera was operated in double shutter mode to reduce the inter-frame delay between exposures ($200\us$), minimizing motion artifacts \cite{Ford2012}. Illumination power delivered by the left and right optical fibers was triggered (Thorlabs LEDD1B) to overlap with the first and second frame in each image pair, respectively. Frame rate was limited by the camera readout time. Image acquisition and display were performed using custom-written software (National Instruments LabVIEW~11.0). Illumination gating and camera exposure were synchronously controlled using a data acquisition card (National Instruments PCI-6221).
\subsection*{Image processing}
A preprocessing routine described previously \cite{Ford2012} was first used to correct for the quasi-periodic sampling pattern imparted by the fiber bundle cores. Each raw image was then normalized by its respective low-pass filtered version (Gaussian filter kernel with $\sigma=80\pix$) to correct for non-uniform illumination profiles and thus ``flatten'' the images. The two normalized images were then either added or subtracted to produce absorption-only or phase gradient-only images, respectively (\textbf{Supplementary Fig.~\ref{fig:rawAddSub}}). Image processing was performed with a graphics processing unit (NVIDIA GTX280) using custom-written software written in CUDA-C \cite{CUDA}.
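Schematically, writing $I_L$ and $I_R$ for the raw frames recorded under left and right illumination and $\langle\,\cdot\,\rangle$ for the Gaussian low-pass filter (our notation, introduced here only to summarize the procedure), the two output channels are $I_\mathrm{abs} = I_L/\langle I_L\rangle + I_R/\langle I_R\rangle$ and $I_\mathrm{phase} = I_L/\langle I_L\rangle - I_R/\langle I_R\rangle$.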
\subsection*{Monte Carlo simulations}
CUDAMCML \cite{Alerstam2009}, a modification of MCML \cite{Wang1995} enabling execution on graphics processing units (GPUs), was used to perform the simulations. CUDAMCML was further modified to execute on a cluster of CUDA-enabled workstations \cite{Calabro2012}. A semi-infinite slab geometry was modeled with tissue optical parameters $n_\mathrm{tissue}=1.37$, $l_s=150\um$, $l_s^*=3000\um$ and $g=0.95$ ($n$ is index of refraction, $l_s$ and $l_s^*$ are the scattering and transport mean free path lengths, respectively, and $g=1-l_s/l_s^*$ is the anisotropy factor). Illumination fiber parameters were $n_\mathrm{fiber}=1.37$, $\mathrm{diameter}=1000\um$ and numerical aperture, $\mathrm{NA}=0.48$. Micro-objective probe parameters were $n_\mathrm{probe}=1.37$, $\mathrm{diameter}=240\um$ and $\mathrm{NA}=0.8$. Fiber-probe separation was $d=1818\um$. A Henyey-Greenstein phase function was used to characterize photon scattering events \cite{Henyey1941}. $10^8$~photons were processed to estimate the distribution of exit angles of the detected photons as a function of fiber-probe separation (\textbf{Supplementary Fig.~\ref{fig:angularDist}}). Both the total detected intensity and median exit angle were observed to decrease with increasing fiber-probe separation. $10^5$~photons were processed to estimate photon path density as a function of lateral position and depth, revealing the so-called photon banana (\textbf{Fig.~\ref{fig:setup}b}).
\subsection*{Tissue phantom preparation}
The scattering tissue phantom was prepared by heating a $30\mL$ solution of $2\%$~(w/v) agarose (Sigma A5093-100G), $5\%\,2\um$ diameter polystyrene beads (Polysciences 19814-15), and $0.1\%\,45\um$ diameter polystyrene beads (Polysciences 07314-5) in $\mathrm{H_2O}$ to $75\degC$ on a hotplate, followed by pouring the mixture into a $60\mm\times15\mm$ cell culture dish (Corning 430166). The phantom was covered with paraffin film and left to cool to room temperature before imaging. The optical properties of the bulk medium were $l_s=74\um$, $l_s^*=1040\um$ and $g=0.93$, as estimated using Mie theory. The indices of refraction of hydrated agarose gel and polystyrene beads were $n=1.35$ and $n=1.59$, respectively \cite{Pogue2006}. Imaging was performed through water.
\subsection*{Chick embryo preparation}
Fertilized \emph{Gallus gallus} eggs (Carolina Biological Supply Co. 139290) were stored in an incubator at $37\degC$ and $50\%$ humidity, being turned every $7\hours$ to prevent fusion of the chorioallantoic membrane (CAM) with the shell membrane. Imaging was performed at embryonic day 11. A $1\cm$ diameter region of the shell and shell membrane was removed exposing the embryo and CAM. A layer of $37\degC$ saline was dripped over the preparation before imaging \emph{in ovo} with the OBM probe. Following imaging, the embryos were euthanized by hypothermia by storing the eggs at $-15\degC$.
\subsection*{Mouse tissue preparation}
Six week old C57~black~6 mice were euthanized by $\mathrm{CO_2}$ inhalation and the gastrointestinal tract was immediately excised and washed with $4\%$ paraformaldehyde. The colon and small intestine were cut longitudinally, unrolled and cleared of fecal matter. The preparations were stored in $4\%$ paraformaldehyde for several days. Before \emph{ex vivo} imaging, the tissues were pinned to a silicone elastomer slab (Sylgard\textregistered~184, Corning) to expose the apical surfaces. Residual mucus and fecal matter were gently washed away with saline before imaging. The animals used in this study were treated in accordance with the guidelines of the Institutional Animal Care and Use Committee of Boston University.
\section{Introduction}
In the coming years the production and decay of Higgs bosons will play a
central role in many analyses performed at the Large Hadron Collider (LHC). A
crucial ingredient is often provided by the matching coefficients which govern
the coupling of Higgs bosons to gluons. The corresponding effective Lagrange
density is valid in the heavy top quark limit which provides a good
approximation for Higgs boson decays to gluons and the total production cross
section of a single Higgs boson. For less inclusive processes the
applicability of the effective theory approach is limited to parts of the phase
space. This is also true for Higgs boson pair production. In this paper we
perform for the first time a direct calculation of the matching coefficients
for the coupling of one and two Higgs bosons to gluons. Our results are
expressed in terms of general SU($N_c$) colour factors. For $N_c=3$ they can be
compared to expressions obtained from indirect methods where the matching
coefficients are obtained from low-energy theorems (LETs). The essential
ingredient into the LETs is the QCD decoupling constant for the strong
coupling. For this reason we re-visit the calculation of all four-loop
decoupling constants and provide results for a generic SU($N_c$) gauge group.
The remainder of this paper is organized as follows: In Section~\ref{sec:tech}
we fix our notation and introduce the decoupling constants and the effective
Lagrange density for the Higgs-gluon coupling. In Sections~\ref{sec:dec}
and~\ref{sec:higgs} we present our results for the decoupling relations and
Wilson coefficients, respectively. We discuss in detail the extraction of the
coupling of two Higgs bosons to gluons ($C_{HH}$) and in particular the
subtleties in the matching procedure due to the renormalization of products of
operators. Our findings are summarized in Section~\ref{sec:conclusions}. In
the Appendix we collect analytic results for the decoupling constants.
\section{\label{sec:tech}Technicalities}
For convenience of the reader and to fix our notation we repeat in this
Section the definition of the decoupling constants in QCD and the
Wilson coefficients in the effective Lagrange density describing Higgs-gluon
couplings. For a detailed discussion we refer to Ref.~\cite{Chetyrkin:1997un}.
We will work in the $\overline{\mathrm{MS}}$ scheme throughout this paper,
except for the heavy quark mass which we renormalize both in the
$\overline{\mathrm{MS}}$ and on-shell scheme.
The $\overline{\rm MS}$ counterterms are needed up to four-loop order (see,
e.g., Ref.~\cite{Chetyrkin:2004mf}) and the renormalization constant for the
$\overline{\rm MS}$ to on-shell conversion for the heavy quark mass to three
loops~\cite{Chetyrkin:1999ys,Chetyrkin:1999qi,Melnikov:2000qh,Marquard:2007uj}.
\subsection{\label{sub::dec}Decoupling constants}
The bare and renormalized parameters and fields of the QCD Lagrangian are
connected by renormalization constants defined through
\begin{align}
&&g_s^0 = \mu^{\epsilon}Z_g g_s, &&&m_q^0 = Z_m m_q, &&\xi^0 - 1 = Z_3(\xi-1)\,,
\nonumber\\
&&A_\mu^{0,a} = \sqrt{Z_3}A_\mu^a, &&&\psi^0_q = \sqrt{Z_2}\psi_q,&&c^{0,a} =\sqrt{\tilde{Z}_3}c^a
\,.
\label{eq::renconstants}
\end{align}
Here $g_s$ is the QCD gauge coupling with $\alpha_s = g_s^2/(4\pi)$ being the
strong coupling constant, $\mu$ is the renormalization scale,
$D = 4 - 2\epsilon$ the space-time dimension and $\xi$ the gauge parameter
with $\xi = 0$ corresponding to Feynman and $\xi=1$ to Landau gauge. The gluon
field is given by $A^a_\mu$, $\psi_q$ is the quark field of flavour $q$ with
mass $m_q$ and $c^a$ is the ghost field. Bare quantities are denoted by the
superscript ``0''. The renormalization constants $Z_X$ are needed up to
$\mathcal{O}(\alpha_s^4)$~\cite{Chetyrkin:1997dh,Vermaseren:1997fq,vanRitbergen:1997va,Czakon:2004bu,Chetyrkin:2004mf}
for our purposes.
In the following we assume a strong hierarchy in the quark masses and
integrate out a heavy quark with mass $m_h$ from QCD with $n_f$ active quark
flavours.\footnote{The simultaneous decoupling of two heavy quarks with
different masses is discussed in Ref.~\cite{Grozin:2011nk} up to three-loop
order.} The resulting effective Lagrangian has the same form as the
original QCD Lagrangian. However, it only has $n_l=n_f-1$ active quark
flavours and thus only depends on the light degrees of freedom. The
parameters and fields in the effective $n_l$-flavour and full $n_f$-flavour
theory are related via the so-called (bare) decoupling constants
\begin{align}
&&g_s^{0\,(n_l)} = \zeta^0_g g^{0\,(n_f)}_s, &&&m_q^{0\,(n_l)} = \zeta^{0\,(n_f)}_m m^0_q,
&&\xi^{0\,(n_l)} - 1 = \zeta^0_3(\xi^{0\,(n_f)}-1),\nonumber\\
&&A_\mu^{0\,(n_l)} = \sqrt{\zeta^0_3}A_\mu^{0\,(n_f)}, &&&\psi^{0\,(n_l)}_q = \sqrt{\zeta^0_2}\psi_q^{0\,(n_f)},
&&c^{0\,(n_l)} = \sqrt{\tilde{\zeta}^0_3}c^{0\,(n_f)}\,,
\label{eq::deccoeff}
\end{align}
where the superscripts denote the number of active quark flavours. For
simplicity we refrain from showing colour indices for the fields. The different
decoupling constants $\zeta_X$ contain the radiative effects of the heavy
quark and can be computed in a perturbative series in $\alpha_s$.
One obtains the renormalized decoupling constants by replacing the bare
parameters and fields in Eq.~\eqref{eq::deccoeff} by renormalized counterparts
using Eq.~(\ref{eq::renconstants}). As an example, consider the gauge coupling
where the renormalized decoupling constant is given by
\begin{align}
g_s^{(n_l)} = \frac{Z^{(n_f)}_g}{Z^{(n_l)}_g} \zeta^0_g g^{(n_f)}_s =
\zeta_g g^{(n_f)}_s
\,.
\label{eq::zetags}
\end{align}
Note that $Z^{(n_l)}_g$ depends on $g_s^{(n_l)}$ which has to be transformed
to $g_s^{(n_f)}$ using Eq.~(\ref{eq::zetags}). Thus, it is natural to apply
Eq.~(\ref{eq::zetags}) iteratively to arrive at four
loops. Note that the loop corrections to the renormalization constants only
contain poles whereas the decoupling constants also contain positive powers in
$\epsilon$. Thus, $Z^{(n_l)}_g$ expressed in terms of $g_s^{(n_f)}$ also
contains positive powers in $\epsilon$.
For later convenience we define the decoupling constant for the strong
coupling constant $\alpha_s$ as
\begin{align}
\zeta_{\alpha_s} = \zeta_g^2\,.
\end{align}
\subsection{\label{sub::wil}Wilson coefficients for Higgs boson production and decay}
In the Standard Model, the coupling of a Higgs boson to gluons is mainly mediated by top
quark loops and thus in the following we have $n_f = 6$ and $n_l = 5$ for the
full and effective theory, respectively. The effective Lagrange density which
describes the coupling of one or two Higgs boson to gluons is obtained after
integrating out the top quark and is given by
\begin{align}
\mathcal{L}_\mathrm{eff} = -\frac{H}{v}C_H^0\mathcal{O}_1^0
+ \frac{1}{2}\left(\frac{H}{v}\right)^2C_{HH}^0\mathcal{O}_1^0\,,
\label{eq::leff}
\end{align}
where $\mathcal{O}_1 = G_{\mu\nu}^a G^{\mu\nu,a}/4$, with $G_{\mu\nu}^a$ being
the gluon field strength tensor, is the only physical operator one has to
consider. It is defined in the $n_l=5$-flavour effective theory. The Wilson
coefficients $C_H^0$ and $C_{HH}^0$ comprise the radiative effects of the top
quark, which is in analogy to the decoupling constants introduced in
Eq.~\eqref{eq::deccoeff}.
The renormalization of $\mathcal{O}_1$ has been discussed in detail in
Ref.~\cite{Spiridonov:1984br} (see also Ref.~\cite{Zoller:2016iam}). In fact,
the renormalization constant $Z_{\mathcal{O}_1}$ can be expressed through the
QCD beta function through all orders in perturbation theory~\cite{Spiridonov:1984br}
\begin{align}
Z_{\mathcal{O}_1} = \frac{1}{ 1 - \beta(\alpha_s^{(5)})/\epsilon }\,,
\end{align}
with
\begin{eqnarray}
\beta(\alpha_s) &=& - \left(\frac{\alpha_s}{\pi}\right)^2 \sum_{n\ge0} \beta_n
\left(\frac{\alpha_s}{\pi}\right)^n \,,\nonumber\\
\beta_0 &=& \frac{1}{4}\left( \frac{11}{3}C_A - \frac{4}{3}T_Fn_f \right)\,,\nonumber\\
\beta_1 &=& \frac{1}{16}\left( \frac{34}{3}C_A^2 - \frac{20}{3}C_A T_F n_f -
4 C_F T_F n_f \right)\,,\nonumber\\
\beta_2 &=& \frac{1}{64}\left(
\frac{2857}{54}C_A^3 - \frac{1415}{27}C^2_A T_F n_f -
\frac{205}{9}C_A C_F T_F n_f + 2 C_F^2 T_F n_f
\right.\nonumber\\
&\phantom{=}& \left.+ \frac{158}{27}C_A T^2_F n^2_f + \frac{44}{9}C_F T^2_F n^2_f\right)\,.
\end{eqnarray}
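As a quick check of conventions, for $N_c=3$, i.e. $C_A=3$, $C_F=4/3$ and $T_F=1/2$, the
first coefficient reduces to the familiar result
$\beta_0 = \frac{1}{4}\left(11-\frac{2}{3}n_f\right) = \frac{33-2n_f}{12}$.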
$Z_{\mathcal{O}_1}$ can be used to obtain the
renormalized Wilson coefficients via the relation
\begin{align}
C_{X}^0\mathcal{O}_1^0 = \frac{C_{X}^0}{Z_{\mathcal{O}_1}}\,Z_{\mathcal{O}_1}\mathcal{O}_1^0 = C_{X}\mathcal{O}_1\,,
\end{align}
with $X\in\{H,HH\}$.
\subsection{Low energy theorems}
There is a close connection between the decoupling constants from
Subsection~\ref{sub::dec} and the Wilson coefficients from~\ref{sub::wil}
which is established by the so-called LETs.
In \cite{Chetyrkin:1997un} a LET relating $\zeta_{\alpha_s}$ and $C_H$ has been derived
\begin{align}
C_H = -\frac{m_t}{\zeta_{\alpha_s}}\frac{\partial}{\partial m_t}\zeta_{\alpha_s}
\label{eq::let_ch}
\end{align}
where we adapted the prefactors to match our conventions. Beyond three loops
$C_H$ has only been obtained with the help of Eq.~(\ref{eq::let_ch}). In this
work we perform an explicit calculation of $C_H$ for general SU($N_c$) colour
factors.
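As a lowest-order illustration of Eq.~(\ref{eq::let_ch}), using the well-known one-loop
term of the decoupling constant,
$\zeta_{\alpha_s} = 1 - \frac{\alpha_s}{\pi}\frac{T_F}{3}\ln\frac{\mu^2}{m_t^2}
+ \mathcal{O}(\alpha_s^2)$, one finds
\begin{align}
C_H &= -\frac{m_t}{\zeta_{\alpha_s}}\frac{\partial \zeta_{\alpha_s}}{\partial m_t}
= -\frac{2T_F}{3}\frac{\alpha_s}{\pi} + \mathcal{O}(\alpha_s^2)\,,
\end{align}
which for $T_F=1/2$ reproduces the familiar leading-order value $C_H=-\alpha_s/(3\pi)$.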
Recently, in Ref.~\cite{Spira:2016zna} a LET has been proposed for $C_{HH}$
which reads
\begin{align}
C_{HH} =
\frac{m^2_t}{\zeta_{\alpha_s}}\frac{\partial^2}{\partial
m_t^2}\zeta_{\alpha_s}
- 2\left(\frac{m_t}{\zeta_{\alpha_s}}\frac{\partial}{\partial
m_t}\zeta_{\alpha_s}\right)^2
\,.
\label{eq::let_chh}
\end{align}
It provides the correct result for $C_{HH}$ at three-loop
order~\cite{Grigo:2014jma};
in Section~\ref{sec:higgs} we perform an explicit calculation of $C_{HH}$ and
show that Eq.~(\ref{eq::let_chh}) also works at four loops.
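At lowest order this is easily verified: with the one-loop $\zeta_{\alpha_s}$ quoted above,
the first term of Eq.~(\ref{eq::let_chh}) yields $-\frac{2T_F}{3}\frac{\alpha_s}{\pi}$ while
the second term only contributes at $\mathcal{O}(\alpha_s^2)$, so that
$C_{HH}=C_H+\mathcal{O}(\alpha_s^2)$, in accordance with the equality of the two Wilson
coefficients through two loops mentioned below.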
Note that in QCD $\zeta_{\alpha_s}$ depends on $m_t$ only via logarithms of
the form $\log(\mu^2/m_t^2)$. Thus, it is possible to reconstruct the $m_t$
dependence at $(n+1)$-loop order from the $n$-loop result of
$\zeta_{\alpha_s}$ with the help of renormalization group equations.
Using the LETs this immediately leads to the $(n+1)$-loop results for $C_H$
and $C_{HH}$.
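This reconstruction can be made explicit. In a schematic normalization where
$\mu^2\,{\rm d}a/{\rm d}\mu^2=\beta(a)$ for $a=\alpha_s/\pi$ in either theory (a convention
chosen only for this illustration), differentiating $a^{(n_l)}=\zeta_{\alpha_s}\,a^{(n_f)}$
with respect to $\ln\mu^2$ gives
\begin{align}
\mu^2\frac{{\rm d}}{{\rm d}\mu^2}\,\zeta_{\alpha_s}
&= \frac{\beta^{(n_l)}\big(\zeta_{\alpha_s}\,a^{(n_f)}\big)
- \zeta_{\alpha_s}\,\beta^{(n_f)}\big(a^{(n_f)}\big)}{a^{(n_f)}}\,,
\end{align}
which determines the explicit $\log(\mu^2/m_t^2)$ dependence of the $(n{+}1)$-loop term
from the $n$-loop result and the beta functions.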
\subsection{Computational setup}
For our calculation we use a well tested, automated setup, starting with the
generation of Feynman diagrams using \verb|qgraf|~\cite{Nogueira:1991ex}. The
output is processed by \verb|q2e| and
\verb|exp|~\cite{Harlander:1997zb,Seidensticker:1999bb,q2eexp}, which generate
\verb|FORM|~\cite{Ruijl:2017dtg} code for the amplitudes and map
them onto individual integral families. We then compute the colour factors of the
diagrams using \verb|color|~\cite{vanRitbergen:1998pn} and combine amplitudes
with the same colour factors and integral families to so-called superdiagrams
so that we can process them together.
After processing Lorentz structures and expanding in the external momenta, we
are left with single-scale tensor tadpole integrals. We perform a tensor
decomposition and reduce the remaining, scalar integrals to master integrals,
using \verb|LiteRed| \cite{Lee:2012cn,Lee:2013mka} and
\verb|FIRE5|~\cite{Smirnov:2014hma}. With the help of the \verb|FindRules| command of
\verb|FIRE5| we identify equivalent master integrals from different integral families.
The master integrals are all known to sufficiently high order in $\epsilon$
\cite{Lee:2010hs} (see also \cite{Schroder:2005va,Chetyrkin:2006dh}). The
missing $\epsilon^3$ term of the integral $J_{6,2}$ (in the notation of
\cite{Lee:2010hs}) was provided in~\cite{Lee:eps3}.
As a cross-check, we also computed the $ggH$ amplitude and the
decoupling constants through three loops using
\verb|MATAD|~\cite{Steinhauser:2000ry}.
\section{\label{sec:dec}Calculation of decoupling constants}
We aim for the calculation of all QCD decoupling constants up to four-loop
order with general SU($N_c$) colour factors. They are obtained from
$\zeta_3^0$, $\tilde{\zeta}_3^0$, $\zeta_2^0$ and $\zeta_m^0$
as introduced in Eq.~(\ref{eq::deccoeff}) and the decoupling constant
of the ghost-gluon vertex, $\tilde{\zeta}_1^0$.
The decoupling constant for the gauge coupling is then given by
\begin{align}
\zeta_g^0 = \frac{\tilde{\zeta}^0_1}{\tilde{\zeta}^0_3\sqrt{\zeta^0_3}}\,.
\end{align}
The remaining decoupling constants for the gluon-quark vertex ($\zeta_1^0$),
the three-gluon vertex ($\zeta_{3g}^0$) and four-gluon-vertex ($\zeta_{4g}^0$)
are obtained with the help of the Ward identities
\begin{eqnarray}
\zeta_1^0 &=& \zeta_g^0 \zeta_2^0 \sqrt{\zeta^0_3}\,,\nonumber\\
\zeta_{3g}^0 &=& \zeta_g^0 (\zeta^0_3)^{3/2}\,,\nonumber\\
\zeta_{4g}^0 &=& (\zeta_g^0)^2 (\zeta^0_3)^2\,.
\end{eqnarray}
The bare decoupling constants $\zeta_3^0$, $\tilde{\zeta}_3^0$, $\zeta_2^0$
and $\zeta_m^0$ are obtained from the hard
part of the gluon and ghost vacuum polarizations $\Pi_G(p^2)$ and
$\Pi_c(p^2)$, as well as the vector and scalar parts of the light quark
self-energy $\Sigma_V(p^2)$ and $\Sigma_S(p^2)$ as~\cite{Chetyrkin:1997un}
\begin{align}
\zeta_3^0 = 1 + \Pi^{0,h}_G(0)~,\nonumber\\
\tilde{\zeta}_3^0 = 1 + \Pi^{0,h}_c(0)~,\nonumber\\
\zeta_2^0 = 1 + \Sigma^{0,h}_V(0)~,\nonumber\\
\zeta_m^0 = \frac{1 - \Sigma^{0,h}_S(0)}{1 + \Sigma^{0,h}_V(0)}\,.
\label{eq::zeta}
\end{align}
To obtain the decoupling constants we only need the leading term in the
limit $m_h \rightarrow \infty$. Thus, we can Taylor-expand in the external
momenta and set them to zero after factoring out the tree-level tensor
structure. This reduces the integrals to single-scale tadpole integrals. In
analogy to Eq.~(\ref{eq::zeta}), $\tilde{\zeta}_1^0$ is obtained from the
ghost-gluon vertex
\begin{align}
\tilde{\zeta}_1^0 = 1 + \Gamma_{G\overline{c}c}(p,q)\Big|_{p,q\to 0}\,.
\label{eq::zeta1til}
\end{align}
where $p$ and $q$ are the four-momenta of the ghost and gluon, respectively. After projecting out
the tree-level contribution both $p$ and $q$ are set to zero.
\begin{table}[t]
\begin{center}
\begin{tabular}[t]{c||r|r|r|r}
\# loops & 1 & 2 & 3 & 4 \\
\hline
$\Pi^{0,h}_G$ & 1 & 7 & 189 & 6\,245 \\
$\Pi^{0,h}_c$ & --- & 1 & 25 & 765 \\
$\Sigma^{0,h}_{V/S}$ & --- & 1 & 25 & 765 \\
$\Gamma_{G\overline{c}c}$ & --- & 5 & 228& 10\,118 \\
\end{tabular}
\caption{\label{tbl::numdia_zeta}Number of diagrams needed for computing
the decoupling constants up to four loops.}
\end{center}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.22\textwidth]{dias/Pi_g4l.eps}
\includegraphics[width=0.22\textwidth]{dias/Pi_c4l.eps}
\includegraphics[width=0.22\textwidth]{dias/qq4l1.eps}
\includegraphics[width=0.22\textwidth]{dias/gcc4l2.eps}
\end{center}
\caption{\label{fig::zeta} Sample four-loop diagrams contributing to the
decoupling constants defined in Eqs.~(\ref{eq::zeta})
and~(\ref{eq::zeta1til}). Solid, curly and dashed lines refer to
fermions, gluons and ghosts, respectively.}
\end{figure}
In Tab.~\ref{tbl::numdia_zeta} we present the number of diagrams generated by
\verb|qgraf| for the individual Green functions. Sample four-loop Feynman diagrams
are shown in Fig.~\ref{fig::zeta}.
We perform the calculation keeping the full dependence on the gauge parameter
$\xi$ which drops out for $\zeta_{\alpha_s}^0$ and $\zeta_m^0$, as expected on
general grounds. All other decoupling constants have an explicit $\xi$
dependence. At three-loop order our results agree with those of
Ref.~\cite{Chetyrkin:1997un} and at four loops we reproduce the results for
$\zeta_{\alpha_s}$ from Refs.~\cite{Schroder:2005hy,Chetyrkin:2005ia} and
$\zeta_m$ from~\cite{Liu:2015fxa} after specifying $N_c=3$.
The decoupling constants $\zeta_{\alpha_s}$ and $\zeta_m$ as well as the
leading terms of the others can be found in Appendix~\ref{app:dec}. We
provide the results for all renormalized decoupling constants in computer
readable form in the ancillary files~\cite{progdata}. For convenience
we offer several options concerning the renormalization scheme of
the heavy quark ($\overline{\rm MS}$ vs. on-shell) and $\alpha_s$
($n_f$ vs. $n_l$ active flavours).
\section{\label{sec:higgs}Direct calculation of matching coefficients}
This section is devoted to the direct calculation of $C_H$ and $C_{HH}$
defined in the
effective Lagrange density in Eq.~(\ref{eq::leff}). Two-loop results for $C_H$ are
known since the beginning of the eighties~\cite{Inami:1982xt,Djouadi:1991tka}
and at three-loop order $C_H$ has been obtained for the first time from a
direct calculation of the Higgs-gluon vertex in the large-$m_t$ limit in
Ref.~\cite{Chetyrkin:1997iv} (see also Ref.~\cite{Steinhauser:2002rq}). Later
the result has been confirmed with the help of a LET derived in
Ref.~\cite{Chetyrkin:1997iv} (see also Ref.~\cite{Kramer:1996iq}). Using the
three-loop decoupling constant for $\alpha_s$, the LET in combination with the
four-loop beta function~\cite{vanRitbergen:1997va,Czakon:2004bu} even leads to
the four-loop result for $C_H$. The same reasoning has been applied in
Refs.~\cite{Schroder:2005hy,Chetyrkin:2005ia,Chetyrkin:2016uhw} to obtain the
five-loop prediction for $C_H$, where an important input is provided by (the
fermionic part of) the five-loop beta function which has been computed in
Refs.~\cite{Baikov:2016tgj,Herzog:2017ohr,Luthe:2017ttg}. To date there is no
direct calculation of $C_H$ at four loops.
For $C_{HH}$ the situation is as follows: at one- and two-loop order $C_{HH}$
and $C_H$ agree. At three-loop order a direct calculation has been performed
in Ref.~\cite{Grigo:2014jma} by matching the full to the effective theory
in Eq.~(\ref{eq::leff}). The result has been confirmed via the LET from
Ref.~\cite{Spira:2016zna}, which can be used to derive the
four-loop result for $C_{HH}$. It is one of the main aims of this paper
to perform a direct calculation of $C_{HH}$ and to confront it with the LET
result.
In the following we use the notation for the matching equations introduced in
Ref.~\cite{Grigo:2014jma}. We compute the $ggH$ and $ggHH$ amplitudes in the
limit where both the effective and the full theory are valid, i.e. for small
external momenta as compared to the top quark mass. This leads again to
single-scale vacuum integrals up to four-loop order. In the following
we use $\alpha_s \equiv \alpha_s^{(5)}(\mu)$ unless indicated otherwise.
\begin{table}[t]
\begin{center}
\begin{tabular}[t]{c||r|r|r|r}
\# loops & 1 & 2 & 3 & 4 \\
\hline
$ggH$ & 2 & 23 & 657& 23\,251 \\
$ggHH$ 1PI & 6 & 99 &3\,192& 124\,149 \\
$ggHH$ 1PR & --- & 8 & 216 & 7\,200 \\
\end{tabular}
\caption{\label{tbl::numdia_ggh_gghh}Number of diagrams needed for
computing the Higgs-gluon amplitudes up to four loops.}
\end{center}
\end{table}
\subsection{$C_H$}
The Wilson coefficient is obtained by comparing the $ggH$ amplitude in the
effective and the full theory, which leads to the following matching formula
\begin{align}
C_H Z_{\mathcal{O}_1} \mathcal{A}^\mathrm{eff}_\mathrm{LO} =
\frac{1}{\zeta_3^0}\mathcal{A}^h + {\cal O}(1/m_t)\,.
\label{eq::match_CH}
\end{align}
On the full-theory side $\mathcal{A}^h$ denotes the hard part of the
amplitude, which is obtained from a Taylor expansion in the two external
momenta. It is assumed that the top quark mass and $\alpha_s$ are renormalized
using standard counterterms up to three loops, and the factor $1/\zeta_3^0$
takes care of the non-vanishing part of the gluon wave function
renormalization. Due to our choice of the kinematic variables there are only
tree-level contributions on the effective-theory side. Furthermore, we have
the renormalization constant of the effective operator, $Z_{\mathcal{O}_1}$,
and the sought-after (renormalized) matching coefficient $C_H$, which is
obtained by dividing Eq.~(\ref{eq::match_CH}) by $Z_{\mathcal{O}_1}$. Note
that $Z_{\mathcal{O}_1}$ depends on $\alpha_s^{(5)}$ whereas the quantities on
the r.h.s. depend on $\alpha_s^{(6)}$. Before combining the various parts we
use the decoupling constant to transform the strong coupling constant to
$\alpha_s^{(5)}$. We renormalize the top quark mass in a first step in the
$\overline{\rm MS}$ scheme and transform to the on-shell scheme afterwards.
The number of diagrams generated by \verb|qgraf| for $\mathcal{A}^h$ is shown
in Tab.~\ref{tbl::numdia_ggh_gghh} and sample Feynman diagrams
are shown in Fig.~\ref{fig::gghfull}. In a first step we apply
the projector
\begin{align}
P^{\mu\nu} = \frac{1}{2-2\epsilon}\left(g^{\mu\nu}q_1\cdot q_2
- q_1^\nu q_2^\mu - q_1^\mu q_2^\nu \right)\,,
\label{eq::projch}
\end{align}
where $q_1^\mu$ and $q_2^\nu$ are the incoming four-momenta of the
external gluons with polarization vectors $\varepsilon^\mu(q_1)$ and
$\varepsilon^\nu(q_2)$. After tensor reduction we obtain the same kind of integral
families as for the decoupling constants of the previous section.
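As a simple numerical cross-check of Eq.~(\ref{eq::projch}) (our own
illustration, with arbitrarily chosen light-like momenta and $\epsilon=0$),
one can verify that the projector applied to the transverse tree-level
structure $q_1\cdot q_2 \, g^{\mu\nu} - q_1^\nu q_2^\mu$ returns
$(q_1\cdot q_2)^2$:
\begin{verbatim}
# Check at eps = 0: the projector applied to the tree-level structure
# A^{mu nu} = q1.q2 g^{mu nu} - q1^nu q2^mu gives (q1.q2)^2.
# Metric diag(+,-,-,-); q1, q2 are arbitrary light-like vectors.
import numpy as np

g  = np.diag([1.0, -1.0, -1.0, -1.0])
q1 = np.array([1.0, 0.0, 0.0, 1.0])               # q1^2 = 0
q2 = np.array([2.0, 0.0, 1.5, -np.sqrt(1.75)])    # q2^2 = 0
q12 = q1 @ g @ q2
P = (g*q12 - np.outer(q2, q1) - np.outer(q1, q2))/2.0  # 1/(2-2eps) -> 1/2
A = g*q12 - np.outer(q2, q1)
print(np.sum(P*(g @ A @ g)), q12**2)              # both ~ 11.0415
\end{verbatim}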
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.2\textwidth]{dias/ggh1l.eps}
\includegraphics[width=0.2\textwidth]{dias/ggh2l.eps}
\includegraphics[width=0.2\textwidth]{dias/ggh3l.eps}
\includegraphics[width=0.2\textwidth]{dias/ggh4l1.eps}
\\
\includegraphics[width=0.2\textwidth]{dias/ggh4l2.eps}
\includegraphics[width=0.2\textwidth]{dias/ggh4l3.eps}
\includegraphics[width=0.2\textwidth]{dias/ggh4l4.eps}
\includegraphics[width=0.2\textwidth]{dias/ggh4l5.eps}
\end{center}
\caption{\label{fig::gghfull} Sample one-, two-, three- and four-loop diagrams
contributing to the $gg \rightarrow H$ amplitude. Solid and curly lines refer to
fermions and gluons, respectively. The external Higgs boson is represented
by a dashed line.}
\end{figure}
As before, we perform the calculation for generic SU($N_c$) colour factors
and full dependence on the gauge parameter $\xi$, which drops out after
summing all contributions to $\mathcal{A}^h$.
We cast the final result for the Wilson coefficient $C_H$ in the form
\begin{align}
C_H = -\frac{2\alpha_s}{3\pi} T_F \sum_{i = 1} C_H^{(i)}\left(\frac{\alpha_s}{\pi}\right)^{i-1}\,,
\end{align}
where the $C_H^{(i)}$ are given by
\begin{align}
C_H^{(1)} &= 1~,\\
C_H^{(2)} &= \frac{5}{4} C_A - \frac{3}{4} C_F~,\\
C_H^{(3)} &= \frac{1063}{576} C_A^2 - \frac{25}{12} C_A C_F - \frac{5}{96}
C_A T_F + \frac{27}{32} C_F^2 - \frac{1}{12} C_F T_F\nonumber\\
&+ \left[\frac{7}{16} C_A^2 - \frac{11}{16} C_A
C_F\right]\ln\left(\frac{\mu^2}{M_t^2}\right) + n_l
T_F\left[-\frac{47}{144} C_A - \frac{5}{16} C_F
+ \frac{1}{2} C_F\ln\left(\frac{\mu^2}{M_t^2}\right)\right]~,\\
C_H^{(4)} &= C_A^3\left(\frac{110041}{41472} - \frac{1577}{3072} \zeta(3)\right) + C_A^2 C_F\left(- \frac{99715}{6912} + \frac{5105}{512}\zeta(3)\right)\nonumber\\
&+ C_A^2 T_F\left(- \frac{1081}{3456} + \frac{1}{384}\zeta(3)\right) + C_A C_F^2\left(\frac{2963}{384} - \frac{407}{128}\zeta(3)\right)\nonumber\\
&+ C_A C_F T_F\left(\frac{4537}{1728} - \frac{115}{64}\zeta(3)\right) + C_A T_F^2\left(\frac{2}{27} - \frac{7}{64}\zeta(3)\right) - \frac{471}{128} C_F^3\nonumber\\
&+ C_F^2 T_F\left(- \frac{5}{12} + \frac{13}{32}\zeta(3)\right) + C_F T_F^2\left(\frac{113}{432} - \frac{7}{32}\zeta(3)\right)\nonumber\\
&+ \frac{d_R^{abcd}d_A^{abcd}}{N_A T_F}\left(- \frac{2}{3} +
\frac{13}{2}\zeta(3)\right) + \frac{d_R^{abcd}d_R^{abcd}}{N_A T_F}\left(\frac{11}{12} - 2\zeta(3)\right)\nonumber\\
&+ \left[\frac{1993}{1152} C_A^3 - \frac{275}{72} C_A^2 C_F - \frac{55}{576} C_A^2 T_F + \frac{99}{64} C_A C_F^2 - \frac{11}{72} C_A C_F T_F\right]\ln\left(\frac{\mu^2}{M_t^2}\right)\nonumber\\
&+ \left[\frac{77}{192} C_A^3 - \frac{121}{192} C_A^2
C_F\right]\ln^2\left(\frac{\mu^2}{M_t^2}\right)+ n_l\frac{d_R^{abcd}d_R^{abcd}}{N_A T_F}\left(\frac{11}{6} - 4\zeta(3)\right)\nonumber\\
&+ n_l T_F\Bigg[C_A^2\left(- \frac{12421}{10368} - \frac{151}{256}\zeta(3)\right) + C_A C_F\left(\frac{9605}{2592} - \frac{1145}{384}\zeta(3)\right)\nonumber\\
&+ C_A T_F\left(\frac{7}{216} - \frac{7}{64}\zeta(3)\right)+ C_F^2\left(\frac{215}{288} + \frac{127}{96}\zeta(3)\right)+ C_F T_F\left(- \frac{29}{144} - \frac{7}{32}\zeta(3)\right)\Bigg]
\nonumber\\
&+ n_l^2 T_F^2\left[-\frac{161}{2592} C_A - \frac{677}{1296} C_F\right]
\nonumber\\
&+ n_l T_F\left[- \frac{55}{288} C_A^2 + \frac{55}{36} C_A C_F + \frac{5}{144} C_A T_F - \frac{5}{8} C_F^2 + \frac{1}{18} C_F T_F\right]\ln\left(\frac{\mu^2}{M_t^2}\right)\nonumber\\
&+ n_l^2 T_F^2\left[\frac{5}{144} C_A + \frac{1}{18} C_F\right]\ln\left(\frac{\mu^2}{M_t^2}\right)
+ n_l T_F\left[- \frac{7}{48} C_A^2 + \frac{11}{16} C_A C_F\right]\ln^2\left(\frac{\mu^2}{M_t^2}\right)\nonumber\\
&- \frac{1}{6} n_l^2 C_F T_F^2\ln^2\left(\frac{\mu^2}{M_t^2}\right)\,.
\label{eq::CH}
\end{align}
Here $\zeta(n)$ is the Riemann $\zeta$-function evaluated at $n$, $M_t$ is the
on-shell top quark mass, and the SU$(N_c)$ colour factors are given by
\begin{align}
&C_A = N_c,\qquad C_F = \frac{N_c^2-1}{2N_c},\qquad T_F = \frac{1}{2},\nonumber\\
&\frac{d_R^{abcd}d_A^{abcd}}{N_A} = \frac{N_c(N_c^2+6)}{48},\qquad\frac{d_R^{abcd}d_R^{abcd}}{N_A} = \frac{N_c^4 - 6 N_c^2 + 18}{96 N_c^2}\,,
\label{eq::cf}
\end{align}
with $N_A=N_c^2-1$. Note that $C_H$ only contains
$\zeta(3)$ as a transcendental constant, while $\mathcal{A}^h$ also contains
other zeta-values and polylogarithms up to weight four. They cancel
in the combination with $1/\zeta_3^0$ and only $\zeta(3)$ survives. After
specifying $N_c=3$ our
result is in full agreement with the expression obtained with the help of the
LET~\cite{Chetyrkin:1997un}. The latter can be used to obtain the five-loop
result with full colour structure. We refrain from showing explicit results in
the paper but include them in the ancillary files~\cite{progdata}. Let
us remark that the five-loop result contains zeta-values and polylogarithms up
to weight five.
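For orientation, the coefficients are easily evaluated numerically; the short
Python sketch below (ours, purely illustrative) inserts the SU(3) colour
factors of Eq.~(\ref{eq::cf}) with $n_l=5$ and $\mu=M_t$, reproducing in
particular the familiar two-loop coefficient $11/4$:
\begin{verbatim}
# Numerical values of C_H^{(i)} for N_c = 3, n_l = 5, mu = M_t
# (all logarithms vanish); based on the expressions for C_H^{(i)} above.
CA, CF, TF, nl = 3.0, 4.0/3.0, 0.5, 5.0
CH1 = 1.0
CH2 = 5.0/4.0*CA - 3.0/4.0*CF                 # = 11/4
CH3 = (1063.0/576.0*CA**2 - 25.0/12.0*CA*CF - 5.0/96.0*CA*TF
       + 27.0/32.0*CF**2 - 1.0/12.0*CF*TF
       + nl*TF*(-47.0/144.0*CA - 5.0/16.0*CF))
print(CH1, CH2, CH3)                          # 1.0  2.75  6.1528
\end{verbatim}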
\subsection{$C_{HH}$}
The matching procedure to obtain $C_{HH}$ is more involved than for $C_H$.
First of all there are three contributions on the effective-theory side which
are shown in Fig.~\ref{fig::gghheff}: a one-particle irreducible (1PI) term
proportional to $C_{HH}$, a one-particle reducible (1PR) term, which involves
$C_H^2$, and a term mediated by a virtual Higgs boson which splits into a
Higgs boson pair via the Higgs boson self-coupling $\lambda$. The latter is
similar in nature to the effective amplitude in the matching formula for
$C_H$. In fact, also on the full-theory side this contribution involves
diagrams which we already encountered in the computation of $C_H$. As
mentioned in Ref.~\cite{Grigo:2014jma}, it is easy to see that these diagrams
cancel exactly between the full and effective theory.
Thus, the contributions relevant for extracting $C_{HH}$ are the 1PI
and 1PR contributions with $\lambda=0$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.27\textwidth]{dias/gghh-eft-PI.eps}
\includegraphics[width=.25\textwidth]{dias/gghh-eft-PR.eps}
\includegraphics[width=.4\textwidth]{dias/gghh-eft-hhh.eps}
\caption{\label{fig::gghheff}Tree-level contributions to the
$gg \rightarrow HH$ amplitude in the effective theory. The blob
indicates the insertion of the operator ${\cal O}_1$. The left diagram
is proportional to $C_{HH}$, the one in the middle to $C_H^2$ and the
right diagram, which contains the trilinear Higgs coupling $\lambda$, to
$C_H$. The amplitudes corresponding to the three Feynman diagrams are
denoted by $\mathcal{A}^\mathrm{eff}_{\mathrm{LO,1PI}}$,
$\mathcal{A}^\mathrm{eff}_{\mathrm{LO,1PR,}\lambda = 0}$ and
$\mathcal{A}^\mathrm{eff}_{\mathrm{LO,1PR,}\lambda \neq 0}$.}
\end{center}
\end{figure}
The effective-theory side of the matching formula is obtained after
renormalizing the operators in the various contributions of
Fig.~\ref{fig::gghheff}. Whereas the left and right contributions are both
renormalized with a factor $Z_{\mathcal{O}_1}$, the term in the middle needs
special care. In fact, a naive renormalization with $(Z_{\mathcal{O}_1})^2$
leads to uncanceled poles as has already been observed in
Refs.~\cite{Zoller:2012qv,Zoller:2014dca}. A careful analysis of the
renormalization of the product of two operators $\mathcal{O}_1$ has been
performed in Ref.~\cite{Zoller:2016iam} along the lines
of~\cite{Spiridonov:1984br}. It has been observed that apart from the naive
multiplicative renormalization a further term is needed which is proportional to
a single $\mathcal{O}_1$. Adapting the findings of Ref.~\cite{Zoller:2016iam}
to our notation one has
\begin{align}
\mathcal{A}^\mathrm{eff}_{(\mathcal{O}_1)^2} = Z_{\mathcal{O}_1}^2
\mathcal{A}^\mathrm{eff}_{(\mathcal{O}_1^0)^2} +
Z_{11}^L\mathcal{A}^\mathrm{eff}_{\mathcal{O}_1^0}\,,
\label{eq::renO12}
\end{align}
where $\mathcal{A}^\mathrm{eff}_{\mathcal{O}_1^0}$ and
$\mathcal{A}^\mathrm{eff}_{(\mathcal{O}_1^0)^2}$ correspond to amplitudes with
one and two operator insertions. The renormalization constant $Z_{11}^L$
(where $L$ stands for ``linear'') is given by~\cite{Zoller:2016iam}
\begin{align}
Z_{11}^L = \frac{1}{\epsilon}\left(1 -
\frac{\beta(\alpha_s)}{\epsilon}\right)^{-2}\alpha_s^2
\frac{\partial}{\partial\alpha_s}\left[\frac{\beta(\alpha_s)}{\alpha_s}\right]\,.
\end{align}
It has its first non-vanishing contribution at order $\alpha_s^2$.
As we will see below, in our calculation we need the combination
$Z_{11}^L/Z_{\mathcal{O}_1}$ up
to order $\alpha_s^2$, which is given by
\begin{align}
\frac{Z_{11}^L}{Z_{\mathcal{O}_1}} = -\frac{\beta_1}{\epsilon}\frac{\alpha_s^2}{(4\pi)^2} + \mathcal{O}(\alpha_s^3)\,.
\end{align}
We are now in the position to write down the matching formula for $C_{HH}$.
Complementing the effective-theory side, which is basically given by
Fig.~\ref{fig::gghheff}, with the corresponding full-theory amplitudes
and taking into account Eq.~(\ref{eq::renO12}) leads to\footnote{When
applying Eq.~(\ref{eq::renO12}) to Higgs boson pair production we have
$\mathcal{A}^\mathrm{eff}_{(\mathcal{O}_1^0)^2} =
\mathcal{A}^\mathrm{eff}_{\mathrm{LO,1PR,}\lambda = 0}$
and $\mathcal{A}^\mathrm{eff}_{\mathcal{O}_1^0} =
\mathcal{A}^\mathrm{eff}_{\mathrm{LO,1PI}}$.}
\begin{align}
&(C_{HH}Z_{\mathcal{O}_1} +
C_H^2Z_{11}^L)\mathcal{A}^\mathrm{eff}_{\mathrm{LO,1PI}} +
C_H^2Z_{\mathcal{O}_1}^2
\mathcal{A}^\mathrm{eff}_{\mathrm{LO,1PR,}\lambda = 0} +
C_HZ_{\mathcal{O}_1}
\mathcal{A}^\mathrm{eff}_{\mathrm{LO,1PR,}\lambda \neq
0}\nonumber\\
&= \frac{1}{\zeta_3^0}\left(\mathcal{A}^h_{\mathrm{1PI}} +
\mathcal{A}^h_{\mathrm{1PR,}\lambda = 0} +
\mathcal{A}^h_{\mathrm{1PR,}\lambda \neq 0}\right)
+ {\cal O}(1/m_t)
\,,
\label{eq::match_CHH}
\end{align}
where sample Feynman diagrams contributing to $\mathcal{A}^h_{\mathrm{1PI}}$
and $\mathcal{A}^h_{\mathrm{1PR,}\lambda = 0}$ can be found in Figs.~\ref{fig::gghh1PIfull}
and~\ref{fig::gghh1PRfull}, respectively. As already mentioned above, the
contributions with $\lambda\neq0$ cancel in Eq.~(\ref{eq::match_CHH}).
Note that our matching formula differs from the one of Ref.~\cite{Grigo:2014jma}
by the term proportional to $Z_{11}^L$ which contributes for the first time
at four-loop order, since both $C_H^2$ and $Z_{11}^L$ are of order $\alpha_s^2$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.2\textwidth]{dias/gghh1l-PI.eps}
\includegraphics[width=0.2\textwidth]{dias/gghh2l-PI.eps}
\includegraphics[width=0.2\textwidth]{dias/gghh3l-PI.eps}
\\
\includegraphics[width=0.2\textwidth]{dias/gghh3l-PR-PI.eps}
\includegraphics[width=0.2\textwidth]{dias/gghh4l1-PI.eps}
\includegraphics[width=0.2\textwidth]{dias/gghh4l2-PI.eps}
\end{center}
\caption{\label{fig::gghh1PIfull} Sample one-, two-, three- and four-loop
diagrams contributing to $\mathcal{A}^h_{\mathrm{1PI}}$ in
Eq.~(\ref{eq::match_CHH}).}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.2\textwidth]{dias/gghh2l-PR.eps}
\includegraphics[width=0.2\textwidth]{dias/gghh3l-PR.eps}
\includegraphics[width=0.2\textwidth]{dias/gghh4l1-PR.eps}
\includegraphics[width=0.2\textwidth]{dias/gghh4l2-PR.eps}
\end{center}
\caption{\label{fig::gghh1PRfull} Sample two-, three- and four-loop diagrams
contributing to $\mathcal{A}^h_{\mathrm{1PR,}\lambda = 0}$ in
Eq.~(\ref{eq::match_CHH}).}
\end{figure}
Let us in the following discuss some features of the matching procedure.
At one-loop order the only non-zero contribution on the r.h.s.
of Eq.~(\ref{eq::match_CHH}) is $\mathcal{A}^h_{\mathrm{1PI}}$ and one obtains
$C_{HH}^{(1)} = C_H^{(1)}$. This also holds at two loops, where the 1PR contributions
on effective- and full-theory side match exactly.
A non-trivial interplay between $\mathcal{A}^h_{\mathrm{1PI}}$ and
$\mathcal{A}^h_{\mathrm{1PR,}\lambda = 0}$ is observed for the first time at
three-loop order~\cite{Grigo:2014jma}. In fact the 1PI and 1PR contributions
are not separately finite any more and the poles only cancel in the sum.
Starting from this order $C_{HH}$ is different from $C_H$.
While the 1PI and 1PR contributions are separately $\xi$-independent at three loops,
at four loops the gauge parameter drops out only in their proper combination
for the colour structure $C_A^2 T_F$.
We computed the 1PI and 1PR amplitudes in the full theory separately and kept
in both cases the terms linear in the gauge parameter $\xi$. For both
contributions it is important to keep the three external momenta different
from zero and different from each other in order to avoid the mixing with
unphysical operators~\cite{Spiridonov:1984br}. The external momenta
can be set to zero after the projection onto the matching
coefficient, which is done with the help of
\begin{align}
P^{\mu\nu} = &\frac{1}{2-4\epsilon}\left(\frac{q_1^\nu q_2^\mu
q_{33}}{2q_{12}q_T^2} - \frac{q_1^\nu q_2^\mu}{2q_{12}} -
\frac{q_1^\nu q_3^\mu q_{23}}{q_{12}q_T^2}
-\frac{q_2^\mu q_3^\nu q_{13}}{q_{12}q_T^2} + \frac{q_3^\mu
q_3^\nu}{q_T^2} + g^{\mu\nu}\right)\nonumber\\
&-\frac{q_1^\nu q_2^\mu q_{33}}{4q_{12}q_T^2} - \frac{q_1^\nu
q_2^\mu}{4q_{12}} + \frac{q_1^\nu q_3^\mu q_{23}}{2q_{12}q_T^2}
+\frac{q_2^\mu q_3^\nu q_{13}}{2q_{12}q_T^2} - \frac{q_3^\mu q_3^\nu}{2q_T^2}~,
\label{eq::proj_chh}
\end{align}
where $q_{ij} = q_i \cdot q_j$ and $q_T^2 = 2q_{13}q_{23}/q_{12} - q_{33}$.
$q_1^\mu$ and $q_2^\nu$ are the incoming four-momenta of the external gluons
with polarization vectors $\varepsilon^\mu(q_1)$ and $\varepsilon^\nu(q_2)$
and $q_3$ is the incoming four-momentum of one of the Higgs bosons.
The number of diagrams for the 1PI amplitude can be found in
Tab.~\ref{tbl::numdia_ggh_gghh} and sample diagrams are shown in
Fig.~\ref{fig::gghh1PIfull}. Once the projector of Eq.~(\ref{eq::proj_chh}) is
applied one obtains scalar expressions which still contain scalar products of
$q_1$, $q_2$ and $q_3$ and loop momenta in the numerator. After solving the
corresponding tensor vacuum integrals the resulting scalar products $q_{ij}$
cancel against the corresponding contributions with negative powers from the
projector and all external momenta can be set to zero.
The 1PR amplitude has been obtained in two different ways. First, we computed
the 1PR diagrams up to four-loop order (see Tab.~\ref{tbl::numdia_ggh_gghh}
for the number of diagrams and Fig.~\ref{fig::gghh1PRfull} for typical Feynman
diagrams) in analogy to the 1PI contribution. As a cross-check we computed
the 1PI parts of the 1PR contributions separately and constructed the $n$-loop
1PR $ggHH$ amplitude from $ggH$ amplitudes computed up to $(n-1)$ loops. In this
approach one of the gluons in the $ggH$ amplitude has to be off-shell, which
leads to more non-vanishing Lorentz structures. In practice, we computed the
1PI $ggH$ amplitudes with open Lorentz indices up to three loops.
Full agreement has been found between the two methods.
We cast the final result for the Wilson coefficient $C_{HH}$ in the form
\begin{align}
C_{HH} = -\frac{2\alpha_s}{3\pi} T_F \sum_{i = 1} \left(C_{H}^{(i)} + \Delta_{HH}^{(i)}\right)\left(\frac{\alpha_s}{\pi}\right)^{i-1}~,
\end{align}
where the $C_H^{(i)}$ are given in Eq.~\eqref{eq::CH} and
the differences are given by
\begin{align}
\Delta_{HH}^{(1)} &= 0~,\nonumber\\
\Delta_{HH}^{(2)} &= 0~,\nonumber\\
\Delta_{HH}^{(3)} &= \frac{7}{8} C_A^2 - \frac{11}{8} C_A C_F - \frac{5}{6}
C_A T_F + \frac{1}{2} C_F T_F + n_l C_F T_F
~,\nonumber\\
\Delta_{HH}^{(4)} &= \frac{1993}{576} C_A^3 - \frac{1289}{144} C_A^2 C_F -
\frac{3191}{864} C_A^2 T_F + \frac{165}{32} C_A C_F^2 +
\frac{67}{18} C_A C_F T_F + \frac{5}{72} C_A
T_F^2\nonumber\\ &- \frac{3}{2} C_F^2 T_F + \frac{1}{9}
C_F T_F^2 + \left[\frac{77}{48} C_A^3
- \frac{121}{48} C_A^2 C_F -
\frac{7}{12} C_A^2 T_F +
\frac{11}{12} C_A C_F
T_F\right]\ln\left(\frac{\mu^2}{M_t^2}\right)\nonumber\\
& + n_l T_F \left[- \frac{55}{144} C_A^2 + \frac{55}{18} C_A C_F +
\frac{109}{216}C_A T_F -
\frac{11}{4} C_F^2
+ \frac{19}{36} C_F
T_F\right]\nonumber\\
& + n_l^2 T_F^2 \left[\frac{5}{72} C_A + \frac{1}{9}
C_F\right] + n_l T_F \left[- \frac{7}{12} C_A^2 +
\frac{11}{4} C_A C_F - \frac{2}{3} C_F
T_F\right]\ln\left(\frac{\mu^2}{M_t^2}\right)\nonumber\\
& - \frac{2}{3} n_l^2 C_F T_F^2 \ln\left(\frac{\mu^2}{M_t^2}\right)\,.
\end{align}
The three-loop result can be found in Ref.~\cite{Grigo:2014jma}. Our
four-loop result $\Delta_{HH}^{(4)}$ agrees with the expression from
Eq.~(\ref{eq::let_chh})~\cite{Spira:2016zna}. We can thus confirm the validity
of the LET for $C_{HH}$~\cite{Spira:2016zna} through four loops. In analogy
to $C_H$ also for $C_{HH}$ it is possible to construct the five-loop
approximation for general colour structure. The corresponding results can be
found in computer readable form in the ancillary files~\cite{progdata}. After
specifying $N_c=3$ we agree with the numerical results given in
Ref.~\cite{Spira:2016zna}, both for $\overline{\rm MS}$ and on-shell top quark
mass.
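Numerically, at $N_c=3$ and $\mu=M_t$ the first non-trivial difference reduces
to $\Delta_{HH}^{(3)} = 35/24 + 2n_l/3$, as the following short Python sketch
(ours, for illustration only) confirms:
\begin{verbatim}
# Exact rational value of Delta_HH^{(3)} for N_c = 3, mu = M_t.
from fractions import Fraction as F
CA, CF, TF, nl = F(3), F(4, 3), F(1, 2), F(5)
d3 = (F(7, 8)*CA**2 - F(11, 8)*CA*CF - F(5, 6)*CA*TF
      + F(1, 2)*CF*TF + nl*CF*TF)
print(d3, float(d3))      # 115/24 = 4.7916... for n_l = 5
\end{verbatim}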
\section{\label{sec:conclusions}Conclusions}
We perform for the first time a direct four-loop computation of the Wilson
coefficients $C_H$ and $C_{HH}$ of the effective operators, which couple
gluons to one and two Higgs bosons, respectively. $C_H$ and $C_{HH}$ enter
various physical quantities as building blocks, e.g., the
next-to-next-to-next-to-leading order predictions for
single~\cite{Anastasiou:2016cez,Mistlberger:2018etf} and double Higgs boson
production.\footnote{See also the recent paper~\cite{Banerjee:2018} where
two-loop massless four-point amplitudes have been computed, a further
building block to next-to-next-to-next-to-leading order double Higgs boson
production.} Our results for $C_H$ and $C_{HH}$ agree with the expression
obtained by means of LETs. Furthermore, we compute all QCD decoupling
constants up to four-loop order. Where possible we compared with the literature
and found agreement after specifying the colour factors. All our results are
expressed for general SU$(N_c)$ colour factors whereas the four-loop
expressions in the literature are only available for $N_c=3$.
A major result of this paper is the derivation of the matching
equation~(\ref{eq::match_CHH}) which receives a non-trivial renormalization
contribution from the effective-theory amplitude with two insertions
of the operator ${\cal O}_1$. The new term contributes for the first time
at four-loop order and is essential to obtain a finite result.
For the convenience of the reader we collect all analytic results
obtained in this paper in ancillary files~\cite{progdata}.
\section*{Acknowledgements}
We thank Kirill Melnikov and Konstantin Chetyrkin for fruitful discussions and
Konstantin Chetyrkin for drawing our attention to Ref.~\cite{Zoller:2016iam}.
F.H. acknowledges the support by the DFG-funded Doctoral School KSETA.
\begin{appendix}
\section{\label{app:dec}Decoupling constants}
In this appendix we collect the results for the decoupling constants
for general SU($N_c$) colour factors. We provide results for
$\zeta_{\alpha_s}$ and $\zeta_m$ up to four loops and show for all
other $\zeta$s the expressions at the first non-vanishing loop order.
Computer readable expressions up to four loops can be found in~\cite{progdata}.
Our results read
\begin{align}
\zeta_X = 1 + \sum_{i=1}\zeta_X^{(i)}\left(\frac{\alpha^{(n_f)}_s}{\pi}\right)^{i}~,
\end{align}
where
\begin{align}
\zeta_{\alpha_s}^{(1)} &= - \frac{1}{3} T_F \ln\left(\frac{\mu^2}{m^2}\right)~,\nonumber\\
\zeta_{\alpha_s}^{(2)} &= \frac{2}{9} C_A T_F - \frac{13}{48} C_F T_F + \Big(- \frac{5}{12} C_A T_F + \frac{1}{4} C_F T_F\Big)\ln\left(\frac{\mu^2}{m^2}\right) + \frac{1}{9} T_F^2\ln^2\left(\frac{\mu^2}{m^2}\right)~,\nonumber\\
\zeta_{\alpha_s}^{(3)} &= C_A^2 T_F\Big(\frac{11347}{20736} - \frac{5}{1536}\zeta(3)\Big) + C_A C_F T_F\Big(\frac{2999}{2592} - \frac{1273}{768}\zeta(3)\Big) + C_A T_F^2\Big(\frac{245}{5184}\nonumber\\ & - \frac{7}{128}\zeta(3)\Big) + C_F^2 T_F\Big(- \frac{97}{288} + \frac{95}{192}\zeta(3)\Big) + C_F T_F^2\Big(\frac{103}{1296} - \frac{7}{64}\zeta(3)\Big)\nonumber\\ & + n_l T_F\Bigg[-\frac{1}{2592} C_A T_F - \frac{41}{162}C_F T_F\Bigg] + \Bigg[- \frac{1063}{1728} C_A^2 T_F + \frac{25}{36} C_A C_F T_F - \frac{113}{864} C_A T_F^2\nonumber\\ & - \frac{9}{32} C_F^2 T_F + \frac{5}{24} C_F T_F^2\Bigg]\ln\left(\frac{\mu^2}{m^2}\right)+\Bigg[- \frac{7}{96} C_A^2 T_F + \frac{11}{96} C_A C_F T_F + \frac{25}{72} C_A T_F^2\nonumber\\ & - \frac{5}{24} C_F T_F^2\Bigg]\ln^2\left(\frac{\mu^2}{m^2}\right) - \frac{1}{27} T_F^3\ln^3\left(\frac{\mu^2}{m^2}\right) + n_l T_F\Bigg[\frac{47}{432} C_A T_F + \frac{5}{48} C_F T_F\Bigg]\ln\left(\frac{\mu^2}{m^2}\right)\nonumber\\ & - \frac{1}{12} n_l C_F T_F^2 \ln^2\left(\frac{\mu^2}{m^2}\right)~,\nonumber\\
\zeta_{\alpha_s}^{(4)} &= C_A^3 T_F\Big(
\frac{14060183}{13063680}
- \frac{4663}{630} \mathrm{Li}_5\left(\frac{1}{2}\right)
+ \frac{24153}{2240} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{8051}{17920} \ln^4(2)\nonumber\\
&+ \frac{4663}{75600} \ln^5(2)
+ \frac{377777}{40320} \zeta(5)
- \frac{6668653}{645120} \zeta(4)
- \frac{70841}{10080} \zeta(4) \ln(2)
+ \frac{1331653}{215040} \zeta(3)\nonumber\\
&- \frac{24153}{8960} \zeta(2) \ln^2(2)
- \frac{4663}{7560} \zeta(2) \ln^3(2)
\Big)+ C_A^2 C_F T_F \Big(
\frac{69024559}{10450944}
+ \frac{8674}{315} \mathrm{Li}_5\left(\frac{1}{2}\right)\nonumber\\
&- \frac{11}{105} \mathrm{Li}_4\left(\frac{1}{2}\right)
- \frac{11}{2520} \ln^4(2)
- \frac{4337}{18900} \ln^5(2)
- \frac{1411867}{40320} \zeta(5)
+ \frac{4919}{8960} \zeta(4)\nonumber\\
&+ \frac{280261}{10080} \zeta(4) \ln(2)
- \frac{1639301}{193536} \zeta(3)
+ \frac{11}{420} \zeta(2) \ln^2(2)
+ \frac{4337}{1890} \zeta(2) \ln^3(2)
\Big)\nonumber\\
&+ C_A^2 T_F^2 \Big(
- \frac{6301303}{65318400}
- \frac{8099}{1440} \mathrm{Li}_4\left(\frac{1}{2}\right)
- \frac{8099}{34560} \ln^4(2)
+ \frac{5}{144} \zeta(5)
+ \frac{30103}{5120} \zeta(4)\nonumber\\
&- \frac{18564121}{4838400} \zeta(3)
+ \frac{8099}{5760} \zeta(2) \ln^2(2)
\Big)
+ C_A C_F^2 T_F \Big(
- \frac{556181}{145152}
- \frac{14458}{315} \mathrm{Li}_5\left(\frac{1}{2}\right)\nonumber\\
&- \frac{39521}{560} \mathrm{Li}_4\left(\frac{1}{2}\right)
- \frac{39521}{13440} \ln^4(2)
+ \frac{7229}{18900} \ln^5(2)
+ \frac{1214657}{20160} \zeta(5)
+ \frac{3818767}{53760} \zeta(4)\nonumber\\
&- \frac{13991}{315} \zeta(4) \ln(2)
- \frac{1990813}{48384} \zeta(3)
+ \frac{39521}{2240} \zeta(2) \ln^2(2)
- \frac{7229}{1890} \zeta(2) \ln^3(2)
\Big)\nonumber\\
&+ C_A C_F T_F^2 \Big(
\frac{12072043}{8164800}
+ \frac{1457}{90} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{1457}{2160} \ln^4(2)
- \frac{5}{24} \zeta(5)
- \frac{24673}{1440} \zeta(4)\nonumber\\
&+ \frac{8133593}{806400} \zeta(3)
- \frac{1457}{360} \zeta(2) \ln^2(2)
\Big)
+ C_A T_F^3 \Big(
\frac{6641}{1306368}
- \frac{545}{18144} \zeta(3)
\Big)\nonumber\\
&+ C_F^3 T_F \Big(
\frac{37441}{34560}
+ \frac{256}{15} \mathrm{Li}_5\left(\frac{1}{2}\right)
+ \frac{1919}{45} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{1919}{1080} \ln^4(2)
- \frac{32}{225} \ln^5(2)\nonumber\\
&- \frac{3429}{160} \zeta(5)
- \frac{58001}{1440} \zeta(4)
+ \frac{212}{15} \zeta(4) \ln(2)
+ \frac{7549}{320} \zeta(3)
- \frac{1919}{180} \zeta(2) \ln^2(2)\nonumber\\
&+ \frac{64}{45} \zeta(2) \ln^3(2)
\Big)
+ C_F^2 T_F^2 \Big(
\frac{2337647}{1036800}
+ \frac{874}{45} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{437}{540} \ln^4(2)
- \frac{29737}{1440} \zeta(4)\nonumber\\
&+ \frac{123149}{10800} \zeta(3)
- \frac{437}{90} \zeta(2) \ln^2(2)
\Big)
+ C_F T_F^3 \Big(
- \frac{610843}{3265920}
+ \frac{661}{3780} \zeta(3)
\Big)\nonumber\\
&+ \frac{d_R^{abcd}d_A^{abcd}}{N_A} \Big(
\frac{6617}{30240}
+ \frac{7496}{105} \mathrm{Li}_5\left(\frac{1}{2}\right)
+ \frac{3988}{105} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{997}{630} \ln^4(2)
- \frac{937}{1575} \ln^5(2)\nonumber\\
&- \frac{274067}{3360} \zeta(5)
- \frac{194179}{6720} \zeta(4)
+ \frac{49661}{840} \zeta(4) \ln(2)
+ \frac{322631}{20160} \zeta(3)
- \frac{997}{105} \zeta(2) \ln^2(2)\nonumber\\
&+ \frac{1874}{315} \zeta(2) \ln^3(2)
\Big)
+ \frac{d_R^{abcd}d_R^{abcd}}{N_A} \Big(
- \frac{2411}{5040}
+ \frac{73}{6} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{73}{144} \ln^4(2)
+ \frac{5}{12} \zeta(5)\nonumber\\
&- \frac{2189}{192} \zeta(4)
+ \frac{6779}{1120} \zeta(3)
- \frac{73}{24} \zeta(2) \ln^2(2)
\Big)\nonumber\\
&+ n_l \Bigg[
C_A^2 T_F^2\Big(
- \frac{252017}{373248}
- \frac{5}{16} \mathrm{Li}_4\left(\frac{1}{2}\right)
- \frac{5}{384} \ln^4(2)
+ \frac{5}{72} \zeta(5)
- \frac{59}{512} \zeta(4)\nonumber\\
&+ \frac{11813}{27648} \zeta(3)
+ \frac{5}{64} \zeta(2) \ln^2(2)
\Big)
+ C_A C_F T_F^2\Big(
- \frac{35455}{62208}
+ \frac{143}{72} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{143}{1728} \ln^4(2)\nonumber\\
&- \frac{9359}{2304} \zeta(4)
+ \frac{45287}{13824} \zeta(3)
- \frac{143}{288} \zeta(2) \ln^2(2)
\Big)
+ C_A T_F^3\Big(
\frac{4171}{62208}
+ \frac{1}{12} \mathrm{Li}_4\left(\frac{1}{2}\right)\nonumber\\
&+ \frac{1}{288} \ln^4(2)
- \frac{49}{384} \zeta(4)
- \frac{59}{3456} \zeta(3)
- \frac{1}{48} \zeta(2) \ln^2(2)
\Big)
+ C_F^2 T_F^2\Big(
- \frac{19}{324}
- \frac{49}{18} \mathrm{Li}_4\left(\frac{1}{2}\right)\nonumber\\
&- \frac{49}{432} \ln^4(2)
+ \frac{1453}{576} \zeta(4)
- \frac{1955}{1728} \zeta(3)
+ \frac{49}{72} \zeta(2) \ln^2(2)
\Big)
+ C_F T_F^3\Big(
- \frac{8663}{93312} \nonumber\\
&+ \frac{1}{6} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{1}{144} \ln^4(2)
- \frac{49}{192} \zeta(4)
+ \frac{77}{432} \zeta(3)
- \frac{1}{24} \zeta(2) \ln^2(2)
\Big)\nonumber\\
&+ \frac{d_R^{abcd}d_R^{abcd}}{N_A}\Big(
- \frac{103}{216}
+ \frac{5}{6} \zeta(5)
+ \frac{1}{2}\zeta(4)
- \frac{131}{72} \zeta(3)
\Big)
\Bigg]\nonumber\\
&+ n_l^2 \Bigg[
C_A T_F^3\Big(
- \frac{841}{62208}
- \frac{5}{216} \zeta(3)
\Big)
+ C_F T_F^3\Big(
- \frac{31147}{93312}
+ \frac{53}{216} \zeta(3)
\Big)
\Bigg]\nonumber\\
&+ \Bigg[
C_A^3 T_F \Big(
- \frac{110041}{124416}
+ \frac{1577}{9216} \zeta(3)
\Big)
+ C_A^2 C_F T_F \Big(
\frac{105763}{20736}
- \frac{5105}{1536} \zeta(3)
\Big)\nonumber\\
&+ C_A^2 T_F^2 \Big(
- \frac{2093}{3888}
+ \frac{1}{768} \zeta(3)
\Big)
+ C_A C_F^2 T_F \Big(
- \frac{3491}{1152}
+ \frac{407}{384} \zeta(3)
\Big)
+ C_A C_F T_F^2 \Big(
- \frac{8875}{7776}\nonumber\\
&+ \frac{1963}{1152} \zeta(3)
\Big)
+ C_A T_F^3 \Big(
- \frac{437}{7776}
+ \frac{7}{96} \zeta(3)
\Big)
+ \frac{157}{128} C_F^3 T_F
+ C_F^2 T_F^2 \Big(
+ \frac{277}{1728}
- \frac{67}{144} \zeta(3)
\Big)\nonumber\\
&+ C_F T_F^3 \Big(
- \frac{545}{3888}
+ \frac{7}{48} \zeta(3)
\Big)
+ \frac{d_R^{abcd}d_A^{abcd}}{N_A} \Big(
\frac{2}{9}
- \frac{13}{6} \zeta(3)
\Big)\nonumber\\
&+ \frac{d_R^{abcd}d_R^{abcd}}{N_A} \Big(
- \frac{11}{36}
+ \frac{2}{3} \zeta(3)
\Big)
\Bigg]\ln\left(\frac{\mu^2}{m^2}\right) + \Bigg[
- \frac{1993}{6912} C_A^3 T_F
+ \frac{1289}{1728} C_A^2 C_F T_F\nonumber\\
&+ \frac{1027}{1152} C_A^2 T_F^2
- \frac{55}{128} C_A C_F^2 T_F
- \frac{53}{54} C_A C_F T_F^2
+ \frac{49}{864} C_A T_F^3
+ \frac{3}{8} C_F^2 T_F^2\nonumber\\
&- \frac{17}{144} C_F T_F^3
\Bigg]\ln^2\left(\frac{\mu^2}{m^2}\right)
+ \Bigg[
- \frac{77}{1728} C_A^3 T_F
+ \frac{121}{1728} C_A^2 C_F T_F
+ \frac{35}{432} C_A^2 T_F^2\nonumber\\
&- \frac{55}{432} C_A C_F T_F^2
- \frac{65}{324} C_A T_F^3
+ \frac{13}{108} C_F T_F^3
\Bigg]\ln^3\left(\frac{\mu^2}{m^2}\right)
+ \frac{1}{81}T_F^4\ln^4\left(\frac{\mu^2}{m^2}\right)\nonumber\\
&+ n_l\Bigg[
C_A^2 T_F^2 \Big(
\frac{12421}{31104}
+ \frac{151}{768} \zeta(3)
\Big)
+ C_A C_F T_F^2 \Big(
- \frac{9605}{7776}
+ \frac{1145}{1152} \zeta(3)
\Big)
+ C_A T_F^3 \Big(
- \frac{41}{3888} \nonumber\\
&+ \frac{7}{192} \zeta(3)
\Big)
+ C_F^2 T_F^2 \Big(
\frac{73}{864}
- \frac{127}{288} \zeta(3)
\Big)
+ C_F T_F^3 \Big(
\frac{917}{3888}
+ \frac{7}{96} \zeta(3)
\Big)
+ \frac{d_R^{abcd}d_R^{abcd}}{N_A} \Big(
- \frac{11}{18} \nonumber\\
&+ \frac{4}{3} \zeta(3)
\Big)
\Bigg] \ln\left(\frac{\mu^2}{m^2}\right)
+ n_l^2\Bigg[
\frac{161}{7776} C_A T_F^3
+ \frac{677}{3888} C_F T_F^3
\Bigg] \ln\left(\frac{\mu^2}{m^2}\right)
+ n_l\Bigg[
\frac{55}{1728} C_A^2 T_F^2\nonumber\\
&- \frac{55}{216} C_A C_F T_F^2
- \frac{11}{96} C_A T_F^3
+ \frac{11}{48} C_F^2 T_F^2
- \frac{49}{432} C_F T_F^3
\Bigg] \ln^2\left(\frac{\mu^2}{m^2}\right)
+ n_l^2\Bigg[
- \frac{5}{864} C_A T_F^3\nonumber\\
&- \frac{1}{108} C_F T_F^3
\Bigg] \ln^2\left(\frac{\mu^2}{m^2}\right)
+ n_l\Bigg[
\frac{7}{432} C_A^2 T_F^2
- \frac{11}{144} C_A C_F T_F^2
+ \frac{5}{54} C_F T_F^3
\Bigg] \ln^3\left(\frac{\mu^2}{m^2}\right)\nonumber\\
&+ \frac{1}{54} n_l^2 C_F T_F^3\ln^3\left(\frac{\mu^2}{m^2}\right)
~,\\
\zeta_{m}^{(1)} &= 0~,\nonumber\\
\zeta_{m}^{(2)} &= \frac{89}{288} C_F T_F - \frac{5}{24} C_F T_F \ln\left(\frac{\mu^2}{m^2}\right) + \frac{1}{8} C_F T_F \ln^2\left(\frac{\mu^2}{m^2}\right)~,\nonumber\\
\zeta_{m}^{(3)} &= C_A C_F T_F\Big(
\frac{16627}{15552}
- 2 \mathrm{Li}_4\left(\frac{1}{2}\right)
- \frac{1}{12} \ln^4(2)
+ \frac{31}{16} \zeta(4)
- \frac{629}{576} \zeta(3)
+ \frac{1}{2} \zeta(2) \ln^2(2)
\Big)\nonumber\\
&+ C_F^2 T_F\Big(
- \frac{683}{576}
+ 4 \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{1}{6} \ln^4(2)
- \frac{11}{4} \zeta(4)
+ \frac{57}{32} \zeta(3)
- \zeta(2) \ln^2(2)
\Big)\nonumber\\
&+ C_F T_F^2\Big(
- \frac{1685}{7776}
+ \frac{7}{18} \zeta(3)
\Big)
+ n_l C_F T_F^2\Big(
\frac{1327}{3888}
- \frac{2}{9} \zeta(3)
\Big)
+ \Bigg[
C_A C_F T_F\Big(
\frac{5}{64}
- \frac{3}{4} \zeta(3)
\Big)\nonumber\\
&+ C_F^2 T_F\Big(
- \frac{13}{64}
+ \frac{3}{4} \zeta(3)
\Big)
- \frac{31}{108} C_F T_F^2
\Bigg]\ln\left(\frac{\mu^2}{m^2}\right)
+ \Bigg[
\frac{29}{96} C_A C_F T_F
- \frac{1}{4} C_F^2 T_F\nonumber\\
&+ \frac{5}{72} C_F T_F^2
\Bigg]\ln^2\left(\frac{\mu^2}{m^2}\right)
+ \Bigg[
\frac{11}{144} C_A C_F T_F
- \frac{1}{18} C_F T_F^2
\Bigg]\ln^3\left(\frac{\mu^2}{m^2}\right)\nonumber\\
&-\frac{53}{144} n_l C_F T_F^2\ln\left(\frac{\mu^2}{m^2}\right)-\frac{1}{36} n_l C_F T_F^2\ln^3\left(\frac{\mu^2}{m^2}\right)
~,\nonumber\\
\zeta_{m}^{(4)} &= C_A^2 C_F T_F\Big(
\frac{4524863}{829440}
- \frac{173}{15} \mathrm{Li}_5\left(\frac{1}{2}\right)
- \frac{14539}{640} \mathrm{Li}_4\left(\frac{1}{2}\right)
- \frac{14539}{15360} \ln^4(2)
+ \frac{173}{1800} \ln^5(2)\nonumber\\
&- \frac{5}{32} \zeta(6)
+ \frac{7551}{1280} \zeta(5)
+ \frac{759689}{30720} \zeta(4)
- \frac{1883}{120} \zeta(4) \ln(2)
- \frac{1640279}{184320} \zeta(3)
- \frac{21}{128} \zeta(3)^2\nonumber\\
&+ \frac{14539}{2560} \zeta(2) \ln^2(2)
- \frac{173}{180} \zeta(2) \ln^3(2)
\Big)
+ C_A C_F^2 T_F\Big(
\frac{1068103}{414720}
+ \frac{514}{15} \mathrm{Li}_5\left(\frac{1}{2}\right)\nonumber\\
&+ \frac{11321}{320} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{11321}{7680} \ln^4(2)
- \frac{257}{900} \ln^5(2)
- \frac{425}{128} \zeta(6)
- \frac{77977}{1920} \zeta(5)
- \frac{181317}{5120} \zeta(4)\nonumber\\
&+ \frac{1321}{30} \zeta(4) \ln(2)
+ \frac{398489}{30720} \zeta(3)
- \frac{11}{32} \zeta(3)^2
- \frac{11321}{1280} \zeta(2) \ln^2(2)
+ \frac{257}{90} \zeta(2) \ln^3(2)
\Big)\nonumber\\
&+ C_A C_F T_F^2\Big(
- \frac{214882117}{203212800}
+ \frac{28657}{3360} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{28657}{80640} \ln^4(2)
+ \frac{97}{24} \zeta(5)
- \frac{152979}{17920} \zeta(4)\nonumber\\
&+ \frac{29927237}{11289600} \zeta(3)
- \frac{28657}{13440} \zeta(2) \ln^2(2)
\Big)
+ C_F^3 T_F\Big(
\frac{10301}{10240}
- \frac{112}{5} \mathrm{Li}_5\left(\frac{1}{2}\right)
+ \frac{3227}{240} \mathrm{Li}_4\left(\frac{1}{2}\right)\nonumber\\
&+ \frac{3227}{5760} \ln^4(2)
+ \frac{14}{75} \ln^5(2)
+ \frac{65}{64} \zeta(6)
+ \frac{10003}{320} \zeta(5)
- \frac{20897}{1920} \zeta(4)
- \frac{253}{10} \zeta(4) \ln(2)\nonumber\\
&+ \frac{1427}{480} \zeta(3)
+ \frac{87}{32} \zeta(3)^2
- \frac{3227}{960} \zeta(2) \ln^2(2)
- \frac{28}{15} \zeta(2) \ln^3(2)
\Big)
+ C_F^2 T_F^2\Big(
\frac{257128337}{203212800} \nonumber\\
&+ \frac{5041}{1680} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{5041}{40320} \ln^4(2)
- \frac{63}{16} \zeta(5)
- \frac{90269}{26880} \zeta(4)
+ \frac{7671973}{1881600} \zeta(3)\nonumber\\
&- \frac{5041}{6720} \zeta(2) \ln^2(2)
\Big)
+ C_F T_F^3\Big(
\frac{1281821}{19595520}
+ \frac{1}{48} \zeta(4)
+ \frac{51}{560} \zeta(3)
\Big)
+ \frac{d_R^{abcd}d_R^{abcd}}{N_F}\Big(
- \frac{611}{384} \nonumber\\
&+ 40 \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{5}{3} \ln^4(2)
- \frac{15}{4} \zeta(6)
- \frac{135}{32} \zeta(5)
- \frac{1445}{64} \zeta(4)
+ \frac{973}{64} \zeta(3)
+ \frac{15}{4} \zeta(3)^2\nonumber\\
&- 10 \zeta(2) \ln^2(2)
\Big)
+ n_l \Bigg[
C_A C_F T_F^2 \Big(
- \frac{5095}{3072}
+ 4 \mathrm{Li}_5\left(\frac{1}{2}\right)
+ \frac{49}{12} \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{49}{288} \ln^4(2)\nonumber\\
&- \frac{1}{30} \ln^5(2)
- \frac{253}{96} \zeta(5)
- \frac{543}{128} \zeta(4)
+ \frac{49}{8} \zeta(4) \ln(2)
- \frac{65}{192} \zeta(3)
- \frac{49}{48} \zeta(2) \ln^2(2)\nonumber\\
&+ \frac{1}{3} \zeta(2) \ln^3(2)
\Big)
+ C_F^2 T_F^2 \Big(
- \frac{15557}{5184}
- 8 \mathrm{Li}_5\left(\frac{1}{2}\right)
- \frac{49}{6} \mathrm{Li}_4\left(\frac{1}{2}\right)
- \frac{49}{144} \ln^4(2)\nonumber\\
&+ \frac{1}{15} \ln^5(2)
+ \frac{157}{16} \zeta(5)
+ \frac{1639}{192} \zeta(4)
- \frac{49}{4} \zeta(4) \ln(2)
- \frac{5}{8} \zeta(3)
+ \frac{49}{24} \zeta(2) \ln^2(2)\nonumber\\
&- \frac{2}{3} \zeta(2) \ln^3(2)
\Big)
+ C_F T_F^3 \Big(
- \frac{57}{256}
+ \frac{9}{16} \zeta(4)
- \frac{5}{144} \zeta(3)
\Big)
\Bigg]
+ n_l^2 C_F T_F^3 \Big(
\frac{17671}{20736}
- \frac{7}{16} \zeta(4)\nonumber\\
&- \frac{5}{144} \zeta(3)
\Big)
+ \Bigg[
C_A^2 C_F T_F \Big(
\frac{233903}{248832}
- \frac{11}{2} \mathrm{Li}_4\left(\frac{1}{2}\right)
- \frac{11}{48} \ln^4(2)
+ \frac{25}{16} \zeta(5)
+ \frac{407}{64} \zeta(4)\nonumber\\
&- \frac{119723}{18432} \zeta(3)
+ \frac{11}{8} \zeta(2) \ln^2(2)
\Big)
+ C_A C_F^2 T_F \Big(
- \frac{3529}{768}
+ 11 \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{11}{24} \ln^4(2)\nonumber\\
&+ \frac{5}{16} \zeta(5)
- \frac{275}{32} \zeta(4)
+ \frac{8913}{1024} \zeta(3)
- \frac{11}{4} \zeta(2) \ln^2(2)
\Big)
+ C_A C_F T_F^2 \Big(
- \frac{39259}{20736} \nonumber\\
&+ 2 \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{1}{12} \ln^4(2)
- \frac{37}{16} \zeta(4)
+ \frac{4343}{1536} \zeta(3)
- \frac{1}{2} \zeta(2) \ln^2(2)
\Big)
+ C_F^3 T_F \Big(
\frac{217}{768} \nonumber\\
&- \frac{15}{8} \zeta(5)
+ \frac{169}{256} \zeta(3)
\Big)
+ C_F^2 T_F^2 \Big(
\frac{8951}{6912}
- 4 \mathrm{Li}_4\left(\frac{1}{2}\right)
- \frac{1}{6} \ln^4(2)
+ \frac{25}{8} \zeta(4)
- \frac{595}{256} \zeta(3)\nonumber\\
&+ \zeta(2) \ln^2(2)
\Big)
+ C_F T_F^3 \Big(
\frac{359}{1944}
- \frac{1}{3} \zeta(3)
\Big)
+ \frac{d_R^{abcd}d_R^{abcd}}{N_F} \Big(
\frac{1}{4}
- \frac{15}{8} \zeta(3)
\Big)
\Bigg]\ln\left(\frac{\mu^2}{m^2}\right)\nonumber\\
&+ \Bigg[
C_A^2 C_F T_F \Big(
\frac{19867}{13824}
- \frac{33}{32} \zeta(3)
\Big)
+ C_A C_F^2 T_F \Big(
- \frac{219}{128}
+ \frac{33}{32} \zeta(3)
\Big)
+ C_A C_F T_F^2 \Big(
- \frac{2059}{6912} \nonumber\\
&+ \frac{3}{8} \zeta(3)
\Big)
+ \frac{15}{16} C_F^3 T_F
+ C_F^2 T_F^2 \Big(
\frac{193}{2304}
- \frac{3}{8} \zeta(3)
\Big)
+ \frac{31}{216} C_F T_F^3
\Bigg]\ln^2\left(\frac{\mu^2}{m^2}\right)\nonumber\\
&+ \Bigg[
\frac{17}{48} C_A^2 C_F T_F
- \frac{143}{384} C_A C_F^2 T_F
- \frac{13}{48} C_A C_F T_F^2
+ \frac{31}{192} C_F^2 T_F^2
- \frac{5}{216} C_F T_F^3
\Bigg]\ln^3\left(\frac{\mu^2}{m^2}\right)\nonumber\\
&+ \Bigg[
\frac{121}{2304} C_A^2 C_F T_F
- \frac{11}{192} C_A C_F T_F^2
+ \frac{1}{128} C_F^2 T_F^2
+ \frac{1}{48} C_F T_F^3
\Bigg]\ln^4\left(\frac{\mu^2}{m^2}\right)\nonumber\\
&+ n_l\Bigg[
C_A C_F T_F^2 \Big(
- \frac{5155}{31104}
+ 2 \mathrm{Li}_4\left(\frac{1}{2}\right)
+ \frac{1}{12} \ln^4(2)
- \frac{43}{16} \zeta(4)
+ \frac{997}{576} \zeta(3)\nonumber\\
&- \frac{1}{2} \zeta(2) \ln^2(2)
\Big)
+ C_F^2 T_F^2 \Big(
\frac{319}{192}
- 4 \mathrm{Li}_4\left(\frac{1}{2}\right)
- \frac{1}{6} \ln^4(2)
+ \frac{7}{2} \zeta(4)
- \frac{97}{32} \zeta(3)\nonumber\\
&+ \zeta(2) \ln^2(2)
\Big)
- \frac{143}{648} C_F T_F^3
\Bigg]\ln\left(\frac{\mu^2}{m^2}\right)
+ n_l^2 C_F T_F^3\Big(
- \frac{3401}{7776}
+ \frac{7}{18} \zeta(3)
\Big)\ln\left(\frac{\mu^2}{m^2}\right)\nonumber\\
&+ n_l\Bigg[
- \frac{2581}{3456} C_A C_F T_F^2
- \frac{9}{64} C_F^2 T_F^2
+ \frac{283}{864} C_F T_F^3
\Bigg]\ln^2\left(\frac{\mu^2}{m^2}\right)
+ \frac{31}{216} n_l^2 C_F T_F^3\ln^2\left(\frac{\mu^2}{m^2}\right)\nonumber\\
&+ n_l\Bigg[
- \frac{13}{96} C_A C_F T_F^2
+ \frac{1}{8} C_F^2 T_F^2
\Bigg]\ln^3\left(\frac{\mu^2}{m^2}\right)
+ n_l\Bigg[
- \frac{11}{288} C_A C_F T_F^2\nonumber\\
&+ \frac{1}{48} C_F T_F^3
\Bigg]\ln^4\left(\frac{\mu^2}{m^2}\right)
+ \frac{1}{144} n_l^2 C_F T_F^3\ln^4\left(\frac{\mu^2}{m^2}\right)
~.
\end{align}
and
\begin{align}
\zeta_{1}^{(2)} &= \frac{5}{96} C_F T_F + \frac{89}{1152} C_A T_F - \left(\frac{1}{8} C_F T_F + \frac{5}{96}C_A T_F\right)\ln\left(\frac{\mu^2}{m^2}\right) + \frac{1}{32} C_A T_F\ln^2\left(\frac{\mu^2}{m^2}\right)~,\\
\zeta_{2}^{(2)} &= \frac{5}{96} C_F T_F - \frac{1}{8} C_F T_F \ln\left(\frac{\mu^2}{m^2}\right)~,\\
\zeta_{3}^{(1)} &= - \frac{1}{3} T_F \ln\left(\frac{\mu^2}{m^2}\right)~,\\
\tilde{\zeta}_{1}^{(3)} &= \left(1-\xi^{(n_f)}\right)\Big(C_A^2 T_F\left(\frac{2039}{27648} - \frac{1}{48}\zeta(3)\right) - \frac{7}{144} C_A^2 T_F \ln\left(\frac{\mu^2}{m^2}\right)\nonumber\\
& + \frac{5}{384} C_A^2 T_F\ln^2\left(\frac{\mu^2}{m^2}\right) - \frac{1}{384} C_A^2 T_F\ln^3\left(\frac{\mu^2}{m^2}\right)\Big)~,\\
\tilde{\zeta}_{3}^{(1)} &= - \frac{89}{1152} C_A T_F + \frac{5}{96} C_A T_F \ln\left(\frac{\mu^2}{m^2}\right) - \frac{1}{32} C_A T_F\ln^2\left(\frac{\mu^2}{m^2}\right)~,\\
\zeta_{3g}^{(1)} &= \frac{1}{3} T_F \ln\left(\frac{\mu^2}{m^2}\right)~,\\
\zeta_{4g}^{(1)} &= \frac{1}{3} T_F \ln\left(\frac{\mu^2}{m^2}\right)~.
\end{align}
In these expressions $m\equiv m(\mu)$ is the
$\overline{\rm MS}$ quark mass and $N_F = N_c$. Other variants with $\alpha_s^{(n_l)}$ and the
on-shell heavy quark mass can be found in~\cite{progdata}.
The colour factors are defined in Eq.~(\ref{eq::cf}).
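As a numerical illustration (our own sketch), at $\mu=m$ all logarithms vanish
and the two-loop term of the mass decoupling constant reduces to a single
rational number for SU(3):
\begin{verbatim}
# Two-loop constant of zeta_m at mu = m for N_c = 3.
from fractions import Fraction as F
CF, TF = F(4, 3), F(1, 2)
zm2 = F(89, 288)*CF*TF
print(zm2, float(zm2))    # 89/432 = 0.20601...
\end{verbatim}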
\end{appendix}
\section{Introduction}\label{sec:intro}
The very short lifetime of the $\tau$ lepton ($2.9 \times 10^{-13}$s) makes it very difficult to measure its electric and magnetic dipole moments. While the Standard Model (SM) prediction of the $\tau$ anomalous magnetic moment $a_{\tau}=(g-2)_{\tau}/2$ is known with a tiny uncertainty of $5 \times 10^{-8}$~\cite{Eidelman:2007sb}, this short lifetime has so far prevented the determination of $a_{\tau}$ measuring the $\tau$ spin precession in a magnetic field, like in the electron and muon $g$$-$$2$ experiments. Instead, experiments focused on various high-precision measurements of $\tau$ pair production in high-energy processes, comparing the measured cross sections with the SM predictions. As these processes involve off-shell photons or taus in the $\tau \bar{\tau} \gamma$ vertices, the measured quantity is not directly $a_{\tau}$. The present resolution on $a_{\tau}$ obtained by these experiments is only of $O(10^{-2})$~\cite{Abdallah:2003xd}, more than an order of magnitude larger than its leading SM contribution $\frac{\alpha}{2\pi} \simeq 0.001$~\cite{Schwinger:1948iu}.
The electron and muon $g$$-$$2$, $a_e$ and $a_{\mu}$, have been measured with the remarkable precision of 0.24 ppb~\cite{Hanneke:2008tm} and $540$ ppb~\cite{Bennett:2006fi}, respectively. While $a_e$ perfectly agrees with the SM prediction~\cite{Aoyama:2012wj}, $a_{\mu}$, which is much more sensitive than $a_e$ to strong and weak interactions, shows a long-standing puzzling discrepancy of about 3--4$\sigma$ and provides a powerful test of physics beyond the SM~\cite{Jegerlehner:2015stw, Jegerlehner:2009ry, Melnikov:2006sr, Passera:2004bj,Knecht:2003kc}. A precise measurement of $a_{\tau}$ would offer a new excellent opportunity to unveil new physics effects. Indeed, in a large class of theories beyond the SM, new contributions to the anomalous magnetic moment of a lepton $l$ of mass $m_l$ scale with $m_l^2$. Therefore, given the large factor $m_{\tau}^2/m_{\mu}^2 \sim 283$, the $g$$-$$2$ of the $\tau$ is much more sensitive than the muon one to electroweak and new physics loop effects that give contributions proportional to $m_l^2$. In these scenarios, the present discrepancy in the muon $g$$-$$2$ suggests a new-physics effect in $a_{\tau}$ of $\mathcal{O}(10^{-6})$; several theories exist where this naive scaling is violated and much larger effects are expected~\cite{Giudice:2012ms}.
The SM prediction of a lepton electric dipole moment (EDM) is extremely small and far below present experimental capabilities. Therefore, a measurement of a non-zero value would be direct evidence of new physics. Moreover, models for physics beyond the SM generally induce large contributions to lepton EDMs so that, although there has been no experimental evidence for an EDM so far, we hope that this kind of experiments will soon shed new light on the nature of $CP$ violation.
In this article we study the possibility to determine the electromagnetic dipole moments of the $\tau$ via the radiative leptonic decays $\tau \to l \gamma \nu \bar{\nu}$, with $l=\mu,e$, comparing the theoretical prediction for the differential decay rates with precise data from high-luminosity $B$ factories~\cite{Fael:2013ij,fael:phdthesis}. In particular, we present the results of a feasibility study performed in the conditions of the Belle~\cite{Abashian:2000cg,Brodzicka:2012jm,Akai:2001pf,Abe:2013kxa} and Belle~II~\cite{Abe:2010gxa} experiments at the KEKB~\cite{KEKB} and SuperKEKB~\cite{Aushev:2010bq,Ohnishi:2013fma} colliders, respectively. Following the strategy of the authors of refs.~\cite{GonzalezSprinberg:2000mk,Bernreuther:1993nd}, deviations of the $\tau$ dipole moments from the SM values are analyzed in an effective Lagrangian approach, thus avoiding the interpretation of off-shell form factors. We also examine the feasibility of earlier proposals; in particular, one based on the study of the Pauli form factor of the $\tau$ via $\tau^+ \tau^-$ production in $e^+ e^-$ collisions at the $\Upsilon$ resonances~\cite{Bernabeu:2007rr,Bernabeu:2008ii}, and another relying on the analysis of the radiation zero which occurs in radiative leptonic $\tau$ decays~\cite{Laursen:1983sm}.
In section~\ref{sec:ff} we establish our conventions for the $\tau$ electromagnetic form factors and introduce an effective Lagrangian to study the $\tau$ dipole moments. In section~\ref{sec:status} we review the present theoretical and experimental status on the $\tau$ $g$$-$$2$~and EDM. The theoretical framework to analyze radiative leptonic $\tau$ decays is presented in section~\ref{sec:effectiveltau}, where we provide explicit analytic expressions for the relevant non-standard contributions to the differential decay rates. In section~\ref{sec:fstudy} we outline our method to determine the $\tau$ dipole moments and report the results of our feasibility study for the sensitivities that may be reached at the Belle and upcoming Belle~II experiments. Conclusions are drawn in sec.~\ref{sec:conclusions}.
\section{\boldmath The $\tau$ lepton electromagnetic form factors} \label{sec:ff}
Let us consider the structure of the $f{\bar f}\gamma$ coupling. The most general vertex function describing the interaction between a photon and the initial and final states of an arbitrary on-shell spin $1/2$ fermion $f$, with four-momenta $p$ and $p'$, respectively, can be written in the form
\begin{equation}
\Gamma^\mu (q^2) = - i e Q_f \left\{
\gamma^\mu F_1(q^2)
+ \frac{\sigma^{\mu\nu} q_\nu}{2m_f} \Big[ i F_2(q^2) + F_3 (q^2) \gamma_5 \Big]
+ \Big( \gamma^{\mu} - \frac{2q^{\mu}m_f}{q^2} \Big) \gamma_5 \, F_4 (q^2) \right\},
\label{eqn:ffgammavertex}
\end{equation}
where $e>0$ is the positron charge, $m_f$ is the mass of the fermion, $\sigma_{\mu\nu}=i/2\,[\gamma_\mu,\gamma_\nu]$, and $q=p'-p$ is the ingoing four-momentum of the off-shell photon. Equation~\eqref{eqn:ffgammavertex}, when sandwiched between spinors as $\overline{u}(p') \Gamma^\mu (q^2) u(p)$, is the most general expression that satisfies Lorentz and QED gauge invariance. The functions $F_{1}(q^2)$ and $F_{2}(q^2)$ are called the Dirac and Pauli form factors, respectively. In general, they are not physical quantities (for example, they can contain infrared divergences~\cite{Bonciani:2003ai,Mastrolia:2003yz}), but in the limit $q^2 \to 0$ they are measurable and related to the static quantities
\begin{align}
F_1(0) &= 1, &
F_2(0) &= a_f, &
F_3(0) &= d_f \, \frac{2m_f}{e Q_f},
\label{eqn:definitionformfactors}
\end{align}
where $e \, Q_f$ is the charge of the fermion, $a_f$ its anomalous magnetic moment, and $d_f$ its EDM. The electric dipole contribution $F_{3}(q^2)$ violates the discrete symmetries $P$ (parity) and $T$ (time reversal)~\cite{Barr:1988mc,Khriplovich:1997ga,Commins:1900zz}, and therefore $CP$, because of the $CPT$ theorem. $F_{4}(q^2)$ is called the anapole form factor and violates $P$.
In the limit $q^2 \to 0$, the dipole interactions in eq.~\eqref{eqn:ffgammavertex} can be cast in the form
\begin{equation}
C_L \, \sigma_{\mu\nu} q^\nu P_L
+ C_R \, \sigma_{\mu\nu} q^\nu P_R,
\label{eqn:ffcomplex}
\end{equation}
where $P_{L,R} = (1 \mp \gamma^5)/2$. Hermiticity of this expression requires that $C_R = C_L^*=c_f$, with
\begin{equation}
c_f = a_f \frac{e Q_f}{2m_f} - i d_f, \quad \quad a_f, d_f \in \mathbb{R}.
\label{eqn:cf}
\end{equation}
Deviations of the $\tau$ dipole moments from the SM values can be analyzed in the framework of an effective field theory description where the SM Lagrangian is extended by a set of gauge-invariant higher-dimensional operators, built with the SM fields, suppressed by powers of the scale of new physics $\Lambda$~\cite{Buchmuller:1985jz}. We will consider only dimension-six operators, which are the lowest dimensional ones relevant for our analysis. Out of the complete set of 59 independent dimension-six operators in ref.~\cite{Grzadkowski:2010es}, only two of them can directly contribute to the $\tau$ lepton $g$$-$$2$~and EDM at tree level (i.e., not through loop effects):
\begin{align}
Q^{33}_{lW} &= \left( \bar{l}_\tau \sigma^{\mu\nu} \tau_R \right) \sigma^{\mysmall I} \varphi \, W_{\mu\nu}^{\mysmall I},
\label{eqn:QlW} \\
Q^{33}_{lB} &= \left( \bar{l}_\tau \sigma^{\mu\nu} \tau_R \right) \varphi \, B_{\mu\nu}, \label{eqn:QlB}
\end{align}
where $\varphi$ and $l_\tau = (\nu_\tau,\tau_L)$ are the Higgs and the left-handed SU(2) doublets, $\sigma^{\mysmall I}$ are the Pauli matrices, and $W_{\mu\nu}^{\mysmall I}$ and $B_{\mu\nu}$ are the gauge field strength tensors. The leading non-standard effects will therefore arise from the effective Lagrangian
\begin{equation}
\mathcal{L}_{\rm eff} =
\frac{1}{\Lambda^2}
\left[C^{33}_{lW}Q^{33}_{lW}+
C^{33}_{lB} Q^{33}_{lB} + {\rm h.c.} \right].
\label{eqn:leff}
\end{equation}
After the electroweak symmetry breaking, these two operators mix and give additional, beyond the SM, contributions to the $\tau$ anomalous magnetic moment and EDM:
\begin{align}
\tilde{a}_\tau &= \frac{2 m_\tau}{e} \frac{\sqrt{2} v}{\Lambda^2}
\,\, \Re \left[ \cw \Clb^{33} -\sw \Clw^{33} \right] ,\\
\tilde{d}_\tau &= \frac{\sqrt{2} v}{\Lambda^2}
\,\, \Im \left[ \cw \Clb^{33} -\sw \Clw^{33} \right],
\end{align}
where $v=246$~GeV and $\sw$ is the weak mixing angle. Moreover, through the coupling to the $Z$ boson, the effective Lagrangian \eqref{eqn:leff} also gives non-standard contributions to the neutral weak dipole moments:
\begin{align}
\tilde{a}_\tau^W &= \frac{2 m_\tau}{e} \frac{\sqrt{2} v}{\Lambda^2}
\,\, \Re \left[ \sw \Clb^{33} + \cw \Clw^{33} \right] ,\\
\tilde{d}_\tau^W &= -\frac{\sqrt{2} v}{\Lambda^2}
\,\, \Im \left[ \sw \Clb^{33} + \cw \Clw^{33} \right].
\end{align}
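For a rough sense of the scales involved, the following Python sketch (ours;
the values of $\sin^2\theta_{\mysmall W}$, the Wilson coefficients and
$\Lambda$ are illustrative assumptions, not fits) evaluates the tree-level
formulas above:
\begin{verbatim}
# Sketch: tree-level shifts of the tau dipole moments from C_lB, C_lW.
# All numerical inputs below are illustrative assumptions.
from math import sqrt, pi

alpha, sw2 = 1.0/137.036, 0.2312
e = sqrt(4.0*pi*alpha)                 # positron charge, natural units
sw, cw = sqrt(sw2), sqrt(1.0 - sw2)
mtau, v, Lam = 1.77686, 246.0, 1000.0  # GeV
ClB, ClW = 1.0 + 1.0j, 0.0             # hypothetical Wilson coefficients
pref = sqrt(2.0)*v/Lam**2
atau = (2.0*mtau/e)*pref*(cw*ClB - sw*ClW).real   # ~ 3.6e-3
dtau = pref*(cw*ClB - sw*ClW).imag                # in e/GeV
print(atau, dtau*1.9733e-14)                      # dtau in e*cm, ~ 6e-18
\end{verbatim}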
The operator $Q^{33}_{lW}$ in~\eqref{eqn:QlW} also generates an additional chirality-flipping coupling between the $\tau$ and the $W$ boson, and a four-point vertex that couples the $\tau$ and the $W$ to the photon or the $Z$ (other four- and five-point vertices, involving the physical Higgs boson, will not be considered since they do not contribute to the $\tau$ dipole moments nor to the decays $\tau \to l \nu \bar{\nu} (\gamma)$). These additional $\tau$-$W$ couplings are proportional to the complex parameter $\Clw^{33}$ and, therefore, to the real combinations
$\tilde{b}_\tau = -(2m_{\tau}/e)(\sqrt{2}v/\Lambda^2) \sw \, \Re \, \Clw^{33} = \sin^2 \! \theta_{\mysmall W} \tilde{a}_\tau - \sw \cw \tilde{a}_\tau^W$
and
$\tilde{c}_\tau = -(\sqrt{2}v/\Lambda^2) \sw \, \Im \, \Clw^{33} = \sin^2 \! \theta_{\mysmall W} \tilde{d}_\tau + \sw \cw \tilde{d}_\tau^W$.
The dynamics of radiative leptonic $\tau$ decays is modified both by non-standard terms proportional to $\tilde{a}_\tau$ and $\tilde{d}_\tau$ (see section~\ref{sec:effectiveltau}), as well as by contributions generated by these new couplings between the $\tau$ and the $W$ boson, which are proportional to $\tilde{b}_\tau$ and $\tilde{c}_\tau$. However, as these new $\tau$-$W$ couplings also affect the ordinary (inclusive) leptonic $\tau$ decays $\tau \to l \nu \bar{\nu}$, we will assume that future bounds on $\tilde{b}_\tau$ and $\tilde{c}_\tau$ will be more stringent than those on $\tilde{a}_\tau$ and $\tilde{d}_\tau$ obtained via radiative leptonic decays. The present limits on $\tilde{b}_\tau$ and $\tilde{c}_\tau$ are of $\mathcal{O}(10^{-3})$; should future bounds on $\tilde{a}_\tau$ and $\tilde{d}_\tau$ reach the sensitivity of $\tilde{b}_\tau$ and $\tilde{c}_\tau$, then a combined analysis of ordinary and radiative leptonic $\tau$ decays for $\tau$ dipole moments and Bouchiat-Michel-Kinoshita-Sirlin parameters~\cite{Michel:1949qe,Bouchiat:1957zz,Kinoshita:1957zz,Kinoshita:1957zza} will become necessary. For the time being, we will neglect these new $\tau$-$W$ couplings.
\section{{\boldmath Status of the $\tau$ lepton $g$-2 and EDM}} \label{sec:status}
In this section we discuss the present status of the SM prediction and experimental determination of the anomalous magnetic moment and EDM of the $\tau$ lepton.
The SM prediction for $a_{\tau}$ is given by the sum of QED, electroweak (EW) and hadronic terms. The QED contribution has been computed up to three loops:
$
a_{\tau}^{\mysmall \rm QED} =
117 \, 324 \, (2) \times 10^{-8}
$~\cite{Laporta:1992pa,Laporta:1993ju,Laporta:1996mq,Passera:2006gc}, where the uncertainty
$\pi^2 \ln^2(m_{\tau}/m_e)(\alpha/\pi)^4 \sim 2\times 10^{-8}$
has been assigned for uncalculated four-loop contributions. The errors due to the uncertainties of the $\mathcal{O}(\alpha^2)$ and $\mathcal{O}(\alpha^3)$ terms, as well as that induced by the uncertainty of $\alpha$, are negligible.
The sum of the one- and two-loop EW contributions is
$
a_{\tau}^{\mysmall \rm EW} = 47.4 (5) \times 10^{-8}
$~\cite{Czarnecki:1995wq,Czarnecki:1995sz,Eidelman:2007sb}. The uncertainty encompasses the estimated errors induced by hadronic loop effects, neglected two-loop bosonic terms and the missing three-loop contribution. It also includes the tiny errors due to the uncertainties in $m_{\rm\scriptstyle top}$ and $m_{\tau}$.
Similarly to the case of the muon $g$$-$$2$, the leading-order hadronic contribution to $a_{\tau}$ is obtained via a dispersion integral of the total hadronic cross section of the $e^+e^-$ annihilation (the role of low energies is very important, although not as much as for $a_{\mu}$). The result of the latest evaluation, using experimental data below 12~GeV, is
$
a_{\tau}^{\mysmall \rm HLO} = 337.5 \, (3.7) \times 10^{-8}
$~\cite{Eidelman:2007sb}.
The hadronic higher-order $(\alpha^3)$ contribution $a_{\tau}^{\mysmall \rm HHO}$ can be divided into two parts:
$
a_{\tau}^{\mysmall \rm HHO}=
a_{\tau}^{\mysmall \rm HHO}(\mbox{vp})+
a_{\tau}^{\mysmall \rm HHO}(\mbox{lbl}).
$
The first one, the $\mathcal{O}(\alpha^3)$ contribution of diagrams containing hadronic self-energy insertions in the photon propagators,
is
$
a_{\tau}^{\mysmall \rm HHO}(\mbox{vp})= 7.6 (2) \times 10^{-8}
$~\cite{Krause:1996rf}.
Note that naively rescaling the corresponding muon $g$$-$$2$ result by a factor $m_{\tau}^2/m_{\mu}^2$ leads to the incorrect estimate $a_{\tau}^{\mysmall \rm HHO}(\mbox{vp}) \sim -28\times 10^{-8}$ (even the sign is wrong!).
Estimates of the light-by-light contribution $a_{\tau}^{\mbox{$\scriptscriptstyle{\rm HHO}$}}(\mbox{lbl})$ obtained rescaling the corresponding one for the muon $g$$-$$2$ by a factor $m_{\tau}^2/m_{\mu}^2$ fall short of what is needed -- this scaling is not justified. The parton-level estimate of~\cite{Eidelman:2007sb} is
$
a_{\tau}^{\mysmall \rm HHO}(\mbox{lbl})= 5 (3) \times 10^{-8},
$
a value much lower than those obtained by naive rescaling. Adding up the above contributions one obtains the SM
prediction~\cite{Eidelman:2007sb}
\begin{equation}
a_{\tau}^{\mysmall \rm SM} =
a_{\tau}^{\mysmall \rm QED} +
a_{\tau}^{\mysmall \rm EW} +
a_{\tau}^{\mysmall \rm HLO} +
a_{\tau}^{\mysmall \rm HHO}
=117 \, 721 \, (5) \times 10^{-8}.
\label{eqn:atSM}
\end{equation}
Errors were added in quadrature.
The EDM interaction violates the discrete $CP$ symmetry. In the SM with massless neutrinos, the only source of $CP$ violation is the CKM-phase (and a possible $\theta$-term in the QCD sector). In refs.~\cite{Jarlskog:1985cw,Jarlskog:1985ht} it was shown that all $CP$-violating amplitudes are proportional to the Jarlskog invariant $J$, defined as
\begin{equation}
\text{Im} \left[ V_{ij} V_{kl} V^*_{il} V^*_{kj} \right] =
J \sum_{m,n} \varepsilon_{ikm} \varepsilon_{jln} \, ,
\end{equation}
where $V_{ij}$ are the CKM matrix elements. Therefore, the lepton EDM must arise from virtual quarks linked to the lepton through the $W$ boson, thus being sensitive to the imaginary part of the CKM matrix elements.
The leading contribution is naively expected at the three-loop level, since two-loop diagrams are proportional to $|V_{ij}|^2$. The problem was first analyzed in some detail in~\cite{Hoogeveen:1990cb}, but it was subsequently shown that three-loop diagrams also yield a zero EDM contribution in the absence of gluonic corrections to the quark lines~\cite{Pospelov:1991zt}. For this reason, lepton EDMs are predicted to be extremely small in the SM, of $\mathcal{O}(10^{-38} - 10^{-35}) \, e\cdot$cm~\cite{Commins:1900zz}, far below the present $\mathcal{O}(10^{-17}) \, e\cdot$cm experimental reach on the $\tau$ EDM. Even for the electron, the fantastic experimental upper bound $d_e^{\mysmall EXP} < 0.87 \times 10^{-28} ~e\cdot$cm~\cite{Baron:2013eja} is still much larger than the SM prediction $d_e^{\mysmall SM} \sim \mathcal{O}(10^{-38}) \, e \cdot $cm, and it is hard to imagine improvements in the sensitivity by ten orders of magnitude! However, new EDM effects could arise at the one- or two-loop level from new physics that violates $P$ and $T$, and be much larger than the tiny SM value, even if they arise from high mass scales.
The present experimental resolution on the $\tau$ anomalous magnetic moment is only $\mathcal{O}(10^{-2})$~\cite{Abdallah:2003xd}, more than an order of magnitude larger than its SM prediction in Eq.~\eqref{eqn:atSM}. In fact, while the SM value of $a_{\tau}$ is known with a tiny uncertainty of $5 \times 10^{-8}$, the short lifetime of the $\tau$ has so far prevented the determination of $a_{\tau}$ by measuring the $\tau$ spin precession in a magnetic field, as in the electron and muon $g$$-$$2$ experiments.
The present PDG limit on the $\tau$ $g$$-$$2$~was derived in 2004 by the DELPHI collaboration from $e^+ e^- \to e^+ e^- \tau^+ \tau^-$ total cross section measurements at $\sqrt{s}$ between 183 and 208 GeV at LEP2 (the study of $a_\tau$ via this channel was proposed in~\cite{Cornet:1995pw}). The measured values of the cross-sections were used to extract limits on the $\tau$ $g$$-$$2$~by comparing them to the SM values, assuming that possible deviations were due to non-standard contributions $\tilde{a}_\tau$. The obtained limit at 95\%~CL is~\cite{Abdallah:2003xd}
\begin{equation}
-0.052 < \tilde{a}_\tau < 0.013,
\label{eqn:atauexpbound95}
\end{equation}
which can be also expressed in the form of central value and error as~\cite{Abdallah:2003xd}
\begin{equation}
\tilde{a}_\tau = -0.018 \, (17).
\label{eqn:atauexpbound68}
\end{equation}
The present PDG limit on the EDM of the $\tau$ lepton at $95\%$~CL is
\begin{equation}\label{eq dtauexp}
\begin{split}
& - 2.2 < \mathrm{Re} (d_\tau) < 4.5 \; \; (10^{-17} \; e \mathrm{\cdot cm}), \\
& - 2.5 < \mathrm{Im} (d_\tau) < 0.8 \; \; (10^{-17} \; e \mathrm{\cdot cm}); \\
\end{split}
\end{equation}
it was obtained by the Belle collaboration~\cite{Inami:2002ah} following the analysis of ref.~\cite{Bernreuther:1993nd} for the impact of an effective operator for the $\tau$ EDM in the process $e^+ e^- \rightarrow \tau^+ \tau^-$.
The reanalysis of ref.~\cite{GonzalezSprinberg:2000mk} of various LEP and SLD measurements -- mainly of the $e^+e^- \to \tau^+\tau^-$ cross sections -- allowed the authors to set the indirect 2$\sigma$ confidence interval
\begin{equation}
-0.007 < \tilde{a}_{\tau} < 0.005,
\end{equation}
a bound stronger than that in Eq.~(\ref{eqn:atauexpbound95}). This analysis assumed $\tilde{d}_\tau = 0$. We updated this analysis using more recent data~\cite{Schael:2013ita,Agashe:2014kda} obtaining the almost identical $2\sigma$ confidence interval $-0.007 < \tilde{a}_{\tau} < 0.004$.
At the LHC, bounds on the $\tau$ dipole moments are expected to be set in $\tau$ pair production via Drell-Yan~\cite{Hayreter:2013vna,Hayreter:2015cia} or double photon scattering processes~\cite{Atag:2010ja}. The best limits achievable in $pp \to \tau^+\tau^- + X$ are estimated to be comparable to present existing ones if the total cross section for $\tau$ pair production is assumed to be measured at the $14\%$ level~\cite{Hayreter:2013vna}. Earlier proposals to set bounds on the $\tau$ dipole moments can be found in~\cite{delAguila:1991rm,Samuel:1992fm,Escribano:1993pq,Escribano:1996wp}.
Yet another method to determine $\tilde{a}_{\tau}$ would use the channeling of polarized $\tau$ leptons in a bent crystal, similarly to the suggestion for the measurement of magnetic moments of short-lived baryons~\cite{Kim:1982ry}. This approach has been successfully tested by the E761 collaboration at Fermilab, which measured the magnetic moment of the $\Sigma^+$ hyperon~\cite{Chen:1992wx}. The challenge of this method is to produce a polarized beam of $\tau$ leptons. One could use the decay $B^+ \to \tau^+ \nu_\tau$, which would produce polarized $\tau$ leptons~\cite{Samuel:1990su}; however, this particular decay of the $B$ has a very small branching ratio of $\mathcal{O} ( 10^{-4})$. In 1991, when this proposal was published, the idea seemed completely unrealistic. Nonetheless, in the era of $B$ factories, where the decay $B^+ \to \tau^+ \nu_\tau$ has already been observed~\cite{Agashe:2014kda}, the realization of this idea in a dedicated experiment is definitely not excluded.
The Belle II experiment at the upcoming high-luminosity $B$ factory SuperKEKB will offer new opportunities to improve the determination of the $\tau$ electromagnetic properties. The authors of refs.~\cite{Bernabeu:2007rr,Bernabeu:2008ii} proposed to determine the Pauli form factor $F_{2}(q^2)$ of the $\tau$ via $\tau^+ \tau^-$ production in $e^+ e^-$ collisions at the $\Upsilon$ resonances ($\Upsilon$(1S), $\Upsilon$(2S) and $\Upsilon$(3S)) with a sensitivity of $\mathcal{O}(10^{-5})$ or even better (of course, the center-of-mass energy at super $B$ factories is $\sqrt{s} \sim M_{\Upsilon(4S)} \approx 10$ GeV, so that the form factor $F_{2}(q^2)$ is not the anomalous magnetic moment). When attempting to extract the value of $F_{2}(q^2)$ from scattering experiments (as opposed to using a background magnetic field), one encounters additional complications due to the contributions of various other Feynman diagrams not related to the magnetic form factor. In particular, in the $e^+e^- \to \tau^+ \tau^-$ case, contributions to the cross section arise not only from the usual $s$-channel one-loop vertex corrections, but also from box diagrams, which should be somehow subtracted out. The strategy proposed in~\cite{Bernabeu:2007rr,Bernabeu:2008ii} to eliminate their contamination is to measure the observables on top of the $\Upsilon$ resonances, where the non-resonant box diagrams should be numerically negligible.
However, because of the natural irreducible beam energy spread associated with any $e^+ e^-$ synchrotron, it is very difficult to resolve the narrow peaks of the $\Upsilon (1S,2S,3S)$ in the $\tau^+ \tau^-$ decay channel (the $\Upsilon(4S)$ decays almost entirely into $B\bar{B}$ pairs). Indeed, the total visible cross section of these resonances is not a pure Breit-Wigner, but the convolution of the theoretical Breit-Wigner cross section with a Gaussian spread,
\begin{equation}
\sigma_{\mysmall vis} =
\int \frac{\sigma_{ee\to\Upsilon\to\tau\tau}(s)}{\sqrt{2 \pi} \sigma_{\mysmall W}}
\, \exp \! \left[ - \frac{(\sqrt{s}-M_\Upsilon)^2}{2 \sigma_{\mysmall W}^2} \right]
d\sqrt{s},
\label{eqn:visxsec}
\end{equation}
where $\sigma_{\mysmall W}$ is the irreducible beam energy spread of the accelerator at $\sqrt{s} = M_\Upsilon$ ($\sigma_{\mysmall W}=5.45$~MeV at the upcoming SuperKEKB collider), $\sigma_{ee\to\Upsilon\to\tau\tau}(s)$ is the total cross section in the Breit-Wigner approximation,
\begin{equation}
\sigma_{ee\to\Upsilon\to\tau\tau}(s) \, = \, \sigma_{\mysmall peak} \,
\frac{M_\Upsilon^2\Gamma_{\Upsilon}^2}{(s-M_\Upsilon^2)^2 + M_\Upsilon^2\Gamma_{\Upsilon}^2},
\label{eqn:peakxsec}
\end{equation}
$M_\Upsilon$ and $\Gamma_{\Upsilon}$ are the masses and the widths of the $\Upsilon$ resonances, and the cross section at the peak is given by $\sigma_{\mysmall peak} = 12 \pi {\cal B} ({\Upsilon \to ee}) {\cal B} ({\Upsilon \to \tau\tau})/M_\Upsilon^2$. In the limit $\Gamma_{\Upsilon} \ll \sigma_{\mysmall W}$ of narrow resonances, $\sigma_{ee\to\Upsilon\to\tau\tau}(s)$ can be approximated by
\begin{equation}
\sigma_{ee\to\Upsilon\to\tau\tau}(s) \approx
\sigma_{\mysmall peak} \pi M_\Upsilon \Gamma_{\Upsilon} \delta (s-M_\Upsilon^2).
\end{equation}
The expression for the maximum visible resonant cross section, obtained by substituting this narrow-resonance approximation of eq.~\eqref{eqn:peakxsec} into eq.~\eqref{eqn:visxsec} and using $\delta(s-M_\Upsilon^2) = \delta(\sqrt{s}-M_\Upsilon)/(2M_\Upsilon)$, is
\begin{equation}
\sigma_{\mysmall vis}^{\mysmall max} = \rho \, \sigma_{\mysmall peak}, \quad \mbox{with} \quad
\rho = \sqrt{\frac{\pi}{8}} \, \frac{\Gamma_{\Upsilon}}{\sigma_{\mysmall W}}.
\label{eqn:finalvisxsec}
\end{equation}
In table~\ref{tab:ures} we compare the maximum visible resonant cross sections for $e^+ e^- \to \Upsilon \to \tau^+ \tau^-$ with the non-resonant cross section $\sigma_{\mysmall non-res} = 0.919(3)$~nb at $\sqrt{s} = M_\Upsilon$~\cite{Banerjee:2007is}. From this table we can conclude that, at the Belle II experiment, the $\tau^+ \tau^-$ events produced with beams at a center-of-mass energy $\sqrt{s} \sim M_\Upsilon$ are mostly due to non-resonant contributions; indeed, the visible resonant cross sections are of the same order as the non-resonant ones, or smaller. Even for the multihadron events in the region of the $\Upsilon(1S,2S,3S)$, the non-resonant cross section dominates with respect to the resonant one (see, for example,~\cite{Artamonov:1983vz}). The situation at Belle was similar (the energy spread at KEKB was $\sigma_{\mysmall W} = 5.24$ MeV~\cite{KEKB}). We therefore conclude that measuring the $e^+ e^- \to \tau^+ \tau^-$ cross section at the upcoming SuperKEKB collider on top of the $\Upsilon$ resonances will not eliminate the contamination of the non-resonant contributions.
\begin{table}
\centering
\begin{tabular}{l|c|c|c|c|c}
\toprule
$\Upsilon$ & $M_\Upsilon$ [GeV] & $\Gamma_{\Upsilon}$ [keV] &
$\sigma_{\mysmall peak}$ [nb] & $\rho$ & $\displaystyle \frac{\sigma_{\mysmall vis}^{\mysmall max}}{\sigma_{\mysmall non-res}}$ \\
\midrule
$\Upsilon(1S)$ & $\phantom{1}9.46$ & $54$ & $101$ & $6.2 \times 10^{-3}$ & $69\%$ \\
$\Upsilon(2S)$ & $10.02$ & $32$ & $56$ & $3.7 \times 10^{-3}$ & $22\%$ \\
$\Upsilon(3S)$ & $10.36$ & $20$ & $68$ & $2.3 \times 10^{-3}$ & $17\%$ \\
$\Upsilon(4S)$ & $10.58$ & $20 \times 10^3$ & -- & -- & -- \\
\bottomrule
\end{tabular}
\caption{Estimated visible cross section at Belle II for $e^+ e^- \to \Upsilon \to \tau^+ \tau^-$.
The machine parameters are from ref.~\cite{Ohnishi:2013fma}.}
\label{tab:ures}
\end{table}
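The entries of table~\ref{tab:ures} can be reproduced with a few lines of Python (a sketch of ours using the inputs quoted above; residual differences with the table are due to rounding of the quoted $\sigma_{\mysmall peak}$ values):
\begin{verbatim}
import math

sigma_W = 5.45e-3            # GeV, SuperKEKB beam energy spread
sigma_nonres = 0.919         # nb, non-resonant cross section
resonances = {               # Gamma [GeV], sigma_peak [nb]
    "Y(1S)": (54e-6, 101.0),
    "Y(2S)": (32e-6, 56.0),
    "Y(3S)": (20e-6, 68.0),
}
for name, (gamma, s_peak) in resonances.items():
    rho = math.sqrt(math.pi / 8) * gamma / sigma_W
    ratio = rho * s_peak / sigma_nonres
    print(name, f"rho = {rho:.1e}", f"ratio = {ratio:.0%}")
\end{verbatim}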
In the next section we will propose a new method to determine the electromagnetic dipole moments of the $\tau$ lepton via precise measurements of its radiative leptonic decays.
\section{{\boldmath Radiative $\tau$ leptonic decays: theoretical framework}} \label{sec:effectiveltau}
The SM prediction, at next-to-leading order (NLO), for the differential rate of the radiative leptonic decays
\begin{equation}
\tau^- \to l^- \, \nu_\tau \, \bar{\nu}_l \, \gamma,
\quad
\label{eqn:raddecay}
\end{equation}
with $l=e$ or $\mu$, of a polarized $\tau^-$ with mass $m_{\tau}$ in its rest frame is
\begin{equation}
\frac{d^6 \Gamma \left(y_0\right) }{dx \, dy \, d\Omega_l\, d\Omega_\gamma} =
\frac{\alpha \, G_F^2 m_{\tau}^5} {(4 \pi)^6}
\frac{x \beta_l}{1+ \deltaw}
\biggl[
G
\, + \, x \beta_l \, \hat{n} \cdot \hat{p}_l \, J
\, + \, y \, \hat{n} \cdot \hat{p}_\gamma \, K
\, + \, x y \beta_l \, \hat{n} \cdot \left(\hat{p}_l \times \hat{p}_\gamma \right) L
\biggr],
\label{eqn:radiativedecayrateNLO}
\end{equation}
where
$G_F=1.166 \, 378 \, 7(6) \times10^{-5}$ GeV$^{-2}$~\cite{Webber:2010zf}
is the Fermi constant determined by the muon lifetime and
$\alpha = 1/137.035\,999\,157\,(33)$
is the fine-structure constant~\cite{Aoyama:2012wj,Aoyama:2014sxa}.
Calling $m$ the mass of the final charged lepton (neutrinos and antineutrinos are considered massless) we define $r=m/m_{\tau}$ and $\rw=m_{\tau}/\mw$, where $\mw$ is the $W$-boson mass; $p$ and $n=(0,\hat{n})$ are the four-momentum and polarization vector of the initial $\tau$, with $n^2=-1$ and $n \cdot p = 0$. Also, $x = 2E_l/m_{\tau}$, $y = 2E_\gamma/m_{\tau}$ and $\beta_l \equiv |\vec{p}_l|/E_l=\sqrt{1-4r^2/x^2}$, where $p_l = (E_l,\vec{p}_l)$ and $p_\gamma = (E_\gamma,\vec{p}_\gamma)$ are the four-momenta of the final charged lepton and photon, respectively. The final charged lepton and photon are emitted at solid angles $\Omega_l$ and $\Omega_{\gamma}$, with normalized three-momenta $\hat{p}_l$ and $\hat{p}_\gamma$, and $c$ is the cosine of the angle between $\hat{p}_l$ and $\hat{p}_\gamma$. The term $ \deltaw =1.04 \times 10^{-6}$ is the tree-level correction to muon decay induced by the $W$-boson propagator~\cite{Ferroglia:2013dga,Fael:2013pja}.
Equation~\eqref{eqn:radiativedecayrateNLO} includes the possible emission of an additional soft photon with normalized energy $y'$ lower than the photon detection threshold $y_0$ (with $y_0 \ll 1$): $y'<y_0<y$.
The function $G (x,y,c,y_0)$ and, analogously, $J$ and $K$, are given by
\begin{equation}
G \, (x,y,c,y_0) =
\frac{4}{3 y z^2}
\left[
g_0 (x,y,z)
+ \rw^2 \, \gw (x,y,z)
+ \frac{\alpha}{\pi} \, g_{\mysmall NLO} (x,y,z,y_0)
\right],
\label{eqn:GNLO}
\end{equation}
where $z=xy(1-c\beta_l)/2$; the LO function $g_0 (x,y,z)$, computed in~\cite{Kinoshita:1958ru,Fronsdal:1959zzb,EcksteinPratt,Kuno:1999jp}, arises from the pure Fermi $V$--$A$ interaction, whereas $\gw(x,y,z)$ is the LO contribution of the $W$-boson propagator derived in~\cite{Fael:2013pja}. The NLO term $g_{\mysmall NLO} (x,y,z,y_0)$ is the sum of the virtual and soft bremsstrahlung contributions calculated in~\cite{Fael:2015gua} (see also refs.~\cite{Fischer:1994pn,Arbuzov:2004wr}). The function $L(x,y,z)$, appearing in front of the product $\hat{n} \cdot \left(\hat{p}_l \times \hat{p}_\gamma \right)$, does not depend on $y_0$; it is only induced by the loop corrections and is therefore of $\mathcal{O}(\alpha/\pi)$. In particular, $L(x,y,z)$ is of the form $\sum_n P_n(x,y,z) \, {\rm Im} \left[I_n (x,y,z)\right]$, where $P_n$ are polynomials in $x,y,z$ and $I_n (x,y,z)$ are scalar one-loop integrals whose imaginary parts are different from zero. Tiny terms of $\mathcal{O}(\alpha \, m_{\tau}^2/\mw^2) \sim 10^{-6}$ were neglected; they are expected to be comparable to the uncomputed next-to-next-to-leading order (NNLO) corrections of $\mathcal{O}((\alpha/\pi)^2)$. The functions $G$, $J$, $K$ and $L$ are free of UV and IR divergences. Their (lengthy) explicit expressions are provided in~\cite{Fael:2015gua}. The corresponding formula for the radiative decay of a polarized $\tau^+$ can be obtained simply by replacing $J \to -J$ and $K \to -K$ in eq.~\eqref{eqn:radiativedecayrateNLO} (see table~\ref{tab:tauplus}). If the initial $\tau^{\pm}$ are not polarized, eq.~\eqref{eqn:radiativedecayrateNLO} simplifies to
\begin{equation}
\frac{d^3 \Gamma \left(y_0\right) }{dx \, dc \, dy} =
\frac{\,\alpha G_F^2 m_{\tau}^5} {(4 \pi)^6} \frac{x \beta_l}{1+ \deltaw} \,\, 8 \pi^2 \, G \, (x,y,c,y_0).
\label{eq:radiativedecayrateunpolarizedNLO}
\end{equation}
For the differential rate of leptonic $\tau$ decays in which a virtual photon is emitted and converted into a lepton pair, we refer the reader to the recent comprehensive article in~\cite{Flores-Tlalpa:2015vga}.
The effective Lagrangian \eqref{eqn:leff} generates additional non-standard contributions to the differential decay rate of a polarized $\tau^-$ in eq.~\eqref{eqn:radiativedecayrateNLO}.\footnote{As discussed in section~\ref{sec:ff}, we neglect non-standard $\tau$-$W$ couplings arising from the operator $Q^{33}_{lW}$.} They can be summarised in the shifts:
\begin{align}
G & \,\to\, G \,+\, \tilde{a}_\tau \, G_a,
\label{eqn:Gachange}\\
J & \,\to\, J \,+\, \tilde{a}_\tau \, J_a,
\label{eqn:Jachange}\\
K & \,\to\, K \,+\, \tilde{a}_\tau \, K_a,
\label{eqn:Kachange}\\
L & \,\to\, L \,+\, \left(m_\tau/e \right) \, \tilde{d}_\tau \, L_d,
\label{eqn:Ldchange}
\end{align}
where
\begin{align}
G_a &=
\frac{4}{3z}
\left[r^2 \left(y^2-y z+3 z^2\right)-z (y+2 z) (x+y-z-1) \right],\\
J_a &=
\frac{2}{3z}
\big[
3 r^2 \left(x y+y^2-2 z\right)-2 x^2 y-4 x y^2+2 x y z+x y +4 x z-2 y^3+2 y^2 z \notag \\
& +2 y^2+3 y z-4 z^2-2 z \big], \\
K_a &=
\frac{2}{3 y z}
\big[
12 r^4 y+r^2 \left(-3 x^2 y-3 x y^2-8 x y-6 y^2+8 y z+4 y+6 z^2\right)
+2 x^3 y+4 x^2 y^2\notag \\
&-2 x^2 y z-x^2 y+2 x y^3-2 x y^2 z-2 x y^2-x y z -4 x z^2-2 y^2 z-2 y z^2+2 y z+4 z^3+2 z^2\big],\\
L_d &=
\frac{4}{3yz}
\big[
3 r^2 \left(x y+y^2-2 z\right)-2 x^2 y-4 x y^2+2 x y z+x y+4 x z-2 y^3 +2 y^2 z \notag \\
&+2 y^2+3 y z-4 z^2-2 z
\big]
\end{align}
(we note that $L_d=2J_a/y$). Tiny terms of $\mathcal{O}(\tilde{a}_{\tau}^2)$, $\mathcal{O}(\tilde{d}_{\tau}^2)$ and $\mathcal{O}(\tilde{a}_{\tau} \tilde{d}_{\tau})$ were neglected. For $\tau^+$ decays, the theoretical prediction for the differential decay rate can again be obtained from eq.~\eqref{eqn:radiativedecayrateNLO}, simply by performing the following substitutions (see table~\ref{tab:tauplus}):
\begin{align}
G & \,\to\, G \,+\, \tilde{a}_\tau \, G_a,
\label{eqn:Gachangeplus}\\
J & \,\to\, -J \,-\, \tilde{a}_\tau \, J_a,
\label{eqn:Jachangeplus}\\
K & \,\to\, -K \,-\, \tilde{a}_\tau \, K_a,
\label{eqn:Kachangeplus}\\
L & \,\to\, L \,-\, \left(m_\tau/e \right) \, \tilde{d}_\tau \, L_d.
\label{eqn:Ldchangeplus}
\end{align}
Deviations of the $\tau$ dipole moments from the SM values can be determined by comparing the SM prediction for the differential rate in eq.~\eqref{eqn:radiativedecayrateNLO}, modified by the terms $G_a$, $J_a$, $K_a$ and $L_d$, with sufficiently precise data.
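The non-standard functions above are straightforward to transcribe into code for such a comparison. As an illustration (our own sketch, not code from any experimental analysis), the following sympy fragment implements $J_a$ and $L_d$ and verifies the relation $L_d = 2J_a/y$ noted above:
\begin{verbatim}
import sympy as sp

x, y, z, r = sp.symbols("x y z r", positive=True)

bracket = (3*r**2*(x*y + y**2 - 2*z) - 2*x**2*y - 4*x*y**2
           + 2*x*y*z + x*y + 4*x*z - 2*y**3 + 2*y**2*z
           + 2*y**2 + 3*y*z - 4*z**2 - 2*z)
J_a = sp.Rational(2, 3) / z * bracket
L_d = sp.Rational(4, 3) / (y*z) * bracket

assert sp.simplify(L_d - 2*J_a/y) == 0   # L_d = 2 J_a / y
\end{verbatim}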
\begin{table}
\centering
\begin{tabular}{c||c|c|c|c|c|c|c|c}
\toprule
$\tau^-$ & $+G$ & $+J$ & $+K$ & $+L$ & $+G_a$ & $+J_a$ & $+K_a$ & $+L_d$\\
\hline
$\tau^+$ & $+G$ & $-J$ & $-K$ & $+L$ & $+G_a$ & $-J_a$ & $-K_a$ & $-L_d$\\
\bottomrule
\end{tabular}
\caption{Relative signs of the contributions to the differential rate for $\tau^-$ and $\tau^+$ decays.}
\label{tab:tauplus}
\end{table}
\section{Feasibility study at Belle and Belle II} \label{sec:fstudy}
In this section we outline our technique to estimate the sensitivity to the $\tau$ dipole moments via radiative leptonic $\tau$ decays. First, however, we will discuss the possibility, suggested in ref.~\cite{Laursen:1983sm}, of determining $\tilde{a}_{\tau}$ by taking advantage of the radiation zero which occurs in the radiative leptonic decays $\tau\to{l}\nu\nu\gamma$ for $c=-1$ (i.e., ${l}$ and $\gamma$ back-to-back in the $\tau$ rest frame) and maximal energy of the lepton ${l}$, i.e.\ $x^{\rm max}=2E^{\rm max}_{{l}}/m_{\tau}=1+r^2$. To this end, we analyzed a set of $\tau^+\tau^-$ events, where one $\tau$ decays to the radiative leptonic mode and the other $\tau$ decays to the ordinary (inclusive) leptonic mode:
$\tau^{\pm}\to{l}^{\pm}_1\nu\nu\gamma,~\tau^{\mp}\to{l}^{\mp}_2\nu\nu$,
with ${l}_{1,2}=e$ or $\mu$, and ${l}_{1}\neq{l}_{2}$ --- in short:
$({l}^{\pm}_1\gamma,~{l}^{\mp}_2)$.
We excluded
$(e^{\pm}\gamma,~e^{\mp})$ and $(\mu^{\pm}\gamma,~\mu^{\mp})$
events from our analysis because of the large background from
$e^+ e^- \to e^+ e^- \gamma$ and $e^+ e^- \to \mu^+ \mu^- \gamma$
processes. The analyzed events were produced by the KKMC/TAUOLA/PHOTOS generators~\cite{Jadach:1999vf,Jadach:1993hs,Barberio:1993qi} and processed by a GEANT3-based program~\cite{geant} in the conditions of the Belle experiment.
The sensitivity to $\tilde{a}_{\tau}$ is determined by the background suppression power $\varepsilon_{\rm sig}/\varepsilon_{\rm bg}$, where $\varepsilon_{\rm sig}$ is the detection efficiency for signal events and $\varepsilon_{\rm bg}$ is that for background events.
The main background comes from the SM radiative leptonic decays (characterized by $\tilde{a}_{\tau}=0$) as well as from $(\tau^+\to{l}^+_1\nu\nu;~\tau^-\to{l}^-_2\nu\nu)\gamma_{\mysmall ISR}$ events with initial state radiation (ISR) towards large polar angles in the detector.
As the fraction of the signal events in the vicinity of the radiation zero point is very small, we extended the signal
region to maximize $\varepsilon_{\rm sig}/\varepsilon_{\rm bg}$:
\begin{equation}
0.1<\cos{\widehat{({l}_2,\gamma)}}<0.8, \quad
\cos{\widehat{({l}_1,\gamma)}}<-0.9,\quad
{\rm and}~E_{\gamma}>0.5~{\rm GeV}.
\end{equation}
Even in this case, the $\tilde{a}_{\tau}$ upper limit (UL) which can be achieved with the whole Belle statistics of about $0.9\times 10^9$ $\tau$ pairs is only UL$(\tilde{a}_{\tau})\simeq 2$. We found that the phenomenon of radiation zero
has little influence on $\varepsilon_{\rm sig}/\varepsilon_{\rm bg}$.
The dynamical structure of the signal events, determined by $G_a(x,y,c)$ (for this specific analysis, also terms of $O(\tilde{a}_{\tau}^2)$ were kept), allows us to achieve only $\varepsilon_{\rm sig}/\varepsilon_{\rm bg}\sim 100$. At the same time, the suppression of the signal branching fraction for $\tilde{a}_{\tau}=1$ is ${\cal B}_{\rm bg}/{\cal B}_{\rm sig}\simeq 2000$, i.e.\ about one order of magnitude larger than $\varepsilon_{\rm sig}/\varepsilon_{\rm bg}$. As a result, it is not possible to improve significantly on the $\tilde{a}_{\tau}\sim 1$ sensitivity. Our feasibility study in the conditions of the Belle experiment therefore shows that the radiation zero method does not help to improve the present limits on $\tilde{a}_{\tau}$.
We will now outline our method to extract $\tilde{a}_{\tau}$ and $\tilde{d}_{\tau}$, which consists in the use of an unbinned maximum likelihood fit of events in the full phase space. The main idea is to consider events where both $\tau$ leptons decay to particular final states. One $\tau^{\mp}$ (signal side) decays to the radiative leptonic mode and the other $\tau^{\pm}$ (tag side) decays to some well-investigated mode with a large branching fraction.
As a tag decay mode we choose $\tau^{\pm}\to\rho^{\pm}\nu\to\pi^{\pm}\pi^0\nu$ ($\rho$-tag mode), which also serves as a spin analyser and makes us sensitive to the spin-dependent part of the differential decay rate of the signal decay through the spin-spin correlation of the $\tau$ leptons~\cite{Tsai:1971vv}. With this technique we analyzed $({l}^{\mp}\nu\nu\gamma,~\pi^{\pm}\pi^0\nu)$ events in the 12-dimensional phase space (PS), see figure~\ref{fig:rhotag}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\textwidth]{rhotag}
\caption{The $\rho$-tag mode used in the unbinned maximum likelihood fit. Events are analyzed in the $12$-dimensional phase space of $({l}^{\mp},\gamma,\pi^{\pm},\pi^0)$. Undetected neutrinos are not drawn.}
\label{fig:rhotag}
\end{figure}
The probability density function (PDF) is constructed from the total differential cross section
$\frac{d\sigma}{\rm dPS}(e^+ e^-\to \tau^{\mp}\tau^{\pm}\to ({l}^{\mp}\nu\nu\gamma,~\pi^{\pm}\pi^0\nu))$,
which is given by the sum of a spin-independent term and spin-spin correlation term. To write the total differential cross section we followed the approach developed in refs.~\cite{Fetscher:1990su,Tamai:2003he}. The differential cross section of $e^+e^- \to \tau^+(\hat{n}^+) \, \tau^-(\hat{n}^-)$ in the center-of-mass system (c.m.s.) is given by~\cite{Tsai:1971vv} (asterisks indicate parameters measured in the c.m.s.):
\begin{equation}
\frac{d\sigma(\hat{n}^-,\hat{n}^+)}{d\Omega^*_{\tau}} =
\frac{\alpha^2\beta^*_{\tau}}{64E^{*2}_{\tau}} \left[ D_0+D_{ij} \, n^-_i n^+_j \right],
\label{eqtotdif}
\end{equation}
where
$D_0 = 1+\cos^2{\theta^*_\tau}+\sin^2{\theta^*_\tau}/\gamma^{*2}_{\tau}$,
\begin{equation}
D_{ij} = \left( \begin{array}{@{}c@{~~}c@{~~}c@{}}
(1+\frac{1}{\gamma^{*2}_{\tau}})\sin^2{\theta^*_\tau} & 0 & \frac{1}{\gamma^*_{\tau}}\sin{2\theta^*_\tau} \\
0 & -\beta^{*2}_{\tau}\sin^2{\theta^*_\tau} & 0 \\
\frac{1}{\gamma^*_{\tau}}\sin{2\theta^*_\tau} & 0 & 1+\cos^2{\theta^*_\tau}-\frac{1}{\gamma^{*2}_{\tau}}\sin^2{\theta^*_\tau} \\
\end{array} \right),
\end{equation}
and $\hat{n}^\mp$ is the polarisation vector of $\tau^{\mp}$ in its rest frame (unit three-vector along the $\tau^{\mp}$ spin direction with components $n^\mp_i$). Moreover, $E^*_{\tau}$, $\gamma^*_{\tau}=E^*_{\tau}/m_{\tau}$, $\beta^*_{\tau}=|\vec{p}_{\tau}^{~*}|/E^*_{\tau}$ and $\theta^*_\tau$ are the energy, Lorentz factor, velocity of the $\tau$ and the polar angle of the $\tau^-$ three-momentum $\vec{p}_{\tau}^{~*}$, respectively. The signal differential decay width, discussed earlier in section~\ref{sec:effectiveltau}, can be written in the form (with an unimportant, for this analysis, total normalization constant $\kappa_{{l}\gamma}$):
\begin{equation}
\frac{d\Gamma(\tau^{\mp}(\hat{n}^{\mp})\to{l}^{\mp}\nu\nu\gamma)}
{dx \, dy \, d\Omega_{{l}} \, d\Omega_{\gamma}}= \kappa_{{l}\gamma}
\left[
A(x, y, z)
\pm\hat{n}^{\mp} \! \cdot \vec{B}^{\mp}(x, y, z)
\right],
\end{equation}
where
\begin{align}
A(x, y, z) = & \,\, x\beta_{l}\biggl[G(x, y, c, y_0) + \tilde{a}_{\tau}G_a(x, y, z)\biggr],
\\
\vec{B}^{\mp}(x, y, z) = & \,\, x\beta_{l}\biggl[\hat{p}_{{l}}x\beta_{l} \left(J+\tilde{a}_{\tau}J_a\right)
+ \hat{p}_{\gamma} y \left(K+\tilde{a}_{\tau} K_a\right) \biggr. \notag
\\
& \biggl. \,\,
+ (\hat{p}_{{l}}\times\hat{p}_{\gamma}) x y \beta_{l}\left(\pm L + (m_{\tau}/e) \tilde{d}_{\tau} L_d\right)\biggr].
\end{align}
The $\tau^{\pm}(\hat{n}^{\pm}) \to \rho^{\pm}(K) \, \nu(q) \to \pi^{\pm}(p_1) \, \pi^0(p_2) \, \nu(q)$ differential
decay rate is (with a total normalization constant $\kappa_{\rho}$):
\begin{equation}
\frac{d\Gamma(\tau^{\pm}(\hat{n}^{\pm})\to\pi^{\pm}\pi^0\nu)}
{dm^2_{\pi\pi} \, d\Omega_{\rho} \, d\Omega_{\pi\rho}} = \kappa_{\rho}
\left[A' \mp \hat{n}^{\pm} \!\cdot \vec{B'}\right] W(m^2_{\pi\pi}),
\end{equation}
where
\begin{align}
& A'= 2 \, (q \cdot Q) \, Q_0-Q^2q_0, & &\vec{B'} =Q^2\vec{K} + 2\, (q \cdot Q) \, \vec{Q}, \notag
\\
& Q = p_1 - p_2, & &K = p_1 + p_2, \notag
\\
& W(m^2_{\pi\pi}) = |F_{\pi}(m^2_{\pi\pi})|^2 \frac{|\vec{p}_{\rho}| |\vec{p}_{\pi\rho}|}{m_{\tau}m_{\pi\pi}},
& &m^2_{\pi\pi}=K^2, \notag
\\
& |\vec{p}_{\rho}| = \frac{m_{\tau}}{2} \left(1-\frac{m^2_{\pi\pi}}{m^2_{\tau}}\right), &
&|\vec{p}_{\pi\rho}| = \frac{\lambda^{\frac{1}{2}}(m^2_{\pi\pi},m^2_{\pi},m^2_{\pi^0})}{2m_{\pi\pi}},
\end{align}
and $\lambda(x,y,z) \equiv x^2+y^2+z^2 -2xy-2xz-2yz$ is the K\"all\'en function. Also, $\vec{p}_\rho$ and $\Omega_\rho$ are the three-momentum and solid angle of the $\rho$ meson in the $\tau$ rest frame, $\vec{p}_{\pi\rho}$ and $\Omega_{\pi\rho}$ are the three-momentum and solid angle of the charged pion in the $\rho$ rest frame, and $F_{\pi}(m^2_{\pi\pi})$ is the pion form factor with the CLEO parameterisation~\cite{Urheim:1997ag}. As a result, the total differential cross section for $({l}^{\mp}\gamma,\rho^{\pm})$ events can be written as \cite{Tsai:1971vv}:
\begin{equation}
\frac{d\sigma({l}^{\mp}\gamma,\rho^{\pm})}
{dE_{{l}}\,d\Omega_{{l}}\,dE_{\gamma}\,
d\Omega_{\gamma}\,d\Omega_{\rho}\,dm^2_{\pi\pi}\,d\Omega_{\pi\rho}\,d\Omega^*_{\tau}} \,=\,
\kappa_{{l}\gamma}\kappa_{\rho} \,
\frac{\alpha^2\beta^*_{\tau}}{64E^{*2}_{\tau}}
\left[ D_0AA'-D_{ij}B^{\mp}_i B'_j \right] W(m^2_{\pi\pi}).
\end{equation}
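As a schematic illustration of how this weight might be assembled in an analysis code (our own sketch; \texttt{A}, \texttt{Ap}, \texttt{B}, \texttt{Bp} stand for the quantities $A$, $A'$, $\vec{B}^{\mp}$, $\vec{B'}$ defined above, evaluated as numbers and three-vectors at one phase-space point):
\begin{verbatim}
import numpy as np

def correlation_weight(theta, gamma, beta, A, Ap, B, Bp):
    """D0*A*A' - D_ij B_i B'_j for one phase-space point (c.m.s. variables)."""
    s2, c2, s2t = np.sin(theta)**2, np.cos(theta)**2, np.sin(2*theta)
    D0 = 1 + c2 + s2 / gamma**2
    D = np.array([[(1 + 1/gamma**2)*s2, 0.0,         s2t/gamma],
                  [0.0,                 -beta**2*s2, 0.0],
                  [s2t/gamma,           0.0,         1 + c2 - s2/gamma**2]])
    return D0 * A * Ap - B @ D @ Bp
\end{verbatim}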
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{tausystem}
\caption{Configuration of the two circles $C_\rho$ and $C_{{l}\gamma}$ on a unit sphere, which are determined by the decays $\tau^+ \to \rho^+ \nu$ and $\tau^- \to {l}^- \nu \bar{\nu} \gamma$, respectively. The kinematically allowed $\tau$ direction in the c.m.s. is given by the intersection between the circumference of $C_\rho$ and the spherical sector constrained by $C_{{l}\gamma}$.}
\label{fig:tausystem}
\end{figure}
In the c.m.s., the $\tau^\mp$ directions are restricted to an arc $(\Phi^*_A,\Phi^*_B)$. The neutrino mass constraint in the decay $\tau^+ \to \rho^+ \nu$ gives the $\tau^+$ production angle, $\Theta^*_\tau$, with respect to the $\rho$ direction $\hat{n}^*_\rho$. This relation indicates that the $\tau^+$ direction $\hat{n}^*_\tau$, which lies on a unit sphere, is on the circumference of a circle $C_\rho$ with radius equal to $\sin \Theta^*_\tau$, as shown in figure~\ref{fig:tausystem}. Similarly, the invariant mass $m_{\nu \bar{\nu}}>0$ of the two-neutrino system in the decay $\tau^- \to {l}^- \nu \bar{\nu} \gamma$ gives a constraint on $\Theta^{*'}_\tau$, the $\tau$ angle along the direction of the ${l}\gamma$ system. The inequality $m_{\nu \bar{\nu}}>0$ confines the vector $\hat{n}^*_\tau$ to be either inside or outside the circle $C_{{l}\gamma}$, depending on the kinematics. Therefore, in the c.m.s., the direction of the $\tau^\mp$ system is given by the intersection between the circumference of $C_\rho$ and the spherical sector constrained by $C_{{l}\gamma}$, i.e.\ the arc $(\Phi^*_A,\Phi^*_B)$.\footnote{We observed in the analysis that the constraint $m_{\nu\nu} < m_\tau - m_{l}$ did not provide additional information on the $\tau$ direction.}
Experimentally one measures particle parameters in the c.m.s. Therefore, defining
$\vec{X}=(|\vec{p}_{l}^{~*}|,\Omega^*_{{l}},|\vec{p}_{\gamma}^{~*}|,\Omega^*_{\gamma},|\vec{p}_{\rho}^{~*}|,\Omega^*_{\rho},m^2_{\pi\pi},\Omega_{\pi\rho})$,
the visible differential cross section is~\cite{Tamai:2003he}:
\begin{equation}
{\cal F} (\vec{X}) = \frac{d\sigma({l}^{\mp}\gamma,\rho^{\pm})}{d \vec{X}}
= \int_{\Phi^*_A}^{\Phi^*_B} \frac{d\sigma({l}^{\mp}\gamma,\rho^{\pm})}
{dE_{l}d\Omega_{l}dE_{\gamma} d\Omega_{\gamma}d\Omega_{\rho}
dm^2_{\pi\pi}d\Omega_{\pi\rho}d\Omega^*_{\tau}} \,\, J \,\,
d\Phi^*_{\tau},
\label{eqn:crosec}
\end{equation}
where the integration is done over the unknown $\tau$ direction, which is constrained to lie on the $(\Phi^*_A,\Phi^*_B)$ arc. Both angles $\Phi^*_A$ and $\Phi^*_B$ are calculated using parameters measured by the experiment. The Jacobian $J$ in eq.~\eqref{eqn:crosec} can be simplified as:
\begin{equation}
J = \left\vert \frac{\partial (E_{l},\Omega_{l},E_{\gamma},\Omega_{\gamma},\Omega_{\rho},\Omega^*_{\tau})}
{\partial (|\vec{p}_{l}^{~*}|,\Omega^*_{l},|\vec{p}_{\gamma}^{~*}|,\Omega^*_{\gamma},|\vec{p}_{\rho}^{~*}|,\Omega^*_{\rho},\Phi^*_{\tau})}\right\vert =
\biggl|\frac{\partial (E_l,\Omega_l)}{\partial (|\vec{p}_{l}^{~*}|,\Omega^*_{l})}\biggr| \,\, \biggl|\frac{\partial (E_{\gamma},\Omega_{\gamma})}{\partial (|\vec{p}_{\gamma}^{~*}|,\Omega^*_{\gamma})}\biggr| \,\, \biggl|\frac{\partial (\Omega_{\rho},\Omega^*_{\tau})}{\partial (|\vec{p}_{\rho}^{~*}|,\Omega^*_{\rho},\Phi^*_{\tau})}\biggr|,
\end{equation}
where
\begin{align}
& \biggl|\frac{\partial (E_{\alpha},\Omega_{\alpha})}{\partial (|\vec{p}_{\alpha}^{~*}|,\Omega^*_{\alpha})}\biggr| =
\frac{|\vec{p}_{\alpha}^{~*}|^2}{E^*_{\alpha} |\vec{p}_{\alpha}|},
\quad \mbox{with } \alpha = l, \gamma,\\
& \biggl|\frac{\partial (\Omega_{\rho},\Omega^*_{\tau})}{\partial (|\vec{p}_{\rho}^{~*}|,\Omega^*_{\rho},\Phi^*_{\tau})}\biggr| = \frac{m_\tau}{|\vec{p}_{\tau}^{~*}|}\frac{|\vec{p}_{\rho}^{~*}|}{E^*_\rho |\vec{p}_{\rho}|}.
\end{align}
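For completeness, these Jacobian factors are simple to code; the sketch below is our own illustration (starred momenta are in the c.m.s., unstarred ones in the respective $\tau$ and $\rho$ rest frames; the photon factor is obtained with $m=0$):
\begin{verbatim}
import math

def single_particle_factor(p_star, m, p_rest):
    # |p*|^2 / (E* |p|), for the charged lepton (m = m_l) or photon (m = 0)
    E_star = math.sqrt(p_star**2 + m**2)
    return p_star**2 / (E_star * p_rest)

def rho_tau_factor(p_rho_star, m_pipi, p_rho_rest, p_tau_star, m_tau=1.77686):
    # (m_tau/|p*_tau|) * |p*_rho| / (E*_rho |p_rho|)
    E_rho_star = math.sqrt(p_rho_star**2 + m_pipi**2)
    return (m_tau / p_tau_star) * p_rho_star / (E_rho_star * p_rho_rest)
\end{verbatim}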
In our feasibility study we developed a special generator of the signal $({l}^{\mp}\nu\nu\gamma,~\pi^{\pm}\pi^0\nu)$ events. For the unbinned maximum likelihood fit of the generated events, the PDF is constructed as:
\begin{equation}
{\cal P}(\vec{X})=\frac{{\cal F}(\vec{X})}{\int{\!\cal F}(\vec{X}) \, d\vec{X}}.
\end{equation}
Fitting samples of generated events corresponding to the amount of data available at Belle and expected at Belle II, we studied the sensitivities to the parameters $\tilde{a}_{\tau}$ and $\tilde{d}_{\tau}$.
Our results are collected in table~\ref{tabres}, where the sensitivities are shown for two cases: (\textit{i}) events are tagged by $\tau^{\pm}\to\rho^{\pm}\nu$ only ($\rho$-tag); (\textit{ii}) six decay modes
($\tau^{\pm}\to\rho^{\pm}\nu$, $\tau^{\pm}\to\pi^{\pm}\nu$,
$\tau^{\pm}\to\pi^{\pm}\pi^0\pi^0\nu$, $\tau^{\pm}\to\pi^{\pm}\pi^+\pi^-\nu$,
$\tau^{\pm}\to e^{\pm}\nu\nu$, $\tau^{\pm}\to\mu^{\pm}\nu\nu$) with a total branching fraction of about $90\%$ are used for the tag (full tag). In the full-tag case, the sensitivity increase is due to the statistical factor $\sqrt{90/25.5}=1.88$, compared to the $\rho$-tag case with ${\cal B}=25.5\%$.
We note that the integration over the arc $(\Phi^*_A,\Phi^*_B)$ inflates the uncertainty by a factor of $1.4$ in comparison with the case when the direction of the $\tau$ is known. Also, the inclusion of the spin-dependent part of the differential decay rate increases the sensitivity by a factor of about 1.5. It is interesting to note that the sensitivity for events with $\tau \to e \nu \bar{\nu} \gamma$ is two times worse than that for $\tau \to \mu \nu \bar{\nu} \gamma$ (with the same statistics).
Table~\ref{tabres} also shows, for comparison, the sensitivities to $\tilde{a}_{\tau}$ and $\tilde{d}_{\tau}$ obtained in the most precise previous studies at DELPHI~\cite{Abdallah:2003xd} and Belle~\cite{Inami:2002ah}, respectively. It can be clearly seen that the measurement of $\tilde{a}_{\tau}$ in $\tau$ radiative leptonic decays at Belle II with the full tag can improve the DELPHI result. On the other hand, the expected sensitivity to $\tilde{d}_{\tau}$ is still worse than the most precise measurement of $\tilde{d}_{\tau}$ performed at Belle in $\tau^+\tau^-$ pair production.
\begin{table}[htbp]
\centering
\caption{Sensitivities to $\tilde{a}_{\tau}$ and $\tilde{d}_{\tau}$ in $\tau$ radiative leptonic decays ($\rho$-tag and full-tag cases) which can be achieved with the whole data sample collected at Belle and planned for Belle~II. The present most precise results by DELPHI~\cite{Abdallah:2003xd} and Belle~\cite{Inami:2002ah} are shown in the last two columns. $(m_{\tau}/e)=9.0 \times 10^{13}(e \mathrm{\cdot cm})^{-1}.$}
\label{tabres}
\begin{tabular}{c|llllll}
\toprule
& Belle ($\rho$) & Belle~II ($\rho$) & Belle (full) & Belle~II (full) & DELPHI~\cite{Abdallah:2003xd} & Belle~\cite{Inami:2002ah} \\
\midrule
$\tilde{a}_{\tau}$ & $0.16$ & $0.023$ & 0.085 & 0.012 & 0.017 & --- \\
\midrule
$(m_{\tau}/e)\,\tilde{d}_{\tau}$ & $0.15$ & $0.021$ & 0.080 & 0.011 & --- & 0.0015 \\
\bottomrule
\end{tabular}
\end{table}
\section{Conclusions}\label{sec:conclusions}
The magnetic and electric dipole moments of the $\tau$ lepton are largely unknown. Several proposals have been presented in the past to study them, but the current sensitivity is only of $\mathcal{O}(10^{-2})$ for $a_{\tau}$ and $\mathcal{O}(10^{-3})$ for $d_{\tau}$. In this article we presented a new method to probe $a_{\tau}$ and $d_{\tau}$ using precise measurements of the differential rates of radiative leptonic $\tau$ decays at high-luminosity $B$ factories. In our approach, deviations of the $\tau$ dipole moments from the SM predictions are determined via an effective Lagrangian, thus yielding model-independent results. To this end, in section~\ref{sec:effectiveltau} we provided explicit analytic formulae for the relevant non-standard contributions to the differential decay rates generated by the effective operators contributing to the $\tau$ $g$$-$$2$~and EDM. These expressions, combined with the SM predictions recently computed at NLO in~\cite{Fael:2015gua}, can be compared with precise data to probe the $\tau$ dipole moments. Earlier proposals to determine the $\tau$ anomalous magnetic moment were examined in sections~\ref{sec:status} and~\ref{sec:fstudy}.
Our technique to estimate the sensitivity to the $\tau$ dipole moments via radiative leptonic $\tau$ decays was outlined in section~\ref{sec:fstudy}, where we presented a detailed feasibility study of our method in the conditions of the Belle and (upcoming) Belle~II experiments. The results of this study are summarized in table~\ref{tabres}. They show that our approach, applied to the planned full set of Belle~II data for radiative leptonic $\tau$ decays, has the potential to improve the present experimental bound on the $\tau$ $g$$-$$2$. By contrast, the foreseen sensitivity is not expected to lower the current experimental limit on the $\tau$ EDM.
\acknowledgments
We would like to thank A.~Crivellin, S.\ Rigolin, A.~Santamaria and Z.~Was for very useful discussions and correspondence.
S.E.\ and D.E.\ thank Prof.~H.~Aihara, C.~Ng and F.~Okazawa (University of Tokyo) for fruitful discussions and great help in the development of the necessary software.
The work of M.F.\ is supported by the Swiss National Science Foundation.
M.P.\ also thanks the Department of Physics and Astronomy of the University of Padova for its support. His work was supported in part by the Italian Ministero dell'Universit\`a e della Ricerca Scientifica under the program PRIN 2010-11, and by the European Program INVISIBLES (contract PITN-GA-2011-289442).
\bibliographystyle{JHEP}
\footnotesize
\section{Introduction}
\label{sec:intro}
With the advent of complex silicon detectors such as monolithic sensors or hybrid detectors with 3D sensors, detailed end-to-end Monte Carlo simulations of such devices have become an indispensable tool for detector R\&D.
They are used to optimize the detector design prior to production, to improve the understanding of the signal formation process, or to interpret data of detectors already in operation.
Over the years, many different Monte Carlo codes have been developed, but they were either dedicated to a specific experiment, specialized on a certain silicon detector type, or lacking a modular approach that would allow for a wide range of applications.
The \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace pixel detector simulation framework~\cite{apsq} has been developed as a versatile tool with an emphasis on longevity.
It caters to the diverse needs of the R\&D community by embracing a modular scheme that allows the simulation pipeline to be tailored to the individual requirements of the detectors and applications.
This section introduces the guiding development principles and covers some of the recent additions to the framework.
\subsection{Guiding Principles of Development}
In order to ensure sustainable development and flexible software, the following principles have been applied to the development of \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace.
\paragraph{Integration of Existing Toolkits}
Many powerful tools have been developed and are employed intensively in detector R\&D.
\texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace leverages their capabilities by providing interfaces to integrate them closely into the simulation.
One prominent example is Geant4~\cite{geant4, geant4-2, geant4-3} which is widely used to simulate the interaction of particles with matter.
In \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace, Geant4 is one of several options to simulate the initial energy deposition in the sensor by ionizing radiation.
Geant4 is an extensive toolkit which allows for the detailed simulation of many interaction processes in arbitrary, user-defined geometries.
However, this complexity is sometimes overwhelming for new users.
The relevant modules in \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace therefore provide an abstraction layer that auto-generates the geometrical models of the detectors, translates the key-value configurations of relevant parameters to the respective Geant4 interface commands, and takes care of calling the Geant4 kernel.
Another important tool in silicon detector R\&D are TCAD simulations that solve Poisson's equation in the sensor using detailed doping information.
The resulting field configurations in the sensor allow to draw conclusions on the sensor behavior and to optimize the design.
By providing the possibility of importing static fields from TCAD simulations to complement the Monte Carlo models, \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace enables time-resolved simulations of the signal formation on a much faster timescale than transient TCAD simulations while in addition including stochastic effects such as Landau fluctuations.
\paragraph{Validation of Algorithms}
Simulations provide insights into physical processes but require detailed validation before being useful for predictions.
The development procedures set in place for \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace aim to ensure that new algorithms are validated against either other simulation tools or data before they are released.
Validations conducted by the core development team are published as a series of papers~\cite{apsq, allpix-hrcmos, allpix-transient}.
In addition, the continuous integration pipeline of the project repository runs a suite of automated tests for each new version of the framework in order to ensure that the existing algorithms continue to work as expected.
\paragraph{Low Entry Barrier for New Users}
\texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace attempts to facilitate quick starts and first results by providing an extensive documentation in form of a user manual~\cite{apsq_manual}, source code documentation~\cite{apsq-website} and a set of examples for different application scenarios.
The framework is controlled via human-readable configuration files and has support for physical units.
In addition, mailing lists and a forum help connecting users and provide an opportunity to discuss problems.
Since no prior knowledge of programming is required, the framework has already been successfully used in university courses and summer schools on detector instrumentation.
\paragraph{Maintainability of Code}
The development of \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace follows best practices for software development.
Extensive code reviews are conducted for all contributions via merge requests, and the code base strictly enforces coding conventions as well as consistent formatting of source code.
Static code analysis is performed regularly to detect possible errors and problems at an early stage.
\texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace is published under the permissive open-source MIT license to encourage use in both academia and industry.
\subsection{Recent Developments}
Recently, a new major version -- \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace 2.0 -- has been released with many changes to the core parts of the software.
Some of the new features will be briefly discussed in the following while a more exhaustive list can be found in the corresponding release notes~\cite{apsq2-release}.
\paragraph{Multithreading} With the new major version, \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace supports event-based multithreading.
Events are placed in a central task queue and worker threads pick up new events for processing whenever the previously simulated event is finished.
The number of worker threads can be configured at run-time.
A buffering mechanism is provided in case modules require a deterministic order of events.
In this case, events which are not to be processed by the respective module are cached in the buffer until all previous events have been processed.
The implementation of multithreading in \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace allows to retain strong reproducibility.
Each simulation conducted with the same configuration and the same seed for the pseudo-random number generators will yield the exact same result, independent of the number of workers chosen or the load and scheduling of the host machine.
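The reproducibility mechanism can be illustrated with a minimal sketch (plain Python of ours, not \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace code): deriving each event's random seed from a master seed and the event number makes the outcome independent of the number of workers and of the scheduling order.
\begin{verbatim}
import random
from concurrent.futures import ThreadPoolExecutor

MASTER_SEED = 12345

def simulate_event(event):
    # event-local RNG: seed depends only on the master seed and event number
    rng = random.Random(MASTER_SEED * 1_000_003 + event)
    return event, rng.gauss(0.0, 1.0)   # stand-in for a simulated quantity

with ThreadPoolExecutor(max_workers=4) as pool:
    results = sorted(pool.map(simulate_event, range(8)))
# 'results' is identical for any choice of max_workers
\end{verbatim}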
\paragraph{Charge Carrier Lifetime \& Recombination}
With the new version it is possible to load doping profiles and to enable doping-dependent lifetime calculations for charge carriers.
This is especially relevant in situations where low electric field regions and high doping concentrations are present, leading to a very fast recombination of charge carriers with the silicon lattice.
Several recombination models have been implemented and can be configured via the configuration file of the simulation.
\paragraph{Charge Carrier Mobility}
Prior to \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace 2.0, only one charge carrier mobility model was implemented and chosen for all simulations~\cite{jacoboni}.
With the new release, a set of different models, with dependencies not only on the electric field but also on the doping concentration if available, has been introduced and can now be selected via the configuration file.
This is of special interest for sensors with high electric fields or strong doping gradients which affect the mobility and velocity of the charge carriers.
In addition to several models from literature, an \emph{extended Canali} model has been implemented, combining a model with doping concentration dependence~\cite{masetti} and one with a saturation velocity~\cite{canali}.
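A minimal sketch of such a combined model is shown below (our own illustration, not the framework's implementation; the Masetti and Canali parameter values are the commonly quoted literature numbers for electrons in silicon at room temperature and are assumptions here):
\begin{verbatim}
import math

def mu_masetti_e(N):
    # low-field electron mobility [cm^2/Vs] vs doping N [cm^-3] (assumed params)
    mu_min, mu_max, mu_1 = 52.2, 1417.0, 43.4
    C_r, C_s, alpha, beta = 9.68e16, 3.43e20, 0.680, 2.00
    return (mu_min + (mu_max - mu_min) / (1 + (N / C_r)**alpha)
            - mu_1 / (1 + (C_s / N)**beta))

def mu_extended_canali_e(E, N):
    # high-field extension with velocity saturation (assumed params, 300 K)
    mu0, v_sat, beta_c = mu_masetti_e(N), 1.07e7, 1.109
    return mu0 / (1 + (mu0 * E / v_sat)**beta_c)**(1 / beta_c)

print(mu_extended_canali_e(2e4, 1e15))   # ~4e2 cm^2/Vs at 20 kV/cm
\end{verbatim}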
\section{Application Examples}
Over the past years, \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace has been used in a variety of different application scenarios.
Many examples have been presented by users at the \emph{2nd Allpix Squared User Workshop}~\cite{apsq-ws}, some of which are briefly summarized in the following.
\subsection{Signal formation in MAPS Prototypes}
\texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace has been used to conduct detailed transient simulations of the signal formation process in monolithic active pixel sensors~\cite{apsqws-cmos}.
Electrostatic fields and weighting potentials from TCAD simulations are used in conjunction with a simulation of the full detection process to obtain realistic time-response distributions for a variety of sensor designs.
Furthermore, \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace simulations have been used to determine the substrate resistivity of a sensor prototype by comparing the detector performance obtained from simulations with different resistivities to that measured in test beam experiments.
Some of the results of the simulations have been published~\cite{allpix-hrcmos, allpix-transient}.
\subsection{EPICAL-2: Electromagnetic Pixel Calorimeter}
It is foreseen to add a forward calorimeter to the ALICE experiment at the LHC to enhance the experiment's physics reach~\cite{ALICECollaboration:2719928}.
A current technology demonstrator, called \emph{EPICAL-2}, consists of 24 layers of ALPIDE sensors~\cite{MAGER2016434} interleaved with \SI{3}{mm} tungsten absorbers and has been simulated with \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace~\cite{apsqws-epical}, using the direct interface to Geant4 to simulate the shower development.
Good agreement between data recorded in test beam experiments and the simulation has been found.
The simulation allowed the study of the sensor response to the shower particles, including the longitudinal and transversal shower profiles.
Several adjustments of simulation parameters, such as a more realistic beam profile and energy spectrum, are underway and a publication is in preparation.
\subsection{Dual-Sided Micro-Structured Neutron Detector}
Another application of silicon detector Monte Carlo simulations is the description of charge collection in a dual-sided micro-structured neutron detector~\cite{apsqws-dsmsnd}.
Here, trenches are etched into a silicon sensor and back-filled with LiF, which acts as a neutron conversion material.
The $\alpha$ and triton secondary particles emerging from the reaction of lithium with thermal neutrons then create charge carriers in the silicon sensor, and \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace has been used to simulate the charge carrier motion and signal formation.
Some of the simulation results have been published~\cite{dsmsnd}.
\section{Current Developments}
Currently, many features are under development and a new major release is in preparation.
In the following, a few highlights from the current development cycle are presented.
\subsection{Hexagonal Pixel Geometries}
Initially the geometry sub-system of the framework focused on rectangular pixels or strips in a regular matrix pattern.
However, different pixel shapes can be beneficial for certain sensor designs and a more flexible geometry is in preparation.
Most prominently, this will allow the simulation of hexagonal pixels and honeycomb matrices, which are interesting for a number of applications.
Hexagonal pixel shapes avoid problematic field regions in the pixel corners by reducing the maximum distance from the pixel center to its boundary while maintaining the same area.
Furthermore, hexagons possess a symmetry closer to that of a circle and therefore feature a more uniform sensor response over the pixel area.
The implementation in \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace allows the usage of different hexagon orientations as well as regular or irregular hexagon shapes with different pitches along the grid axes.
In addition, other geometries making use of the more flexible framework are in preparation, such as the radial strip sensors used, for example, in the ATLAS ITk endcap detectors~\cite{CERN-LHCC-2017-005}.
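The equal-area argument above is easy to quantify (a back-of-the-envelope sketch of ours): for unit pixel area, the center-to-vertex distance of a regular hexagon is about $12\%$ smaller than the center-to-corner distance of a square.
\begin{verbatim}
import math

A = 1.0
d_square = math.sqrt(A) / math.sqrt(2)           # half-diagonal of a square
R_hex = math.sqrt(2 * A / (3 * math.sqrt(3)))    # hexagon circumradius
print(d_square, R_hex, 1 - R_hex / d_square)     # ~0.707, ~0.620, ~0.12
\end{verbatim}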
\subsection{Impact Ionization}
An important effect in the presence of large electric fields in silicon sensors is charge multiplication through impact ionization.
Currently, different ionization models are being implemented and tested in \texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace, and an interface to the \emph{Weightfield2} program~\cite{weightfield} is foreseen.
Similar to the mobility and recombination models, the impact ionization model as well as the multiplication threshold field will be selectable from the main configuration file of the simulation.
Currently, the implementation is undergoing detailed testing and comparison both with other simulation packages and reference data.
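To give a feeling for the size of the effect, a schematic gain estimate for a uniform field can be written in a few lines (our own sketch, not the framework's implementation; the van Overstraeten--de Man-like electron parameters below are assumed values):
\begin{verbatim}
import math

def gain_uniform_field(E, d_um):
    # schematic electron gain over a drift distance d in a uniform field E
    a, b = 7.03e5, 1.231e6             # 1/cm and V/cm (assumed values)
    alpha = a * math.exp(-b / E)       # impact-ionisation coefficient [1/cm]
    return math.exp(alpha * d_um * 1e-4)

print(gain_uniform_field(2e5, 5))      # gain ~2 over 5 um at 200 kV/cm
\end{verbatim}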
\section{Summary \& Outlook}
Silicon detector Monte Carlo simulations are a vital tool to deepen the understanding of detector performance and to allow its interpretation.
\texorpdfstring{\ensuremath{\text{Allpix}^2}}{Allpix\textasciicircum 2}\xspace is a flexible simulation framework for this purpose, which integrates well with existing toolkits, applies validated algorithms and is easy to get started with.
It has a clean and solid code base, provides comprehensive documentation and is used in many areas, of which the simulation of CMOS sensors, electromagnetic calorimeters, or micro-structured neutron detectors are only a few examples.
The framework has seen continuous development and support over several years and a new major version has recently been released, bringing new physics models as well as multithreading capabilities.
Several new features are already underway and will be published in future versions of the framework.
\section{Introduction}
Parsing expression grammars are a recognition-based system for parsing of formal languages. They were defined by Ford \cite{ford2004parsing}, who showed equivalence with earlier parsing systems by Birman and Ullman \cite{birman1970parsing,birman1970tmg} that are able to recognise the class of \emph{top-down parsing languages} (TDPLs \cite{aho1972theory}).
As a language formalism, PEGs offer an attractive syntax and an efficient linear-time parsing algorithm which is nonetheless simple to implement. This led to a recent trend, which pushes for the adoption of PEGs, both as a theoretical subject \cite{chida17:_linear_parsin_expres_gramm,garnock-jones18:_recog_gener_terms_deriv_parsin_expres_gramm,henglein17:_peg,mizushima10:_packr,Moss:2017ux,Redziejowski:2013bw,Redziejowski:bc,redziejowski18:_tryin_under_peg}, and as a practical tool for parser generators \cite{Becket:2008vb,grimm2006better,Ierusalimschy:2009hl,koprowski11:_trx,kuramitsu16:_fast_flexib_declar_const_abstr,laurent15:_parsin,maidl16:_error_parsin_expres_gramm,Matsumura:2015uu,Medeiros:2008tya}.
See Ford's webpage \cite{fordwebpage} for an extensive bibliography of work around PEGs.
The influence of PEGs is illustrated by the surprising fact that, despite having been introduced only fifteen years ago, the number of available PEG-based parser generators already seems to nearly match or even supersede the number of parser generators based on any other single parsing method, even when compared with methods which are many decades older.\footnote{We estimate this to be true, based on consulting the Wikipedia page ``Comparison of parser generators'', and searching GitHub for ``parser generator X'', and then counting how many projects appear which use a given method X. Doing so, one obtains the following numbers (ca. September 2019):
\begin{center}
\begin{tabular}{r|c|c|c|c|c|c}
& LR & LL & LALR & GLR & Earley & \textbf{PEG}\\
Wikipedia & 26 & 33 & 63 & 23 & 7 & \textbf{48}\\
GitHub & 62 & 86 & 77 & 10 & 9 & \textbf{122}
\end{tabular}
\end{center}} This seems to be due to the simplicity of the formalism, which allows for the quick appearance of many small DIY projects; the situation is reversed if one limits one's attention to high-quality projects, and there does not yet appear to be any serious global tendency to replace older technologies by PEGs. Nonetheless, a few high-quality PEG-based parser generators do exist (e.g. \emph{rats!} \cite{grimm2006better}, or the \emph{Scala Standard Parser-Combinator Library}), and there was at least one serious, influential attempt at creating a programming language which intrinsically relied on PEG as a parsing technology --- the \emph{Fortress} programming language \cite{steele1999growing}, which was being developed by Guy Steele's team at Sun Microsystems. The project is now defunct, but Fortress was once considered as a possible \emph{next-generation} replacement for the \emph{Java} programming language \cite{flood2008fortress}!
Despite this enthusiasm for PEGs, we have also started seeing some objections of a theoretical nature. On one hand, proving the correctness of a given parsing expression grammar is often more difficult than one would like, even for simple examples\footnote{For example, the relatively simple grammar for the $a^n b^n c^n$ language which appears in Ford's original paper \cite{ford2004parsing}, has a (fixable) bug, which eluded discovery for over a decade (including to us, when we read Ford's paper) until the bug was pointed out by a recent paper of Garnock-Jones et al.~\cite{garnock-jones18:_recog_gener_terms_deriv_parsin_expres_gramm}.}. This makes PEGs somewhat problematic as a model of formal languages. On the other hand, there is no natural example of a language which is proven not to have PEGs. We believe that the present work will help in understanding why this is the case.
A first naive look at PEGs may suggest that their computational power should be roughly similar to that of deterministic context-free grammars \cite{ford2004parsing}. Indeed it is known that deterministic context-free languages have PEGs \cite{birman1970parsing}. But already Aho and Ullman \cite{aho1972theory} had shown that the $a^n b^n c^n$ language, which is not context-free, is still a TDPL, and hence has a PEG \cite{ford2004parsing}.
One may still hope that the computational power of PEGs can be contained, in some way, akin to how we can use pumping lemmas to separate the Chomsky hierarchy (e.g. \cite{bar1964formal,li1995new,Hayashi:aa,yu1989pumping,amarilli2012proof}). The following question appears in Aho and Ullman's book \cite{aho1972theory}, and in Ford's article \cite{ford2004parsing}:
\begin{center}
\em Is there a context-free language without a parsing expression grammar?
\end{center}
\noindent
It is possible to prove that if any such language exists, then Greibach's \emph{hardest context-free language} ${\mathcal H}$~\cite{greibach1973hardest} also has no PEG. So the above problem is equivalent to asking for a proof that ${\mathcal H}$ has no parsing expression grammar. But no PEG is known, even for the much simpler language of \emph{palindromes}. The following questions are both open:
\begin{center}\em
Can a parsing expression grammar recognise the language of palindromes?
\medskip
Is there any linear-time language without a parsing expression grammar?
\end{center}
In fact, the only method we know to prove that a language has no PEG is the time-hierarchy theorem of complexity theory \cite{hartmanis1965computational}: using diagonalisation, one may construct some language $L_2$ which is decidable, say, in time $n^2$ (by a random-access machine), but not in linear time; since every PEG language can be recognised in linear time, using the tabular parsing algorithm of Birman and Ullman \cite{birman1970parsing} (or packrat parsing \cite{ford02:_packr,Ford:A6T0y0WG}), there will be no parsing expression grammar for $L_2$.
This stands in stark contrast with our understanding of, say, context-free languages. In that scenario, one may also construct a language $L_4$ which is decidable in time $n^4$, but not in time $n^3$, and hence $L_4$ cannot be context-free (since the CYK algorithm decides any context-free language in time $n^3$, see, e.g., Hopcroft's book \cite{hopcroft2001introduction}). But this brings us no real insight into what it means to be context-free. To understand this, we make use of \emph{pumping lemmas}, and using such lemmas we can easily provide, say, a linear-time-decidable language which is not context-free. A pumping lemma implies a serious limitation on the computational power of context-free languages, which does not apply to universal models of computation, such as Turing machines or random-access machines.
Our current understanding of universal computation, by contrast, is extremely poor. For example, it is a longstanding open problem to show that linear-time random-access machines cannot be simulated by two-tape Turing machines in linear time, even though it seems intuitive that this should be true. Indeed this problem is well beyond the current state of the art in computational complexity, where such lower bounds are notoriously difficult to come by. It is also an open problem to provide any context-free language which cannot be decided by a two-tape Turing machine in linear time --- for one-tape Turing machines such a separation is known\footnote{This was first proven for palindromes; see Li and Vitanyi \cite[][\S6.1 and \S6.13]{li1995new}.}.
\medskip
A principal claim of this article is that the recognition procedure underlying parsing expression grammars is, in some sense, ``universal'', and so it will be as difficult to understand as that of a multi-tape Turing machine. A solution to the above questions, thus, may well require a breakthrough in our ability to prove computational complexity lower-bounds.
\medskip
With this in mind, the layout of the article is as follows. In Section \ref{sec:preliminaries}, we provide a formal definition of PEGs, and in Section \ref{sec:example-PEGs} we show a few examples of PEGs with surprising behaviour, and of languages which, unexpectedly, have PEGs. This includes the language of palindromes whose length is a power of two, and it is also shown that PEGs can do a form of counting.
In Section \ref{sec:scaffolding-automata}, we describe a new computational model, the \emph{scaffolding automaton}, and show that it exactly characterises the computational power of PEGs. This is our main result, and provides what we believe to be the right machine model for parsing expression grammars.
We will make good use of this characterisation in Section \ref{sec:applications}, where we show the following results.
\begin{itemize}
\item We revisit the example languages of Section \ref{sec:example-PEGs}, and construct scaffolding automata for them, for the sake of becoming familiar with the model.
\item We show that PEGs are computationally ``universal'', in the following sense: take any computable function $f:\{0, 1\}^\ast\to\{0, 1\}^\ast$; then there exists a computable function $g: \{0, 1\}^\ast \to {\mathbb N}$ such that $$\{ f(x) \$^{g(x)} x \mid x \in \{0, 1\}^\ast \}$$ has a PEG. This result may be used to construct a PEG language which is complete for $\mathsf{P}$ under logspace reductions. This stands in contrast to context-free languages, which cannot be $\mathsf{P}$-complete under logspace reductions unless $\mathsf{P} \subseteq \mathsf{NC}_2$.
\item We show that there can be no pumping lemma for PEGs. There is no total computable function $A$ with the following property: for every PEG $G$, there exists $n_0$ such that for every string $x \in {\mathcal L}(G)$ of size $|x| \ge n_0$, the output $y = A(G, x)$ is in ${\mathcal L}(G)$ and has $|y| > |x|$.
\item We show that PEGs are strongly non-real-time for Turing machines: there exists a language with a PEG, such that neither it nor its reverse can be recognised by any multi-tape online Turing machine which is allowed to do only $o(n/(\log n)^2)$ steps after reading each input symbol.
\end{itemize}
\section{Preliminaries}\label{sec:preliminaries}
In this section we will cover some notation, and give a formal definition of parsing expression grammars.
\paragraph{Notation.} For each $k\in {\mathbb N}$, let $(k)_2 \in \{0, 1\}^\ast$ be its shortest binary representation, and write $(k)_2^r$ for the reversal of that binary representation.
An \emph{alphabet} $\Gamma$ is a finite set of symbols such that $\varnothing \notin \Gamma$.
For a natural number $n \ge 0$, we denote $[n] = \{0, \ldots, n\}$, $[n) = \{0, \ldots, n-1\}$, and $(n] = \{1,\ldots,n\}$. We will use $\lambda$ to denote the empty word, and $\varepsilon$ to denote a parsing expression which accepts the empty word.
\begin{definition}\label{def:parsing-expressions}
Let $\Sigma, \mathsf{NT}$ be two disjoint alphabets; the symbols in $\Sigma$ are called \emph{terminal} symbols, and those in $\mathsf{NT}$ are called \emph{non-terminal} symbols. Then, the set ${\mathcal E}(\Sigma, \mathsf{NT})$ of \emph{parsing-expressions over $\Sigma$ and $\mathsf{NT}$} is defined inductively.
\begin{itemize}
\item At the base of the induction we have $\Sigma \cup \mathsf{NT} \cup \{ {\eps}, {\mathsf{FAIL}} \} \subseteq {\mathcal E}(\Sigma, \mathsf{NT})$.
\item If $e \in {\mathcal E}(\Sigma, \mathsf{NT})$, we will have $\text{\tt !} e$ and $\text{\tt \&} e$ in ${\mathcal E}(\Sigma, \mathsf{NT})$.
\item If $e_1, e_2 \in {\mathcal E}(\Sigma,\mathsf{NT})$, we will have $e_1 e_2$ and $e_1 / e_2$ in ${\mathcal E}(\Sigma, \mathsf{NT})$.
\end{itemize}
\end{definition}
\begin{definition}\label{def:peg}
A \emph{parsing expression grammar} ${\mathcal G}$ is a tuple $\langle \Sigma, \mathsf{NT}, R, S\rangle$, where
\begin{itemize}
\item $\Sigma$ is an alphabet of so-called \emph{terminal symbols}.
\item $\mathsf{NT}$ is an alphabet of so-called \emph{non-terminal symbols}, disjoint from $\Sigma$.
\item $R: \mathsf{NT} \to {\mathcal E}(\Sigma, \mathsf{NT})$ is a function defining the \emph{rules} of ${\mathcal G}$, and associates a $(\Sigma,\mathsf{NT})$-parsing-expression to each non-terminal symbol.
\item $S \in \mathsf{NT}$ is the \emph{starting non-terminal}.
\end{itemize}
\end{definition}
\bigskip\noindent
When writing down a parsing expression grammar, the notation $A \leftarrow e$ is used to signify $R(A) = e$. The reason one uses the left arrow notation is to emphasise that PEGs correspond to a \emph{recognition procedure}, and are not to be thought of as a generative model.
\bigskip\noindent
Ford \cite{ford2004parsing} defines parsing expressions that allow for various operations, such as the \emph{zero-or-more repetitions} operator ``{\tt *}'', or the \emph{any character} symbol ``{\tt .}''. As explained in Ford's paper \cite{ford2004parsing}, these operators can be expressed by using the operators appearing in Definition \ref{def:parsing-expressions}, together with the grammars of Definition \ref{def:peg}. This is similar to how one would define such operators using context-free grammars, so we will not explicitly include these operators as part of Definition \ref{def:parsing-expressions}. For the sake of example, the \emph{zero-or-more repetitions} operator $A^{\text{\tt *}}$, applied to a non-terminal $A$, may be replaced by a new non-terminal $\mathsf{Astar}$ together with the rule $\mathsf{Astar} \leftarrow A\; \mathsf{Astar} \;/\; \varepsilon$.
\bigskip\noindent
\raisebox{-.1cm}{\HandRight}\ \
The \emph{any character} symbol ``{\tt .}'', which we will be using extensively throughout, may be replaced with $(a / b / \ldots)$ for each terminal symbol $a, b, \ldots$ of $\Sigma$. After we define the recognition procedure underlying a parsing expression grammar, in Definition \ref{def:recognition} below, it may be seen that the parsing expression ``$\text{\tt !} {\tt .}$'' recognises exactly the empty string at the end of the input.
\bigskip\noindent
In order to define a rule $A \leftarrow B / C / \ldots$, we will write rules of the form $A \leftarrow B$, $A \leftarrow C$, \emph{etc}, and say they are \emph{alternatives} of the non-terminal symbol $A$. So, for example, if we say $A \leftarrow B A$ and $A \leftarrow \varepsilon$ are alternatives of $A$, we mean that the rule for $A$ is $R(A) = B A \;/\; \varepsilon$. We will only do this when the order in which the alternatives appear in the rule is indifferent.
\bigskip\noindent
Each parsing expression grammar defines an associated recognition procedure. This procedure gives an operational meaning to each PEG.
\begin{definition}[Recognition]\label{def:recognition}
Let ${\mathcal G} = \langle \Sigma, \mathsf{NT}, R, S\rangle$ be a parsing expression grammar. The \emph{recognition map} is a partial function
\[
\mathsf{Rec}_{\mathcal G}: {\mathcal E}(\Sigma,\mathsf{NT}) \times \Sigma^\ast \to \Sigma^\ast \cup \{ {\mathsf{FAIL}} \};
\]
this map is defined by Algorithm \ref{recognition-procedure} appearing below.
If $\mathsf{Rec}_{\mathcal G}(e, x) = {\mathsf{FAIL}}$, we say that expression $e$ \emph{rejects} input $x$; and if $\mathsf{Rec}_{\mathcal G}(e,x) = x'$ outputs a prefix $x'$ of $x$, we say that expression $e$ \emph{accepts} $x$, and \emph{consumes} $x'$. If $\mathsf{Rec}_{\mathcal G}(e, x) = x$, i.e.~$e$ accepts $x$ and consumes all of $x$, then we say the expression $e$ \emph{recognises} $x$. Otherwise $\mathsf{Rec}_{\mathcal G}(e, x)$ is \emph{undefined}, which happens precisely when the recognition procedure enters an infinite loop. We say that ${\mathcal G}$ is \emph{total} if its recognition map is total, i.e.~if it never enters an infinite loop, on any input.
\end{definition}
\bigskip\noindent
\raisebox{-.1cm}{\HandRight}\ \ The notions \emph{rejects}, \emph{accepts}, \emph{consumes} and \emph{recognises} will be frequently used throughout the paper, and the reader may refer to the above definition to remember what they mean. It is important to understand that a parsing expression $e$ may accept a string $x$, without consuming all of it. For example the expression $\text{\tt \&}(a a)$ accepts the string $a a$ but consumes no symbol in it.
\bigskip
\setlength{\intextsep}{4pt plus 1.0pt minus 2.0pt}
\begin{algorithm}[h]
\caption{Recognition Procedure $\mathsf{Rec}_{\mathcal G}(E, x):$}\label{recognition-procedure}
\begin{algorithmic}[1]
\small
\Require{$E \in {\mathcal E}(\Sigma,\mathsf{NT}), x \in \Sigma^*$}
\Ensure{$\mathsf{Rec}_{\mathcal G}(E, x) \in \Sigma^* \cup \{ {\mathsf{FAIL}} \}$}
\If{$E = \varepsilon$} {\bf return} the empty string $\lambda$
\ElsIf{$E = {\mathsf{FAIL}}$}\ {\bf return} ${\mathsf{FAIL}}$
\ElsIf{$E = a \in \Sigma$}
\If{$x = a z$ for some $z$} {\bf return} $a$ {\bf else return} ${\mathsf{FAIL}}$ \EndIf
\ElsIf{$E = \text{\tt !} e$}
\If{$\mathsf{Rec}_{\mathcal G}(e, x) = {\mathsf{FAIL}}$} {\bf return} $\lambda$ {\bf else return} ${\mathsf{FAIL}}$
\EndIf
\ElsIf{$E = \text{\tt \&} e$}
\If{$\mathsf{Rec}_{\mathcal G}(e, x) \in \Sigma^*$} {\bf return} $\lambda$ {\bf else return} ${\mathsf{FAIL}}$
\EndIf
\ElsIf{$E = e_1 e_2$}
\If{$\mathsf{Rec}_{\mathcal G}(e_1, x) = y_1 \in \Sigma^\ast$ {\bf and } $x = y_1 z$ {\bf and } $\mathsf{Rec}_{\mathcal G}(e_2, z) = y_2 \in \Sigma^\ast$}
\State {\bf return} $y_1 y_2$
\Else\ {\bf return} ${\mathsf{FAIL}}$
\EndIf
\ElsIf{$E = e_1 / e_2$}
\If{$\mathsf{Rec}_{\mathcal G}(e_1, x) \in \Sigma^\ast$}\ {\bf return} $\mathsf{Rec}_{\mathcal G}(e_1, x)$
\Else\ {\bf return} $\mathsf{Rec}_{\mathcal G}(e_2, x)$
\EndIf
\ElsIf{$E = A \in \mathsf{NT}$}\ {\bf return} $\mathsf{Rec}_{\mathcal G}(R(A), x)$
\EndIf
\end{algorithmic}
\end{algorithm}
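\bigskip\noindent
To make the recognition procedure concrete, the following Python sketch implements Algorithm \ref{recognition-procedure} over a fixed input string; the encoding of parsing expressions as nested tuples, and all function names, are our own illustrative choices, and not part of the formal development. Memoising the recursion per (non-terminal, position) pair, as done below, is essentially the packrat technique \cite{ford02:_packr}, and yields linear-time recognition for total grammars.
\begin{verbatim}
import functools

def recogniser(rules, x):
    """Rec_G over a fixed input x.  Expressions are nested tuples:
      ('eps',), ('fail',), ('t', a), ('nt', 'A'),
      ('not', e), ('and', e), ('seq', e1, e2), ('alt', e1, e2).
    rec(e, i) returns the number of symbols consumed by e starting
    at position i of x, or None, which plays the role of FAIL."""
    @functools.lru_cache(maxsize=None)
    def rec_nt(A, i):                 # packrat memoisation per (A, i)
        return rec(rules[A], i)

    def rec(e, i):
        tag = e[0]
        if tag == 'eps':
            return 0
        if tag == 'fail':
            return None
        if tag == 't':                # a single terminal symbol
            return 1 if i < len(x) and x[i] == e[1] else None
        if tag == 'not':              # !e succeeds iff e fails
            return 0 if rec(e[1], i) is None else None
        if tag == 'and':              # &e succeeds iff e succeeds
            return 0 if rec(e[1], i) is not None else None
        if tag == 'seq':              # e1 e2
            n1 = rec(e[1], i)
            if n1 is None:
                return None
            n2 = rec(e[2], i + n1)
            return None if n2 is None else n1 + n2
        if tag == 'alt':              # ordered choice e1 / e2
            n1 = rec(e[1], i)
            return n1 if n1 is not None else rec(e[2], i)
        assert tag == 'nt'            # non-terminal: look up its rule
        return rec_nt(e[1], i)

    return rec

def recognises(rules, start, x):
    """True iff the start non-terminal consumes all of x."""
    return recogniser(rules, x)(('nt', start), 0) == len(x)
\end{verbatim}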
\begin{definition}\label{def:PEGclass}
A total PEG ${\mathcal G} = \langle \Sigma, \mathsf{NT}, R, S\rangle$ is said to \emph{recognise} the language ${\mathcal L}({\mathcal G}) = \{ x\in \Sigma^\ast \mid \mathsf{Rec}_{\mathcal G}(S, x) = x \}$.
Then $\mathsf{PEG}$ is the class of languages recognised by total PEGs.
\end{definition}
One consequence of the results in this paper is that no algorithm can decide whether a PEG is total. Ford's original paper \cite{ford2004parsing} defined a notion, that of \emph{well-formed} parsing expression grammar, which was inherited from Birman and Ullman \cite{birman1970parsing}. A well-formed PEG is a PEG which obeys a certain syntactic restriction; this restriction guarantees that the above recognition procedure will not enter an infinite loop (but not all total PEGs are well-formed).
Informally, a PEG is well-formed if it avoids left recursion.
To avoid excessive formalism, in this paper we will not concern ourselves with the formal definition of well-formed PEGs. All the PEGs appearing in this paper are total, and, for the readers familiar with the notion of well-formedness, it will be possible to see that they are also well-formed. Furthermore, every theorem in this paper referring to ``total'' PEGs will still hold if one restricts attention to ``well-formed'' PEGs.
Furthermore, there is an algorithm which accepts a PEG ${\mathcal G}$ as input, and outputs a well-formed PEG ${\mathcal G}'$, such that ${\mathcal G}'$ recognises the same language as ${\mathcal G}$ whenever ${\mathcal G}$ is total. This is akin to the fact that, despite it being undecidable if a given Turing machine runs in time $n^2$, one can take any Turing machine ${\mathcal M}$ and convert it into a (multitape) Turing machine ${\mathcal M}'$ which does run in time $n^2$, and which decides the same language as ${\mathcal M}$ if ${\mathcal M}$ also runs in time $n^2$ \cite[see][]{balcazar1988structural,arora2009computational}.
\section{Illustrative Examples}\label{sec:example-PEGs}
In this section we will study some examples which were instrumental for us to understand the computational power of the model.
\subsection{Power-Length PEGs}
Our initial expectations for the computational power of PEGs were that we should be able to treat them in a similar way as with context-free grammars, by showing a pumping lemma for them.
This owed not so much to what we knew about the computational power of PEGs --- which already Birman and Ullman \cite{birman1970parsing}, and Ford \cite{ford2004parsing}, had shown surpasses that of CFGs --- but rather to the context in which one studies PEGs: \emph{if PEGs are regarded in the context of formal languages, then we should be able to prove some kind of pumping lemma}. But soon we stumbled on the following example from the PhD thesis of Birman \cite{birman1970tmg}:
\begin{theorem}
The unary language of words whose length is a power-of-2
\[
{\mathcal P}_2 = \{ a^{2^n} \mid n \ge 0 \}
\]
is in $\mathsf{PEG}$.
\end{theorem}
How does this relate to pumping lemmas? The known pumping lemmas are able to produce, given a sufficiently large string $x$ in the language, a strictly larger string $y$, also in the language, which is not much larger --- $|y| \le |x| + O(1)$ is sufficient. But here is a language with a PEG, for which $|y|$ is always at least $2 |x|$. And soon after conjecturing that $c \cdot |x|$ might be sufficient, for some universal constant $c$, one is disabused of that notion by the following generalisation of the above:
\begin{theorem}\label{thm:powers-of-k}
For every $\ell \in {\mathbb N}$, the language
${\mathcal P}_\ell = \{ a^{\ell^n} \mid n \ge 0 \}$
is in $\mathsf{PEG}$.
\end{theorem}
\newcommand{\mathsf{IAmPowerLLength}}{\mathsf{IAmPowerLLength}}
\begin{proof}
Consider the following parsing expression grammar ${\mathcal G}$:
\[
\mathsf{IAmPowerLLength} \leftarrow a \text{\tt !} . \quad/\quad \mathsf{Helper} \; \text{\tt !}.
\]
\[
\mathsf{Helper} \leftarrow a^{\ell-1}\; \mathsf{Helper}\; a \quad / \quad a^{\ell-1} (\text{\tt \&} \mathsf{Helper}) a \quad / \quad a ((\text{\tt !} \mathsf{Helper}) a)^{\ell-1}
\]
Let us analyse the behaviour of the recognition procedure $\mathsf{Rec}_{\mathcal G}(\mathsf{Helper}, x)$ for each $x \in \{a\}^\ast$. The shortest $x$ to be accepted is $a^\ell$; this string is accepted via the third alternative of the $\mathsf{Helper}$ non-terminal, and every symbol is consumed, so $a^\ell$ is recognised by $\mathsf{Helper}$. The second string to be accepted is $a^{\ell-1} a a^{\ell-1}$, via the second alternative --- the first alternative must have failed because it cannot find the final $a$. So the second alternative is triggered, but only the first $\ell$-many $a$ symbols are consumed, leaving the last $\ell-1$ symbols unconsumed (hence the string is ``accepted'', but not ``recognised''). Then the first alternative will trigger for each new sequence of $\ell-1$ many $a$s, each time consuming one further $a$ symbol closer to the end of the input. Hence at this point we will have consumed $\ell(\ell-1)$ new symbols in total, which together with the first $\ell$ symbols gives us $\ell^2$ consumed symbols, and at this point the non-terminal $\mathsf{Helper}$ will have consumed the entire input. Thus $a^{\ell^2}$ is accepted by $\mathsf{Helper}$. Then again the second alternative is triggered, and then the first, until $\ell^3$ symbols are consumed.
In the end, we conclude that $\mathsf{Helper}$ accepts any string of the form
\[
a^{s(\ell-1)} a^s \; z,
\]
where the first position of the $a^s$-part is the first position at a power-of-$\ell$ distance from the end of the input, and in this case it consumes the first $s \ell$-many $a$ symbols.
\end{proof}
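\bigskip\noindent
As a sanity check, we may encode this grammar, for $\ell = 2$, using the illustrative Python sketch from Section \ref{sec:preliminaries}, and confirm which input lengths are recognised. (The \texttt{seq} helper below right-nests sequences, and, over the unary alphabet, the expression ``$\text{\tt !}.$'' is just $\text{\tt !}a$.)
\begin{verbatim}
def seq(*es):                         # right-nested sequence helper
    e = es[-1]
    for f in reversed(es[:-1]):
        e = ('seq', f, e)
    return e

a, H = ('t', 'a'), ('nt', 'Helper')
END = ('not', a)                      # "!." over the unary alphabet
rules = {
    'S': ('alt', seq(a, END), seq(H, END)),
    'Helper': ('alt', seq(a, H, a),
               ('alt', seq(a, ('and', H), a),
                       seq(a, ('not', H), a))),
}
print([n for n in range(1, 40) if recognises(rules, 'S', 'a' * n)])
# prints [1, 2, 4, 8, 16, 32]
\end{verbatim}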
\subsection{PEG for Sometimes-Palindromes}
One may get a sense for the limitations of parsing expression grammars when trying to produce a PEG for recognising palindromes. One quickly comes to the conjecture that PEGs cannot find the middle bit of the input. In the case of palindromes, we make the following conjecture:
\begin{conjecture}
The language of even-length palindromes has no PEG, i.e.
\[
\mathsf{P} = \{ w w^r \mid w \in \{0, 1\}^\ast \} \notin \mathsf{PEG}.
\]
\end{conjecture}
\noindent
However, the above PEG for ${\mathcal P}_2$ \emph{is} able to find the middle bit of every string whose length is a power of two. This allows us to prove the following result:
\begin{theorem}\label{thm:sometimes-palindromes}
The language of palindromes of power-of-two length has a PEG:
\[
\mathsf{SP} = \{ w w^r \mid w \in \{0, 1\}^{2^n}, n \ge 0 \} \in \mathsf{PEG}.
\]
\end{theorem}
\newcommand{\mathsf{IAmPowerTwoLength}}{\mathsf{IAmPowerTwoLength}}
\begin{proof}
The following parsing expression grammar will do:
\[
\mathsf{S} \leftarrow \text{\tt \&}( \mathsf{IAmPowerTwoLength} ) \; \mathsf{Palindrome}
\]
\[
\mathsf{Palindrome} \leftarrow \mathsf{P}\text{\tt !}. \;/\; 0 0 \text{\tt !} . \;/\; 1 1 \text{\tt !} .
\]
\begin{align*}
\mathsf{P} \leftarrow \; & \; 0 \; \text{\tt !}(\mathsf{IAmPowerTwoLength})\; \mathsf{P} \;0\\
/ & \; 1 \; \text{\tt !}(\mathsf{IAmPowerTwoLength}) \; \mathsf{P} \; 1\\
/ & \; 1 \; \text{\tt \&}(\mathsf{IAmPowerTwoLength}) \;1\\
/ & \; 0 \; \text{\tt \&}(\mathsf{IAmPowerTwoLength}) \; 0
\end{align*}
\[
\mathsf{IAmPowerTwoLength} \leftarrow \mathsf{Helper} \; \text{\tt !}.
\]
\[
\mathsf{Helper} \leftarrow \mathsf{Bit}\; \mathsf{Helper}\; \mathsf{Bit} \quad / \quad \mathsf{Bit} \; \mathsf{Bit}
\]
\[
\mathsf{Bit} \leftarrow 0 / 1
\]
As in the proof of Theorem \ref{thm:powers-of-k}, the non-terminal $\mathsf{IAmPowerTwoLength}$ accepts exactly at the positions whose distance from the end-of-input is a positive power of two, and consumes the entire remaining input in that case. Hence the expression $(\text{\tt \&} \mathsf{IAmPowerTwoLength})$ accepts exactly at positions whose distance from end-of-input is a positive power of two, and when it accepts it will not consume any input. On the other hand the expression $(\text{\tt !} \mathsf{IAmPowerTwoLength})$ accepts exactly at positions which are \emph{not} at positive-power-of-two distance away from the end-of-input.
The recognition procedure associated with the non-terminal $\mathsf{P}$ now behaves as follows: one of the first two alternatives will be chosen repeatedly, until the first position which is a positive power-of-two is reached; then, at that position, one of the last two alternatives is chosen. (In each case, which of the two alternatives gets chosen is determined by the next bit.) It follows that $\mathsf{P}$ accepts exactly at those positions $i$ such that the input after (and including) position $i$ is of the form:
\[
x\, y \, z
\]
where $x = y^r$, and the leftmost position after $i$ which is at a positive-power-of-two distance away from the end-of-input, is the first bit of $y$. And when $\mathsf{P}$ accepts such a string $x y z$, $\mathsf{P}$ consumes exactly the prefix $x y$.
Inspection of the rules for $\mathsf{Palindrome}$ and $\mathsf{S}$ concludes the proof.
\end{proof}
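\bigskip\noindent
Once again purely as an illustration, this grammar may be encoded and tested with the Python sketch of Section \ref{sec:preliminaries}, reusing the \texttt{seq} helper from the previous example:
\begin{verbatim}
b0, b1 = ('t', '0'), ('t', '1')
BIT, H = ('nt', 'Bit'), ('nt', 'Helper')
IP, P  = ('nt', 'IPTL'), ('nt', 'P')
END = ('not', ('alt', b0, b1))        # "!." over the binary alphabet
rules = {
    'Bit': ('alt', b0, b1),
    'Helper': ('alt', seq(BIT, H, BIT), seq(BIT, BIT)),
    'IPTL': seq(H, END),              # IAmPowerTwoLength
    'P': ('alt', seq(b0, ('not', IP), P, b0),
         ('alt', seq(b1, ('not', IP), P, b1),
         ('alt', seq(b1, ('and', IP), b1),
                 seq(b0, ('and', IP), b0)))),
    'Palindrome': ('alt', seq(P, END),
                  ('alt', seq(b0, b0, END), seq(b1, b1, END))),
    'S': seq(('and', IP), ('nt', 'Palindrome')),
}
for s, expected in [('00', True), ('01', False), ('0110', True),
                    ('01111110', True), ('01100111', False),
                    ('010010', False)]:
    assert recognises(rules, 'S', s) == expected
\end{verbatim}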
\subsection{PEG for a Counting Language}
The next example will be crucial in Sections \ref{sec:universality} and \ref{sec:peg-vs-online}, for reasons which we will explain in Section \ref{sec:expressive-power}.
\begin{theorem}\label{thm:counting}
The following \emph{reversed counting language}, over the alphabet $\{0,1,\#,\circ\}$, has a parsing expression grammar:
\[
\{ (n)_2^r \circ (n)_{2} \# \; (n-1)_2^r \circ (n-1)_2 \#\; \cdots \;
\# \; (0)_2^r \circ (0)_2 \# \mid n \ge 0\}.
\]
\end{theorem}
\bigskip\noindent
The characters $\#$ and $\circ$ are part of the input alphabet, and are being used as separators, with no other special meaning. We will call $\#$ the \emph{outer separator}, and $\circ$ the \emph{inner separator}.
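\medskip\noindent
For instance, since $(2)_2 = 10$, $(1)_2 = 1$ and $(0)_2 = 0$, the member of this language for $n = 2$ is the string
\[
01 \circ 10 \,\#\, 1 \circ 1 \,\#\, 0 \circ 0 \,\#.
\]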
\begin{proof}
The proof relies on the intuition built in the previous two proofs. Roughly speaking, it implements the simple increment-by-one algorithm.
Let us begin by presenting only part of the grammar. We will omit the rules associated with the non-terminal $\mathsf{AddOneBlock}$, for now. The grammar begins with the rules:
\[
\mathsf{Sequence} \leftarrow \text{\tt \&} (\mathsf{AddOneBlock})\; \mathsf{InvertedBlock} \; \mathsf{Sequence} \quad / \quad 0\circ 0\#
\]
\[
\mathsf{InvertedBlock} \leftarrow \mathsf{Inverted} \#
\]
\[
\mathsf{Inverted} \leftarrow 1\; \mathsf{Inverted} \; 1 \quad/\quad 0\; \mathsf{Inverted}\; 0 \quad/\quad \circ
\]
The first thing to notice is that $\mathsf{InvertedBlock}$ recognises exactly ``inverted blocks'' of the form $w^r \circ w \#$, where $w \in \{0, 1\}^\ast$. Thus the inputs recognised by $\mathsf{Sequence}$ are exactly sequences of inverted blocks which additionally are accepted by the $\mathsf{AddOneBlock}$ non-terminal; the rules for this non-terminal are:
\[
\mathsf{AddOneBlock} \leftarrow \mathsf{Bit}^+ \circ \mathsf{AddOneCheck}
\]
\[
\mathsf{AddOneCheck} \leftarrow \mathsf{AddOneDigit}\; \mathsf{AddOneCheck} \quad / \quad \#
\]
Now $\mathsf{AddOneBlock}$ accepts strings of the form $x \circ y \#$, such that $x \in \{0, 1\}^\ast$, and such that $\mathsf{AddOneDigit}$ accepts the input at every position of $y$. This will be defined in such a way that, at the $i$-th bit of $y$ (starting from the right), $\mathsf{AddOneDigit}$ will accept if and only if the $i$-th bit of $(n+1)_2$ is $y_i$, where $n$ is the number encoded in the following block (i.e.~after the $\#$).
To enforce this behaviour, we use the following rules:
\begin{align*}
\mathsf{AddOneDigit} \leftarrow\; & \; \text{\tt \&} \mathsf{NextIs1} \; \text{\tt \&} \mathsf{Carry} \; 0\\
/&\; \text{\tt \&} \mathsf{NextIs0} \; \text{\tt \&} \mathsf{Carry} \; 1\\
/&\; \text{\tt \&} \mathsf{NextIs1} \; \text{\tt !} \mathsf{Carry} \; 1\\
/&\; \text{\tt \&} \mathsf{NextIs0} \; \text{\tt !} \mathsf{Carry} \; 0\\
/&\; \text{\tt \&} \mathsf{NextIsCircle} \; \text{\tt \&} \mathsf{Carry} \; 1
\end{align*}
\[
\mathsf{Carry} \leftarrow . \; \text{\tt \&} \mathsf{NextIs1} \; \text{\tt \&} \mathsf{Carry} \quad /\quad \mathsf{Bit} \; \#
\]
The non-terminals $\mathsf{NextIs0}$, $\mathsf{NextIs1}$, and $\mathsf{NextIsCircle}$ will verify that the input symbol in the corresponding position in the next block is a $0$, a $1$ or a $\circ$, respectively. So, for example, if the input after the current position is
\[
y_i y_{i+1} \cdots y_k \# x_k \cdots x_{i-1} x_i,
\]
then $\mathsf{NextIs0}$ will accept iff $x_i = 0$, $\mathsf{NextIs1}$ will accept iff $x_i = 1$, and $\mathsf{NextIsCircle}$ will accept iff $x_i = \circ$.
It results from this that the non-terminal $\mathsf{Carry}$ accepts if and only if there is a carry at the current position, when we add $1$ to the number after the $\#$ separator: we implement the increment by one by setting the carry to $1$ at the least significant bit, and then the carry propagates for as long as the number after the separator has a $1$. Then $\mathsf{AddOneDigit}$ successfully checks a single digit of the increment, in the usual way: a $1$ and a carry sum to $0$, a $0$ and a carry sum to $1$, \emph{etcetera}.
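\medskip\noindent
For concreteness, suppose the following block encodes $n = 3$, so that the $y$-part of the current block must spell out $(4)_2 = 100$. At the least significant digit, the bit $1$ of $(3)_2 = 11$ plus the injected carry gives the digit $0$ with a carry out; at the next digit, the bit $1$ plus that carry again gives $0$ with a carry out; finally the carry falls off the top of $(3)_2$, which is the case detected by $\mathsf{NextIsCircle}$ in the fifth alternative, and produces the leading $1$ of $100$.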
All that is left to do is to define the auxiliary non-terminals:
\[
\mathsf{NextIs0} \leftarrow \mathsf{Bit} \; \mathsf{SameLength} \; 0
\]
\[
\mathsf{NextIs1} \leftarrow \mathsf{Bit} \; \mathsf{SameLength} \; 1
\]
\[
\mathsf{NextIsCircle} \leftarrow \mathsf{Bit} \; \mathsf{SameLength} \; \circ
\]
\[
\mathsf{SameLength} \leftarrow \mathsf{Bit}\; \mathsf{SameLength} \;\mathsf{Bit} \quad /\quad \#
\]
\[
\mathsf{Bit}^+ \leftarrow \mathsf{Bit} \; \mathsf{Bit}^+ \;/\; \mathsf{Bit}
\]
\[
\mathsf{Bit} \leftarrow 0 \;/\; 1 \qedhere
\]
\end{proof}
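\bigskip\noindent
To see what the members of this language look like, here is a small illustrative Python helper (names are, again, our own) which builds the member with leading value $n$, with the letter \texttt{o} standing in for the inner separator $\circ$:
\begin{verbatim}
def counting_word(n):
    """Member of the reversed counting language with leading value n."""
    blocks = []
    for k in range(n, -1, -1):
        b = format(k, 'b')       # (k)_2, most significant bit first
        blocks.append(b[::-1] + 'o' + b + '#')
    return ''.join(blocks)

print(counting_word(3))          # prints 11o11#01o10#1o1#0o0#
\end{verbatim}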
Let us here make an important remark. The simple increment-by-one algorithm works by scanning the bits from right to left. It does not appear to be possible to implement such right-to-left scanning using PEGs; left-to-right scanning, however, can be done, and this is what the $\mathsf{NextIs}\ast$ non-terminals are doing. Checking inversion is also possible, as shown by the $\mathsf{Inverted}$ non-terminal. So we may implement right-to-left scanning by inverting each block and then using left-to-right scanning. This trick will be called ``reverse and scan'', and will be used in our simulation of Turing machines by PEGs (in Section \ref{sec:universality}), as well as in our construction of a non-real-time $\mathsf{PEG}$ language (in Section \ref{sec:peg-vs-online}).
\paragraph{Conclusion}
While carefully considering the examples above, one will get a sense that the computational power of PEGs is much greater than it seems at first glance. When considering why and how these examples work, one is slowly drawn to a generalisation of the above: a computational model for languages recognised by parsing expression grammars. This is what we present in the next section.
\section{Scaffolding Automata}\label{sec:scaffolding-automata}
Let us begin by giving an informal description of a scaffolding automaton. Such an automaton is a computing machine which constructs a labelled, directed, acyclic graph of bounded degree, which we call a \emph{scaffold}. At the start of the computation, the graph is a single node with a special end-marker label; this is the \emph{base} of the scaffold. Then as the computation proceeds new input symbols are read and new nodes are added; the node which was last added is called the \emph{top} of the scaffold. At each step of computation, the scaffolding automaton sees a new input symbol, and is allowed to look at a finite-distance neighbourhood of the top; based on the edges which are present, on the labels it sees, on the input symbol it just read, and on the current state of its finite control, the automaton adds a new node to the scaffold (the new top), and chooses the edges of this new node to point to some nodes in the finite-distance neighbourhood it has just observed. This is repeated until all input symbols are read.
\subsection{Formal Definition}
\begin{definition}[Scaffold]\label{def:scaffold}\label{def:path}
Let $d \ge 1$, $t \ge 0$ be natural numbers, and let $\Gamma$ be an alphabet. An \emph{edge list} of degree $d$ is a tuple
\[
e=(e(0),\ldots,e(d-1))\in ({\mathbb N} \cup \{ \varnothing \})^d.
\]
A \emph{$(d, \Gamma)$-scaffold} of size $t + 1\in \mathbb{N}$ is a labelled multidigraph $S=(V,E,L)$ with set of nodes $V = [t]$, a set of edge lists $E=\{\,e_v\in ({\mathbb N} \cup \{\varnothing\})^d\mid v \in [t]\,\}$, where
\begin{equation*}
\forall v\in [t]\; \forall i \in [d)\;\quad e_v(i)\in [v] \cup \{\varnothing\},\tag*{(``edges point backwards'')}
\end{equation*}
and a labelling function $L: V \to \Gamma \cup \{\varnothing\}$.
We call $t$ the \emph{top} of the scaffold $S$. If $e_v(i)=\varnothing$, one says that \emph{node $v$ is missing edge $i$}, otherwise we say that \emph{edge $i$ is present at node $v$}. If $L(v) = \varnothing$, one says \emph{$v$ is unlabelled}. Let ${\mathbb S}(d, \Gamma)$ be the set of all $(d,\Gamma)$-scaffolds (of any size).
Given a tuple $p \in [d)^k$, and a node $v \in V$ in a $(d,\Gamma)$-scaffold $S = (V,E,L)$, we may inductively define the sequence
\[
v_0 = v\text{ and } v_{j+1} =
\begin{cases}
e_{v_j}(p_j) & \text{if } v_j \in V,\\
\varnothing & \text{if } v_j = \varnothing.
\end{cases}
\]
If this sequence has $v_i = \varnothing$ for some $i \in [k]$, we say $p$ is an \emph{invalid path from $v$ in $S$}. Otherwise we say $p$ is a (valid) \emph{path from $v$ to $v_k$ in $S$}.
\end{definition}
\begin{definition}[Neighbourhood]
Given $S=(V,E, L) \in {\mathbb S}(d,\Gamma)$, $k \ge 0$ and $v\in V$, the \emph{$k$-neighbourhood of $v$ in $S$}, $N_k(S, v)$, is given inductively by $N_0(S, v)=L(v)$ and $N_{k+1}(S, v) = (L(v), N_k(S, e_v(0)), \ldots, N_k(S, e_v(d-1)))$, where we set $N_k(S, \varnothing) = \varnothing$.
The set of \emph{$k$-neighbourhoods for $(d,\Gamma)$-scaffolds}, ${\mathcal N}_k(d,\Gamma)$, is the set of partial, $d$-ary, $\Gamma$-labelled trees of depth $k$. It may be inductively defined by letting ${\mathcal N}_0(d, \Gamma) = \Gamma \cup \{\varnothing\}$ and ${\mathcal N}_{k+1}(d,\Gamma) = (\Gamma \cup \{\varnothing\})\times({\mathcal N}_{k}(d, \Gamma) \cup \{\varnothing \})^d$.
\end{definition}
\begin{definition}[Scaffolding automaton]
A \emph{scaffolding automaton} ${\mathcal A}$ is a tuple ${\mathcal A} = \langle \Sigma, d, \Gamma, k, Q, \delta, q_0, F\rangle$, where,
\begin{itemize}
\item $\Sigma$ is an alphabet, called the \emph{input alphabet},
\item $d \ge 1, k \ge 0$ are natural numbers, called \emph{degree} and \emph{distance}, respectively,
\item $\Gamma$ is an alphabet, called the \emph{working alphabet},
\item $Q$ is a finite set of \emph{states},
\item $q_0 \in Q$ is the \emph{initial state},
\item $F \subseteq Q$ gives the \emph{accepting states}, and
\item the \emph{transition function} is of type
\[
\delta:Q\times\Sigma\times {\mathcal N}_k(d, \Gamma) \to Q \times \Gamma \times ([d)^{\le k}\cup \{\mathsf{SELF}, \varnothing \})^d.
\]
\end{itemize}
\end{definition}
\noindent
A scaffolding automaton builds a scaffold while reading the input. The initial scaffold is $S_0 = (\{0\}, \{\}, L)$ where $L(0) = \varnothing$. The transition function $\delta$ transforms a scaffold as follows.
\begin{definition}[Single step of computation] Let $S = ([t], E, L) \in {\mathbb S}(d,\Gamma)$, and $\delta$
be a transition function. For some $q\in Q$ and $\sigma\in \Sigma$, let
\[
(q', \gamma, p_0, \ldots, p_{d-1}) = \delta(q,\sigma,N_k(S,t)).
\]
The \emph{single-step function} is then given by $\mathsf{Step}_{\delta,\sigma}(q, S) = (q',S')$, where $S' = ([t+1], E', L') \in {\mathbb S}(d,\Gamma)$, with $L'(t+1) = \gamma$, $L'(v) = L(v)$ for $v \in [t]$, and $E' = E \cup \{ e_{t+1} \}$, for the edge list $e_{t+1} = (v_0, \ldots, v_{d-1})$, where $v_i$ is obtained by following path $p_i$ from $t$ in $S$ (and equals $\varnothing$ if $p_i$ is an invalid path from $t$ in $S$); if $p_i = \varnothing$, then $e_{t+1}(i) = \varnothing$ also, and if $p_i = \mathsf{SELF}$, then $e_{t+1}(i) = t+1$.
\end{definition}
\noindent
We now formally define how the computation proceeds.
\begin{definition}
Let ${\mathcal A} = \langle \Sigma, d, \Gamma, k, Q, \delta, q_0, F\rangle$ be a scaffolding automaton, and $x = \sigma_1 \cdots \sigma_n \in \Sigma^n$. Then the \emph{computation of ${\mathcal A}$ on $x$}, denoted ${\mathcal A}(x)$, is a sequence
\[
{\mathcal A}(x) = ((q_0,S_0),(q_1,S_1), \ldots, (q_n, S_n)) \in (Q\times {\mathbb S}(d, \Gamma))^{1+n}.
\]
Having defined $(q_i,S_i)$ up to some $i < n$ --- notice that $q_0$ is the initial state and $S_0$ is the initial scaffold --- we let $(q_{i+1},S_{i+1}) = \mathsf{Step}_{\delta, \sigma_{i+1}}(q_i,S_i)$.
\end{definition}
\begin{definition} Let ${\mathcal A} = \langle \Sigma, d, \Gamma, k, Q, \delta, q_0, F\rangle$ be a scaffolding automaton, and $x = \sigma_1 \cdots \sigma_n \in \Sigma^n$. Let ${\mathcal A}(x) = ((q_0,S_0),(q_1,S_1), \ldots, (q_n, S_n))$ be the computation of ${\mathcal A}$ on $x$.
We say that ${\mathcal A}(x)$ is \emph{accepting} if $q_{n} \in F$; otherwise we say it is \emph{rejecting}.
This defines the \emph{language decided by ${\mathcal A}$}:
\[
{\mathcal L}({\mathcal A}) = \{ x \in \Sigma^\ast \mid {\mathcal A}(x) \text{ is accepting} \}.
\]
\end{definition}
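\bigskip\noindent
For experimenting with the model, the following Python harness may be helpful; it is a simulation aid only, and in particular it lets the transition function be an arbitrary Python callable receiving the whole scaffold, whereas the formal definition requires a finite table over bounded-depth neighbourhoods. Paths are encoded as tuples of edge indices to be followed from the old top, with \texttt{None} for a missing edge and \texttt{'SELF'} for an edge to the new node itself.
\begin{verbatim}
class Scaffold:
    """Nodes are 0..top; node 0 is the base, with empty label None.
    edges[v] is the edge list of node v (entries: node index or None)."""
    def __init__(self):
        self.labels, self.edges = [None], [()]

    def top(self):
        return len(self.labels) - 1

    def follow(self, v, path):
        """Follow a tuple of edge indices from node v; None if invalid."""
        for i in path:
            if v is None:
                return None
            es = self.edges[v]
            v = es[i] if i < len(es) else None
        return v

def run(delta, q0, accepting, x):
    """delta(q, symbol, scaffold) -> (q', label, paths).
    Returns the input positions after which the automaton accepts."""
    S, q, accepted_at = Scaffold(), q0, []
    for pos, sym in enumerate(x, start=1):
        q, label, paths = delta(q, sym, S)
        t, new = S.top(), S.top() + 1
        S.labels.append(label)
        S.edges.append(tuple(
            new if p == 'SELF' else None if p is None else S.follow(t, p)
            for p in paths))
        if q in accepting:
            accepted_at.append(pos)
    return accepted_at
\end{verbatim}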
\subsection{Illustrative Examples, Revisited}\label{sec:expressive-power}
We will soon prove that a language has a parsing expression grammar if and only if its reverse is decided by a scaffolding automaton --- this is Theorem \ref{thm:peg-automata} of Section \ref{sec:equivalence-PEGs}. However, in order to become more familiar with the model, let us begin by directly constructing scaffolding automata for the reverse of the languages seen in Section \ref{sec:example-PEGs}.
For each $\ell \in {\mathbb N}$, the power-length language ${\mathcal P}_\ell^r = {\mathcal P}_\ell = \{ a^{\ell^n} \mid n \ge 0 \}$ is its own reversal, so let us construct a scaffolding automaton ${\mathcal A}_\ell$ which decides ${\mathcal P}_\ell$. Informally, an automaton for ${\mathcal P}_\ell$ behaves as follows. The automaton makes sure that every node in the scaffold has an edge to the previous node. It first accepts after reading the first $a$, and then after reading the first $\ell$-many $a$'s --- so it accepts $a$ and $a^\ell$. From this point onward a second edge will be maintained that goes backward in the scaffold; we call this edge the \emph{backtracking edge}; the idea is that for each $\ell - 1$ new symbols read, the backtracking edge in the new top node will be moved a single position backwards (towards the base of the scaffold); once the backtracking edge reaches the base, the automaton enters an accepting state and again points the backtracking edge to the new top. This way, the next accepted string will have $\ell$-times as many symbols as the previous accepted string.\footnote{Because $\ell^{k} = \ell^{k-1} + \ell^{k-1}(\ell - 1)$.
}
Let us translate this informal description to the formal definitions given in the previous section. This will be the only scaffolding automaton for which we will do such a translation.
The scaffolding automaton for ${\mathcal P}_\ell$ is given by ${\mathcal A}_\ell = \langle \Sigma = \{a \}, d = 2, \Gamma = \{ \boxtimes, \Box \}, k = 2, Q, \delta, q_0, F = \{q_1, q_\ell, q''_{\ell-1}\}\rangle$, where $Q = \{q_0, q_1, \ldots, q_\ell,$ $q'_1, \ldots, q'_{\ell-1},$ $q''_1, \ldots, q''_{\ell - 1}\}$. The degree $d$ equals $2$, and at each node in the scaffold edge $0$ will always point to the previous node, and edge $1$ will be the backtracking edge. We will use wildcards when describing elements of ${\mathcal N}_k(d, \Gamma)$, so for example $\ast$ means \emph{any element of ${\mathcal N}_k(d, \Gamma)$} and
\[
\small\Tree [.$\Box$ [.$\Box$ $\ast$ $\ast$ ] [.$\Box$ $\ast$ $\ast$ ] ]
\]
means any element of ${\mathcal N}_k(d, \Gamma)$ (which consists of trees of depth $2$, not trees of depth $1$) whose topmost three nodes are labelled as in the picture above.
\newcommand{\pagedifference}[2]{\number\numexpr\getpagerefnumber{#2}-\getpagerefnumber{#1}\relax}
\newcommand{\ifsamepage}[4]{\ifnum \pagedifference{#1}{#2}=0 #3\else #4\fi}
\bigskip\noindent
The transition function for ${\mathcal A}_\ell$ may now be defined. In \ifsamepage{fig:A2-a10}{fig:A3-a10}{page \pageref{fig:A2-a10}}{pages \pageref{fig:A2-a10} and \pageref{fig:A3-a10}} below, we include the diagrams of the two scaffolds resulting from executing ${\mathcal A}_2$ and ${\mathcal A}_3$ on the string $a^{10}$. It might be helpful to follow those pictures, to get a sense of how ${\mathcal A}_\ell$ works.
\begin{itemize}
\item If we are in the initial state and scaffold, the new top will point to the base, will be labelled by $\boxtimes$, and we move to state $q_1$: \[
\delta\left(q_0, a, \ast\right) = (q_1, \boxtimes, \lambda, \varnothing).
\]
Above, $\lambda$ denotes the empty path, i.e., the path of length zero, which leads to the previous top node. This edge, edge number $0$, will always be set in this way, so that we may always refer to the previous top node by following edge $0$. The label $\boxtimes$ will be used to distinguish the first node from the rest.
\item We then count $\ell-1$ symbols, as follows: For every $i \in \{ 1, \ldots, \ell-1 \}$ we set \[
\delta\left(q_i, a, \ast \right) = (q_{i+1}, \Box, \lambda, \varnothing).
\]
\item The state $q_\ell$ is accepting. The next symbol --- symbol number $\ell + 1$ --- triggers the beginning of two nested loops, the \emph{outer loop} and the \emph{inner loop}.
As we begin the inner loop we point the backtracking edge to the current node in the scaffold (given by the empty path $\lambda$):
\[
\delta\left(q_\ell, a, \ast \right) = (q'_{1}, \Box, \lambda, \lambda).
\]
The inner loop will loop between the states $q'_1, \ldots, q'_{\ell - 1}$, in such a way that, for each sequence of $\ell-1$ input symbols, the backtracking edge is moved backwards a single position in the scaffold. This happens until the backtracking edge reaches the node immediately before the base of the scaffold, at which point we enter the state $q''_1$, which runs the inner loop one last time until reaching state $q''_{\ell-1}$, which is accepting; at state $q''_{\ell-1}$, we ``reset'' the backtracking edge, and we restart the inner loop at $q'_1$. The outer loop consists of this resetting and restarting of the inner loop.
Let us implement the inner and outer loops. The inner loop counts $\ell-1$ symbols, as follows: for every $i \in \{ 1, \ldots, \ell-2 \}$ we set \[
\delta\left(q'_i, a, \ast \right) = (q'_{i+1}, \Box, \lambda, (1)).
\]
When we have finished the inner loop but have still not found the $\boxtimes$-marked node, we move the backtracking edge backwards, and restart the inner loop:
\[
\delta\left(q'_{\ell-1}, a,
\begin{array}{c}
\small\Tree [.$\Box$ [.$\Box$ $\ast$ $\ast$ ] [.$\Box$ $\Box$ $\ast$ ] ]
\end{array} \right) = (q'_1, \Box, \lambda, (1, 0)).
\]
\item Eventually the top node sees node $1$ of the scaffold at distance $2$ through the backtracking edge --- which we may detect since node $1$ is labelled with $\boxtimes$ instead of $\Box$. At this point we will finish running the inner loop using the $q'$ states, and then run it one last time using the $q''$ states, which behave just like the $q'$ states, except that $q''_{\ell - 1}$ is an accepting state whereas $q'_{\ell - 1}$ is not, and $q''_{\ell-1}$ resets the backtracking edge.
This is implemented by setting
\[
\delta\left(q'_{\ell - 1}, a,
\begin{array}{c}
\small\Tree [.$\Box$ [.$\Box$ $\ast$ $\ast$ ] [.$\Box$ $\boxtimes$ $\ast$ ] ]
\end{array}
\right) = (q''_1, \Box, \lambda, (1, 0)),
\]
and, for each $i \in \{1, \ldots, \ell - 2\}$,
\[
\delta\left(q''_i, a, \ast \right) = (q''_{i+1}, \Box, \lambda, (1)),
\]
and finally
\[
\delta\left(q''_{\ell-1}, a, \ast \right) = (q'_1, \Box, \lambda, \lambda).
\]
Compare $q''_{\ell-1}$ with $q'_{\ell - 1}$: $q''_{\ell-1}$ is an accepting state whereas $q'_{\ell - 1}$ is not, and $q''_{\ell-1}$ resets the backtracking edge, whereas $q'_{\ell-1}$ moves the backtracking edge one node backwards.
\end{itemize}
In the setup above, each run of the outer loop consumes $(\ell-1)$-times as many symbols as had been consumed before it began, thus multiplying the total number of consumed symbols by $\ell$. For example, let us picture the run of ${\mathcal A}_2$ on the string $a^{10}$.
\begin{center}
\begin{tikzpicture}[node distance=1.2cm,>=stealth',bend angle=45,auto]\label{fig:A2-a10}
\tikzstyle{place}=[circle,thick,draw=blue!75,fill=blue!20,minimum size=6mm]
\tikzstyle{red place}=[place,draw=red!75,fill=red!20]
\tikzstyle{transition}=[rectangle,thick,draw=black!75,
fill=black!20,minimum size=4mm]
\tikzstyle{snode}=[circle,thick,draw=black,minimum size=6mm]
\begin{scope}
\node [snode,label={$q_0$}] (n0) {$\varnothing$};
\draw[line width=.5pt] (n0) -- +(135:0.5cm);
\draw[line width=.5pt] (n0) -- +(225:0.5cm);
\node [snode,label={$q_1$},accepting] (n1) [right of=n0] {$\boxtimes$}
edge [post,bend right] (n0);
\draw[line width=.5pt] (n1) -- +(225:0.5cm);
\node [snode,label={$q_2$},accepting] (n2) [right of=n1] {$\Box$}
edge [post,bend right] (n1);
\draw[line width=.5pt] (n2) -- +(225:0.5cm);
\node [snode,label={$q'_1$}] (n3) [right of=n2] {$\Box$}
edge [post,bend right] (n2)
edge [post,bend left] (n2);
\node [snode,label={$q''_1$},accepting] (n4) [right of=n3] {$\Box$}
edge [post,bend right] (n3)
edge [post,bend left] (n1);
\node [snode,label={$q'_1$}] (n5) [right of=n4] {$\Box$}
edge [post,bend right] (n4)
edge [post,bend left] (n4);
\node [snode,label={$q'_1$}] (n6) [right of=n5] {$\Box$}
edge [post,bend right] (n5)
edge [post,bend left] (n3);
\node [snode,label={$q'_1$}] (n7) [right of=n6] {$\Box$}
edge [post,bend right] (n6)
edge [post,bend left] (n2);
\node [snode,label={$q''_1$},accepting] (n8) [right of=n7] {$\Box$}
edge [post,bend right] (n7)
edge [post,bend left] (n1);
\node [snode,label={$q'_1$}] (n9) [right of=n8] {$\Box$}
edge [post,bend right] (n8)
edge [post,bend left] (n8);
\node [snode,label={$q'_1$}] (n10) [right of=n9] {$\Box$}
edge [post,bend right] (n9)
edge [post,bend left] (n7);
\end{scope}
\end{tikzpicture}
\end{center}
In the picture, the upper edge points to the previous node, and the lower edge is the backtracking edge. The state of the automaton when reading each node of the scaffold appears above the node, and the node is drawn as a double circle if this state is an accepting state. As required, the automaton accepts after seeing $1$, $2$, $4$, and $8$ symbols.
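\medskip\noindent
Using the illustrative harness from the previous subsection (and encoding states and labels as strings), the transition function of ${\mathcal A}_2$ may be written down directly, and reproduces exactly this run:
\begin{verbatim}
XBOX, BOX = 'xbox', 'box'        # the labels (marked and unmarked)

def delta_A2(q, sym, S):
    if q == 'q0':
        return 'q1', XBOX, ((), None)   # edge 0 to the base; no edge 1
    if q == 'q1':
        return 'q2', BOX, ((), None)
    if q in ('q2', "q''1"):
        # (re)start the inner loop: backtracking edge to the old top
        return "q'1", BOX, ((), ())
    # q == "q'1": inspect the label behind the backtracking edge
    behind = S.follow(S.top(), (1, 0))
    done = behind is not None and S.labels[behind] == XBOX
    return ("q''1" if done else "q'1"), BOX, ((), (1, 0))

print(run(delta_A2, 'q0', {'q1', 'q2', "q''1"}, 'a' * 10))
# prints [1, 2, 4, 8]
\end{verbatim}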
For further illustration, let us picture the run of ${\mathcal A}_3$ on $a^{10}$:
\begin{center}
\begin{tikzpicture}[node distance=1.1cm,>=stealth',bend angle=45,auto]\label{fig:A3-a10}
\tikzstyle{place}=[circle,thick,draw=blue!75,fill=blue!20,minimum size=6mm]
\tikzstyle{red place}=[place,draw=red!75,fill=red!20]
\tikzstyle{transition}=[rectangle,thick,draw=black!75,
fill=black!20,minimum size=4mm]
\tikzstyle{snode}=[circle,thick,draw=black,minimum size=6mm]
\begin{scope}
\node [snode,label={$q_0$}] (n0) {$\varnothing$};
\draw[line width=.5pt] (n0) -- +(135:0.5cm);
\draw[line width=.5pt] (n0) -- +(225:0.5cm);
\node [snode,label={$q_1$},accepting] (n1) [right of=n0] {$\boxtimes$}
edge [post,bend right] (n0);
\draw[line width=.5pt] (n1) -- +(225:0.5cm);
\node [snode,label={$q_2$}] (n2) [right of=n1] {$\Box$}
edge [post,bend right] (n1);
\draw[line width=.5pt] (n2) -- +(225:0.5cm);
\node [snode,label={$q_3$},accepting] (n3) [right of=n2] {$\Box$}
edge [post,bend right] (n2);
\draw[line width=.5pt] (n3) -- +(225:0.5cm);
\node [snode,label={$q'_1$}] (n4) [right of=n3] {$\Box$}
edge [post,bend right] (n3)
edge [post,bend left] (n3);
\node [snode,label={$q'_2$}] (n5) [right of=n4] {$\Box$}
edge [post,bend right] (n4)
edge [post,bend left] (n3);
\node [snode,label={$q'_1$}] (n6) [right of=n5] {$\Box$}
edge [post,bend right] (n5)
edge [post,bend left] (n2);
\node [snode,label={$q'_2$}] (n7) [right of=n6] {$\Box$}
edge [post,bend right] (n6)
edge [post,bend left] (n2);
\node [snode,label={$q''_1$}] (n8) [right of=n7] {$\Box$}
edge [post,bend right] (n7)
edge [post,bend left] (n1);
\node [snode,label={$q''_2$},accepting] (n9) [right of=n8] {$\Box$}
edge [post,bend right] (n8)
edge [post,bend left] (n1);
\node [snode,label={$q'_1$}] (n10) [right of=n9] {$\Box$}
edge [post,bend right] (n9)
edge [post,bend left] (n9);
\end{scope}
\end{tikzpicture}
\end{center}
\bigskip\noindent
We started by describing the behaviour of ${\mathcal A}_\ell$ in some detail, and then provided a fully formal specification. We will now limit ourselves to describing the behaviour in \emph{sufficient} detail, so that the reader may be convinced that a fully formal specification may also be done.
\bigskip\noindent
Let us now sketch the scaffolding automata for the remaining two examples of Section \ref{sec:example-PEGs}.
Recognising the language of palindromes of power-two length (which also is its own reversal) uses the same idea of maintaining a backtracking edge, and it is similar to the $\ell = 2$ case of the implementation just shown. The backtracking edge is used not only to ensure that the length of the input is a power of two, but is also used to compare the last read symbol with its corresponding symbol. The corresponding symbol, as it turns out, is exactly the symbol under the backtracking edge, as may be verified by the reader by inspecting the run of ${\mathcal A}_2$ on $a^{10}$, pictured above. In order to make this comparison, thus, the scaffolding automaton may simply label each node with the symbol which was read at that position, and then compare the label of the node under the backtracking edge with the symbol which is now being read. The automaton remembers any violation of this requirement in its finite control, and at each power-of-two length, it accepts if and only if no violation was found.
A scaffolding automaton for recognising the counting language works as follows. Recall that the automaton reads the reverse of the language, so the first item in the (reversed) sequence is $\#\,(0)_2^r \circ (0)_2$, which has fixed finite length and thus may be recognised directly. Then notice that if we have recognised the sequence up to $\cdots (n-1)_2^r\circ (n-1)_2\#$ and have an edge pointing to the rightmost bit of $(n-1)_2$, then we may verify, one by one from left to right, the bits of $(n)_2^r$ by the usual algorithm for addition. Then we must see a $\circ$, and, having kept an edge pointing to the rightmost bit of $(n)_2^r$, we may now recognise the reversal of $(n)_2^r$, i.e.~$(n)_2$. Then we must see a $\#$. So we have now recognised $\cdots (n)_2^r \circ (n)_2\#$, and we repeat.
This trick, which we have called \emph{reverse and scan}, will be used in the proofs of Theorems \ref{thm:universality} and \ref{thm:non-real-time}.
\subsection{Equivalence with PEGs}\label{sec:equivalence-PEGs}
The rest of this section is devoted to proving that scaffolding automata exactly characterise parsing expression grammars: \begin{theorem}\label{thm:peg-automata}
A language $L \subseteq \Sigma^\ast$ is in $\mathsf{PEG}$ if and only if its reverse $L^r$ is decided by some scaffolding automaton.
\end{theorem}
The question of whether PEG languages are closed under reverse now arises quite naturally. We conjecture that they are not, but Theorem \ref{thm:universality} below suggests it will be very hard to prove such a result.
\begin{proof}[Proof of Theorem \ref{thm:peg-automata}, necessary direction]
We begin by proving that a parsing expression grammar for a language $L \subseteq \Sigma^\ast$ gives rise to a scaffolding automaton for $L^r$. A reader who is familiar with the tabular parsing algorithm of Birman and Ullman \cite{birman1970parsing} for TDPLs should be able to easily see that a scaffolding automaton can simulate this algorithm (the edges will correspond to entries in the table). Since Ford \cite{ford2004parsing} has shown TDPLs are equivalent to PEGs, that suffices for obtaining the result.
But Ford's proof of equivalence between PEGs and TDPLs is complex and delicate, whereas scaffolding automata are powerful enough to simulate PEGs directly. So we will prove the result here in full.
Let ${\mathcal G} = \langle \Sigma, \mathsf{NT}, R, S\rangle$ be a total parsing expression grammar. Without loss of generality, we may assume that every rule of ${\mathcal G}$ has one of the forms:
\begin{itemize}
\item $A \leftarrow {\eps}$, $A \leftarrow {\mathsf{FAIL}}$, or $A \leftarrow t$, with $A \in \mathsf{NT}$ a non-terminal symbol and $t \in \Sigma$ a terminal symbol.
\item $A \leftarrow \text{\tt !} B$, $A \leftarrow \text{\tt \&} B$ with $A,B \in \mathsf{NT}$.
\item $A \leftarrow B C$, $A \leftarrow B / C$ with $A,B,C \in \mathsf{NT}$.
\end{itemize}
Indeed, any grammar may be converted into the form above by replacing sub-expressions with new non-terminal symbols.\footnote{For example, one would convert the rule $A \leftarrow \text{\tt \&} B C D / E F / \text{\tt !} G$ to the rules $A \leftarrow A_1 / A_3$, $A_1 \leftarrow B_1 A_2$, $B_1 \leftarrow \text{\tt \&} B$, $A_2 \leftarrow C D$, $A_3 \leftarrow A_4 / A_5$, $A_4 \leftarrow E F$ and $A_5 \leftarrow \text{\tt !} G$.}
\medskip\noindent
We then construct a scaffold automaton ${\mathcal A} = \langle \Sigma, d, \Gamma, k, Q, \delta, q_0, F\rangle$, where
\begin{itemize}
\item $d = |\mathsf{NT}|$ and $k = |\mathsf{NT}|$.
\item $\Gamma = \{ \Box \}$, as we will use a single label, to distinguish the end of the input from the remaining nodes.
\item $Q = \{q_{\text{yes}}, q_{\text{no}} \}$, as we will use only two states, which will behave identically except that only one is accepting.
\item $q_0 = q_{\text{yes}}$ if $\lambda \in {\mathcal L}(G)$ and $q_0 = q_{\text{no}}$ otherwise.
\item $F = \{q_{\text{yes}}\}$.
\end{itemize}
For $q\in Q$, $\sigma\in\Sigma$ and $N = (V, E, L) \in {\mathcal N}_k(d,\Gamma)$, the transition function has
\[
\delta(q, \sigma, N) = (q', \Box, p_0, \ldots, p_{d-1}),
\]
defined as follows. Fix some ordering of $\mathsf{NT}$, and if $A$ is the $i$-th non-terminal symbol in $\mathsf{NT}$, let us use $p_A$ in place of $p_i$. Then:
\begin{itemize}
\item If $A \leftarrow {\eps}$, set $p_A = \mathsf{SELF}$, i.e., create a self loop in the new top node.
\item If $A \leftarrow {\mathsf{FAIL}}$, or $A \leftarrow \sigma'$ with $\sigma' \neq \sigma$, then set $p_A = \varnothing$ --- the new top node will be missing edge $A$.
\item If $A \leftarrow \sigma$, then set $p_A = \lambda$, i.e., create an edge from the new top to the previous top node.
\item If $A \leftarrow \text{\tt !} B$, then we must first compute $p_B$, and then we set $p_A = \mathsf{SELF}$ if $p_B = \varnothing$, and $p_A = \varnothing$ otherwise.
\item If $A \leftarrow \text{\tt \&} B$, then we must first compute $p_B$, and then we set $p_A = \mathsf{SELF}$ if $p_B \neq \varnothing$, and $p_A = \varnothing$ otherwise.
\item If $A \leftarrow B C$, then we must first compute $p_B$; if $p_B = \varnothing$, then we set $p_A = \varnothing$ also; otherwise $p_B$ is a path to some node $v_B$ in $N$; this node will have some edge to $v_{BC} = e_{v_B}(C)$ in $N$ corresponding to $C$; we then let $p_A$ be a path to $v_{BC}$, which is one edge longer than $p_B$. This is where we require $k \ge |\mathsf{NT}|$.\footnote{It may be proven by induction on $|\mathsf{NT}|$ that whenever we set an edge of the new top node, it will be at a distance no greater than $|\mathsf{NT}|$ from the previous top node of the scaffold. Indeed, the only rule which may cause the required distance to increase is the concatenation rule $A \leftarrow B C$. In this case, when the edge $p_B$ points to a node $v_B$ which is a distance $i$ from the previous top node in the scaffold, then $p_A$ will point to the same node $v_{BC}$ as the edge $e_{v_B}(C)$ of $v_B$ corresponding to the non-terminal $C$. So the distance from the previous top node to $v_{BC}$ is now the distance to $v_B$ plus one, i.e., $i+1$. Since, as we argue later, there are no circular dependencies, the maximum distance is then $|\mathsf{NT}|$.}
\item If $A \leftarrow B / C$, then we must first compute $p_B$ and $p_C$, and then we set $p_A = p_B$, if $p_B \neq \varnothing$, and otherwise we set $p_A = p_C$.
\end{itemize}
In the above procedure, we may assume that $p_B$ and $p_C$ are computed before $p_A$, when the rule for $A$ depends on $B$ and $C$. This is because the dependencies of the above procedure (when we say ``we must first compute \ldots'') correspond exactly to the subroutine calls of the recognition procedure $\mathsf{Rec}_{\mathcal G}$. Hence, if we have a cyclic dependency above this will cause $\mathsf{Rec}_{\mathcal G}$ to enter an infinite loop, and our assumption that ${\mathcal G}$ is total implies that this never happens on any input. Hence if at some point a cyclic dependency is triggered, e.g. ``before computing $p_A$ we must first compute $p_B$ and before computing $p_B$ we must compute $p_A$'', then it may safely be ignored by setting the edge $p_A = \varnothing$, since we are guaranteed, by the totality of ${\mathcal G}$, that $\mathsf{Rec}_{\mathcal G}$ will not be called for the non-terminal $A$ at this position, on any input.\footnote{Incidentally, it is based on this observation that one may convert a total PEG ${\mathcal G}$ into an equivalent well-formed PEG. See the discussion after Definition \ref{def:PEGclass}.}
\bigskip\noindent
The above definition ensures that the following property always holds:
\begin{claim}\label{claim:pegscaffold}
Let $x^r = x_n \cdots x_1 \in \Sigma^n$ and consider the scaffold $S = (V, E, L)$ obtained at the last step of the computation of ${\mathcal A}$ on $x^r$. Then the edge of the top node $n \in V$ corresponding to the non-terminal $A \in \mathsf{NT}$ will be present if and only if the corresponding parsing expression $R(A)$ accepts $x = x_1 \cdots x_n$. When present, this edge will point to the position of $x^r$ corresponding to the symbol after $\mathsf{Rec}_{\mathcal G}(R(A), x)$. I.e., if $|\mathsf{Rec}_{\mathcal G}(R(A), x)| = \ell \ge 0$ is the number of consumed symbols, then $e_n \in E$ has $e_n(A) = n-\ell$.
\end{claim}
Having defined how we create the new top node, it suffices to explain how the new state $q'$ is chosen. We will set $q' = q_{\text{yes}}$ if the new edge $e_t(S)$, where $t$ is the new top node and $e_t(S)$ is the edge corresponding to the starting non-terminal $S$ of ${\mathcal G}$, has been set to point to a node with empty label, i.e.~if $L(e_t(S)) = \varnothing$. We set $q' = q_{\text{no}}$ otherwise. Since only the base of the scaffold has an empty label, we will be in an accepting state if and only if the starting non-terminal consumes the entire input seen thus far. By Claim \ref{claim:pegscaffold} it follows that ${\mathcal L}({\mathcal A}) = {\mathcal L}({\mathcal G})^r$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:peg-automata}, sufficient direction]
Now let ${\mathcal A} = \langle \Sigma, d, \Gamma, k, Q, \delta, q_0, F\rangle$ be a scaffolding automaton accepting the language $L$. Assume without loss of generality (by duplicating states) that ${\mathcal A}$ is only in the initial state $q_0$ at the very beginning of the computation, and never re-enters it after reading the first symbol.
We construct a parsing expression grammar ${\mathcal G} = \langle \Sigma, \mathsf{NT}, R, S\rangle$ recognising $L^r$. The grammar ${\mathcal G}$ will have the following non-terminals:
\begin{itemize}
\item For each $q \in Q$, we have a non-terminal ${\mathsf{State}}(q)$.
\item For each $\gamma \in \Gamma$, we have a non-terminal ${\mathsf{Label}}(\gamma)$.
\item For each $N \in {\mathcal N}_k(d,\Gamma)$, we have a non-terminal ${\mathsf{Neighbourhood}}(N)$.
\item For each $p \in [d)^{\le k}$, we have a non-terminal ${\mathsf{Path}}(p)$.
\item The initial non-terminal of the grammar is ${\mathsf{AutomatonAccepts}}$.
\end{itemize}
\begin{tcolorbox}[breakable]
Now we will define various grammar rules, of the form $$N \leftarrow N_1 \;/\; N_2 \;/\; N_3 \;/\; \ldots,$$ where $N$ is one of the non-terminals ${\mathsf{State}}(q)$, ${\mathsf{Label}}(\gamma)$, \emph{etcetera}, and $N_1, N_2, \ldots$ are parsing expressions.
Below, when we say that we ``add an alternative $N \leftarrow E$'', we mean that the rule corresponding to the non-terminal $N$ should have the parsing expression $E$ appearing as one of the parsing expressions $N_i$ on the right-hand side. If no alternative was added in this process, for a given non-terminal $N$, then the rule corresponding to $N$ is instead $N \leftarrow {\mathsf{FAIL}}$.
So, for example, if during the proof we add the alternative $N \leftarrow A$, the alternative $M \leftarrow B$, then the alternative $N \leftarrow C$, and no other alternatives were added, then the resulting grammar will have the rules $N \leftarrow A \;/\; C$ and $M \leftarrow B$, and for every non-terminal $O$ other than $N$ and $M$, we will have the rule $O \leftarrow {\mathsf{FAIL}}$.
This allows us to specify how each transition of the scaffolding automaton affects the different rules appearing in the grammar. If we had to specify each rule of the grammar completely, then we would need to define the rules of the grammar in a fixed order with respect to the non-terminal appearing on the left side, which would obscure the idea behind the construction.
\end{tcolorbox}
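\noindent
For concreteness, this bookkeeping amounts to nothing more than the following minimal Python sketch (the rule strings are purely illustrative):
\begin{verbatim}
from collections import defaultdict

rules = defaultdict(list)          # non-terminal -> list of alternatives

def add_alternative(N, E):
    rules[N].append(E)

def rule_for(N):                   # the final right-hand side for N
    return " / ".join(rules[N]) if rules[N] else "FAIL"

add_alternative("N", "A")
add_alternative("M", "B")
add_alternative("N", "C")
print(rule_for("N"), rule_for("M"), rule_for("O"), sep=" | ")
# prints: A / C | B | FAIL
\end{verbatim}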
Let $\sigma_1, \sigma_2, \ldots$ enumerate the (finitely many) symbols of $\Sigma$. The rules of the grammar are defined as follows. We have the rule
\[
{\mathsf{State}}(q_0) \leftarrow \; \text{\tt !} \; ( \sigma_1 \;/\; \sigma_2 \;/\; \ldots )
\]
and if $N_0$ is the trivial neighbourhood containing a single unlabelled node with no edges (i.e.~the neighbourhood of the top node of the initial scaffold), we also have the rule
\[
{\mathsf{Neighbourhood}}(N_0) \leftarrow \; \text{\tt !} \; ( \sigma_1 \;/\; \sigma_2 \;/\; \ldots )
\]
This ensures that the end of the input of the grammar (which is the beginning of the input of the automaton) matches the initial state and neighbourhood.
\medskip\noindent Now for each possible $q \in Q$, $\sigma \in \Sigma$, and $N \in {\mathcal N}_k(d, \Gamma)$, we have a transition
\[
\delta(q, \sigma, N) = (q', \gamma, p_0, \ldots, p_{d-1}).
\]
Recall that this transition means ``if the scaffolding automaton is in state $q$, reads input symbol $\sigma$, and the neighbourhood of the current top node is $N$, then it will move to state $q'$, and create a new top node with label $\gamma$, with edges given by the paths $p_0, \ldots, p_{d-1} \in [d)^{\le k}\cup \{\mathsf{SELF}, \varnothing \}$.''
\medskip\noindent
Let us write ${\mathsf{Transition}}(q, \sigma, N)$ as an abbreviation for the parsing expression $$\text{\tt \&}( \sigma \; {\mathsf{State}}(q) ) \; \text{\tt \&}( \sigma \; {\mathsf{Neighbourhood}}(N) ).$$
We then add the alternative
\[
{\mathsf{State}}(q') \leftarrow {\mathsf{Transition}}(q, \sigma, N).
\]
These alternatives will be added for every transition given by $\delta$. It will follow, by induction on the length of the input string, that ${\mathsf{State}}(q)$ will accept the string $x_{i}\cdots x_1$ if and only if the computation ${\mathcal A}(x_1 \cdots x_i)$ ends in state $q$; even when it accepts, ${\mathsf{State}}(q)$ will never consume any input. Let $F = \{f_1, f_2, \ldots \}$ give the (finitely-many) accepting states. We then naturally have the rule
\[
{\mathsf{AutomatonAccepts}} \leftarrow ({\mathsf{State}}(f_1) \;/\; {\mathsf{State}}(f_2) \;/\; \ldots) \; \text{\tt .*}
\]
\medskip\noindent Then let $\lambda \in [d)^0$ be the sequence of length $0$. We add the alternative $ {\mathsf{Path}}(\lambda) \leftarrow {\eps}, $ i.e., ${\mathsf{Path}}(\lambda)$ is always accepted and consumes no input. Now take a sequence $i p \in [d)^{1 + \ell}$ of length $1 + \ell \ge 1$; then if $p_i \notin \{ \varnothing, \mathsf{SELF} \}$, we add the alternative
\[
{\mathsf{Path}}(i p) \leftarrow {\mathsf{Transition}}(q, \sigma, N) \;\; \sigma \;\; {\mathsf{Path}}(p_i) \;\; {\mathsf{Path}}(p)
\]
If $p_i = \varnothing$, we instead add the alternative:
\[
{\mathsf{Path}}(i p) \leftarrow {\mathsf{Transition}}(q, \sigma, N) \;\; {\mathsf{FAIL}}
\]
And if $p_i = \mathsf{SELF}$, we instead add the alternative:
\[
{\mathsf{Path}}(i p) \leftarrow {\mathsf{Transition}}(q, \sigma, N) \;\; {\mathsf{Path}}(p)
\]
It will follow by induction that the non-terminal ${\mathsf{Path}}(p)$ will accept the string $x_i \cdots x_1$ if and only if path $p$ goes from the top of the scaffold in the computation ${\mathcal A}(x_1 \cdots x_{i})$, i.e.~from node $i$ in that scaffold, to some node $j \le i$. And, if the non-terminal ${\mathsf{Path}}(p)$ accepts $x_i \cdots x_1$, it will consume the input exactly up to (but not including) position $j$, i.e., it will consume the string $x_{i} \cdots x_{j+1}$ (the entire string will be consumed if $j = 0$, i.e., if the edge points to the base of the scaffold). Finally, we add the alternative
\[
{\mathsf{Label}}(\gamma) \leftarrow {\mathsf{Transition}}(q, \sigma, N)
\]
\medskip\noindent The above alternatives may be added in any order, since the various conditions ${\mathsf{Transition}}(q,\sigma,N)$ are disjoint.
The following observation is \emph{crucial} to understand why the above definitions are well-founded: the expression ${\mathsf{Transition}}(q,\sigma,N)$ uses ${\mathsf{State}}$ and ${\mathsf{Neighbourhood}}$ non-terminals, but \emph{only after consuming symbol $\sigma$}; so the accepting/consuming of the various non-terminals depends on the accepting/consuming of the same non-terminals, but in prior positions of the input, where this has already been determined.
All we are left to do is explain how each ${\mathsf{Neighbourhood}}$ is defined. But notice that knowing whether the top of a scaffold has a certain neighbourhood consists of checking that certain paths exist, and that the nodes under these paths have certain labels, and that certain other paths do not exist. For example, if we wish to check for the neighbourhood $N \in {\mathcal N}_2(2, \{\Box, \boxtimes\})$ where the top node is labelled $\Box$, the second edge of the top node leads to a child labelled $\Box$ and that child has itself a child labelled $\boxtimes$ on its first edge, i.e., if $N$ is the neighbourhood:
\[
\small\Tree [.$\Box$ [ ] [.$\Box$ $\boxtimes$ [] ] ]
\]
we then have the rule:
\begin{align*}
\mathsf{Nei}\mathsf{ghbourhood}(N) & \leftarrow \\
& \text{\tt \&} {\mathsf{Label}}(\Box) \\
& \text{\tt !} {\mathsf{Path}}(0) \;\; \text{\tt \&}{\mathsf{Path}}(1)\\
& \text{\tt \&} ({\mathsf{Path}}(1) \;\;{\mathsf{Label}}(\Box))\\
& \text{\tt \&} {\mathsf{Path}}(1,0)\;\; \text{\tt !} {\mathsf{Path}}(1, 1)\\
& \text{\tt \&}({\mathsf{Path}}(1,0 ) \;\; {\mathsf{Label}}(\boxtimes))
\end{align*}
With this observation the proof is now complete.
\end{proof}
\bigskip\noindent
We would like to make the following remark. It may be observed in the grammar above, which simulates a given scaffolding automaton, that the different alternatives may all be added in any order, since they cover disjoint cases. The reader should now suspect that the prioritized choice operator $/$ may, after all, be replaced by the usual disjunction operator $|$ from context-free grammars. This is entirely correct, since $A \;/\; B$ is equivalent to $A \mid (\text{\tt !} A) B$, where $\text{\tt !}$ is the negation operator of PEGs. It is the $\text{\tt !}$ operator that we cannot do away with: our simulation of a scaffolding automaton uses the $\text{\tt !}$ operator both for detecting the end of the input and for detecting the absence of a path in the scaffold. Interestingly, it is possible to modify the above construction to remove the second use case, by adding an extra family of non-terminal symbols $\mathsf{NoPath}(p)$, that accepts the input exactly when $p$ is not a valid path starting at that position. The result of this is that any parsing expression grammar may be replaced by a grammar where the operators appearing in parsing expressions are $\text{\tt \&}$, $|$, and the special symbol $\mathsf{EndOfInput}$, which accepts only at the end of the input. Details are left to the reader.
\section{Applications}\label{sec:applications}
In this section we will use Theorem \ref{thm:peg-automata} to prove all of the remaining results mentioned in the abstract.
\subsection{``Universality''}\label{sec:universality}
\begin{theorem}\label{thm:universality}
Let $f:\{0, 1\}^\ast\to\{0, 1\}^\ast$ be any computable function. Then there exists a computable function $g: \{0, 1\}^\ast \to {\mathbb N}$ such that the language
\[
L = \{ f(x) \$^{\ell} x \mid x \in \{0, 1\}^\ast, \ell \ge g(x) \} \subseteq \{0,1,\$\}^\ast
\]
has a parsing expression grammar.
\end{theorem}
\begin{proof}
We describe a scaffolding automaton for the reverse language $L^r$, and then the result follows from Theorem \ref{thm:peg-automata}. The basic idea is to use the \emph{reverse and scan} trick. For this purpose, let $M$ be a one-tape Turing machine computing $f$.
The automaton first reads the input $x^r$, copying the symbols of $x^r$ to the labels of the corresponding nodes and adding an edge connecting each node to the previous one. It then finds the first $\$$ symbol; at this point it continues reading $\$$ symbols, while successively labelling the corresponding nodes of the scaffold with the successive configurations of the Turing machine $M$ on input $x$. After this it checks that the input matches the output of $M$ on input $x$. So, if $c_i$ is the configuration of $M$ on input $x$ at time-step $i$, and $M$ runs for $t$ time steps on input $x$, then the labels, when seen from first to last, form the string: \newcommand{\ovrs}[2]{\mathrlap{\hspace{1pt}#1}{\phantom{#2}}}\begin{align*}
\text{labels:} \qquad & x^r \#\# \; c_0 \# c_0^r \#\# \; c_1 \# c_1^r \#\# \; c_2 \# c_2^r \#\# \; \cdots \; c_t \# c_t^r \#\# \; \underbrace{\# \ldots \#}\\
\text{input:} \qquad & x^r\ovrs{\$}{\#}\ovrs{\$}{\#}\; \ovrs{\$}{c_0} \ovrs{\$}{\#} \ovrs{\$}{c_0^r} \ovrs{\$}{\#}\ovrs{\$}{\#} \; \ovrs{\$}{c_1} \ovrs{\$}{\#} \ovrs{\$}{c_1^r} \ovrs{\$}{\#}\ovrs{\$}{\#} \; \ovrs{\$}{c_2} \ovrs{\$}{\#} \ovrs{\$}{c_2^r} \ovrs{\$}{\#}\ovrs{\$}{\#} \; \cdots \; \ovrs{\$}{c_t} \ovrs{\$}{\#} \ovrs{\$}{c_t^r} \ovrs{\$}{\#}\ovrs{\$}{\#} \; \ovrs{\hspace{6pt}f(x)^r}{\# \ldots \#}
\end{align*}
Here $\#$ is being used as a separator. Note that $\$$ is also being used as a separator, but the symbol $\$$ is part of the actual language being recognized, and the symbol $\#$ is part of the alphabet being used to label the scaffold.
One may verify that the above labelling can be produced by a scaffolding automaton, provided we choose a reasonable encoding for Turing machine configurations (and for this purpose the working alphabet can be as large as desired). For example, we may encode a configuration by the sequence of symbols on the tape, and the position of the tape head will be additionally marked with some (finite) information containing the current state of the computation. With such an encoding, the scaffolding automaton can, for each $i$, produce the labels in the sequence $c_{i+1}$, provided that when reaching the first symbol of $c_{i+1}$, the top of the scaffold has an edge pointing to the last symbol of $c_i^r$ (which is easy to ensure), and that each node in the scaffold has an edge to the previous node; then the labelling $c_{i+1}$ is produced one symbol at a time by scanning $c_i^r$ starting with its last symbol, and producing the symbols of $c_{i+1}$ according to the transition function of $M$. Similarly, for each $i$, one may produce the labels in the sequence $c_i^r$, provided that when reaching the first symbol of $c_i^r$, the top of the scaffold has an edge pointing to the last symbol of $c_i$; then the labelling $c_i^r$ is produced by copying one symbol at a time.
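\medskip\noindent
Purely as an illustration of such an encoding (the tuple-marking of the head cell and the transition table \texttt{delta} are assumptions of this sketch, not conventions fixed by the proof), one step of $M$ on a configuration may be written in Python as follows; the scaffolding automaton spells out exactly such a list of cells, one label at a time, by scanning the reversed copy $c_i^r$ of the previous configuration.
\begin{verbatim}
def next_config(cur, delta, blank="_"):
    # cur: list of tape cells; the head position is the unique cell
    # holding a (state, symbol) pair.
    cur = cur + [blank]                  # room for the head to move right
    nxt = list(cur)
    for i, cell in enumerate(cur):
        if isinstance(cell, tuple):      # found the head marker
            state, sym = cell
            new_state, new_sym, move = delta[(state, sym)]
            nxt[i] = new_sym
            j = i + 1 if move == "R" else max(i - 1, 0)
            nxt[j] = (new_state, nxt[j]) # re-mark the head at its new cell
            break
    return nxt
\end{verbatim}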
The scaffolding automaton finally accepts if the last $\$$ symbol corresponds exactly to the last position of the (reversal of) last configuration of the computation of $M$ on $x$, and the last $\$$ symbol is followed by the string $y$ which is the reverse of the output written on the tape, in that final configuration; i.e.~if it is followed by $f(x)^r$.
\end{proof}
\bigskip\noindent
We may now show that the recognition procedure underlying parsing expression grammars is complete for polynomial time, under logspace reductions. This was previously unknown, and stands in contrast with context-free grammars.
In the case of context-free grammars, we may define the complexity class $\mathsf{LOGCFL}$ to be the class of languages which are reducible to context-free languages under logspace reductions. It may be proven that this is exactly the class of languages decidable by log-depth Boolean circuits where the OR gates have arbitrary fan-in, and the AND gates have fan-in $2$ \cite[see][p.~137]{johnson1990catalog}. In particular, $\mathsf{LOGCFL}$ is a sub-class of $\mathsf{NC}^2$, which is believed to be strictly contained in $\mathsf{P}$.
In contrast, if we were to define an analogous complexity class $\mathsf{LOGPEG}$, containing those languages that are reducible, via logspace reductions, to PEG-recognizable languages, it turns out that $\mathsf{LOGPEG} = \mathsf{P}$. It is easy to see that $\mathsf{LOGPEG} \subseteq \mathsf{P}$, since $\mathsf{PEG} \subseteq \mathsf{P}$ and $\mathsf{P}$ is closed under logspace reductions. The other direction follows as a corollary of Theorem \ref{thm:universality}.
\begin{corollary}
There is a language $L\in\mathsf{PEG}$ which is complete for $\mathsf{P}$ under logspace reductions.
\end{corollary}
\begin{proof}
Notice in the proof of Theorem \ref{thm:universality} that the resulting function $g: \{0, 1\}^\ast \to {\mathbb N}$ grows quadratically in the running time of the Turing machine $M$. Now consider the function $f$ such that $f(x) = 1$ if $x$ encodes a triple $\langle N, 0^t, y \rangle$ where, in turn, $N$ encodes a Turing machine which accepts input $y$ in $t$ or fewer steps, and $t \ge |N| + |y|$. And let $f(x) = 0$ otherwise. Then, computing $f(x)$ is a problem which is complete for polynomial time under logspace reductions. There are machines for computing $f$ in time $O(t^2)$, and hence $g(\langle N, 0^t, y \rangle) = O(t^4) \le c\cdot t^4$ for some sufficiently large integer constant $c$. The language $L$ of Theorem \ref{thm:universality} is thus also complete for polynomial time under logspace reductions, since $f(\langle N, 0^t, y \rangle) = 1$ if and only if $1\$^{c \cdot t^4} \langle N, 0^t, y \rangle \in L$, and the string $1\$^{c \cdot t^4} \langle N, 0^t, y \rangle$ may be computed from $\langle N, 0^t, y \rangle$ in logarithmic space.
\end{proof}
\subsection{Impossibility of a Pumping Lemma}
\bigskip\noindent
We may define a pumping lemma by the following:
\begin{definition}
A \emph{pumping lemma for PEGs} is a total computable function $A$ such that, for every total\footnote{Although the totality of a given PEG is undecidable, the results of this section still hold if ``total'' is replaced by ``well-formed''. (Recall that well-formedness of PEGs is a decidable syntactic restriction which ensures totality. See remarks after Definition \ref{def:PEGclass}.) It should be understood, hence, that the impossibility of a pumping lemma is not a hidden consequence of the undecidability of totality.} PEG $G$, there exists a length $n_0$ such that for every string $x \in {\mathcal L}(G)$ of size $|x| \ge n_0$, the output $y = A(G, x)$ is in ${\mathcal L}(G)$ and has $|y| > |x|$.
\end{definition}
\noindent
Some explanation is required as to why this definition is the right one.
\begin{itemize}
\item The first observation we may make is that, to our knowledge, every pumping lemma proven thus far either already is of the above form (e.g. \cite{bar1964formal,yu1989pumping,amarilli2012proof}) or can be made to work in the above form with few modifications (e.g. considering resource-bounded Kolmogorov complexity in \cite{li1995new}).
\item The second observation is that if $A$ is not required to be total, then the definition trivialises: there exists a pumping lemma for every recursively-enumerable language. Indeed given any Turing machine $M$ and input $x$, $A$ can simply dovetail on all $y$ larger than $x$ until it finds a larger $y$ accepted by $M$ (if no such $y$ is found, $M$ decides a finite language, and so the requirement on $A$ is trivially satisfied).
\item We mention also that the definition is equivalent to one where $A$ is required to produce an infinite sequence $y\ps1, y\ps2, \ldots$ of strings of increasing size, which is what one typically sees in pumping lemmas.
\end{itemize}
\begin{theorem}\label{thm:no-pumping-lemma}
There is no pumping lemma for PEGs.
\end{theorem}
We must show that any candidate computable function $A$ must fail on some grammar. Intuitively one may quickly realise, by way of Theorem \ref{thm:universality}, that the size of ``the next string'' in the language decided by a parsing expression grammar may well grow as fast as any computable function of our choice. Hence given any candidate procedure $A$ meant to serve as a pumping lemma, we should be able to find a PEG language such that the gap between consecutive words grows faster than what the existence of $A$ would allow. The only difficulty in making this argument precise is that we wish to run the algorithm $A$ on a PEG for the very same language we are trying to define. This is solved much the same way as in the proof of Kleene's second recursion theorem (see \cite{sipser2012introduction}, \S6.1): one shows that it is possible to construct a scaffolding automaton which has access to its own encoding.
\begin{proof}
For any scaffolding automaton $X$, let $\langle X \rangle$ be a binary encoding of $X$. Let $S \in {\mathbb S}(d, \Gamma)$ be a scaffold and $w \in \Gamma^n$. We say that \emph{$S$ sees $w$ written backwards} if, for every $\ell \in [n)$, following the first edge once and then the second edge $\ell$ times, from the top of $S$, will place us in a node labelled by $w_{n - \ell}$. Suppose we have a scaffolding automaton $C$, which accepts an input of the form $\$^s \langle X' \rangle$, where $\langle X' \rangle$ in turn is the encoding of some scaffolding automaton $X'$. Let $\langle C \rangle$ be an encoding of $C$. We then define a scaffolding automaton $X_{\langle C \rangle}$, which recognises a language ${\mathcal L}(X_{\langle C \rangle}) = \{y_1, y_2, \dots \}$, via the following procedure:
\begin{itemize}
\item $X_{\langle C \rangle}$ begins by checking that the input begins with $\langle C \rangle$, in such a way that after this check, the resulting scaffold sees $\langle C \rangle$ written backwards;
\item $X_{\langle C \rangle}$ also maintains an edge from the current top node to the previous top node, at every step of the computation, and always copies the input into the labels of the scaffold, so it is not forgotten.
\item Then $X_{\langle C \rangle}$ simulates a run of $C$ itself, which by assumption recognises a string of the form:
\[
\$^s \langle X' \rangle
\]
An edge to the last symbol of $\langle X' \rangle$ is preserved by $X_{\langle C \rangle}$ throughout the rest of the computation (on every top node henceforth);
\item Then $X_{\langle C \rangle}$ checks that the following input is the sequence $\# \mathsf{Start}\#$, and enters an accepting state at this point.
\item The scaffold now sees the string $y_1 = \langle C \rangle \$^s \langle X' \rangle \# \mathsf{Start} \#$ backwards.
\item Then for each $j = 1, 2, \ldots$, the automaton repeatedly:
\begin{itemize}
\item Simulates the computation of $A(G_{\langle X'\rangle}, y_j^r)$, in order to recognise an input of the form $\$^{a_j} A(G_{\langle X'\rangle}, y_j^r) \#$, where $G_{\langle X'\rangle}$ is the grammar recognising the reverse of the language decided by $X'$. The grammar $G_{\langle X'\rangle}$ is (constructively) given by Theorem \ref{thm:peg-automata}, and the automaton can recognise an input of this form by way of Theorem \ref{thm:universality}. Here we require that $A$ is total.
\item After scanning this input (while copying it into the labels of the scaffold), the automaton enters an accepting state.
\item The scaffold now sees backwards:
\[
y_{j+1} = y_j \$^{a_j} A(G_{\langle X' \rangle}, y_{j}^r)\#.
\]
\end{itemize}
\end{itemize}
\bigskip\noindent
Let $B$ be the scaffolding automaton which, under the assumption that the top of the scaffold sees an encoding $\langle C \rangle$ written backwards, accepts a string of the form
\[
\$^b \langle X_{\langle C \rangle} \rangle.
\]
Such a scaffolding automaton $B$ exists, by Theorem \ref{thm:universality}. Let $\langle B \rangle$ be the code for the above scaffolding automaton. Then let us consider the scaffolding automaton $X_{\langle B \rangle}$, which accepts $y_1, y_2, \ldots$ (this sequence is infinite by our assumption that $A$ is total). Note that setting $C = B$ satisfies the assumption that $X_{\langle C \rangle}$ makes on $C$. The string $\langle X' \rangle$ recognised during the execution of $X_{\langle B \rangle}$ is exactly $\langle X_{\langle B \rangle} \rangle$. Hence $G_{\langle X'\rangle} = G_{\langle X_{\langle B \rangle} \rangle}$ is a parsing expression grammar deciding the same language as $X_{\langle B \rangle}$, in reverse, i.e.~$G_{\langle X_{\langle B \rangle}\rangle}$ recognises the strings $y_1^r, y_2^r, \ldots$. Now let $n_0$ be an arbitrary natural number, and consider $y_{n_0}$; clearly $|y_{n_0}| \ge n_0$; and yet the smallest string larger than $y_{n_0}$ which is accepted by $X_{\langle B \rangle}$ is $y_{n_0+1} = y_{n_0} \$^{a_{n_0}} A(G_{\langle X_{\langle B\rangle} \rangle}, y_{{n_0}}^r)\#$; but its size is strictly greater than $|A(G_{\langle X_{\langle B\rangle} \rangle}, y_{n_0}^r)|$, and so is the size of $y_{n_0 + k}$ for any natural $k > 1$. The output $A(G_{\langle X_{\langle B \rangle}\rangle}, y_{n_0}^r)$ is therefore too short to be any string of the language longer than $y_{n_0}^r$; hence $A$ must fail on the grammar $G_{\langle X_{\langle B \rangle}\rangle}$.
\end{proof}
\subsection{PEGs vs.~Online Turing Machines}\label{sec:peg-vs-online}
Because scaffolding automata are machines which read a single input symbol at a time, and which do only a constant number of operations per symbol read, they can be thought of as a \emph{real-time} computational model. This led us to conjecture that the reverse of any language in $\mathsf{PEG}$ could be recognised by a real-time Turing machine.
However, this conjecture turns out to be demonstrably false.
Let us begin by the following definition:
\begin{definition}
An \emph{online Turing machine} is a Turing machine where the head of the input tape can only move in one direction. At the beginning of the computation, an input $x \in \Sigma^\ast$ is written on the input tape, and the head of the input tape sits over the leftmost symbol of $x$, and every time the tape head is moved to the right, we say that \emph{another symbol from the input was read}. For convenience, an additional auxiliary tape is provided where the input size $|x|$ is given in binary.\footnote{So that one will not think that the lower bounds we are about to prove result, somehow, from the fact that the machine does not know the input size. Indeed the reason why the lower bound holds is more profound. We may even fill the auxiliary tape with any content we please (as a function of $n$), i.e.~the lower bounds here proven will hold even in the presence of non-uniform advice.}
The class $\mathsf{Online}(t(n))$ is the class of languages $X \subseteq \Sigma^\ast$ which can be decided by an online Turing machine $M$, in the following way. If $x \in \Sigma^n$, then $M(x)$ accepts if $x \in X$ and rejects otherwise, and furthermore, the computation $M(x)$ does at most $t(n)$ steps between each input symbol read.
\end{definition}
\bigskip\noindent
This section is devoted to proving the following:
\begin{theorem}\label{thm:non-real-time}
There exists a language $L \in \mathsf{PEG}$ such that neither $L$ nor $L^r$ is in $\mathsf{Online}(t(n))$, for any $t(n) = o(n/(\log n)^2)$.
\end{theorem}
\bigskip\noindent
The proof of this theorem uses the method of Rosenberg (see \cite{rosenberg1967real}, \S4.1) for proving lower bounds against online Turing machines. We will explain it here for completeness.
\begin{definition}
Let $L \subseteq \Sigma^\ast$ and $\ell, m \in {\mathbb N}$. We then say that two strings $y_1, y_2 \in \Sigma^\ell$ are $(L,\ell, m)$-equivalent, which we write $y_1 \equiv_L^{\ell,m} y_2$, if
\[
\forall x \in \Sigma^m (y_1 \cdot x \in L \iff y_2 \cdot x \in L)
\]
We may then define the sets ${\mathcal E}_{L}(\ell, m) = \Sigma^\ell\slash\equiv_L^{\ell,m}$ of $(L, \ell, m)$-equivalence classes. To each $L \subseteq \Sigma^\ast$, then, corresponds a function $E_L:{\mathbb N}\times{\mathbb N} \to{\mathbb N}$ giving the number of $(L,\ell, m)$-equivalence classes:
\[
E_L(\ell, m) = |{\mathcal E}_{L}(\ell, m)|
\]
\end{definition}
\bigskip\noindent
The framework of Rosenberg then rests on the following crucial observation:
\begin{theorem}[\cite{hartmanis1965computational}] \label{thm:not-online-method}
If $L \in \mathsf{Online}(t(n))$, then $E_L(\ell, m) \le 2^{O(m \cdot t(\ell + m))}$.
\end{theorem}
\begin{proof}
Let $M$ be an online Turing machine that decides whether $z \in L$ by making $\le t(|z|)$ computation steps per symbol. Let $y \cdot x \in \Sigma^n$, where $y \in \Sigma^\ell$, $x \in \Sigma^m$ and $n = \ell + m$. Consider the configuration $C$ of the computation $M(y \cdot x)$, after $M$ has read all the $\ell$ symbols in $y$ and done whichever computation it does on them, and precisely before it reads the first symbol of $x$. As $M$ then proceeds to read the $m$ symbols of $x$, it can only do $t(n)$ steps per symbol; and thus if one describes the configuration $C$ partially, by giving only the state of the finite control, the positions of the tape heads, and the contents of the tapes at distance $\le m \cdot t(n)$ from the tape heads, then one can simulate the entire computation to its very end.
But there are only $2^{O(m \cdot t(n))}$-many possible such partial descriptions, and any two prefixes $y_1, y_2 \in \Sigma^\ell$ leading to the same partial description are accepted with exactly the same suffixes $x \in \Sigma^m$, i.e.~they are $(L, \ell, m)$-equivalent. Hence $E_L(\ell, m) \le 2^{O(m \cdot t(\ell + m))}$.
\end{proof}
\bigskip\noindent
As a warm-up, we begin by showing the following easy result:
\begin{theorem}
There is a language $K \subseteq \{0,1,\#\}^\ast$ in $\mathsf{PEG}$, such that $E_{K}((D+1)\cdot 2^D, D) \ge 2^{2^D}$ for all $D \in {\mathbb N}$. Hence $K \notin \mathsf{Online}(t(n))$, for any $t(n) = o(n/(\log n)^2)$.
\end{theorem}
\begin{proof}
Consider the language:
\[
K^r = \{ x \# w_1 \# w_2 \# \cdots \# w_N \mid x \in \{0, 1\}^\ast, \forall i\; w_i \in \{0, 1\}^\ast, \exists i \; x^r = w_i \}.
\]
A scaffolding automaton can easily decide $K^r$ by maintaining an edge pointing to the last symbol of $x$, and then for each $w_i$ which it sees, scanning $x$ in reverse and comparing it with $w_i$. Hence $K \in \mathsf{PEG}$ by Theorem \ref{thm:peg-automata}.
But looking carefully at $K = \{ w_1 \# w_2 \# \cdots \# w_N \# x \mid \exists i \; x^r = w_i \}$, one sees that if we have $N = 2^D$ strings $w_i$ each of length $D$, then the suffixes $x$ that cause acceptance are exactly those with $x^r \in \{w_1, \ldots, w_N\}$, and there are $2^N - 1 = 2^{2^D} - 1$ possible such nonempty sets of accepted suffixes. The empty set may be obtained by a malformed prefix, where none of the $w_i$ has length $D$, but their concatenation, with $\#$ as a separator, still has length $(D+1) 2^D$. Hence $E_K((D+1) 2^D, D) \ge 2^{2^D}$.
Now, if $K$ were in $\mathsf{Online}(t(n))$ for $t(n) = o(n / (\log n)^2)$, by Theorem \ref{thm:not-online-method} we would have $E_K((D+1) 2^D, D) \le 2^{D \cdot t((D+1) 2^D)} = 2^{o(2^D)}$, a contradiction.
\end{proof}
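\noindent
The bound in the above proof can be checked by brute force for very small $D$: the following Python sketch enumerates all prefixes of length $(D+1)2^D$ and counts the equivalence classes induced by the binary suffixes of length $D$ (a lower bound on $E_K$); it is feasible for $D \le 2$ only.
\begin{verbatim}
from itertools import product

def in_K(z):
    # K = { w_1 # ... # w_N # x : some w_i equals the reverse of x }
    *ws, x = z.split("#")
    return any(w == x[::-1] for w in ws)

def count_classes(D):
    ell = (D + 1) * 2**D
    suffixes = ["".join(s) for s in product("01", repeat=D)]
    classes = set()
    for p in product("01#", repeat=ell):     # all prefixes of length ell
        prefix = "".join(p)
        classes.add(tuple(in_K(prefix + x) for x in suffixes))
    return len(classes)

print(count_classes(2))    # prints 16 == 2**(2**2)
\end{verbatim}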
\bigskip\noindent
Now we will show the following:
\begin{theorem}\label{thm:not-online-hard-side}
There is a language $H\subseteq\{0,1,\#\}^\ast$, decidable by a scaffolding automaton, such that $E_{H}(O(D \cdot 2^D), D ) \ge 2^{2^D}$.
Hence $H \notin \mathsf{Online}(t(n))$, for any $t(n) = o(n/(\log n)^2)$.
\end{theorem}
The proof of this theorem is significantly more involved, and uses the \emph{reverse and scan} trick we have seen before. So let us first observe that from $K$ and $H$ we may obtain the language $L$ promised by Theorem \ref{thm:non-real-time}. Let $L^r = \{ 0 x 0 \mid x \in K^r \} \cup \{ 1 x 1 \mid x \in H \}$. It is easy to see that $L^r$ has a scaffolding automaton, since $K^r$ and $H$ both do, and so $L \in \mathsf{PEG}$. But an online Turing machine for deciding $L^r$ can be easily converted into an online Turing machine for deciding $H$, and an online Turing machine for deciding $L$ can be converted into an online Turing machine for deciding $K$. Hence neither $L^r$ nor $L$ are in $\mathsf{Online}(t(n))$ for any $t(n) = o(n/(\log n)^2)$.
\begin{proof}[Proof of Theorem \ref{thm:not-online-hard-side}]
Given a $(d, \Sigma)$-scaffold $G = (V, E, L)$ (as per Definition \ref{def:scaffold}) with $d \ge 2$, and a node $v$ of $G$, let us define the map ${\mathsf{Path}}_{G,v}: \{0, 1\}^\ast \to V \cup \{\varnothing\}$, so that ${\mathsf{Path}}_{G,v}(x_1 \cdots x_n) = v'$ if the sequence of bits $x_1 \cdots x_n \in \{0, 1\}^n$ is a valid path from $v$ to $v'$ in $G$ (as per Definition \ref{def:path}). If we repeat Definition \ref{def:scaffold} here, for explicitness, we get that ${\mathsf{Path}}_{G,v}$ is given inductively by
\begin{itemize}
\item ${\mathsf{Path}}_{G,v}(\lambda) = v$; and
\item if ${\mathsf{Path}}_{G,v}(x_1 \cdots x_n) = w \neq \varnothing$ and $e_w$ is the edge list corresponding to node $w$ of $G$, then ${\mathsf{Path}}_{G,v}(x_1 \cdots x_n x_{n+1}) = e_w(x_{n+1})$; and
\item if ${\mathsf{Path}}_{G,v}(x_1 \cdots x_n) = \varnothing$, then ${\mathsf{Path}}_{G,v}(x_1 \cdots x_n x_{n+1}) = \varnothing$ also.
\end{itemize}
\medskip
Then let us define the \emph{binary-depth} of $G$ with respect to $v$, $\mathsf{BinDepth}_G(v)$, to be the largest $D \in {\mathbb N}$ such that ${\mathsf{Path}}_{G,v}$ is ``total'' and injective on $\{0, 1\}^{\le D}$, i.e.~$\varnothing \notin {\mathsf{Path}}_{G,v}(\{0, 1\}^{\le D})$, and $|{\mathsf{Path}}_{G,v}(\{0, 1\}^{\le D})| = 2^{D+1}-1$. Intuitively explained: when recursively following the first two edges of $v$ and its descendants, we will find a complete binary tree of depth $D$. Note that, although in general, in a scaffold, we can have two distinct paths leading to the same node, our notion of binary-depth requires that all $2^{D+1}-1$ different paths in $\{0, 1\}^{\le D}$ lead to distinct nodes of $G$. If $\mathsf{BinDepth}_G(v) \ge D$, we will write $\mathsf{BinTree}_{G}(v, D)$ to denote the complete binary tree of depth $D$, rooted at $v$, obtained by recursively following the first two edges until depth $D$ is reached.
\medskip
A scaffolding automaton constructs a scaffold as it processes each new input symbol. We will devise a scaffolding automaton ${\mathcal A}$ as follows. When ${\mathcal A}$ is given any binary string $y \in \{0, 1\}^\ell$, with
\begin{equation}
\label{eq:ell}
\ell = D + 1 + \sum_{n=0}^{2^{D} - 1}( 2 |n|_2 + 2 ) = O(D \cdot 2^D)
\end{equation}
where $|n|_2$ is the size of the smallest binary representation of the number $n$, then the resulting scaffold will have binary-depth $\ge D$, with respect to the first child of the top node. Formally said, the computation ${\mathcal A}(y) = ((q_0, S_0), \ldots, (q_\ell, S_\ell))$ constructs the scaffold $S_\ell = ([\ell], E_\ell, L_\ell)$ having $\mathsf{BinDepth}_{S_\ell}(e_\ell(0)) \ge D$.
\medskip
Before showing how this is done let us show why it is enough. The language $H$ will be decided by a scaffolding automaton ${\mathcal A}'$, in the following way: as long as ${\mathcal A}'$ only sees $0$s and $1$s, it will run the algorithm of the automaton ${\mathcal A}$. Besides the labels which ${\mathcal A}$ places at each node, we also copy the corresponding input bit into that node, i.e.~the working alphabet of ${\mathcal A}'$ will be the product of the working alphabet of ${\mathcal A}$ with $\{0, 1\}$. Then we see our first separator symbol $\#$, and we stop running ${\mathcal A}$. Let us call $z$ the part of the input which precedes the separator symbol. After the separator, we expect to see a string $p \in \{0, 1\}^\ast$, and we interpret $p$ as if it were a path down the tree which is embedded in the scaffold. As we read the symbols of $p$, we thus maintain some edge following down this path. In this way we will traverse some bit positions of $z$, and we can see which bits of $z$ appear in these positions, since we have copied the bits of $z$ into the labels; then, whenever $z$ has a $1$ at such a position, we enter an accepting state, and whenever $z$ has a $0$, we enter a rejecting state.
When $|z| = \ell$ as above, we have a full binary tree of depth $D$, and thus the strings $p \in \{0, 1\}^D$ will point to $2^D$ different positions of $z$. These positions are distinct (as required by the definition of binary-depth). Thus there are $2^{2^D}$ ways of filling such positions with bits. Each such way of filling these positions will give a different $(H, \ell, D)$-equivalence class. Hence
\[
E_H(\ell, D) \ge 2^{2^D}.
\]
\medskip
Now to construct ${\mathcal A}$. The base of the method is similar to how we built a scaffolding automaton for the counting language, in Section \ref{sec:expressive-power}. The scaffold constructed by ${\mathcal A}$ will be labelled by the sequence
\[
(0)_2^r \circ (0)_2 \# \; (1)_2^r \circ (1)_2 \# \; \cdots \; (n-1)_2^r \circ (n-1)_2 \# \; (n)_2^r \circ (n)_2 \# \cdots
\]
where for each natural number $k$, $(k)_2$ is its binary representation, and $(k)_2^r$ is the reverse of its binary representation. The characters $\#$ and $\circ$ are being used as separators, so $\#$ is called the \emph{outer separator}, and $\circ$ the \emph{inner separator}. It may be worthwhile to actually write it down:
\[
0\circ 0\#\; 1\circ 1\#\; 01 \circ 10 \# \; 11 \circ 11 \#\; 001 \circ 100 \#\; 101 \circ 101 \# \; 011 \circ 110 \# \; \ldots.
\]
It is not hard to see that such a labelling can be obtained by a scaffolding automaton: the automaton can copy what is before each inner separator symbol $\circ$ to appear after it in reverse, and then, after writing an outer separator symbol $\#$, it can scan the binary representation of the number $n$, appearing before the $\#$, from the lowest to highest-order bit, and apply the usual algorithm for incrementing a binary number by $1$, thus writing down the binary representation of $n+1$ in reverse.
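\noindent
As a quick illustration, the labelling is generated exactly by the following increment-while-scanning routine (a Python sketch; the letter \texttt{o} stands for the inner separator $\circ$, purely for readability of the output):
\begin{verbatim}
def blocks(n_blocks):
    out, rev = [], "0"    # rev holds (n)_2 reversed, lowest-order bit first
    for _ in range(n_blocks):
        out.append(rev + "o" + rev[::-1] + "#")
        bits, carry = list(rev), 1     # increment, scanning low to high
        for i, b in enumerate(bits):
            if not carry:
                break
            bits[i], carry = ("1", 0) if b == "0" else ("0", 1)
        if carry:
            bits.append("1")           # the number gained a bit
        rev = "".join(bits)
    return " ".join(out)

print(blocks(7))
# 0o0# 1o1# 01o10# 11o11# 001o100# 101o101# 011o110#
\end{verbatim}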
The nodes of the scaffold are thus divided into blocks, and the $n$-th block is of the form $(n)_2^r \circ (n)_2 \#$.
We must now explain how the edges of the tree are added to the scaffold. The invariant we would like to preserve at the $n$-th block, is the following. Suppose $x_k \cdots x_1 = (n)_2$ is the binary representation of $n$, so that the $n$-th block is labelled by
\[
x_1 \cdots x_k \circ x_k \cdots x_1 \#
\]
Let $v_1 \cdots v_k\; r \; v_k' \cdots v_1'\; s$ be the nodes of the scaffold that get the labels above, i.e., the nodes of the scaffold corresponding to the $n$-th block. Then we would like to maintain the following property:
\begin{invariant} It will always hold, on every block:
\begin{itemize}
\item If $x_i = 1$ for some $i \in \{2, \ldots, k\}$, then we will have $e_{v_i}(0) = e_{v'_i}(0)$, and $\mathsf{BinDepth}(e_{v_i}(0)) \ge i-1$.
\item Furthermore, for distinct $i, j \in \{2, \ldots, k\}$ with $x_i = x_j = 1$, the trees $\mathsf{BinTree}(e_{v_i}(0), i-1)$ and $\mathsf{BinTree}(e_{v_j}(0), j-1)$ are node-disjoint.
\end{itemize}
\end{invariant}
I.e., one should think that if $x_i = 1$, the first edge leaving $v_i$ and $v_i'$ points to the root of the same full binary tree of depth $i - 1$. And that the two trees corresponding to different $v_i$ and $v_j$ share no node.
For simplicity, let us momentarily ignore the ``Furthermore'' part of the invariant, and later argue that it will be upheld.
Now suppose that this invariant holds for the $n$-th block; let us show how the algorithm needs to behave in order to make it hold for the $(n+1)$-th block. Suppose, for simplicity, that $n$ and $n+1$ are both $k$-bit numbers (the case when $n$ has $k$ bits and $n+1$ has $k+1$ bits is similar). Let $x_k \cdots x_1 = (n)_2$ and $y_k \cdots y_1 = (n+1)_2$ be their binary representations. The algorithm constructs the first half of the $(n+1)$-th block by scanning backwards the second half of the $n$-th block.
So, suppose that the second half of the $n$-th block has nodes $v_k' \cdots v_1'$, which are labelled $x_k \cdots x_1$, respectively. Let $s$ be the node which is labelled by the outer separator $\#$ between blocks $n$ and $n+1$. Suppose that the algorithm is about to add the nodes $w_1 \cdots w_k$ to the first half of the $(n+1)$-th block, and intends to write the labels $y_1 \cdots y_k$ into them. This is done by reading $x_k \cdots x_1$ backwards: when the algorithm writes the label $y_1$ into $w_1$, it has an edge pointing to $v_1'$ where it can read $x_1$; when it writes $y_2$ into $w_2$ it has an edge pointing to $v_2'$, which is labelled by $x_2$, and so on. Such ``backwards scanning'' is easy to do provided we maintain an edge at each node which points to the previous node. The algorithm will also maintain an edge pointing to $s$.
When incrementing, if $x_1 = 0$, then we will have $y_1 = 1$, and so we must make sure that $e_{w_1}(0)$ has binary-depth $\ge 0$: this is easily ensured by letting $e_{w_1}(0) = s$, $e_{w_1}(1) = \varnothing$.
If instead $x_1 = 1$, we will have $y_1 = 0$; in this case we set $e_{w_1}(0) = v_1'$, and also set $e_{w_1}(1) = s$. It now holds that $\mathsf{BinDepth}(w_1) \ge 1$, and we will use this as the base case of an induction on the length of the $1$-prefix of $x$. This is illustrated in the figure below, for block number $n = 39$, so that $(n)_2 = x_6 \ldots x_1 = 100111$. So suppose that $x_{i-1} = 1, \ldots, x_1 = 1$ are the labels of $v_{i-1}', \ldots, v_1'$, and that we have written $y_1 = 0, \ldots, y_{i-1} = 0$ as the labels of $w_1, \ldots, w_{i-1}$. We are about to add the node $w_i$ to the first half of the $(n+1)$-th block, using our pointer to $v_i'$ in the second half of the $n$-th block. Suppose by induction that $\mathsf{BinDepth}(w_{i-1}) \ge i - 1$. Now look at $x_i$. If we are not finished with the $1$-prefix of $x$, i.e.~if $x_i = 1$, then we must set $y_i = 0$. Our invariant for the previous block tells us that $\mathsf{BinDepth}(e_{v_i'}(0)) \ge i-1$, and our induction hypothesis gives us $\mathsf{BinDepth}(w_{i-1}) \ge i - 1$. So we create the new top node $w_i$ with $e_{w_i}(0) = e_{v_i'}(0)$ and $e_{w_i}(1) = w_{i-1}$, so that $\mathsf{BinDepth}(w_i) \ge i$. This maintains our induction hypothesis. This case pertains to nodes $w_2$ and $w_3$ of the figure below. If we have reached the point where the carry stops, i.e., if $x_i = 0$, then we will set $y_i = 1$, and for this we create the new top $w_i$ and set $e_{w_i}(0) = w_{i-1}$, $e_{w_i}(1) = \varnothing$. This satisfies our invariant for the first half of the $(n+1)$-th block (there is no carry in this case). This case pertains to node $w_4$ of the figure below. Notice how $\mathsf{BinDepth}(w_3) = 3$, i.e., we have a complete binary tree of depth $3$ rooted at $w_3$, which we have drawn in thicker lines for emphasis.
\begin{center}
\begin{tikzpicture}[node distance=1.9cm,>=stealth',bend angle=15,auto]\label{fig:real-time}
\tikzstyle{snode}=[circle,draw=black,minimum size=6mm]
\tikzstyle{every path}=[->,every node/.style={font=\scriptsize}]
\begin{scope}
\node [snode,label={$1$}] (v'6) {$v'_6$};
\node [] (x) [above left=0.13cm and 0.1cm of v'6]{$x = $};
\node [isosceles triangle,draw] (v'6_0) [below left=1cm and 0.3cm of v'6] {\tiny depth $5$};
\path
(v'6) edge [bend left=15] node [swap,at start] {$0$} (v'6_0.1);
\node [snode,label={$0$}] (v'5) [right of=v'6] {$v'_5$};
\node [snode,label={$0$}] (v'4) [right of=v'5] {$v'_4$};
\node [snode,label={$1$}] (v'3) [right of=v'4] {$v'_3$};
\node [isosceles triangle,draw,very thick] (v'3_0) [below left=1cm and 0.3cm of v'3] {\tiny depth $2$};
\path
(v'3) edge [bend left=15] node [swap,at start] {$0$} (v'3_0.1);
\node [snode,label={$1$}] (v'2) [right of=v'3] {$v'_2$};
\node [isosceles triangle,draw,very thick] (v'2_0) [below left=1cm and 0.3cm of v'2] {\tiny depth $1$};
\path
(v'2) edge [bend left=15] node [swap,at start] {$0$} (v'2_0.1);
\node [snode,label={$1$},very thick] (v'1) [right of=v'2] {$v'_1$};
\node [snode,label=45:{$\#$},very thick] (s) [below right=1cm and 0.5cm
of v'1] {$s$};
\node [snode,label=-90:{$0$},very thick] (w1) [below=2.5cm of v'1] {$w_1$};
\path
(w1) edge [bend right=15,very thick] node [swap,at start] {$1$} (s)
(w1) edge [bend left=15,very thick] node [swap,at start] {$0$} (v'1);
\node [snode,label=-90:{$0$},very thick] (w2) [left of=w1] {$w_2$};
\path
(w2) edge [bend right=15,very thick] node [at start] {$1$} (w1)
(w2) edge [bend right=15,very thick] node [at start] {$0$} (v'2_0.359);
\node [snode,label=-90:{$0$},very thick] (w3) [left of=w2] {$w_3$};
\path
(w3) edge [bend right=15,very thick] node [at start] {$1$} (w2)
(w3) edge [bend right=15,very thick] node [at start] {$0$} (v'3_0.359);
\node [snode,label=-90:{$1$}] (w4) [left of=w3] {$w_4$};
\path
(w4) edge [bend right=15] node [at start] {$0$} (w3);
\node [snode,label=-90:{$0$}] (w5) [left of=w4] {$w_5$};
\node [snode,label=-90:{$1$}] (w6) [left of=w5] {$w_6$};
\path
(w6) edge [bend right=15] node [at start] {$0$} (v'6_0.359);
\node [] (y) [below left=0.13cm and 0.1cm of w6]{$y = $};
\end{scope}
\end{tikzpicture}
\end{center}
Past the first $x_i = 0$ (where the carry stops), we proceed by copying the remaining nodes and their edges; i.e.~for the positions $j > i$ we set $y_j = x_j$, $e_{w_j}(0) = e_{v_j'}(0)$, $e_{w_j}(1) = e_{v_j'}(1)$, until we find the inner separator $\circ$. After the inner separator $\circ$, we simply copy what we have done, i.e.~we set $y_i' = y_i$, $e_{w_i'}(0) = e_{w_i}(0)$, $e_{w_i'}(1) = e_{w_i}(1)$, until we find the outer separator $\#$.
To keep things simple we have not considered the ``Furthermore'' part, so let us deal with it now. We have $y_1=0, \ldots, y_{i-1} = 0, y_i = 1$ for some $i$, which is the last point reached by the carry. Now notice that $\mathsf{BinTree}(w_{i-1}, i-1)$ (which is the tree under $w_3$ in the figure above) is made from ``fresh'' nodes, which did not previously belong to a tree, namely $w_1, \ldots, w_{i-1}$, $s$, and $v'_1$, together with the sub-trees $\mathsf{BinTree}(e_{v_j'}(0), j-1)$, for $1 < j < i$. These subtrees are, by the ``furthermore'' part of the invariant, disjoint from any sub-trees $\mathsf{BinTree}(e_{v_j'}(0), j-1)$ with $j \ge i$. Hence $\mathsf{BinTree}(w_{i-1}, i-1)$ will also be disjoint from $\mathsf{BinTree}(e_{w_j}(0), j-1)$, for $j \ge i$.
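\medskip\noindent
The construction and its invariant can also be checked by direct simulation. The following self-contained Python sketch is an illustration only (the integer node identities and the bookkeeping of block $0$ are our own choices); it rebuilds the blocks edge by edge and verifies the claim made below about block number $2^D$.
\begin{verbatim}
from itertools import count

def check_bindepth(D):
    # Verify that, in block number 2**D, the node labelled by the first
    # '1' roots, through its 0-edge, a complete binary tree of depth D
    # on pairwise-distinct nodes.
    ids = count(); new = lambda: next(ids)
    e = {}                                # node id -> (0-edge, 1-edge)
    v = [new()]; e[v[0]] = (None, None)   # second half of block 0 ('0')
    s = new(); e[s] = (None, None)        # outer separator of block 0
    x = "0"                               # (n)_2 reversed, lowest bit first
    for _ in range(2**D):
        w, y, carry = [], [], 1
        for i, b in enumerate(x):
            node = new()
            if i == 0 and b == "1":       # carry consumes x_1
                y.append("0"); e[node] = (v[0], s)
            elif i == 0:                  # x_1 = 0: increment ends at once
                y.append("1"); e[node] = (s, None); carry = 0
            elif carry and b == "1":      # carry continues
                y.append("0"); e[node] = (e[v[i]][0], w[i - 1])
            elif carry:                   # carry stops at this position
                y.append("1"); e[node] = (w[i - 1], None); carry = 0
            else:                         # past the carry: plain copy
                y.append(b); e[node] = e[v[i]]
            w.append(node)
        if carry:                         # the number gained a bit
            node = new(); y.append("1"); e[node] = (w[-1], None)
            w.append(node)
        v = [new() for _ in w]            # second half: fresh copies of w
        for c, u in zip(v, w):
            e[c] = e[u]
        s = new(); e[s] = (None, None)    # new outer separator
        x = "".join(y)
    seen = []
    def walk(u, depth):                   # collect the binary tree
        if u is None: return False
        seen.append(u)
        return depth == 0 or (walk(e[u][0], depth - 1)
                              and walk(e[u][1], depth - 1))
    full = walk(e[w[D]][0], D)
    return full and len(seen) == len(set(seen)) == 2**(D + 1) - 1

print([check_bindepth(D) for D in range(1, 7)])   # [True, True, ..., True]
\end{verbatim}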
\bigskip\noindent
The result of the above is that block number $2^{D}$ will have the labels
\[
0^D 1 \circ 1 0^D \#
\]
and if we let $v$ be the node which is labelled by the first $1$ appearing in this block, then we will have $\mathsf{BinDepth}(e_v(0)) \ge D$. The expression (\ref{eq:ell}) for $\ell$ is simply the position of the input bit corresponding to the node $v$: the $2^{D}$ blocks numbered $0$ through $2^{D}-1$ precede block number $2^{D}$, and the $n$-th block has size $2|n|_2 + 2$; then we have the $D + 1$ symbols $0^D 1$, the last of which is at the position where the node $v$ becomes the top of the scaffold.
\end{proof}
\section*{Acknowledgement}
Bruno Loff is the recipient of FCT postdoc grant number SFRH/\allowbreak BPD/\allowbreak 116010/\allowbreak 2016. This work is partially funded by the ERDF through the COMPETE 2020 Programme within project POCI-01-0145-FEDER-006961, and by National Funds through the FCT as part of project UID\allowbreak /EEA\allowbreak /50014\allowbreak /2013. This work was partially supported by CMUP (UID\allowbreak /MAT\allowbreak /00144\allowbreak /2019), FCT (Portugal), FEDER and PT2020. The authors would like to thank Markus Holzer, Martin Kutrib, Leen Torenvliet and Jurgen Vinju for fruitful discussions on this subject.
\section*{Bibliography}
\bibliographystyle{elsarticle-num}
\section{Conclusions}
In this paper, we present almost optimal distributed algorithms for coverage problems.
Our algorithms beat the previous ones on several fronts: e.g.,
(i) they provably achieve the optimal approximation factors for these problems,
(ii) they run in only four rounds of computation (as opposed to a logarithmic number of rounds), and
(iii) their space complexity is independent of the number of elements in the ground set.
Moreover, our algorithms can handle coverage problems with huge subsets
(in which even one subset of the input may not fit on a single machine).
Our empirical study shows practical superiority of our algorithms.
Finally, we identified a new application of our algorithms in feature selection,
and presented preliminary results for this application. It would be nice to explore
this application in more detail in the future.
\subsection{Approach}
Recall that the sketch construction is based on two types of pruning
for the edges and vertices of the input graph:
\vspace{-2ex}
\begin{packed_item}
\item
subsampling the elements, and
\item
removing edges from large-degree elements.
\end{packed_item}
\vspace{-1ex}
The theoretical definition of the sketch provides (i)~the
probability of sampling an element, and (ii)~the upper bound on the
degree of the elements. Though these two parameters are almost
tight in theory, in practice one can use smaller values to get
desirable solutions.
Here we parameterize our algorithm by $\rho$ and $\sigma$,
where $\rho$ is the probability of sampling elements, and
$\sigma$ is the upper bound on element degrees. We
investigate this in our experiments.
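\medskip\noindent
In a streaming form, the two prunings amount to the following minimal Python sketch. Our assumption here is that the instance arrives as a stream of (set, element) edges; $\rho$ and $\sigma$ are the parameters just introduced.
\begin{verbatim}
import random
from collections import defaultdict

def build_sketch(edges, rho, sigma, seed=0):
    # Keep each element with probability rho; cap the degree of every
    # kept element at sigma by dropping its remaining edges.
    rng = random.Random(seed)
    kept, degree, sketch = {}, defaultdict(int), []
    for s, e in edges:                 # (set_id, element) pairs
        if e not in kept:              # first sight of e: sample it
            kept[e] = rng.random() < rho
        if kept[e] and degree[e] < sigma:
            degree[e] += 1
            sketch.append((s, e))
    return sketch
\end{verbatim}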
The \textsc{StochasticGreedy}\ algorithm~\cite{mirzasoleiman2014lazier} achieves
$1-\frac 1 e -\ensuremath{\varepsilon}$ approximation to maximizing monotone submodular
functions (hence coverage functions) with $O(n \log(1/\ensuremath{\varepsilon}))$
calls to the submodular function. This is theoretically the
fastest known $1-\frac 1 e -\epsilon$ approximation algorithm for
coverage maximization, and is also the most efficient in
practice for maximizing monotone submodular functions, when the input
is very large. We plug it into our MapReduce algorithm,
which then runs much faster, while losing
very little in terms of quality.
For smaller instances we compare our algorithm to \textsc{StochasticGreedy}, but
for larger ones we provide convergence numbers to argue that the two
should get very similar coverage results.
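\medskip\noindent
For reference, \textsc{StochasticGreedy}\ itself is the following simple procedure (a Python sketch of the algorithm of~\cite{mirzasoleiman2014lazier}; representing the input as a dictionary from set ids to element sets is our assumption):
\begin{verbatim}
import math, random

def stochastic_greedy(sets, k, eps=0.1, seed=0):
    # Returns k set ids achieving, in expectation, a (1 - 1/e - eps)
    # fraction of the optimal coverage.
    rng = random.Random(seed)
    ids, covered, chosen = list(sets), set(), []
    s = max(1, math.ceil(len(ids) / k * math.log(1 / eps)))  # sample size
    for _ in range(k):
        if not ids:
            break
        sample = rng.sample(ids, min(s, len(ids)))
        best = max(sample, key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
        ids.remove(best)
    return chosen, len(covered)
\end{verbatim}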
\paragraph{LiveJournal social network} We try different values for
$\rho, \sigma, k$ when running our algorithm on \texttt{livej-3}; see
Figure~\ref{fig:lj}.
For small $k$, the result improves as $\sigma$ grows, but
increasing $\rho$ has no significant effect.
On the other hand, the improvement for larger $k$ comes from
increasing $\rho$ while $\sigma$ is not as important. This observation
matches the definition of our sketch, in which the degree bound is
decreasing in $k$ and the sampling rate is increasing in $k$.
\begin{figure*}
\centering
\includegraphics[scale=0.48]{lj.jpg}
\caption{For the \problem{dominating-set} instance \texttt{livej-3},
these plots show the number of covered nodes against the
relative size of the sketch with
$\rho\in[10^{-3},3\cdot10^{-2}]$,
$\sigma\in[100, 5000]$, and $k\in [10^2,10^4]$. Curves in
one plot correspond to different choices for $\sigma$.
With large $\sigma$, the results of some runs are
indistinguishable from those next to them in the plot, hence
invisible.}
\label{fig:lj}
\end{figure*}
\begin{figure}
\centering \includegraphics[scale=0.55]{dblp-s}
\caption{The results for \texttt{dblp-3}\ are shown for
$\rho\in[2\cdot10^{-3},5\cdot10^{-2}]$, $\sigma=100$. We plot our
performance relative to \textsc{StochasticGreedy}\ against the fraction of
edges from the input graph retained in our sketch.}
\label{fig:approx-s-size} \end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{dblp-inp}
\caption{The above are results of running the algorithm on the
sampled version of \texttt{dblp-3}\ with $\rho=0.02$, $\sigma=100$. The
$x$ axis denotes the size of the sampled graph relative to
the whole. The $y$ axis shows the quality relative to
\textsc{StochasticGreedy}.}
\label{fig:approx-inp-size}
\end{figure}
\paragraph{DBLP coauthorship network}
Figure~\ref{fig:approx-s-size} compares results of our
algorithm on \texttt{dblp-3}\ (with a range of parameters) to
that of \textsc{StochasticGreedy}. Each point in these
plots represents the mean of three independent runs. Interestingly, a
sketch with merely $3\%$ of the memory footprint of the input graph
attains $99.6\%$ of the quality of \textsc{StochasticGreedy}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.61]{gut}
\caption{Here we plot
the number of covered bigrams against $\rho$ for
\texttt{gutenberg}\ with $\rho\in[10^{-5},3\cdot10^{-2}]$, $\sigma\in
[10^2,10^4]$, and $k\in [10^2,10^3]$.
The curves
corresponding to different values of $\sigma$ are practically
indistinguishable.}
\label{fig:gut}
\end{figure}
We run our algorithm on induced subgraphs of \texttt{dblp-3}\ of varying
sizes;
see Figure~\ref{fig:approx-inp-size}.
Interestingly, the performance of our algorithm improves as the
sampled graph becomes larger. In other words, if one finds parameters
$\rho$ and $\sigma$ on a subgraph of the input and applies them to the
whole graph, one does not lose much performance.
\paragraph{Project Gutenberg dataset }
We run our algorithm on \texttt{gutenberg}\ with different values for
$\rho$ and $\sigma$. As shown in Figure~\ref{fig:gut}, the outcome of
the algorithm converges quickly. In other words, for $\rho= 0.003$ and
$\sigma = 100$, the outcome of \textsc{StochasticGreedy}\ on our sketch and on the
input graph are quite similar, while our sketch is $600$ times
smaller.
\subsection{Feature-selection Problem}
\label{sec:featureselection}
Our algorithm is applicable to the \problem{feature-selection}
problem, which is a first step in many learning-based
applications~\cite{GE2003:feature}.
It is often too expensive to carry out a learning task on the entire
matrix or there might be overfitting concerns. Typically a small
subset of ``representative'' features are picked carefully, so as not
to affect the overall quality of the learning task. In practice, we
gauge the performance of feature selection by {\em reconstruction
error} or {\em prediction accuracy}; see~\cite{ABFMRZ16:ICML} for
details of evaluation criteria.
In order to compare our preliminary results to previous
work~\cite{ABFMRZ16:ICML}, we model the problem as a \problem{maximum
$k$-cover} instance by treating columns (i.e., features) as sets and
{\em pairs of rows} (i.e., pairs of sample points) as elements. We say a column {\em covers} a pair
of rows if that column (feature) is active for both of those rows (sample points), and we seek to pick $k$ columns that
{\em cover as many pairs of rows} as possible.\footnote{We also studied covering rows as opposed to covering pairs of rows, but that approach was not effective.}
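Concretely, the reduction may be sketched as follows (Python; a dense 0/1 matrix is assumed purely for clarity, whereas real instances are sparse). The resulting instance can then be handed to any \problem{maximum $k$-cover} routine.
\begin{verbatim}
from itertools import combinations

def kcover_instance(matrix):
    # matrix[i][j] is nonzero iff feature j is active for sample i.
    # Returns: feature j -> the set of row pairs (i, i') that j covers.
    n_rows, n_cols = len(matrix), len(matrix[0])
    rows_of = [{i for i in range(n_rows) if matrix[i][j]}
               for j in range(n_cols)]
    return {j: set(combinations(sorted(rows_of[j]), 2))
            for j in range(n_cols)}
\end{verbatim}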
Table~\ref{tab:feature} compares our results to prior work. Numbers
show prediction accuracy in percentage. For description of the data
set and the first four algorithms, see~\cite{ABFMRZ16:ICML}. We note
that these algorithms may only run on a $8\%$ sample of the dataset,
hence poorer performance compared to the latter two.
The fifth column exhibits a distributed version of \algoname{2-P} (the
two-phase optimization): Features are carefully partitioned across
many machines via taking into account some cut-based objective, and
then the two-phase optimization handles each part separately. It is
noteworthy that the (distributed) partitioning phase itself takes
significant amount of time to run.
The last column corresponds to our distributed \problem{$k$-cover}
algorithm, which is more efficient than the algorithm of the fifth
column. The results are similar to that of \algoname{Part}.
\begin{table}[ht]
\caption{Results for \problem{feature selection} on
\texttt{news20}\ dataset.}\label{tab:feature}\vspace{2mm}
\begin{center}
\begin{tabular}{rcccccc}
\hline
\multicolumn{1}{c}{$k$} &
{\algoname{Rnd}} &
{\algoname{\iffalse $2$-Phase\else$2$-P\fi}} &
{\algoname{\iffalse DistGreedy\else DG\fi}} &
{\algoname{PCA}} &
\hspace{-1mm}\algoname{Part}\hspace{-1mm} &
\algoname{Cover}\hspace{-1mm}
\\
\hline
500 & 54.9 & 81.8 & 80.2 & 85.8 & 84.5 & 86.2 \\
1000 & 59.2 & 84.4 & 82.9 & 88.6 & 88.4 & 89.4 \\
2500 & 67.6 & 87.9 & 85.5 & 90.6 & 92.3 & 91.2 \\
\hline
\end{tabular}
\end{center}
\end{table}
We emphasize that we can run our algorithm on much larger datasets;
the evidence of this was provided above where we reported results for
\texttt{livej-3}, for instance.
\subsection{Preliminaries} \paragraph{Coverage Problems} We study
three {\em coverage} problems. In all these problems, we consider a
ground set \ensuremath{\mathcal E}\xspace of $m$ elements, and a family $\ensuremath{\mathcal F}\xspace\subseteq
2^\ensuremath{\mathcal E}\xspace$ of $n$ subsets of the elements (i.e., $n=|\ensuremath{\mathcal F}\xspace|$ and
$m=|\ensuremath{\mathcal E}\xspace|$).
{\em The coverage function} $\ensuremath{\mathcal C}\xspace$ is defined as $\ensuremath{\mathcal C}\xspace(S)
= |\cup_{U\in S} U|$ for any subfamily $S\subseteq \ensuremath{\mathcal F}\xspace$ of subsets.
In the {\em $k$-cover} problem, given a parameter $k$, the goal is to
find $k$ sets in $\ensuremath{\mathcal F}\xspace$ with the largest union size. We sometimes
use $\textsf{Opt}\xspace_k$ to denote the size of the union for the optimum solution.
In the {\em set cover} problem, the goal is to pick the minimum number of
sets from $\ensuremath{\mathcal F}\xspace$ such that all elements in $\ensuremath{\mathcal E}\xspace$ are covered.
We also study a third problem: in the {\em set cover with $\lambda$
outliers} problem\footnote{It is sometimes called the $(1-\lambda)$-%
partial cover problem in the literature.},
the goal is to find the minimum number of sets covering at
least a $1-\lambda$ fraction of the elements in \ensuremath{\mathcal E}\xspace.
Coverage problems can also be modeled as a bipartite graph $G$. In this
graph, \ensuremath{\mathcal F}\xspace corresponds to one part of vertices of $G$, and \ensuremath{\mathcal E}\xspace corresponds
to the other part. For each set $S\in \ensuremath{\mathcal F}\xspace$, there are $\vert S\vert$ edges in $G$
from the vertex corresponding to $S$ in $G$ to all vertices in $G$
corresponding to elements $i \in S$.
For simplicity, we assume that there is no isolated
vertex in \ensuremath{\mathcal E}\xspace. For a (bipartite) graph $G$ and a subset $S$ of
its vertices (in the first part, i.e., $\ensuremath{\mathcal F}\xspace$), we define $\Gamma(G,
S)$ to be the set of neighbors of $S$ in $G$. Note that this notation
can be defined for any graph other than the original instance $G$.
Also if $G$ is the graph corresponding to the original instance of the
coverage problem, we have $\ensuremath{\mathcal C}\xspace(S) = |\Gamma(G, S)|$.
In the offline setting, a simple greedy algorithm results in a $1-\frac 1 e$-approximation for
$k$-cover and a $\log m$-approximation algorithm for the set cover
problem. Moreover, improving these approximation factors
is impossible unless $\textsf{NP}\xspace\subseteq
\textsf{DTIME}\xspace(n^{\log\log n})$~\cite{dinur2014analytical}.
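\medskip\noindent
For completeness, the greedy algorithm referred to above is the following standard procedure (a Python sketch; \texttt{sets} maps a set id to its elements):
\begin{verbatim}
def greedy_cover(sets, k=None, coverage_goal=None):
    # Pick, at each step, the set with the largest marginal coverage.
    # Stopping after k picks gives the (1 - 1/e)-approximation for
    # k-cover; stopping once coverage_goal elements are covered gives
    # the logarithmic approximation for set cover (and, with a goal of
    # (1 - lambda) m, for set cover with outliers).
    covered, chosen = set(), []
    while (k is None or len(chosen) < k) and \
          (coverage_goal is None or len(covered) < coverage_goal):
        best = max(sets, key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break                      # nothing new can be covered
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
\end{verbatim}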
{\bf \noindent Streaming models.}
In the {\em streaming} model, we
focus on the so-called {\em edge arrival}
model as opposed to the more studied {\em set arrival} (aka {\em
vertex arrival}) model. In the former, edges arrive one by one, so
we get to know about the set-element relationships one at a time,
whereas in the latter, sets arrive and bring with them a list of their
elements. The number of passes allowed for processing the data is
crucial and may change the nature of the problem.
{\bf \noindent RAM model.} In the
{\em RAM model}~\cite{aho1974design}, we will have arbitrary access to
the set-element relationship (or edges) albeit at a price. More
specifically, we may look at the list of elements in any set but each
lookup (to read one element in any list) takes one unit of time.
{\bf \noindent Distributed model.} In the distributed computation
model, we assume that the data is distributed across $t$ machines. A
distributed algorithm runs in rounds. In each round, the data is
processed in parallel on each of the $t$ machines, and each machine may
send messages to other machines. In the same round, each machine
waits to receive all messages from the other machines, and then runs
its local computation. This computation model easily captures the
MapReduce framework~\cite{osdi-DG04}. The load of each machine is the
total data it processes. In this model, the load of each machine
should be sublinear in terms of the size of input, and two important
factors in deciding the performance of a distributed algorithm in this
model are (i) number of rounds of computation, and (ii) the maximum
load on any machine. These parameters have been discussed and
optimized in previous work discussing distributed algorithms in
MapReduce framework~\cite{soda-KSV10,IMMM14,ANOY14,BBLM14,MZ15}.
{\bf \noindent {The $(1 \pm \ensuremath{\varepsilon})$-approximate oracle}}. We say $\ensuremath{\mathcal C}\xspace_\ensuremath{\varepsilon}$ is a
{\em $(1 \pm \ensuremath{\varepsilon})$-approximate oracle} to function $\ensuremath{\mathcal C}\xspace$, if given a subfamily
of sets it gives us an estimate of their union size within $1\pm\ensuremath{\varepsilon}$
precision. In other words, $\ensuremath{\mathcal C}\xspace_\ensuremath{\varepsilon}$ estimates the coverage
function $\ensuremath{\mathcal C}\xspace$ on any subfamily of the sets as a black box; i.e.,
for any subset $S\subseteq \ensuremath{\mathcal F}\xspace$, we have
\begin{align*}
(1-\ensuremath{\varepsilon})\ensuremath{\mathcal C}\xspace_{\ensuremath{\varepsilon}}(S) \leq \ensuremath{\mathcal C}\xspace(S) \leq
(1+\ensuremath{\varepsilon})\ensuremath{\mathcal C}\xspace_{\ensuremath{\varepsilon}}(S).
\end{align*}
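A standard way to build such an oracle is to subsample the ground set
once and count covered sample points. The sketch below (ours; the
dependence of $\ensuremath{\varepsilon}$ on the sample size is not analysed here) returns
an unbiased estimate of $\ensuremath{\mathcal C}\xspace(S)$:
\begin{verbatim}
import random

def make_approx_oracle(sets, universe, rate=0.1, seed=0):
    """Estimate the union size of a subfamily S by counting the
    covered points of a fixed random sample of the universe,
    rescaled by the sampling rate."""
    rng = random.Random(seed)
    sample = {e for e in universe if rng.random() < rate}
    def C_eps(S):
        covered = set()
        for s in S:
            covered.update(sets[s] & sample)
        return len(covered) / rate
    return C_eps
\end{verbatim}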
\subsection{Related Work}
{\bf \noindent Streaming models for coverage problems.}
Coverage problems have been studied extensively in the context of set
arrival
models~\cite{assadi2016tight,saha2009maximum,nisan2002communication,emek2014semi,chakrabarti2015incidence}
many of which achieve a suboptimal approximation guarantee. In
particular, Saha and Getoor~\cite{saha2009maximum} provide a $\frac 1 4$-approximation
algorithm for $k$-cover in one pass using $\tilde{O}(m)$ space. The
same technique gives a $\Theta(\log m)$ approximation algorithm for
set cover in $\Theta(\log m)$ passes, using $\tilde{O}(m)$ space.
On the hardness side, interestingly, Assadi et al.~\cite{assadi2016tight} show that
there is no $\alpha$-approximation one pass streaming algorithm for set cover using $o(nm/\alpha)$ space.
Demaine
et al.~\cite{demaine2014streaming} provide a $4^k \log n$
approximation algorithm for the set cover problem in $4^k$ passes
using $\tilde{O}(nm^{1/k})$ space. However, all the above results
hold only for the set arrival model, and apart from the hardness
results, they do not apply to the edge arrival model. Often in the
graph streaming problems, while the size of the input is
$\tilde{O}(|E|)$ for a graph $G(V,E)$, the size of the solution may be as large as
${\Omega}(|V|)$. Thus, the best hope is to find
the solution in $\tilde{O}(|V|)$ space. The semi-streaming
algorithms are those with space
$\tilde{O}(|V|)$~\cite{muthukrishnan2005data}. Graph problems in
the semi-streaming setting have been widely studied~\cite{ahn2009,AG11,EpsteinLMS09,mcgregor2005,feigenbaum2005graph,KelnerL11,KonradMM12,KonradR13}.
While the edge arrival graph streaming model has been
extensively
studied~\cite{ahn2013spectral,andoni2014towards,assadi2015tight,chitnis2015kernelization,esfandiari2015streaming,kapralov2014approximating,kapralov2015streaming},
none of these papers studies the coverage problems in the edge arrival model.
{\bf \noindent Distributed algorithms for coverage problems.}
Being a notable special case of submodular maximization, solving maximum
$k$-coverage in a distributed manner has attracted a significant
amount of research over the last few
years~\cite{CKT10,CKW10,spaa-LMSV11,BST12,KMVV13,nips13,karbasiKDD2014,MZ15,BENW15,nips15}.
In all these papers, oracle access to the submodular function is
assumed, and therefore, the running time of each round of the
algorithms depends on the size of these sets as well. In this model,
for the coverage maximization problem, Chierichetti et
al.~\cite{CKT10} present a $(1-1/e)$-approximation algorithm in
polylogarithmic number of rounds of computation, and this was
improved to $O(\log n)$ rounds~\cite{BST12,KMVV13}.
Recently, applying the idea of randomized core-sets, a
constant-approximation algorithm has been developed for this problem
that runs in 2 rounds~\cite{MZ15,BENW15}. In particular, the best
known approximation factor for this model is $0.54$~\cite{MZ15}.
Recently, a distributed algorithm for the submodular cover (a generalization of set cover) has been presented in the MapReduce framework~\cite{nips15}, but the distributed model is slightly different and the number of rounds
needed for the algorithm is $\log (nm)$.
\subsection{Our Contributions}
Our contributions in this paper are three-fold: First of all, we rule
out the possibility of developing scalable approximation algorithms
for coverage problems via accessing a $(1 \pm \ensuremath{\varepsilon})$-approximate oracle as a black
box. This hardness result shows the need for a new approach to solving
these problems. Such a hardness result was previously known for general
submodular functions~\cite{HS15}, but not for coverage functions, and
ours is of independent interest; see Section~\ref{sec:intro:epsilon}.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Problem & Credit & \# passes & Approximation & Space & Arrival \\
\hline
\hline
$k$-cover & \cite{saha2009maximum} & $1$ & $ 1/ 4 $ & $\tilde{O}(m)$ & set \\
$k$-cover & Here & 1 & $1- 1/ e -\ensuremath{\varepsilon}$ & $\tilde{O}(n)$ & edge \\
\hline
Set cover w outliers & \cite{emek2014semi,
chakrabarti2015incidence} & $p$ &
$O(\min(n^{\frac{1}{p+1}},e^{-\frac 1 p}))$ & $\tilde{O}(m)$ & set \\
Set cover w outliers & Here & $1$ & $(1+\ensuremath{\varepsilon})\log \frac 1 \lambda$ & $\tilde{O}_{\lambda}(n)$ & edge \\
\hline
Set cover & \cite{chakrabarti2015incidence,saha2009maximum} & $p$
& $(p+1)n^{\frac 1 {p+1}}$ & $\tilde{O}(m)$ & set \\
Set cover & \cite{demaine2014streaming} & $4^k$ & $4^k\log n$ &
$\tilde{O}(nm^{\frac 1 k})$ & set \\
Set cover\protect\footnotemark & \cite{indyk2015towards} & $p$ & $O(p\log n)$ &
$\tilde{O}(nm^{O(\frac 1 p)})$ & set \\
Set cover & Here & $p$ & $(1+\ensuremath{\varepsilon})\log n$ & $\tilde{O}(nm^{O(\frac
1 p)}+m)$ & edge \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of results in streaming models.}
\label{tab:streaming}
\end{table*}
\footnotetext{Independent work}
Secondly, we develop a sketching technique for coverage functions that
can be applied in a very general setting to develop new scalable
algorithms for coverage problems. In particular, it enables us to
convert any $\alpha$-approximation algorithm for a large number of
coverage problems to a $(1-\ensuremath{\varepsilon})\alpha$-approximation
algorithm for the same problem in streaming, distributed, or RAM
models. See Sections~\ref{sec:intro:technique} and~\ref{sec:sketch}.
Thirdly, we show how to apply the above sketching technique and
address the aforementioned shortcomings of existing algorithms for
coverage problems, and report almost optimal results for all three
coverage problems in streaming, distributed, and RAM models of
computation. These results are summarized in Tables~\ref{tab:streaming},~\ref{tab:RAM}, and~\ref{tab:distributed}.
Despite the extensive previous work on the set arrival
model for coverage problems, our paper is {\em the first to study these
problems in the edge arrival model}, and we present tight results for
them. We accompany these results with matching lower bounds for
the $k$-cover problem.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Problem & Credit & Approximation & Running time & Comment \\
\hline
\hline
$k$-cover & \cite{badanidiyuru2014fast,mirzasoleiman2014lazier} & $1- \frac 1
e -\ensuremath{\varepsilon}$ & $\tilde{O}(nm)$ & submodular functions \\
$k$-cover & Here & $1- \frac 1 e -\ensuremath{\varepsilon}$ & $\tilde{O}(n)$ & - \\
\hline
\end{tabular}
\end{center}
\caption{Results for the RAM model.}
\label{tab:RAM}
\end{table*}
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Problem & Credit & \# rounds & Approximation & Load per machine & Comment \\
\hline
\hline
$k$-cover & \cite{KMVV13} &
$O(\frac{1}{\ensuremath{\varepsilon}\delta}\log m)$ & $1- \frac 1 e-\ensuremath{\varepsilon}$ & $O(mkn^{\delta})$ & submodular functions \\
$k$-cover & \cite{MZ15} & $2$ & $0.54$ & $\max(m k^2, mn/k)$ & submodular functions \\
$k$-cover & \cite{BENW15b} & $1\over
\ensuremath{\varepsilon}$ & $1-\frac 1 e -\ensuremath{\varepsilon}$ & $\max(m k^2, mn/k)\over \ensuremath{\varepsilon}$ & submodular functions \\
$k$-cover & Here & $3$ & $1- \frac 1 e -\ensuremath{\varepsilon}$ & $\tilde{O}(n+m)$ & - \\
\hline
$\substack{\mbox{\strut Set cover}\\\mbox{\strut with outliers}}$ & Here & $3$ & $(1+\ensuremath{\varepsilon})\log \frac 1 {\lambda}$ & $\tilde{O}(n+m)$ & - \\
\hline
Set cover & \cite{nips15} & $\log (nm) $& $(1+\ensuremath{\varepsilon})\log
n$ & $\Omega(mn^{1-\ensuremath{\varepsilon}})$ & Submodular cover \\
Set cover & Here & $r$ & $(1+\ensuremath{\varepsilon})\log
n$ & $\tilde{O}(nm^{O(\frac 1 r)}+m)$ & - \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of the results for distributed models.}
\label{tab:distributed}
\end{table*}
\subsubsection{A $(1 \pm \ensuremath{\varepsilon})$-approximate oracle is not sufficient}\label{sec:intro:epsilon}
There are several sampling or sketching techniques that can be used
to develop a $(1 \pm \ensuremath{\varepsilon})$-approximate oracle $\ensuremath{\mathcal C}\xspace_\ensuremath{\varepsilon}$ to the coverage
function. One might hope that black-box access to such an
oracle could be used as a subroutine in developing approximation
algorithms with good approximation guarantees. Here, we show that
this is not possible.
\begin{theorem}\label{thm:mainhard}
Any $\alpha$-approximation algorithm for $k$-cover via oracle
$\ensuremath{\mathcal C}\xspace_{\ensuremath{\varepsilon}}$ requires $\exp\left(\Omega({ n
\ensuremath{\varepsilon}^2\alpha^2}-{\log n} )\right)$ queries to the oracle.
\end{theorem}
In particular, for any constant $\ensuremath{\varepsilon}>0$, there is no polynomial-time
$n^{-0.49}$ approximation algorithm for $k$-cover given a
$(1 \pm \ensuremath{\varepsilon})$-approximate oracle $\ensuremath{\mathcal C}\xspace_\ensuremath{\varepsilon}$. This hardness result improves upon
a similar result for submodular functions~\cite{HS15}, which
does not apply to coverage functions. Our proof technique
might be of independent interest.
Details of the proof are given in Appendix~\ref{sec:epsError}.
In order to prove Theorem~\ref{thm:mainhard}, first we define a
problem called \emph{$k$-purification} for which we show that any
randomized algorithm requires $\delta\exp\left(\Omega(\frac{
\ensuremath{\varepsilon}^2k^2}{n} )\right)$ oracle queries to succeed with probability
$\delta$. In a $k$-purification problem instance, we are given a
random permutation of $n$ items, with $k$ gold and
$n-k$ brass items. The types of the items are not known to us.
We merely have access to an oracle
$\textsf{Pure}\xspace_{\ensuremath{\varepsilon}}(S)$ for $S\subseteq[1,n]$ defined as
\begin{align*}
\textsf{Pure}\xspace_{\ensuremath{\varepsilon}}(S)=
\begin{cases}
0 & \text{if $\frac{k|S|}{n} - \ensuremath{\varepsilon} \left(\frac{k|S|}{n} + \frac{k^2}{n}\right) \leq \textsf{Gold}\xspace(S) \leq \frac{k|S|}{n} + \ensuremath{\varepsilon} \left(\frac{k|S|}{n} + \frac{k^2}{n}\right) $,} \\
1 & \text{otherwise},
\end{cases}
\end{align*}
where $\textsf{Gold}\xspace(S)$ is the number of gold items in $S$.
The hardness proof is then based on a reduction between
$k$-purification and $k$-cover.
\subsubsection{The sketching technique and
new results for streaming, distributed and RAM models}\label{sec:intro:technique}
In this paper we introduce a powerful sketch to summarize coverage
functions. As its main property, we show that any $\alpha$-approximate
solution to $k$-cover on this sketch is an $(\alpha
-\ensuremath{\varepsilon})$-approximate solution to $k$-cover on the original input with
high probability; see Theorem~\ref{thm:str:main}. Interestingly, this
sketch requires only $\tilde{O}(n)$ space. Our sketch is
fairly similar to $\ell_0$
sketches~\cite{cormode2003comparing}, which are
basically defined to estimate the value of coverage functions; see
Appendix~\ref{Apx:O(nk)} for a formal definition. Indeed, one may
maintain $n$ instances of the $\ell_0$ sketch and estimate the value
of the coverage function of a single feasible solution of size $k$
with high probability. However, as there are $n \choose k$ different
choices for a solution of size $k$, union bounding over all of them
causes a huge blow-up in the failure probability. In
Appendix~\ref{Apx:O(nk)}, we show a straightforward analysis to
approximate $k$-cover using $\ell_0$ sketches with $\tilde{O}(nk)$
space, which is considerably larger than our sketch.
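To illustrate the flavour of such sketches, here is a toy
$k$-minimum-values estimator in Python (a simplified relative of the
$\ell_0$ sketches above, written by us; it is not the sketch
constructed in this paper): each set keeps the $t$ smallest hash
values of its elements, and sketches can be merged to estimate the
size of a union.
\begin{verbatim}
import hashlib

def hval(e):
    """Hash an element id to a pseudo-uniform value in (0, 1)."""
    x = int(hashlib.sha1(str(e).encode()).hexdigest(), 16)
    return (x % 10**9 + 1) / (10**9 + 2)

def sketch(elems, t=64):
    # keep only the t smallest hash values of the set's elements
    return sorted(hval(e) for e in elems)[:t]

def union_estimate(sketches, t=64):
    vals = sorted(set().union(*map(set, sketches)))[:t]
    if len(vals) < t:  # small union: the sketch holds it exactly
        return len(vals)
    return int((t - 1) / vals[-1])  # k-minimum-values estimator
\end{verbatim}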
Later on, in Section~\ref{sec:algorithms}, we put together this
powerful sketch with basic ingredients to design algorithms for
coverage problems. In all these
algorithms, $\tilde{O}(1)$ independent instances of the
sketch are constructed and the problems are solved without any other
direct access to the input. Due to its simplicity, our sketch can be
efficiently constructed and our algorithms can be efficiently
implemented
in streaming and distributed models. Interestingly, this
technique provides almost tight approximation algorithms for coverage
problems in these settings. In particular, we first present the
following results for coverage problems in the streaming model.
We remark that all the algorithms designed in this work have success
probability $1-\frac 1 n$; i.e., they may fail to produce the
claimed solution with probability $\frac 1 n$. For simplicity we do
not repeat this condition everywhere.
\begin{theorem}
In the streaming model, for any $\ensuremath{\varepsilon}\in (0,1]$, there exist:
\begin{itemize}
\item (Thm~\ref{thm:str:kcover}) a single-pass
$(1-\frac 1 e -\ensuremath{\varepsilon})$-approximation algorithm for $k$-cover using
$\tilde{O}(n)$ space.
\item (Thm~\ref{thm:str:epscover}) a single-pass $(1+\ensuremath{\varepsilon})\log\frac 1 {\lambda}$-approximation
algorithm for set cover with $\lambda$ outliers using
$\tilde{O_{\lambda}}(n)$
space.
\item (Thm~\ref{thm:str:setcover}) a $p$-pass
$(1+\ensuremath{\varepsilon})\log m$-approximation algorithm for set cover using
$\tilde{O}(nm^{O(\frac{1}{p})}+m)$ space.
\end{itemize}
\end{theorem}
The above results are the first results presented for the edge arrival
streaming model for coverage problems. Indeed, these results improve
the approximation factor of previously known results for the set
arrival
model~\cite{saha2009maximum,nisan2002communication,emek2014semi,chakrabarti2015incidence},
while the space complexity may not be comparable (i.e., $\tilde{O}(n)$
versus $\tilde{O}(m)$). In fact, our result for streaming set cover
gives an exponential improvement over Demaine et
al.~\cite{demaine2014streaming} in both the approximation factor and the
number of passes given the same space. See Table~\ref{tab:streaming}
for comparison to previous work.
Recently and independently of our work,
Indyk et al.\ (Theorem~2.6 in~\cite{indyk2015towards}) provide a $p$-pass $O(p\log n)$-approximation algorithm in $\tilde{O}(mn^{O(1/p)})$ space in the set arrival model. Notice that our result for streaming set cover provides a better approximation factor (i.e., $(1+\ensuremath{\varepsilon})\log n$ versus $O(p\log n)$) with similar space and number of passes, while also handling the more general edge arrival model.
Next, we discuss our result for the RAM model.
While the size of the input graph may be as large as $\Omega(nm)$, we show
that we can query only $\tilde{O}(n)$ bits of the input to construct
our sketch, paving the way for a $(1-\frac 1
e-\ensuremath{\varepsilon})$-approximation algorithm in the classical RAM model
\cite{aho1974design} that runs in $\tilde{O}(n)$ time. This is currently the
fastest $(1-\frac 1 e -\ensuremath{\varepsilon})$-approximation algorithm in the classical RAM
model.
\begin{theorem}
In the RAM model, given $\ensuremath{\varepsilon}\in (0,1]$, there exists a $(1-\frac 1 e
-\ensuremath{\varepsilon})$-approximation algorithm for $k$-cover, with running time
$\tilde{O}(n)$.
\end{theorem}
Next, we turn our attention to widely used distributed computation
models like MapReduce. While most previous results
for coverage problems are for the more general case of submodular
functions~\cite{CKT10,KMVV13,nips13,IMMM14,MZ15,nips15},
they all assume value oracle access to the submodular function. We present the following
results which achieve almost optimal approximation factors while
running for a small number of rounds and putting a
small data load on each machine (the memory complexity). See Table~\ref{tab:distributed} for comparison to previous work.
\begin{theorem}
In the distributed model, given $\ensuremath{\varepsilon}\in (0,1]$,
we have the following algorithms.
\begin{itemize}
\item (Thm~\ref{thm:mr:kcover}) A three-round $(1-\frac 1 e
-\ensuremath{\varepsilon})$-approximation algorithm for
$k$-cover where the load on each machine is
$\tilde{O}(n+m)$.
\item (Thm~\ref{thm:mr:epscover}) A three-round $(1+\ensuremath{\varepsilon})\log\frac 1 {\lambda}$-approximation
algorithm for set cover with $\lambda$ outliers, where the load on each machine
is $\tilde{O}_{\lambda}(n+m)$.
\item
(Thm~\ref{thm:mr:setcover}) An $r$-round $(1+\ensuremath{\varepsilon})\log m$-approximation algorithm for set cover, where the load on each machine is
$\tilde{O}(nm^{O(\frac{1}{r})}+m)$.
\end{itemize}
\end{theorem}
On the hardness side, we show that any $\frac{1}{2}+\ensuremath{\varepsilon}$-approximation
streaming algorithm for $k$-cover requires $\Omega(n)$ space. This
holds even for streaming algorithms with several passes.
Interestingly, this hardness result holds for a very simple input in
which $k=1$ and $m=2$. Therefore, there is no hope of finding a
$\frac{1}{2}+\ensuremath{\varepsilon}$-approximation streaming algorithm for $k$-cover using
$o(n)\Phi(m,k)$ space, where $\Phi(m,k)$ is an arbitrary function of
$m$ and $k$ independent of $n$.
\begin{theorem}\label{thm:str:hard}
Any $\frac{1}{2}+\ensuremath{\varepsilon}$-approximation multi-pass streaming algorithm for
$k$-cover requires $\Omega(n)$ space in total. This holds even for
$k=1$ and $m=2$.
\end{theorem}
The edge arrival version of the coverage problem is more flexible and
can be used to tackle other problems. For instance, in
Appendix~\ref{sec:DominatingSet}, we present a simple reduction that
yields the first nontrivial streaming algorithm for the dominating set
problem. Specifically, we show a $p$-pass $(1+\ensuremath{\varepsilon})\log n$-approximation
streaming algorithm for dominating set, using
$\tilde{O}\Big(n^{1+O(\frac{1}{p})}\Big)$ space, where $n$ is the number of
vertices.
\fi
\section{Introduction}
\input{intro}
\section{Distributed Algorithms} \label{sec:MapReduce}
\input{MapReduce}
\section{Algorithms for RAM Model} \label{sec:RAM}
\input{RAM}
\section{Dominating Set} \label{sec:DominatingSet}
\input{DominatingSet}
\section{Weighted Variants} \label{sec:weightedVariants}
\input{variants}
\section{Empirical Study and Results} \label{sec:empirical}
\input{datasets-short}
\input{experimental}
\input{misc-results}
\input{feature}
\input{conclusion}
\bibliographystyle{plain}
\section{Introduction}
\subsection{Space telescopes and technological innovations}
The ambitious science quests conceived during the past decade (e.g. cosmic origins, exploration of exoplanets) come together with demanding technological challenges. The space agencies have therefore developed a roadmap to \emph{fill the gap} between the current status and the future requirements. The LUVOIR studies (some of them in the references, plus the interim report) provide a very good summary in terms of needs identification, overview of requirements, budgeting and expected impact on scientific performances. In particular, the requirements are extremely tight: the coronagraphic imaging of exo-earths, for instance, has a preliminary error budget of 10 nm wavefront error (WFE), 10 pm stability, and $10^{-10}$ contrast. In such a context, it is well acknowledged that the active control of the optical surfaces may result in increased \emph{local} complexity, leading however to reduced \emph{global} cost and risks.\\
Adaptive optics (AO) is a well established technology for ground based telescopes, considered as a baseline (first light) for the Extremely
Large Telescopes. In an AO system a wavefront sensor (WFS) is illuminated by a guide star (natural or laser) to detect the aberrations induced by the atmosphere and command a deformable mirror (DM) to a shape minimizing the WFE. In the context of space telescopes the active correction would be used to correct the thermo-elastic deformations of the mirrors and restore the optical shape.\\
Ground based AO commonly deals with complexity: large format DMs, hundreds or thousands of actuators, fast loop frequencies, high power and computational loads. The translation of such elements to a space application is not straightforward. First, the driving requirements shall be identified. A crucial point here is to address them under the correct cost-benefit view: the goal is to reduce the global cost, risk and complexity. Here comes the rationale of this work: what do we have to offer, from the AO world, for an active space telescope?
\subsection{LATT \& SPLATT: active primary mirrors for space}
The LATT project (see \citeonline{icso2017}, \citeonline{latt2016-1}, and \citeonline{latt2016-2}) is an ESA-funded activity under a TRP grant, in the spirit of the technological developments depicted above. The idea is to leverage the mature expertise in the field of adaptive optics (AO) for ground based telescopes and extend it to space. The goal in particular is to investigate the conversion of an adaptive secondary into an active primary, or a primary segment. The specific technology is that adopted for the deformable secondaries of the Large Binocular Telescope\cite{asm} (Arizona), the Magellan Telescope (Las Campanas, Chile, see \citeonline{magellan}) and the Very Large Telescope (Cerro Paranal, Chile, see \citeonline{vlt-test}), and also for the Extremely Large Telescope adaptive M4 mirror\cite{m4} and the Giant Magellan Telescope adaptive M2 (both currently under construction). The concept is based on a thin Zerodur-glass shell shaped by voice-coil actuators, which are in turn controlled in a local closed loop fed by co-located capacitive position sensors; the force produced by the actuators is adjusted (at high frequency) to obtain the desired displacement at the position sensor. When working in an adaptive optics system, such voice-coil DMs are controlled at two speeds: in the high frequency regime (kHz) the actuator force is driven by the capacitive sensor to keep the optical surface in position; at the working frequency (the one selected for the AO loop) the mirror is controlled by the WFS commands. The adaptive secondaries have been in service since 2010 and offered great opportunities for science.\\
In the LATT project, a demonstrator called OBB was fitted with 19 actuators and a 40 cm diameter spherical shell. The thin shell (TS) is 1 mm thick and made of Zerodur, with magnets bonded on its back and coupled with the coils mounted on a Reference Body (RB). The RB is made of aluminum honeycomb with a thin carbon fiber skin. The total areal density of the system is approximately $16 \, \, \mathrm{kg}/\mathrm{m}^2$, including the RB, the TS and the actuator electronics. Such a value is particularly attractive and could be further optimized with a design update. The system has been subjected to thermo-vacuum tests and optical qualification. The concern was the optical controllability with such a low actuator count compared to the active area; the tests in the optical laboratory indicated that the final wavefront error (WFE) was consistent with that of much larger systems. As a last point, the OBB was tested on the vibration stand, where it was subjected to several vibration spectra to simulate the launch conditions. The TS, which was the most concerning item, survived the test thanks to an electrostatic locking device designed specifically for the launch phase.\\
\noindent The project ended in 2015 after the final review at ESA. During an informal follow-up phase, we discussed the OBB features beyond the specific requirements and boundaries of the project. In particular we addressed:
\begin{itemize}
\item the correction ``geography'', i.e. primary vs post-focal DM, or large vs small size DM;
\item ways to further reduce the areal density through a design update;
\item the implications of a large actuator count design (e.g. $\approx$ 1000 actuators on the primary), both from a power budget and a performance perspective;
\item the architecture of the correction chain including the WFS.
\end{itemize}
A few points deserve some attention here. \\
Voice coil motors (VCM) require a constant power supply, while set-and-forget actuators can be powered off after the correction. So the point is to compare \emph{globally} the benefits of VCM in the system. Are they a way to increase the system stability or performance?\\
Thin shell technology (fabrication, handling, operation) is very mature, but would one really put an extremely fragile glass foil on a rocket? On the other hand, what is the expected impact in terms of total mission weight and cost of an active primary with an areal density lower than $16 \, \, \mathrm{kg}/\mathrm{m}^2$, including WF control and support?
The TS is the only mechanical component with optical requirements; the RB, conversely, can be manufactured to significantly lower specifications, since there is no mechanical contact between it and the TS. This also permits the use of extremely lightweight materials such as aluminum honeycomb.\\
\noindent Ground based AO has made significant progress in the field of WFSs and control strategies: that community is particularly familiar with high-order (HO) systems, with thousands of actuators and very large WFE due to air turbulence. Is there a WFS best suited to space applications, also considering that it will operate at the diffraction limit? Does a dedicated WFS make a positive performance difference?\\
Some of these questions have been addressed within an official follow-up named SPLATT, Segments and Pyramids for Large Aperture Space Telescope Technology.
\section{The SPLATT project}
\subsection{Scenario and goals}
The SPLATT project is an INAF activity funded under the INAF Tecno-PRIN2019 program, and officially started in 2021. The team comes from the ADONI community, the Italian network of people working in AO, including researchers from both the scientific and the technological fields. From the specific technological point of view the community is also very heterogeneous, thanks to a favourable mix of experts in optics, control theory, optical metrology, AIV, sensors and computer science. The SPLATT team gathers people from 4 INAF departments and shows a good mix of competences. The idea is to build on such a mix to address the points raised above, in particular from the points of view of control, WFS and stability.\\
The goal of the project is to increase the internal know-how, by investigating the technological transfer from the ground based AO context and by pursuing a system-level approach to the problems. In particular we considered two main aspects: the advantages offered by the contactless actuation provided by VCM, and the high sensitivity of the pyramid WFS (PWFS, see \citeonline{CITA1}) operating in the diffraction limited regime. Both points are to be discussed in the specific perspective of the LUVOIR studies, where optical stability is one of the key points.\\
The project has three main tasks:
\begin{itemize}
\item install the OBB (which has been shipped back to INAF from ESA thanks to a loan agreement) on a dedicated optical bench, for further optical qualification;
\item set up a segmented DM plus PWFS simulator, and explore the WF control stability in relevant conditions;
\item draw a roadmap for further developments.
\end{itemize}
Let's analyze the first two items.\\
The contactless actuation mechanism provided by VCM implies that there is no mechanical contact between the TS and the RB: the TS \emph{floats} at a given distance (or gap, typically 200 $\mu$m\xspace) in front of the RB. Several consequences can be derived: the manufacturing specs for the RB become less demanding; thermal deformations on the RB can be recovered with no fitting error by the actuators; mechanical vibrations from the spacecraft to the RB are not propagated to the optical surface, thus resulting in a significant gain in the optical stability. This is the point to be demonstrated in the laboratory: the vibration spectrum measured on the support is reduced on the optical surface.\\
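The mechanism can be visualized with a toy one-degree-of-freedom model
(all parameters below are invented for illustration and do not
describe the OBB): the TS feels the RB only through the control force
of the position loop, which therefore acts as a low-pass filter on RB
vibrations above the loop resonance.
\begin{verbatim}
import numpy as np

# Toy 1-DOF model: TS of mass m coupled to the RB only through
# the local loop force F = -kP*(x - x_rb - gap) - c*v.
m, kP, c, gap, dt = 0.1, 250.0, 0.5, 200e-6, 1e-4
t = np.arange(0.0, 2.0, dt)
x_rb = 1e-6 * np.sin(2 * np.pi * 40.0 * t)  # 40 Hz shake of the RB
x, v = gap, 0.0
x_ts = np.empty_like(t)
for i, xr in enumerate(x_rb):
    F = -kP * (x - xr - gap) - c * v        # local closed loop
    v += (F / m) * dt                       # semi-implicit Euler
    x += v * dt
    x_ts[i] = x - gap
print("transmission ~", np.ptp(x_ts[len(t)//2:]) / np.ptp(x_rb))
\end{verbatim}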
\noindent The PWFS is a pupil-conjugated sensor, with a (double) glass prism installed at the telescope focus; the system provides a slope signal out of the 4 pupil images on the detector, where at least one pixel per sub-image is required per mode to be corrected. The PWFS has a non-linear, periodic response in the range $0<s<\lambda$ (where $s$ is the WF offset and $\lambda$ is the working wavelength). This point is a limitation on the ground and is addressed by introducing a circular modulation; for space, this is conversely a valuable feature to be exploited. In addition, the PWFS features an optical gain depending on the spatial scale of the signal. The highest sensitivity is for low spatial scale modes, which is a plus for segmented systems, since we expect to correct the segment misalignment.\\
\noindent The first point, i.e. the SPLATT experiment, will be discussed in Sec.~\ref{sec.experiment}, where we present the laboratory setup, the test plan and the preliminary results. In Sec.~\ref{sec.simulation} we give an overview of the numerical code and present the simulation results.\\
The project is currently (August 2022) at its mid-point; the activity in the laboratory revealed that vacuum (or semi-vacuum) tests are mandatory for a further assessment of the OBB characterization within the specific scope of the project; we expect to deploy a low vacuum chamber by autumn 2022 and to conclude the tests by the end of the year. The simulations are in turn essentially completed: we therefore plan to start the project wrap-up, and to prepare a development roadmap, by spring 2023.
\section{The SPLATT experiment}\label{sec.experiment}
\subsection{The test plan and procedure}
The test aims at demonstrating the vibration reduction obtained when the TS floats in front of the RB thanks to the VCM. We need to compare two datasets to measure such a reduction: the first dataset is collected with the TS pulled against the RB by the VCM, so as to measure directly the oscillations of the mechanical structure; the second with the TS floating. A dataset is composed of a large number (e.g. 2000) of interferometric frames collected at high frequency (300 Hz). During the measurement, a disturbance is injected on the OBB to make the RB oscillate about the elevation axis; each frame is then analyzed to fit the tip/tilt, and the spectrum of the tilt time series is computed.\\
In the end we have two time series and the associated spectra: the tip/tilt $T_p$ when the TS is pulled toward the RB, and the tip/tilt $T_f$ when the TS is floating. Since $T_p$ is a direct measurement of the disturbance injected, the ratio $T_f/T_p$ is the vibration attenuation.\\
The disturbance is injected as a frequency sweep signal, in order to measure on a single dataset a broad frequency range (1 Hz to 120 Hz).
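The post-processing can be summarised by the following Python sketch
(variable names are ours; the two tilt series are stand-ins for the
fitted tip/tilt time histories, sampled at 300 Hz):
\begin{verbatim}
import numpy as np

fs = 300.0  # interferometer frame rate [Hz]
rng = np.random.default_rng(0)
tilt_pulled = rng.standard_normal(2000)            # stand-in data
tilt_float = 0.3 * tilt_pulled + 0.01 * rng.standard_normal(2000)

def amplitude_spectrum(x):
    x = (x - np.mean(x)) * np.hanning(len(x))
    freq = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freq, np.abs(np.fft.rfft(x)) / len(x)

freq, Tp = amplitude_spectrum(tilt_pulled)  # TS pulled on the RB
_,    Tf = amplitude_spectrum(tilt_float)   # TS floating
band = (freq >= 1.0) & (freq <= 120.0)
attenuation = Tf[band] / Tp[band]  # < 1: the gap rejects vibration
\end{verbatim}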
\subsection{Laboratory setup}
The test setup is composed of an optical bench with an interferometer and the relay optics to illuminate the OBB; a flat mirror mounted at $45^\circ$ steers the laser beam vertically toward the OBB. A separate test stand, sitting on the ground, holds the OBB above the optical bench. The disturbance is injected by means of a piezo actuator pushing the OBB elevation arm against a spring; the piezo is in turn fed by a waveform generator. When the piezo is operated, the OBB stand oscillates about its elevation axis, producing a vertical tilt as seen by the interferometer; the test stand is separated from the optical bench holding the mirrors and interferometer to avoid propagating the vibrations to the other elements. The bench is also insulated from the ground to get rid of the environmental noise. Such improvements came after a few weeks of commissioning of the test setup, in order to enhance the sensitivity.\\
The interferometer is a PhaseCam6110 by 4D-Technology. It is a Twyman-Green dynamic interferometer: a phase map is produced from a single captured frame, with a typical exposure time of tens of microseconds. The CCD can work in cropped mode; we set a measuring area of 300x300 pixels on a 130 mm optical diameter on the OBB to get a frame rate of 320 Hz. Such a high frame rate, together with a very short exposure time, allowed us to measure high frequency tilt signals on the OBB without fringe smearing.\\
The TS sits on a foam cushion for safety, placed at 5 mm distance from the RB. At the beginning of the testing session, the TS is lifted with the cushion until it touches the RB; the actuators are then powered at maximum negative (pulling) force to hold the TS against gravity, and the cushion is lowered back to the safety distance.\\
The OBB and test stand are fitted with accelerometers to measure the mechanical vibration injected by the piezo. One accelerometer, in particular, is placed at the outer edge of the OBB corresponding to the point of maximum oscillation. Its measurements may be integrated and compared with the interferometer signal.
\figtable{20220810-134627}{OBB installed on its test stand.}{20220810-134747}{The TS sitting on the safety foam cushion. The yellow disk (coated with kapton) is the aluminum honeycomb RB. Please note the small air gap between the TS and the RB.}
\figtable{20220810-134725}{Detail of the retention ring and of the vertical support for the safety foam cushion.}{20220810-134718}{The piezo actuator installed on the elevation arm to inject the external disturbance.}
\subsection{Preliminary results in air}
As a very first step, we measured the SPLATT attenuation at discrete frequencies in the range 1 Hz to 120 Hz. The result is shown in Fig.~\ref{fig.response}, for different proportional gains of the system. The picture shows the overshoot at low frequencies due to the resonance of the internal loop. The resonance frequency and amplitude are in fact dependent on the loop gain, with a lower gain resulting in a lower resonance frequency (kP=250, freq=5 Hz). From the plot we observe that the system shows in general an effective attenuation ($T_f/T_p <1$), with significantly poorer performance in the 60 Hz and 100 Hz bands. We investigated such regions with dedicated frequency sweep measurements. \\
We repeated the test with different loop gains and the result is shown in Fig.~\ref{fig.latt-atten3}. In the figure we plot the tilt values instead of the attenuation, so they should be compared with the black plot, which is the tilt measured with the TS attached to the RB. The three plots are well superimposed, meaning that the attenuation in this frequency band does not depend on the loop parameters. The resonance at 100 Hz, in particular, is identical for the three plots, thus suggesting that it cannot originate from the internal loop.\\
We then repeated the sweep measurement at different gaps, i.e. with the TS floating at different distances from the RB. The result is shown in Fig.~\ref{fig.latt-atten4}. Under the assumption that the behaviour is due to the air coupling, we expected to observe a larger attenuation at larger gaps. The experimental data confirmed the hypothesis, and we observed a progressive increase in attenuation when changing the gap from 20 $\mu$m\xspace to 200 $\mu$m\xspace. We also observed that the resonance peak is higher at larger gaps. This point is consistent with the following hypothesis: in the band 60 Hz to 90 Hz, a large air gap transmits the external excitation less efficiently; at 100 Hz a resonance of the TS is excited, and a larger gap is less efficient in damping it.\\
Such a scheme is just a qualitative description. A test in a vacuum chamber is required to fully understand the behaviour.
\fig{response}{9}{Relative frequency response as measured on the test bench. Blue line: reference, i.e. the optical tilt measured with the TS pulled by the actuators toward the RB; red, green, yellow plots: $T_f/T_p$ as described in the text, measured with different actuator gains.}
\figtable{latt-atten3}{Optical tilt measured with TS floating, different actuators gain, compared with the tilt measured with the TS pulled against the RB (black plot).}{latt-atten4}{Optical tilt measured with TS floating at different gap.}
\section{The SPLATT simulation}\label{sec.simulation}
\subsection{Simulation setup}
The simulation toolkit is composed of three main parts: the DM, the PWFS and the closed loop.\\
The DM is segmented and replicates the JWST geometry, with hexagonal segments on a hexagonal grid with 3 rings, within a circular outer mask. The segment alignment modes (piston and tip/tilt) are the system degrees of freedom (DoF) and are simulated by producing the associated shapes on the pupil mask. \\
The PWFS and closed-loop code are part of the PASSATA toolkit (see \citeonline{CITA5}), which has been used intensively for the simulation, design and performance evaluation of the FLAO, ERIS, GMT and MAVIS AO systems. In particular, PASSATA was adopted to simulate the WF sensing and control strategy (including the segment piston) for the GMT. \\
\noindent The first stage of the simulation, after importing the DM, is to calibrate the PWFS. This is done in two steps: first, the system mask is created by selecting those pixels with an illumination level above a user-defined threshold. Second, the PWFS interaction matrix is calibrated by measuring the PWFS signal when the DoF of the simulated DM are excited individually. The measurement is differential, according to the \emph{push-pull} technique: the commands are applied sequentially with positive, then negative amplitude, and the difference of the corresponding signals is taken. The command amplitude shall be chosen carefully to calibrate the system within its linear range.
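As a minimal sketch of the push-pull calibration (in Python; the
function wfs_signal is a placeholder for the simulated PWFS
measurement as a function of the DM command, not the actual PASSATA
interface):
\begin{verbatim}
import numpy as np

def calibrate(wfs_signal, n_modes, amp):
    cols = []
    for j in range(n_modes):
        cmd = np.zeros(n_modes)
        cmd[j] = amp
        s_plus = wfs_signal(+cmd)            # push
        s_minus = wfs_signal(-cmd)           # pull
        cols.append((s_plus - s_minus) / (2 * amp))
    D = np.column_stack(cols)                # interaction matrix
    R = np.linalg.pinv(D)                    # reconstructor
    return D, R

M = np.random.default_rng(1).standard_normal((50, 3))  # toy WFS
D, R = calibrate(lambda c: M @ c, n_modes=3, amp=10e-9)
\end{verbatim}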
Our test model is composed of a segmented DM with 19 hexagonal segments and a circular mask, whose degrees of freedom are the local alignment modes, namely segment piston and tip/tilt; the PWFS images the DM on a 36x36 to 76x76 pixel grid (per sub-pupil), in order to test the effect of a better resolution versus a lower photon signal per pixel. The WFS camera is the CCD39 (the one used for the FLAO\cite{flao} AO system at the Large Binocular Telescope), which has known noise characteristics and is consistent with a worst-case scenario (it is a quite old camera with significant read-out noise and a quantum efficiency lower than 35 \%). A summary of the simulation parameters is reported in Tab.~\ref{tab.param}.
\figtable{mask}{A sample of the DM pupil mask, segmented into 19 separated hexagonal sectors. The inter-segment gap is one of the parameters under test.}{ccdframe}{A typical PWFS raw frame: the pyramid tip is located at the geometrical center of the frame, while the 4 sub-frames are the pupil images produced by the pyramid 4 faces. The light unbalance amongst the faces is converted into a slope signal.}
\fig{modes-segm1}{9}{The slope signals corresponding to piston, tip, tilt (left to right) of segment \#1.}
\subsection{Closed loop results}
We tested closing the loop on a demonstration test case. We created an initial DM offset by scrambling the segments with piston and tip/tilt; such an initial surface error is 50 nm RMS and is consistent with a preliminary co-alignment and co-phasing performed with a different device (the PWFS at low sensitivity, or the scientific camera to identify and pre-adjust the single segments). We then closed the PWFS loop with the parameters reported in Tab.~\ref{tab.param}; in particular, the guide star magnitude was 10 and the loop frequency was 10 Hz. The loop frequency is the PWFS frame rate; we must then consider that with a 0.1 gain the effective loop speed is 1 Hz. We are interested in the stability of the WF correction, which is related to the sensitivity of the PWFS. We therefore evaluated the loop performance by computing the dispersion of the DM surface error during the loop, after the initial convergence stage. We basically computed the residual DM surface error versus time; we then calculated the standard deviation of the plot, as an estimation of the WF stability.\\
The result (see Fig.~\ref{fig.closeloop1}) is very promising, since we obtained a sub-nanometer stability (or sensitivity) despite the relatively fast loop speed and the faint guide star magnitude.\\
We are currently (summer 2022) running simulations to further assess these results, for instance by evaluating the loop performance at different guide star magnitudes and PWFS samplings, i.e. the number of pixels in each sub-pupil of the pyramid CCD.
\begin{table}[h]
\centering
\begin{tabular}{|c|c| c| c|} \hline
\multicolumn{4}{|c|}{\textbf{DM}} \\ \hline
DM diameter & 5.08 m to 5.19 m & N. of segments & 19 \\ \hline
Segment geometry & hexagonal & Pupil geometry & Circular, un-obstructed \\ \hline
Segment orientation & $15^\circ$ & Inter-segm. gap & 2 cm to 10 cm \\ \hline
DM Pixel pitch & 5 mm & Pixel on diameter & 140 \\ \hline \hline
\multicolumn{4}{|c|}{\textbf{PWFS}} \\ \hline
PWFS resolution & 36 to 76 pix & WFS $\lambda$ & 750 nm $\pm$ 150 nm \\ \hline
Frame rate & 10 Hz & Modulation amplitude & 0 $\lambda/D$ \\ \hline
CCD Camera & CCD39 & Noise & Photon, ReadOut \\ \hline
Quantum eff. & 0.32 & Binning & 1 \\ \hline
\multicolumn{4}{|c|}{\textbf{Control loop}} \\ \hline
Controlled modes & 56 & Type of control & Integrator \\ \hline
Delay & 1 & Gain & 0.1 \\ \hline
\multicolumn{4}{|c|}{\textbf{Parameters space}} \\ \hline
Guide star magnitude & 10 to 18 & Modal amp. for calibration & 5 nm to 20 nm \\ \hline
Initial Surface Error & 50 nm RMS & Initial Surface error PtV & 250 nm \\ \hline
\end{tabular} \caption{Overview of the parameters to create the simulated active primary mirror.}
\label{tab.param}
\end{table}
\fig{closeloop1}{6}{Left panel: the initial surface offset, local piston and tip/tilt on the segments. Right panel: plot of the residual surface error on the DM after closing the loop. The sample within the red box is taken as the stability of the PWFS.}
\subsection{Discussion}
Some points stemming from the PWFS sensitivity need to be discussed here. First, it would be possible to push the guide star magnitude limit toward the faint end. This implies a large sky coverage, with the scientific target itself possibly used as a reference for the WFS. The case of extended objects shall be addressed to understand the possible limitations. The relatively fast frame rate implies that the open-loop stability requirement on the DM (and on the entire system) can be relaxed to a sub-minute time scale. We could also consider another scenario (following a \emph{virtual adaptive optics} approach) where the PWFS does not actually drive the DM, but its fast-cadence readings are used for enhanced data processing.
\section{Conclusions and perspectives}
The SPLATT project aims at investigating two key elements: contactless DMs as segments of an active primary mirror, to exploit their intrinsic insulation from vibrations coming from the payload; and a sensitive PWFS to drive the correction chain at a faster speed, even on faint reference stars. The first point has been addressed in the laboratory using the OBB, a 40 cm diameter demonstrator of an active primary with 19 actuators, manufactured and tested under an ESA TRP. The laboratory activity showed that the actuator local closed loop, controlling the position of the TS, behaves as a low-pass filter and effectively rejects (in certain frequency bands) the external disturbances injected by shaking the test stand. The tests shall be repeated in vacuum to assess the impact of the thin air gap between the TS and the RB, which could possibly induce vibrations on the optical surface.\\
We then simulated a PWFS using a numerical code developed for ground based telescopes; we created a segmented primary mirror with 19 segments controlled in local piston and tip/tilt. The PWFS was verified at different resolutions, closing the loop on guide stars of different magnitudes. The results are consistent with the expected very low noise propagation for low spatial scale modes; in particular, we checked the mirror WF scatter after convergence, taken as the measurement stability, and demonstrated a sub-nanometer sensitivity even with faint stars (magnitude 14) as references.\\
The context of the project is the technological development for high stability optics and correction chains; contactless DMs and the PWFS could be building blocks for the next generation of space telescopes.
\section{Acknowledgements}
The view expressed herein can in no way be taken
to reflect the official opinion of the European Space Agency. The LATT prototype is property of ESA and has been kindly made available by ESA for laboratory testing with a loan agreement.
The SPLATT project is funded by INAF - Istituto Nazionale di Astrofisica under the TECNO-PRIN INAF 2019 program.\\
In memory of Piero Angela, the most brilliant Italian scientific journalist, who passed away the day this paper was concluded. He inspired us all, when we were kids.
\newcommand{\procspie}{Proc. of SPIE}
\bibliographystyle{spiebib}
\section{Introduction}
Define the automorphism group of a vector field on a smooth manifold to be
the group of diffeomorphisms of the manifold preserving the vector field.
A natural question is how small the automorphism group of a vector field
can be. Suppose that we only consider vector fields which are invariant
under a fixed finite group action on the manifold.
In this situation, the automorphism group of the vector field always includes
the action of the finite group and the flow of the vector field.
Our main result implies that if the manifold is compact
and connected then
the set of invariant gradient vector fields whose automorphism
group contains nothing more than this is residual in the set of all invariant gradient vector fields.
(Recall that a residual subset is a countable intersection of
dense open subsets. Since the space of smooth invariant
gradient vector fields is Baire\footnote{See
e.g. \cite[Chap. 2, Theorem 4.4]{H} and the comments afterwards.}, any of its
residual subsets is dense.)
Let us explain in more concrete terms our motivation and main result.
Take $M$ to be a smooth (=$\cC^{\infty}$) manifold, and denote
by $\Xx(M)$ the vector space of smooth vector fields on $M$,
endowed with the $\cC^{\infty}$ topology. Denote the
automorphism group of a vector field $\xX\in\Xx(M)$ by
$$\Aut(\xX)=\{\phi\in\Diff(M)\mid \phi_*\xX=\xX\}.$$
If $\xX\neq 0$ then $\Aut(\xX)$ contains a central subgroup
isomorphic to $\RR$, namely the flow generated by $\xX$. Denote
by $\Aut(\xX)/\RR$ the quotient of $\Aut(\xX)$ by this
subgroup.
Suppose that $M$ is endowed with a smooth and effective action
of a finite group $\Gamma$. Let $\Xx(M)^{\Gamma}\subset\Xx(M)$
be the space of $\Gamma$-invariant vector fields on $M$. F.J.
Turiel and A. Viruel proved recently in \cite{TV} that there
exists some $\xX\in\Xx(M)^{\Gamma}$ such that
$\Aut(\xX)/\RR\simeq \Gamma$. The vector field $\xX$ is given
explicitly in \cite{TV} as a gradient vector field for a
carefully constructed Morse function and a suitable Riemannian
metric. One may wonder, in view of that result, whether the set
of $\xX\in\Xx(M)^{\Gamma}$ satisfying $\Aut(\xX)/\RR\simeq
\Gamma$ is generic in some sense, at least if one restricts to
some particular family of vector fields (such as gradient
vector fields, for example). Here we give an affirmative answer
to this question assuming $M$ is compact and connected.
Suppose, for the rest of the paper, that $M$ is compact and connected. For
any function $f\in\cC^{\infty}(M)$ and any Riemannian metric
$g$ on $M$ we denote by $\nabla^gf\in\Xx(M)$ the gradient of
$f$ with respect to $g$, defined by the condition that
$g(\nabla^gf,v)=df(v)$ for every $v\in TM$. The following is
our main result.
\begin{theorem}
\label{thm:main}
Let $\mM\subset\cC^{\infty}(M,S^2T^*M)$ denote the space of
Riemannian metrics on $M$, with the $\cC^{\infty}$ topology,
and let $\mM^{\Gamma}\subset\mM$ denote the space of
$\Gamma$-invariant metrics. Let $f$ be a $\Gamma$-invariant
Morse function on $M$. There exists a residual subset
$\mM_f\subset\mM^{\Gamma}$ such that any $g\in\mM_f$ satisfies
$\Aut(\nabla^gf)/\RR\simeq \Gamma$.
\end{theorem}
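For concreteness, recall that in local coordinates $(x^1,\dots,x^n)$,
writing $(g^{ij})$ for the inverse of the matrix $(g_{ij})$ of the
metric, the defining condition of the gradient gives
$$\nabla^gf=\sum_{i,j}g^{ij}\,\frac{\partial f}{\partial x^j}\,\frac{\partial}{\partial x^i},$$
so different choices of $g$ produce genuinely different gradient
vector fields, and hence different dynamics, for the same function $f$.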
By a result of Wasserman \cite[Lemma 4.8]{W}, the space of
$\Gamma$-invariant Morse functions on $M$ is open and dense
within the space of all $\Gamma$-invariant smooth functions on
$M$ (for openness see the comments before Lemma 4.8 in
\cite{W}). Combining this result with Theorem \ref{thm:main} it
follows that if $\Gg(M)^{\Gamma}\subset\Xx(M)^{\Gamma}$ denotes
the set of $\Gamma$-invariant gradient vector fields, then the
set of $\xX\in\Gg(M)^{\Gamma}$ satisfying
$\Aut(\xX)/\RR\simeq\Gamma$ contains a residual subset in
$\Gg(M)^{\Gamma}$.
Probably Theorem \ref{thm:main} can be proved as well for most
proper $\Gamma$-invariant Morse functions on open manifolds
endowed with an smooth effective action of a finite group.
However, there are exceptions: if $M=\RR^n$ with $n>1$, and
$\Gamma$ is a finite group acting linearly on $M$ preserving
the standard euclidean norm, then $f:M\to\RR$, $f(v)=\|v\|^2$,
is a $\Gamma$-invariant Morse function, and for any
$\Gamma$-invariant Riemannian metric $g$ on $M$,
$\Aut(\nabla^gf)/\RR$ is bigger than $\Gamma$. This follows
from a result of Sternberg, see Theorem \ref{thm:Sternberg}
below.
For the case $\dim M=1$ (i.e., when $M$ is the circle) we prove
a stronger form of Theorem \ref{thm:main}, where residual is
replaced by open and dense, see Theorem \ref{thm:circle}. It is
not inconceivable that this can be done in all dimensions:
while the author has not managed to do so, he does not have any
reason to suspect that it might be false. In fact, a theorem of
Palis and Yoccoz answering an analogous question (in the
non-equivariant setting), with the set of gradient vector fields $\xX(M)$
replaced by a certain set of diffeomorphisms may suggest that it is true.
To be precise, let $\aA_1(M)\subset\Diff(M)$ be the (open) set of
Axiom A diffeomorphisms satisfying the transversality condition and
having a sink or a source. Palis and Yoccoz prove in \cite{PY1} that
the set of diffeomorphisms with the smallest possible centralizer
contains a $\cC^{\infty}$ open and dense subset of $\aA_1(M)$.
Note that the set $\aA_1(M)$ includes Morse--Smale diffeomorphisms, which are
analogues for diffeomorphisms of gradient vector fields.
However, if one considers the same question for the entire
diffeomorphism group endowed with the $\cC^1$ topology then openness
never holds, see \cite{BCW2}.
Before explaining the main ideas in the proof of Theorem
\ref{thm:main}, let us discuss some related problems. A natural
question is whether the property proved in Theorem
\ref{thm:main} is true when replacing the space of invariant
gradient vector fields by the entire space of invariant vector
fields.
\noindent{\bf Problem A.}
Does the set of $\xX\in\Xx(M)^{\Gamma}$ satisfying $\Aut(\xX)/\RR\simeq\Gamma$
contain a residual subset of $\Xx(M)^{\Gamma}$?
Problem A includes Theorem \ref{thm:main} as a particular case,
since $\Gg(M)^{\Gamma}$ is open in $\Xx(M)^{\Gamma}$. Note that
the case $\Gamma=\{1\}$ of Problem A (or even Theorem
\ref{thm:main}) is far from being trivial.
Define the centralizer $Z(\xX)$ of a vector field $\xX$ on $M$
to be the group of diffeomorphisms of $M$ that send orbits of
$\xX$ onto orbits of $\xX$. For any $\xX$, $\Aut(\xX)$ is a
subgroup of $Z(\xX)$, and one may try to explore analogues of
Theorem \ref{thm:main} and Problem A for $Z(\xX)$. However,
even the right question to ask is not clear in this situation.
P.R. Sad \cite{S} studied the case $\Gamma=\{1\}$. His main
result is that for a compact $M$ there is an open and dense
subset $\aA'$ of the set of Morse--Smale vector fields
$\aA\subset\Xx(M)$ such that for any $\xX\in\aA'$ there is a
neighborhood $V\subset\Diff(M)$ of the identity with the
property that any $\phi\in V\cap Z(\xX)$ preserves the orbits
of $\xX$. Unfortunately the restriction to a
neighborhood of the identity in $\Diff(M)$ can not be removed,
as Sad shows with an example.
It is natural to consider analogues of the previous problems replacing vector fields
by diffeomorphisms.
Define the automorphism group of a diffeomorphism $\phi\in\Diff(M)$ to be its
centralizer, i.e.,
$\Aut(\phi)=\{\psi\in\Diff(M)\mid \phi\psi=\psi\phi\}$.
Then $\la\phi\ra=\{\phi^k\mid k\in\ZZ\}$ is a central subgroup of $\Aut(\phi)$.
Let
$$\Diff^{\Gamma}(M)=\{\phi\in\Diff(M)\mid \phi\text{ commutes with the action of $\Gamma$}\}.$$
\noindent{\bf Problem B.}
Does the set of $\phi\in\Diff^{\Gamma}(M)$ such that
$\Aut(\phi)/\la\phi\ra\simeq\Gamma$ contain a residual subset
of $\Diff^{\Gamma}(M)$?
Of course, a positive answer to Problem B does not imply a
positive answer to Problem A, since a diffeomorphism $\phi$
such that $\Aut(\phi)/\la\phi\ra=\Gamma$ can not possibly
belong to the flow of a vector field (for otherwise
$\Aut(\phi)$ would contain a subgroup isomorphic to $\RR$).
One may consider restricted versions of Problem B involving particular diffeomorphisms,
for example, equivariant Morse--Smale diffeomorphisms \cite{field}. These are very particular diffeomorphisms,
but Problem B is already substantially nontrivial for them (even in the case $\Gamma=\{1\}$,
see below).
Problems A and B admit variations in which the regularity of
the vector fields or the diffeomorphisms is relaxed from
$\cC^{\infty}$ to $\cC^r$ for finite $r$. One can also consider
stronger questions replacing {\it residual} by {\it open and
dense} or weaker ones replacing {\it residual} by {\it dense}.
The case $\Gamma=\{1\}$ of Problem B is a famous question of
Smale. It appeared for the first time in \cite[Part IV, Problem (1.1)]{S0},
in more elaborate form in \cite{S1}, and it was included in his list of 18 problems
for the present century \cite{S2}. It was solved for
Morse--Smale $\cC^1$-diffeomorphisms by Togawa \cite{T} and
very recently for arbitrary $\cC^1$-diffeomorphisms by C.
Bonatti, S. Crovisier, A. Wilkinson in \cite{BCW} (see the
survey \cite{BCW1} for further references). The analogous
problem for higher regularity diffeomorphisms is open at
present, although there are by now plenty of partial results:
see e.g. \cite{K} for the case of the circle, \cite{PY1} for
elements in the set $\aA_1(M)$ defined above,
and \cite{PY2} for Anosov diffeomorphisms of tori.
Theorem \ref{thm:main} may be compared to similar results for
other types of tensors. For example, it has been proved in
\cite{mounoud} that on a compact manifold the set of metrics of
fixed signature with trivial isometry group is open and dense
in the space of all such metrics (see also \cite{CMR} for an
infinitesimal version of this with the compactness condition
removed).
\subsection{Main ideas of the proof}
To prove Theorem \ref{thm:main} we treat separately the cases
$\dim M=1$ and $\dim M>1$.
The case $\dim M=1$ is addressed in Section \ref{s:circle},
using rather ad hoc methods. An interesting ingredient is
an invariant of vector fields which, when nonzero,
distinguishes changes of orientation, and which plays an
important role in the classification up to diffeomorphisms of
vector fields on $S^1$ with nondegenerate zeroes.
The main ingredient in the case $\dim M>1$, common to other
papers addressing similar problems, is
a theorem of Sternberg \cite{St} on linearisation of
vector fields near sinks and sources, assuming there are no
resonances. The use of this result in this kind of problems
goes back to work of Kopell \cite{K}, and appears in papers of
Anderson \cite{A1} and Palis and Yoccoz \cite{PY1} among
others.
To apply this theorem in our situation we need to generalize
it to the equivariant setting under the presence of finite
symmetries. This poses some difficulties. For example, in
the equivariant case we can not suppose that the eigenvalues of
the linearisation of a generic vector field at fixed points are
all different: high multiplicities can not be avoided; in
particular, the centraliser of the linearisation is not
necessarily abelian (both Anderson \cite{A1} and Palis--Yoccoz
\cite{PY1} restrict themselves to the case in which the
eigenvalues are different). This is relevant for example when
extending the version of Sternberg's theorem for families
proved by Anderson to the equivariant setting (see Section
\ref{s:sternberg} for details on this).
We close this subsection with a more concrete description of the
proof of the case $\dim M>1$. Suppose a $\Gamma$-invariant Morse function $f$ has been chosen.
The set of metrics $\mM_f$ is defined as the intersection
of a set of invariant metrics, $\mM_0$, and a
countable sequence of subsets $\{\mM_{1,K}\}_{K\in\NN}$. Each of these sets is open and dense in $\mM^{\Gamma}$.
The metrics $g\in \mM_0$, defined in Subsection \ref{ss:mM-0},
have two properties: (1) the eigenvalues of the differential of
$\nabla^gf$ at each critical point are as different among
themselves as possible (in particular, the collections of
eigenvalues at two critical points coincide if and only if the
two points belong to the same $\Gamma$-orbit), and (2) there
are no resonances among eigenvalues at any critical point. The
second property allows us to use Sternberg's theorem on
linearisation on neighborhoods of sinks and sources, and a
theorem of Kopell which severely limits the automorphisms of
the gradient vector field restricted to (un)stable manifolds of
sinks/sources.
The metrics $g\in\mM_{1,K}$, defined in Subsection \ref{s:mM-2-K}, have the following property.
Suppose that $p$ is a sink and $W_g^s(p)$ is the stable manifold of $p$ for $\nabla^gf$, and that $q$ is a source and $W_g^u(q)$ is its unstable manifold for $\nabla^gf$. If $W_g^s(p)\cap W_g^u(q)$ is nonempty, then any automorphism of $\nabla^gf|_{W_g^s(p)}$ whose derivative at $p$ is at distance $\leq K$ from the identity and which matches on $W_g^s(p)\cap W_g^u(q)$
with an automorphism of $\nabla^gf|_{W_g^u(q)}$ is at distance $<K^{-1}$ from an automorphism coming from the action of $\Gamma$ and the flow of $\nabla^gf$.
After defining these sets of metrics, in Section \ref{s:proof-thm:main} we prove
Theorem \ref{thm:main} for manifolds of dimension greater than one, showing that if $g\in\mM_f$ then
$\Aut(\nabla^gf)/\RR\simeq\Gamma$.
The paper concludes with two appendices. The first one gives the proof of a technical result on the variation of the gradient flow of $\nabla^gf$ with respect to variations of $g$, and the second one contains a glossary of the notation used to address the case $\dim M>1$
(Section \ref{s:sternberg} and the next ones).
\noindent{\bf Acknowledgement.} I am very grateful to the
referee for pointing out a substantial simplification in the
proof of the main theorem, which was originally much longer and
more involved, and for detecting a number of mistakes and
suggesting improvements.
\section{Proof of Theorem \ref{thm:main} for $\dim M=1$}
\label{s:circle}
In this section we prove a strengthening of
the case $\dim M=1$ of Theorem \ref{thm:main}. More concretely, in Subsection
\ref{ss:proof-thm:circle} below we prove the following.
\begin{theorem}
\label{thm:circle}
Suppose that a finite group $\Gamma$ acts smoothly and effectively on $S^1$. Let $f$ be a $\Gamma$-invariant Morse function on $S^1$. Let $\mM^{\Gamma}$ denote the set of $\Gamma$-invariant Riemannian metrics on
$S^1$, endowed with the $\cC^{\infty}$ topology. There exists a dense and open subset $\mM_f\subset\mM^{\Gamma}$ such that for every $g\in\mM_f$
we have $\Aut(\nabla^gf)/\RR\simeq\Gamma$.
\end{theorem}
\subsection{Classifying nondegenerate vector fields on the circle}
To prove Theorem \ref{thm:circle} we will need, in the case
when $\Gamma$ is generated by a rotation, an invariant of
nondegenerate vector fields on the circle that detects changes
of orientation. This invariant is one of the ingredients of
the classification of nondegenerate vector fields on the circle
up to orientation preserving diffeomorphism. Detailed
expositions of this classification (in the broader context of
vector fields with zeroes of finite order) have appeared in
\cite{By,Hi}. Here we briefly explain the main ideas of this
result, focusing on the definition of the invariant, both for
completeness and to set the notation for later use.
For any $t\in\RR$ and vector field $\xX$ we denote by $\Phi_t^{\xX}\in\Aut(\xX)$ the flow of $\xX$
at time $t$.
We first consider the local classification of vector fields with a nondegenerate zero.
For any nonzero real number $\lambda$ we denote by $\fF_{\lambda}$ the set of germs of
vector fields on a neighborhood of $0$ in $\RR$ of the form $h\,\partial_x$, where
$h(0)=0$ and $h'(0)=\lambda$ and $x$ is the standard coordinate in $\RR$. Let $\gG$ denote
the group of germs of diffeomorphisms of neighborhoods of $0$ in $\RR$.
For any $\xX\in\fF_{\lambda}$ we denote by $\Aut(\xX)$ the group of all $\phi\in\gG$ such that
$\phi_*\xX=\xX$. For example, $\Phi_t^{\xX}\in\Aut(\xX)$ for every $t$.
The proof of the next lemma follows from a straightforward computation and Cauchy's theorem
on ODE's.
\begin{lemma}
\label{lemma:local-models} Let $\lambda,\mu$ be nonzero real numbers.
\begin{enumerate}
\item Given $\xX\in\fF_{\lambda}$ and $\yY\in\fF_{\mu}$ there exists some $\phi\in\gG$
satisfying $\phi_*\xX=\yY$ if and only if $\lambda=\mu$.
\item For any $\lambda$ and $\xX\in\fF_{\lambda}$ the map $D:\Aut(\xX)\to\RR^*$ sending
$\phi$ to $\phi'(0)$ is an isomorphism of groups. Furthermore, $D\Phi_t^{\xX}=e^{\lambda t}$.
\end{enumerate}
\end{lemma}
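For concreteness, here is the computation for the linear model germ $\xX=\lambda x\,\partial_x\in\fF_{\lambda}$ (this example is only illustrative): the flow is $\Phi_t^{\xX}(x)=e^{\lambda t}x$, and the condition $\phi_*\xX=\xX$ reads $\lambda x\,\phi'(x)=\lambda\,\phi(x)$, whose germs of solutions fixing $0$ are exactly
$$\phi(x)=cx,\qquad c\in\RR^*,$$
so $D\phi=c$ realizes the isomorphism in statement (2), and indeed $D\Phi_t^{\xX}=e^{\lambda t}$.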
We mention in passing that to prove the case $\dim M>1$ of Theorem \ref{thm:main} we will need to extend the previous lemma to higher dimensions, in a way equivariant with respect to finite group actions. This extension will be based on non-equivariant higher dimensional analogues of statements (1) and (2), which are respectively a theorem of
Sternberg (see \cite{St} and Theorem \ref{thm:Sternberg} below) and a theorem of Kopell (see \cite{K} and Subsection \ref{ss:mM-0} below). Both results are substantially deeper than Lemma \ref{lemma:local-models}, and
in particular they require a condition of non-resonance which is trivial in the one dimensional case.
We next explain the classification of nondegenerate vector fields on the
circle. We identify $S^1$ with $\RR/2\pi\ZZ$, so vector fields on $S^1$
can be written as
$$\xX=h\,\partial_x$$
where $h$ is a $2\pi$-periodic smooth function.
We say that $\xX$ is nondegenerate if $h(y)=0$ implies $h'(y)\neq 0$ ($h'(y)$ can be identified
with the derivative of $\xX$ at $y\in h^{-1}(0)$). An immediate consequence
is that $h$ has finitely many zeroes in $[0,2\pi)$. Another consequence is that $h$
changes sign when crossing any zero of $h$, and this implies that $h^{-1}(0)$ contains
an even number of elements in $[0,2\pi)$.
To classify nondegenerate vector fields on the circle we will associate to them the number of
zeroes, their derivatives at the zeroes (up to cyclic order), and a global invariant
denoted by $\chi$.
To define $\chi$ suppose first of all that $h$ has no zeroes. Denoting by $\Phi_t^{\xX}$
the flow of $\xX$ seen as a vector field on $\RR$, there is a unique real number $t$ such
that $\Phi_t^{\xX}(y)=y+2\pi$ for every $y\in\RR$. Then we set
$$\chi(\xX):=t.$$
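For instance, if $\xX=c\,\partial_x$ for a nonzero constant $c$ then $\Phi_t^{\xX}(y)=y+ct$, so
$$\chi(c\,\partial_x)=\frac{2\pi}{c}.$$
In particular $\chi$ changes sign when $c$ is replaced by $-c$, which illustrates how $\chi$ detects the orientation of a nonvanishing field.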
Now suppose that $h$ vanishes somewhere, and write its zeroes contained in $[0,2\pi)$ as
$$0\leq z_1<z_2<\dots<z_{2r}<2\pi.$$
We extend this finite list to an infinite sequence by setting $z_{i+2r}=z_i$ for every integer
$i$. Below, we implicitly consider similar periodic extensions for all objects that we are
going to associate to the zeroes $z_i$.
By (2) in Lemma \ref{lemma:local-models}, for every $i$ there exists a connected neighborhood
$U_i$ of $z_i$, containing neither $z_{i-1}$ nor $z_{i+1}$, and a unique smooth involution $\sigma_i:U_i\to U_i$ such that
\begin{equation}
\label{eq:prop-sigma}
\sigma_i(z_i)=z_i,\qquad \sigma_i'(z_i)=-1,\qquad
(\sigma_i)_*\xX=\xX.
\end{equation}
Choose for every $i$ some $t_i^+>z_i$
contained in $U_i$ and define $t_i^-=\sigma_i(t_i^+)$. Then we
have $t_i^+,t_{i+1}^-\in (z_i,z_{i+1})$, so there is a unique
real number $\rho_i$ such that
$$t_{i+1}^-=\Phi_{\rho_i}^{\xX}(t_i^+).$$
Note that $\rho_i$ has the same sign as $h'(z_i)$. Now we
define
\begin{equation}
\label{eq:chi-xX}
\chi(\xX):=\sum_{i=1}^{2r} \rho_i.
\end{equation}
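As an illustration of this definition, consider
$$\xX=\sin(x)\,\partial_x,$$
with zeroes $z_1=0$ and $z_2=\pi$ in $[0,2\pi)$. The reflection $\psi(x)=-x$ is a well defined orientation reversing diffeomorphism of $S^1$ satisfying $\psi_*\xX=\xX$; since, as noted below, $\chi(\psi_*\yY)=-\chi(\yY)$ for any orientation reversing $\psi$ and any nondegenerate $\yY$, it follows that $\chi(\sin(x)\,\partial_x)=0$.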
\begin{lemma}
\label{lemma:chi} The number $\chi(\xX)$ only depends on $\xX$,
and not on the choices of $t_i^{\pm}$. Furthermore, endowing
the set of nondegenerate vector fields with the $\cC^{\infty}$
topology, the map $\xX\mapsto \chi(\xX)$ is continuous.
\end{lemma}
\begin{proof}
We first prove that $\chi(\xX)$ does not depend on the choices
of $t_i^{\pm}$. If for any $i$ we replace $t_i^\pm$ by
$(t_i')^\pm$, then the requirement that
$(t_i')^-=\sigma_i((t_i')^+)$ implies that
$(t_i')^\pm=\Phi_{\pm\delta}^{\xX}(t_i^\pm)$ for some $\delta$,
so $\rho_i$ gets replaced by $\rho_i-\delta$ and $\rho_{i-1}$
gets replaced by $\rho_{i-1}+\delta$, and hence
(\ref{eq:chi-xX}) remains unchanged.
To prove that $\chi(\xX)$ depends continuously on $\xX$ we
first observe that any other vector field sufficiently close to
$\xX$ is also nondegenerate and has vanishing locus close to that of
$\xX$. Hence, once we have fixed the intervals $U_i$ and points
$t_i^+$ above, there is a neighborhood $\vV$ of $\xX$ in the
space of all vector fields on the circle such that if
$\yY\in\vV$ and we write $\yY=k\,\partial_x$ then
$k^{-1}(0)\subset\bigcup_i U_i$, each $U_i$ contains a unique
zero $w_i$ of $k$, and $w_i<t_i^+$. So it suffices to prove
that given $\delta>0$, choosing $\vV$ small enough, the
involution $\sigma_i^{\yY}$ satisfying (\ref{eq:prop-sigma})
with $\xX$ resp. $z_i$ replaced by $\yY$ resp. $w_i$ has the
property that $\sigma_i^{\yY}(t_i^+)$ is well defined and at
distance $<\delta$ from $\sigma_i(t_i^+)$.
The previous property will follow if we prove that $\sigma_i$
depends continuously on $\xX$. This is a local question, so let
us assume that $\xX$ is a vector field defined on an open
interval $0\in I\subset\RR$ with $\xX=h\,\partial_x$ and
satisfying $h^{-1}(0)=\{0\}$ and $h'(0)\neq 0$; by Lemma
\ref{lemma:local-models} there is an open interval $0\in
J\subset\RR$ and a smooth embedding $\phi:J\to I$ such that
$\phi_*(\lambda x\,\partial_x)=\xX$ for some nonzero real
$\lambda$.
It is easy to check that both $\phi$ and $\lambda$
depend continuously on
$\xX$.
Take $U=\phi(J\cap -J)$. The map
$\sigma:U\to U$ defined as $\sigma(x)=\phi(-\phi^{-1}(x))$
is a smooth involution of $U$ and it satisfies
$\sigma_*\xX=\xX$. By the previous observations it is clear
that $\sigma$ depends continuously on $\xX$.
\end{proof}
This is the classification theorem of nondegenerate vector fields on $S^1$:
\begin{theorem}
\label{thm:vector-fields-circle}
Given two nondegenerate vector fields $\xX$ and $\yY$ on the circle, there exists an orientation preserving
diffeomorphism $\phi\in\Diff^+(S^1)$ satisfying $\phi_*\xX=\yY$ if and only if
$\xX$ and $\yY$ have the same number of zeroes, the collections of derivatives at the zeroes
of $\xX$ and $\yY$, travelling along $S^1$ counterclockwise, coincide up to a cyclic permutation,
and $\chi(\xX)=\chi(\yY)$.
\end{theorem}
We are not going to prove the previous theorem. In fact we will only use
the ``only if'' part of it, which is rather obvious from the definitions;
the proof of the ``if'' part is an easy exercise using
Lemma \ref{lemma:local-models}. See \cite{By,Hi}
for detailed proofs of a more general result.
We close this subsection with another result that will be used in the proof of
Theorem \ref{thm:circle}.
Suppose that $h:\RR\to\RR$ is a smooth function such that $h(0)=h(1)=0$, and that
$h$ does not vanish on the open interval $(0,1)$. Let $\xX=h\,\partial_x$. The next
lemma follows easily from Cauchy's theorem on ODE's.
\begin{lemma}
\label{lemma:aut-interval} Any diffeomorphism $\phi:(0,1)\to
(0,1)$ satisfying $\phi_*\xX=\xX$ is equal to $\Phi_t^{\xX}$
for some $t\in\RR$. In particular if a diffeomorphism
$\phi:(0,1)\to (0,1)$ satisfying $\phi_*\xX=\xX$ is the
identity on an open subset of $(0,1)$ then $\phi$ is the
identity on the entire $(0,1)$.
\end{lemma}
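For completeness, here is the idea behind Lemma \ref{lemma:aut-interval}: any $\phi$ with $\phi_*\xX=\xX$ commutes with the flow, and since $h$ has no zeroes in $(0,1)$ the orbit of any $x_0\in(0,1)$ under the flow is the whole interval. Writing $\phi(x_0)=\Phi_t^{\xX}(x_0)$ for the unique such $t$, we get
$$\phi(\Phi_s^{\xX}(x_0))=\Phi_s^{\xX}(\phi(x_0))=\Phi_t^{\xX}(\Phi_s^{\xX}(x_0))\qquad\text{for every }s,$$
so $\phi=\Phi_t^{\xX}$ on the entire interval.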
\subsection{Proof of Theorem \ref{thm:circle}}
\label{ss:proof-thm:circle}
Let $\Gamma$ be a finite group acting smoothly and effectively
on $S^1$, and let $f:S^1\to\RR$ be a $\Gamma$-invariant Morse
function. Let $\Crit(f)$ be the set of critical points of $f$.
Any $\Gamma$-invariant Riemannian metric in $\mM^{\Gamma}$ makes $S^1$
isometric to a round circle $$\{x^2+y^2=r^2\}$$ for some
$r>0$, and this allows us to identify the action of $\Gamma$ on
$S^1$ with the action of a cyclic or a dihedral group. We treat
separately the two possibilities.
\subsubsection{Dihedral groups} Suppose first that $\Gamma$ is
dihedral. Then $\Gamma$ contains elements that reverse the
orientation. Let $p\in S^1$ be a fixed point of an orientation
reversing element of $\Gamma$, and let $\Gamma_0\subset\Gamma$
be the subgroup of the elements which act preserving the
orientation. Since $[\Gamma:\Gamma_0]=2$,
we have
$\Gamma\,p=\Gamma_0\,p$. On the other hand, $p$ is
necessarily a critical point of $f$, because $f$ is
$\Gamma$-invariant.
Let $\mM_f$ be the set of metrics
$g\in\mM^{\Gamma}$ satisfying:
\begin{equation}
\label{eq:prop-mM-f-circle}
\text{if $x,y\in\Crit(f)$ and $D\nabla^gf(x)=D\nabla^gf(y)$ then
$\Gamma\,x=\Gamma\,y$.}
\end{equation}
It is clear that $\mM_f$ is open and dense in $\mM^{\Gamma}$.
Now suppose that $g\in\mM_f$ and let $\xX=\nabla^gf$. To prove
that $\Aut(\xX)/\RR\simeq\Gamma$ we consider an arbitrary
$\phi\in\Aut(\xX)$ and show that composing $\phi$ with the
action of suitably chosen elements of $\Gamma$ and with the
flow $\Phi_t^{\xX}$ for some $t$ we obtain the
identity.
Let $\phi\in\Aut(\nabla^gf)$. Composing $\phi$ with the action of some $\gamma\in\Gamma$ we
may assume that $\phi$ is orientation preserving. By
(\ref{eq:prop-mM-f-circle}) we have $\phi(p)\in\Gamma\,p$.
Since $\Gamma\,p=\Gamma_0\,p$, up to composing
$\phi$ with the action of some element of $\Gamma_0$ we can
assume that $\phi$ preserves the orientation and fixes $p$.
This implies that $\phi$ fixes all critical points of
$f$.
Let us label counterclockwise the critical points of $f$
as $p_1,p_2,\dots,p_{2r}$. By Lemma \ref{lemma:local-models},
up to composing $\phi$ with $\Phi_t^{\xX}$ for some choice of
$t$ we may assume that $\phi$ is the identity on a neighborhood
of $p_1$. This implies that $\phi$ is the identity on the
entire circle. Indeed, by Lemma \ref{lemma:aut-interval},
$\phi$ is the identity on the arc from $p_1$ to $p_2$, so by
Lemma \ref{lemma:local-models} $\phi$ is
the identity on a neighborhood of $p_2$. We next apply Lemma
\ref{lemma:aut-interval} to the arc from $p_2$ to $p_3$ and
conclude that the restriction of $\phi$ to this arc is equal to
the identity. And so on, until we have traveled around the
entire circle.
\subsubsection{Cyclic groups}
Suppose that $\Gamma$ is a cyclic group. The only case in which
$\Gamma$ can contain orientation reversing elements is that in
which $\Gamma$ consists of two elements, the nontrivial one
being an orientation reversing involution of $S^1$. This
situation can be addressed with the arguments of the previous
case, so let us assume here that all elements of $\Gamma$
preserve the orientation.
Then we define $\mM_f$ to be the set of metrics
$g\in\mM^{\Gamma}$ satisfying property (\ref{eq:prop-mM-f-circle})
above and $\chi(\nabla^gf)\neq 0$.
We claim that $\mM_f$ is open and dense in $\mM^{\Gamma}$.
Since the set of metrics $g\in\mM^{\Gamma}$ satisfying property (\ref{eq:prop-mM-f-circle})
is open and dense, to see that $\mM_f$ is dense it suffices to observe that
if for some choice of $g$ we have $\chi(\nabla^gf)=0$ then
slightly modifying $g$ away from the critical points we may
force $\chi$ to take a nonzero value; furthermore, the
modification of $g$ can be made $\Gamma$-invariant because
$\Gamma$ is generated by a rotation (note that, in contrast, if
$\Gamma$ is a dihedral group then for any $\Gamma$-invariant
metric $g$ we have $\chi(\nabla^gf)=0$: any orientation reversing
$\gamma\in\Gamma$ satisfies $\gamma_*\nabla^gf=\nabla^gf$, and $\chi$
changes sign under orientation reversing diffeomorphisms, as
observed below).
Openness of $\mM_f$ follows from the second statement in Lemma
\ref{lemma:chi}.
Let $g\in\mM_f$, let $\xX=\nabla^gf$, and let
$\phi\in\Aut(\xX)$.
We claim that $\phi$ is orientation preserving. Indeed, for any
orientation reversing diffeomorphism $\psi$ of $S^1$ we have
$\chi(\psi_*\xX)=-\chi(\xX)$, and since $\chi(\xX)\neq 0$ we cannot
have $\psi_*\xX=\xX$. Let $p$ be any critical
point of $f$. By (\ref{eq:prop-mM-f-circle})
we have $\phi(p)\in\Gamma\,p$, so up to composing $\phi$ with
the action of some element of $\Gamma$ we can assume that $\phi(p)=p$.
Then, since $\phi$ preserves the orientation, it fixes all the other
critical points, and the argument is concluded as in the case
of dihedral groups. Hence the proof of Theorem \ref{thm:circle}
is now complete.
In the remainder of the paper we are going to assume that $\dim M>1$.
\section{Equivariant Sternberg's linearisation theorem for families}
\label{s:sternberg} The following is Sternberg's
linearisation theorem \cite[Theorem 4]{St}, which extends to
the smooth setting an analytic result proved by Poincar\'e in
his thesis:
\begin{theorem}[Sternberg]
\label{thm:Sternberg} Let $0\in U\subset\RR^n$ be an open set
and let $\xX:U\to\RR^n$ be a smooth vector field satisfying
$\xX(0)=0$. Suppose that the derivative $D\xX(0)$ diagonalises
and has (possibly complex) eigenvalues
$\lambda_1,\dots,\lambda_n$, repeated with multiplicity.
Suppose that each $\lambda_i$ has negative real part, and that
\begin{equation}
\label{eq:no-resonances}
\lambda_i\neq \sum_{j=1}^n\alpha_j\lambda_j,
\qquad \text{for any }i\text{, and any }
\alpha_1,\dots,\alpha_n\in\ZZ_{\geq 0}\text{ satisfying }\sum\alpha_j\geq 2.
\end{equation}
Then there exist open sets $0\in U'\subset\RR^n$ and $0\in U''\subset U$, and a
diffeomorphism $\phi:U''\to U'$, such that $D\phi(0)=\Id$ and
$\phi\circ\xX\circ\phi^{-1}=D\xX(0)$.
\end{theorem}
Actually \cite[Theorem 4]{St} states that $\phi$ can be chosen to be
$\cC^k$ for every finite and big enough $k$. The fact
that $\phi$ can be assumed to be $\cC^{\infty}$ follows from
\cite[Theorem 6]{K}.
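To illustrate the non-resonance condition (\ref{eq:no-resonances}): in dimension $n=2$ the pair $\lambda_1=-1$, $\lambda_2=-2$ is resonant, since
$$\lambda_2=2\lambda_1\qquad(\alpha_1=2,\ \alpha_2=0),$$
whereas, say, $\lambda_1=-1$ and $\lambda_2=-\sqrt{2}$ satisfy (\ref{eq:no-resonances}), because any resonance among them would force $\sqrt{2}$ to be rational.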
Sternberg proved in \cite{St} an analogous theorem for local
diffeomorphisms of $\RR^n$. Later, Anderson proved \cite[\S2,
Lemma]{A1} a parametric version of Sternberg's theorem for
diffeomorphisms, which can be translated, using the arguments
in \cite[\S 6]{St}, into a theorem on vector fields. Before
stating it, we introduce some notation. Let $D\subset\RR^n$ be
an open disk centered at $0$, and let $\Delta\subset D$ be a
smaller concentric disk. Let $r$ be a natural number. For any
smooth map $\xX:D\to\RR^n$ define
$\|\xX\|_{\Delta,r}=\sup_{x\in \Delta}\|D^r\xX(x)\|$, where
$\|D^r\xX(x)\|$ denotes the sum of the norms of all partial
derivatives of $\xX$ at $x$ of degree $\leq r$. This defines a
(non-separated!) topology on $\Map_0(D,\RR^n)$, the set of all
smooth maps $D\to\RR^n$ fixing $0$, and we denote by
$\Map_0(D,\RR^n)_{\Delta,r}$ the resulting topological space.
This is the analogue of Anderson's theorem for vector fields:
\begin{theorem}
\label{thm:Anderson} Let $L:\RR^n\to\RR^n$ be a linear map
which diagonalises with eigenvalues $\lambda_1,\dots,\lambda_n$
satisfying (\ref{eq:no-resonances}). Assume that each
$\lambda_i$ has negative real part, and that
$\lambda_i\neq\lambda_j$ for $i\neq j$.
There exists a neighborhood $N$ of $L$ in
$\Map_0(D,\RR^n)_{\Delta,r+1}$ and a continuous map
$$\Phi:N\to\Map_0(D,\RR^n)_{\Delta,r}$$ such that:
\begin{enumerate}
\item for every $\xX\in N$, $D\Phi(\xX)(0)=\Id$, so
$\Phi(\xX)$ gives a diffeomorphism $U_{\xX}\to
U'_{\xX}$ between neighborhoods of $0$,
\item for every $\xX\in N$,
$\Phi(\xX)\circ\xX\circ\Phi(\xX)^{-1}:U'_{\xX}\to
\RR^n$ is equal to $D\xX(0)$.
\end{enumerate}
\end{theorem}
We will need an analogue of Theorem
\ref{thm:Anderson} in an equivariant setting. However, as was
mentioned in the introduction, the presence of symmetries
usually forces eigenvalues to have high multiplicity, and
consequently the hypothesis in Theorem \ref{thm:Anderson} will
usually not hold.
Now the (only) reason why Anderson assumes the eigenvalues
$\lambda_1,\dots,\lambda_n$ to be pairwise distinct is that he
needs to be able to diagonalize the linear maps close to $L$ in
a continuous way. To state this more precisely, let
$\GL^*(n,\RR)\subset\GL(n,\RR)$ denote the open and dense set
of linear automorphisms of $\RR^n$ all of whose eigenvalues are
distinct. Anderson uses the following elementary lemma.
\begin{lemma}
\label{lemma:diag}
Any $L\in\GL^*(n,\RR)$ admits a neighborhood $U\subset\GL^*(n,\RR)$
and smooth maps $f_1,\dots,f_n:U\to\CC^n$ so that for any
$L'\in U$ the vectors $f_1(L'),\dots,f_n(L')$ form a basis of
$\CC^n$ with respect to which $L'$ diagonalizes.
\end{lemma}
So to obtain an equivariant analogue of Theorem
\ref{thm:Anderson} it suffices to define some open and dense
subset of the set of equivariant automorphisms of a vector
space enjoying the same property as $\GL^*(n,\RR)$. This is the purpose
of the following lemma, which also proves a property on centralizers
that will be used later in the paper.
Suppose that $V$ is an $n$-dimensional real vector space, and
that a finite group $G$ acts linearly on $V$. Denote the
centralizer of any $\Lambda\in\Aut(V)$ by
$$Z(\Lambda)=\{\Lambda'\in\Aut(V)\mid \Lambda\Lambda'=\Lambda'\Lambda\}.$$
Let $\Aut_G(V)$ denote the Lie group of automorphisms of $V$
commuting with the $G$ action.
Define $\Aut^*_G(V)$ to be the set of all $\Lambda\in\Aut_G(V)$
such that for any $\lambda\in\CC$ the ($G$-invariant) subspace
$\Ker(\Lambda-\lambda\Id)\subset V$ is irreducible as a
representation of $G$. Given a basis $a_1,\dots,a_n\in V\otimes\CC$ we denote by
$(a_1,\dots,a_n):\CC^n\to V\otimes\CC$ the isomorphism
$(\lambda_1,\dots,\lambda_n)\mapsto\sum\lambda_ia_i$.
\begin{lemma}
\label{lemma:automorfismes-Gamma-generics} The subset
$\Aut^*_G(V)$ is open and dense in $\Aut_G(V)$. Any
$\Lambda\in\Aut^*_G(V)$ has a neighborhood $U\subset
\Aut^*_G(V)$ with smooth maps $f_1,\dots,f_n:U\to V\otimes\CC$
so that for any $\Lambda'\in U$ the vectors
$f_1(\Lambda'),\dots,f_n(\Lambda')$ form a basis of
$V\otimes\CC$ with respect to which $\Lambda'$ diagonalizes, and
conjugation by $(f_1(\Lambda'),\dots,f_n(\Lambda'))(f_1(\Lambda),\dots,f_n(\Lambda))^{-1}$
gives an isomorphism
$$Z(\Lambda)\stackrel{\simeq}{\longrightarrow}Z(\Lambda').$$
\end{lemma}
\begin{proof}
Denote by $\wh{G}$ the set of irreducible characters of $G$.
For any $\xi\in \wh{G}$ let $V_{\xi}$ be a $G$-representation
with character $\xi$. As a
$G$-representation, we may identify $V$ with $\bigoplus_{\xi\in\wh{G}}V_{\xi}\otimes
E_{\xi}$, where each $E_{\xi}$ is a vector space with trivial
$G$-action. By Schur's lemma the space of $G$-equivariant
endomorphisms of $V$ is
$$\End_G(V)=\bigoplus_{\xi\in\wh{G}}\End E_{\xi}.$$
An endomorphism $\Lambda=(\Lambda_{\xi})_{\xi}$ (where
$\Lambda_{\xi}\in\End E_{\xi}$ for each $\xi$) belongs to
$\Aut_G(V)$ exactly when $\prod_{\xi}\det\Lambda_{\xi}\neq 0$,
and it belongs to $\Aut_G^*(V)$ if and only if, additionally,
no root of the polynomial
$\prod_{\xi}\det(\Lambda_{\xi}-x\Id_{E_{\xi}})\in\RR[x]$ has
multiplicity bigger than one. This condition implies that
$\Lambda_{\xi}\in\GL^*(E_{\xi})$ for each $\xi$. Applying Lemma \ref{lemma:diag}
to each $\Lambda_{\xi}$ we deduce the existence
of a neighborhood $U\subset
\Aut^*_G(V)$ of $\Lambda$ and smooth maps $f_1,\dots,f_n:U\to V\otimes\CC$
and $\lambda_1,\dots,\lambda_n:U\to\CC$
so that for any $\Lambda'\in U$ we have $\Lambda'(f_j(\Lambda'))=\lambda_j(\Lambda') f_j(\Lambda')$
for every $j$.
For any $\Lambda'\in U$ we can identify
$Z(\Lambda')$ with the subgroup of $\Aut(V)$ preserving the subspace of $V\otimes\CC$
spanned by $\{f_j(\Lambda')\mid\lambda_j(\Lambda')=\lambda\}$ for each $\lambda$.
Shrinking $U$ if necessary we may assume that for any $i,j$ and any $\Lambda'\in U$ we have
$$\lambda_i(\Lambda')=\lambda_j(\Lambda')\quad\Longleftrightarrow\quad
\lambda_i(\Lambda)=\lambda_j(\Lambda),$$
so conjugation by $(f_1(\Lambda'),\dots,f_n(\Lambda'))(f_1(\Lambda),\dots,f_n(\Lambda))^{-1}$
gives an isomorphism
$Z(\Lambda)\stackrel{\simeq}{\longrightarrow}Z(\Lambda')$.
\end{proof}
Take some $G$-invariant Euclidean metric on $V$, let $D\subset
V$ be an open disk centered at $0$, and let $\Delta\subset D$
be a smaller concentric disk. Let $r$ be a natural number. For
any smooth map $\xX:D\to V$ define
$\|\xX\|_{\Delta,r}=\sup_{x\in \Delta}\|D^r\xX(x)\|$ as before.
This defines a topology on $\Map_{G,0}(D,V)$, the set of all
$G$-equivariant smooth maps $D\to V$ fixing $0$. Let
$\Map_{G,0}(D,V)_{\Delta,r}$ be the resulting topological
space. Define analogously $\Map_{0}(D,V)_{\Delta,r}$ by
dropping the equivariance condition. Combining the previous
lemma with the arguments in \cite[\S2, Lemma]{A1} and \cite[\S
6]{St} we obtain the following.
\begin{theorem}
\label{thm:Anderson-equivariant} Let $L\in\Aut_G^*(V)$ have
eigenvalues $\lambda_1,\dots,\lambda_n$ satisfying
(\ref{eq:no-resonances}) and suppose that each $\lambda_i$ has
negative real part. There is a neighborhood $N$ of $L$ in
$\Map_{G,0}(D,V)_{\Delta,r+1}$ and a continuous map
$$\Phi:N\to\Map_{G,0}(D,V)_{\Delta,r}$$ such that:
\begin{enumerate}
\item for every $\xX\in N$, $D\Phi(\xX)(0)=\Id$, so
$\Phi(\xX)$ gives a diffeomorphism $U_{\xX}\to
U'_{\xX}$ between neighborhoods of $0$,
\item for every $\xX\in N$,
$\Phi(\xX)\circ\xX\circ\Phi(\xX)^{-1}:U'_{\xX}\to V$ is
equal to $D\xX(0)$.
\end{enumerate}
\end{theorem}
The only part in the statement of Theorem \ref{thm:Anderson-equivariant}
that does not follow immediately is the fact
that the conjugating map $\Phi$ may be chosen to take values in
$\Map_{G,0}(D,V)_{\Delta,r}$. Sternberg's
argument provides a (continuous, by Anderson) map
$\Phi_0:N\to\Map_{0}(D,V)_{\Delta,r}$, satisfying (i)
$D\Phi_0(\xX)(0)=\Id$ and (ii)
$\Phi_0(\xX)\circ\xX\circ\Phi_0(\xX)^{-1}=D\xX(0)$ (in a
neighborhood of $0$), but $\Phi_0(\xX)$ is not necessarily
equivariant. Now, equality (ii) is equivalent to
\begin{equation}
\label{eq:conjugacio-Phi-0}
\Phi_0(\xX)\circ\xX=D\xX(0)\circ \Phi_0(\xX),
\end{equation}
so setting
$$\Phi(\xX)(x)=\frac{1}{|G|}\sum_{g\in G}g\Phi_0(\xX)(g^{-1}x)\in V$$
for every $x\in D$, we have $\Phi(\xX)\in\Map_{G,0}(D,V)_{\Delta,r}$,
and equation (\ref{eq:conjugacio-Phi-0})
immediately gives $\Phi(\xX)\circ\xX=D\xX(0)\circ \Phi(\xX)$.
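Explicitly, for every $x\in D$ close enough to $0$, using first the $G$-equivariance of $\xX$, then (\ref{eq:conjugacio-Phi-0}), and finally that $D\xX(0)$ commutes with the linear action of $G$ (which follows by differentiating the equivariance of $\xX$ at $0$):
\begin{align*}
\Phi(\xX)(\xX(x))&=\frac{1}{|G|}\sum_{g\in G}g\,\Phi_0(\xX)(g^{-1}\xX(x))
=\frac{1}{|G|}\sum_{g\in G}g\,\Phi_0(\xX)(\xX(g^{-1}x))\\
&=\frac{1}{|G|}\sum_{g\in G}g\,D\xX(0)\bigl(\Phi_0(\xX)(g^{-1}x)\bigr)
=D\xX(0)\bigl(\Phi(\xX)(x)\bigr).
\end{align*}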
Trivially we also have $D\Phi(\xX)(0)=\Id$ for every $\xX$, and
$\Phi(\xX):D\to V$ is $G$-equivariant. The map
$\Phi:N\to\Map_{G,0}(D,V)_{\Delta,r}$ is continuous, because
$\Phi_0$ is, so now Theorem \ref{thm:Anderson-equivariant} is
clear.
\section{The space of metrics $\mM_0$}
\label{s:mM-0-mM-1}
\subsection{Preliminaries}
\newcommand{\codim}{\operatorname{codim}}
\newcommand{\free}{\operatorname{free}}
The following result is a standard consequence of the existence of linear
slices for smooth compact group actions (see e.g. \cite[Chap. VI, \S 2]{Br}).
\begin{lemma}
\label{lemma:linearisation}
Let $G$ be a finite group acting smoothly on a connected manifold $X$.
\begin{enumerate}
\item For each subgroup $H\subseteq G$ the fixed point set
$X^H=\{x\in X\mid H\subseteq G_x\}$ is the disjoint union of finitely many closed
submanifolds of $X$ (not necessarily of the same dimension) satisfying
$T_x(X^H)=(T_xX)^H$ for every $x\in X^H$. In
particular, either $X^H=X$ or $X^H$ has empty interior.
\item Assume that the action of $G$ on $X$ is effective.
Then $X^{\free}=\{x\in X\mid G_x=\{1\}\}$ is open and
dense in $X$.
\end{enumerate}
\end{lemma}
\subsection{Sinks, sources, and (un)stable manifolds}
\label{ss:sink,sources}
Let $n>1$ and let $M$ be a compact connected
$n$-dimensional manifold. Suppose that $M$ is
endowed with a smooth and effective action of a finite group
$\Gamma$. Denote the stabilizer of any $x\in M$ by
$$\Gamma_x=\{\gamma\in\Gamma\mid\gamma x=x\}.$$
Let $\mM$ denote the space of Riemannian metrics on $M$, and let $\mM^{\Gamma}\subset\mM$ be the
subset of $\Gamma$-invariant metrics.
Let
$$f:M\to\RR$$
be a $\Gamma$-invariant Morse function. This function will be
fixed throughout the rest of the paper.
If $p$ is a critical point of $f$, so that $\nabla^gf(p)=0$,
the derivative $D\nabla^gf(p)$ is a well defined endomorphism
of $T_pM$ (one may define it using a connection on $TM$, but
the result will be independent of the chosen connection). The endomorphism
$D\nabla^gf(p)$ is self-adjoint
with respect to the scalar product on $T_pM$ given by $g$, so
$D\nabla^gf(p)$ diagonalizes.
Denote the index of a critical
point $p$ of $f$ by $\Ind_f(p)$.
Let $\Crit(f)\subset M$ be the set of critical points of $f$,
and for any $k$ let
$$\Crit_k(f)=\{p\in\Crit(f)\mid \Ind_f(p)=k\}.$$
Define the set of sinks
of $f$ to be $\II=\Crit_n(f)$ and the set of sources to be
$\OO=\Crit_0(f)$. The
points in $\II$ (resp. $\OO$) are the sinks (resp. sources)
of the gradient vector field $\nabla^gf$ for every $g$.
Denote also by $\EE=\II\cup\OO$ the collection of all local
extrema of $f$.
For any $g\in\mM$ and any real number $t$ let $\Phi^g_t:M\to M$
denote the flow at time $t$ of $\nabla^gf$. Define the stable
and unstable manifolds of $p\in\Crit(f)$ to be, respectively,
$$W^s_g(p)=\{q\in M\mid\lim_{t\to\infty}\Phi^g_t(q)=p\},
\qquad W^u_g(p)=\{q\in M\mid\lim_{t\to-\infty}\Phi^g_t(q)=p\}.$$
For any $p\in\EE$ and any $g\in\mM^{\Gamma}$ let
$$L_g(p)=\{\psi\in \Aut(T_pM)\mid (D\nabla^gf(p)) \psi=\psi(D\nabla^gf(p))\}=Z(D\nabla^gf(p)).$$
Since $\Gamma$ is finite and acts effectively on $M$, we can
identify $\Gamma_p$ with a subgroup of $L_g(p)$ using (1) in Lemma \ref{lemma:linearisation} above.
\subsection{The metrics in $\mM_0$: generic eigenvalues at critical points}
\label{ss:mM-0}
Let $$\mM_0\subset\mM^{\Gamma}$$ denote the set of $\Gamma$-invariant
metrics $g$ satisfying the following conditions:
\begin{enumerate}
\item[(C1)] for any $p\in\EE$ the eigenvalues $\lambda_1,\dots,\lambda_n$
of the linearization $D\nabla^gf(p)$ satisfy condition (\ref{eq:no-resonances}) in
Theorem \ref{thm:Sternberg};
\item[(C2)] if $p,q\in\EE$, then the eigenvalues of
$D\nabla^gf(p)$ and $D\nabla^gf(q)$ coincide if and
only if $p$ and $q$ belong to the same $\Gamma$-orbit;
\item[(C3)] for any $p\in\EE$ we have $D\nabla^gf(p)\in\Aut_{\Gamma_p}^*(T_pM)$.
\end{enumerate}
Condition (C1), combined with Sternberg's Theorem \ref{thm:Sternberg} and
an easy adaptation of a theorem of Kopell \cite[Theorem 6]{K}
from maps to vector fields, implies that if $p\in\II$ then
the map
\begin{equation}
\label{eq:Aut-L}
D(p):\Aut(\nabla^gf|_{W^s_g(p)})\to L_g(p)
\end{equation}
sending any $\phi\in \Aut(\nabla^gf|_{W^s_g(p)})$ to
$D\phi(p)\in L_g(p)$ is an isomorphism (it is clear that any
such $\phi$ fixes $p$); furthermore, there is a diffeomorphism
$h(p):T_pM \to W^s_g(p)$ making the following diagram
commutative:
\begin{equation}
\label{eq:accio-linealitzada}
\xymatrix{L_g(p)\times T_pM \ar[d]_{D(p)^{-1}\times h(p)}\ar[r] & T_pM \ar[d]^{h(p)} \\
\Aut(\nabla^gf|_{W^s_g(p)})\times W^s_g(p)\ar[r] & W^s_g(p),}
\end{equation}
where the horizontal arrows are the maps defining the actions.
\begin{remark}
\label{rmk:primer-entorn}
Strictly speaking, Sternberg's theorem gives a diffeomorphism
between a neighborhood of $0$ in $T_pM$ and a neighborhood of
$p$ in $W^s_g(p)$ which intertwines the flows of $D\nabla^gf(p)$
and of $\nabla^gf$, but such a diffeomorphism can be extended
uniquely, by imposing compatibility with the flows, to yield $h(p)$.
\end{remark}
Similarly, for any source $p\in\OO$ the analogous map
$\Aut(\nabla^gf|_{W^u_g(p)})\to L_g(p)$ is an isomorphism and
there is a diffeomorphism $T_pM \to W^u_g(p)$ which is
equivariant in the obvious sense, analogous to the case of
sinks.
Condition (C2) implies that for any $\phi\in\Aut(\nabla^gf)$
and any $p\in\EE$ we have $\phi(p)=\gamma p$ for some
$\gamma\in\Gamma$. Of course a priori $\gamma$ may depend on
$p$, but in the course of proving Theorem \ref{thm:main} we
will deduce that for $g$ belonging to a residual subset of
$\mM_0$ and any $\phi\in\Aut(\nabla^gf)$, there exists some
$\gamma$ such that $\phi(p)=\gamma p$ for each $p\in\EE$.
By Lemma \ref{lemma:automorfismes-Gamma-generics} $\mM_0$ is
open and dense in $\mM^{\Gamma}$. Moreover, combining (C3)
with Lemma
\ref{lemma:automorfismes-Gamma-generics} and Theorem
\ref{thm:Anderson-equivariant} (together with the obvious analogue
of Remark \ref{rmk:primer-entorn}) we deduce the following
result.
\begin{lemma}
\label{lemma:C1-families} Any $g\in\mM_0$ has a neighborhood
$\uU\subset\mM_0$ such that for any $p\in\EE$ the following holds.
Let $V_p=T_pM$. Endow the space of maps $\Map(V_p,M)$
with the weak (compact-open) $\cC^{\infty}$-topology \cite[Chap 2, \S1]{H}.
For any $g'\in\uU$ there is a linear vector field
$$\xX_{g'}(p):V_p\to V_p$$
depending continuously on $g'$ and a $\Gamma_p$-equivariant embedding
$$h_{g'}(p):V_p\to M$$
depending also continuously on $g'$ with the following properties.
\begin{enumerate}
\item $Z(\xX_{g'}(p))=Z(\xX_g(p))=L_g(p)$ for every
$g'\in\uU$,
\item $h_{g'}(p)$ identifies $\xX_{g'}(p)$ with the restriction of $\nabla^{g'}f$ to $h_{g'}(p)(V_p)$;
hence, $$h_{g'}(p)(V_p)=W_{g'}^s(p)\qquad\text{ if $p\in\II$}$$ and
$$h_{g'}(p)(V_p)=W_{g'}^u(p)\qquad\text{ if $p\in\OO$}.$$
\end{enumerate}
\end{lemma}
Note that we do not claim that the derivative of $h_{g'}(p)$ at $p$ is the identity: in fact
in general this will not be the case (otherwise we could not expect to have the identifications
$Z(\xX_{g'}(p))=Z(\xX_g(p))$).
\section{The spheres $S_g(p)$, the distributions $\aA_g(p)$, and the sets $F_g(p)$}
Recall that we assume $\dim M>1$.
\subsection{The spheres $S_g(p)$}
\label{ss:spheres}
For any $g\in\mM$ and any sink $p\in\II$ we denote by $\sim$
the equivalence relation in $W^s_g(p)$ that identifies two
points whenever they belong to the same integral curve of
$\nabla^gf$. We then define
$$S_g(p)=(W^s_g(p)\setminus\{p\})/\sim.$$
Let $\epsilon>0$ be a real number and let $\Sigma\subset M$ be
the $g$-geodesic sphere of radius $\epsilon$ and center $p$.
If $\epsilon$ is small enough (which we assume), then $\Sigma$
is a submanifold of $M$ diffeomorphic to $S^{n-1}$ and every
equivalence class in $S_g(p)$ contains a unique representative
in $\Sigma$. Hence, composing the inclusion
$\Sigma\hookrightarrow W^s_g(p)\setminus\{p\}$ with the
projection
$$\pi_p:W^s_g(p)\setminus\{p\}\to S_g(p)$$
gives a bijection $\Sigma\simeq S_g(p)$. This allows us to
transport the smooth structure on $\Sigma$ to a smooth
structure (in particular, a topology) on $S_g(p)$, independent
of $\epsilon$.
If $g\in\mM_0$ then the action of $L_g(p)$ on $S_g(p)$ defined
via the identification (\ref{eq:Aut-L}) is smooth, and so is
the natural action of $\Gamma_p$ on $S_g(p)$ (recall that
$\mM_0\subset\mM^{\Gamma}$).
For any $p\in\II$ and $q\in\OO$ let
$$\Omega_g(p,q):=\pi_p(W^s_g(p)\cap W^u_g(q))\subset S_g(p).$$
Since $W_g^u(q)$ is open in $M$, $\Omega_g(p,q)$ is open in $S_g(p)$.
Similarly, if $q$ is a source we define
$$S_g(q)=(W_g^u(q)\setminus\{q\})/\sim$$
and we denote by
$$\pi_q:W_g^u(q)\setminus\{q\}\to S_g(q)$$
the projection. If $p$ is a sink, then we define
$$\Omega_g(q,p)=\pi_q(W^s_g(p)\cap W^u_g(q)),$$
which is an open subset of $S_g(q)$.
For convenience, if
$p,q\in\II$ or $p,q\in\OO$ we define $\Omega_g(p,q)=\emptyset$.
Since the fibers of the restrictions of $\pi_p$ and $\pi_q$ to
$W^s_g(p)\cap W^u_g(q)$ are the same, there are natural
bijections
$$\sigma_g^{p,q}:\Omega_g(p,q)\to\Omega_g(q,p),\qquad\sigma_g^{q,p}=(\sigma_g^{p,q})^{-1},$$
which are easily seen to be diffeomorphisms.
\subsection{The singular distributions $\aA_g(q)$}
\label{ss:singular-distributions} Assume throughout the remainder
of this section that $g\in\mM_0$. We will consider, for every
$p\in\EE$, the diagonal action of $L_g(p)$ on $S_g(p)^k$ for
some natural number $k$. If $z=(z_1,\dots,z_k)\in S_g(p)^k$ and
$\psi\in L_g(p)$ we denote
$$\psi z=(\psi z_1,\dots,\psi z_k).$$ Similarly, we will
consider the diagonal extension of the maps $\sigma_g^{p,q}$:
$$\sigma_g^{p,q}:\Omega_g(p,q)^k\to \Omega_g(q,p)^k,\qquad
\sigma_g^{p,q} z=(\sigma_g^{p,q} z_1,\dots,\sigma_g^{p,q} z_k).$$
We are going to use below without explicit notice analogous
diagonal extensions of maps to Cartesian products.
Let $n=\dim M$. Define
\begin{equation}
\label{eq:def-r}
r:=\left[\frac{2n^2}{n-1}+1\right].
\end{equation}
The choice of this number will be justified in the proof of
Lemma \ref{lemma:mM-2-K-dense}.
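For instance, for $n=2$ formula (\ref{eq:def-r}) gives $r=9$, and for $n=3$ it gives $r=10$. In general $r>2n^2/(n-1)$, and this inequality is what will actually be used.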
For any $q\in\EE$ we denote by $\aA_g(q)\subset T(S_g(q)^r)$
the subspace consisting of all tangent vectors given by the
infinitesimal action of the Lie algebra of $L_g(q)$. This
gives, for any $z\in S_g(q)^r$, a linear subspace
$\aA_g(q)(z)\subset T_zS_g(q)^r$ whose dimension may vary with
$z$ (hence, one can think of $\aA_g(q)$ as a singular
distribution). In concrete terms,
$$\aA_g(q)(z)=\{\yY_{g,\sigma}(z)\mid \sigma\in\Lie L_g(q)\},$$
where for any $\sigma\in\Lie L_g(q)$ we denote by $\yY_{g,\sigma}$ the vector field
on $S_g(q)^r$ given by the infinitesimal action of $\sigma$.
\subsection{The subset $F_g(q)\subset S_g(q)^r$}
\label{ss:F-g-q}
We next want to identify a dense open subset of $S_g(q)^r$ on
which the action of $L_g(q)$ has the smallest possible isotropy
subgroup, and on which $\aA_g(q)$ restricts to a vector
subbundle of $T(S_g(q)^r)$. We remark that, since $L_g(q)$ is an infinite group,
in this situation we cannot use (2) in Lemma \ref{lemma:linearisation}.
Let
$$\xX_g(q):=D\nabla^gf(q)\in \Lie L_g(q).$$ Note that
$e^{t\xX_g(q)}$ corresponds, via the isomorphism $D(q)$ in
(\ref{eq:Aut-L}), to the flow $\Phi^g_t$, so $e^{t\xX_g(q)}$
acts trivially on $S_g(q)$ and hence on $S_g(q)^r$.
Let us denote $V=T_qM$. Then $X:=\xX_g(q)$ is a diagonalizable endomorphism of $V$.
Denote its eigenvalues by $\lambda_1,\dots,\lambda_k$. Let
$V_j\subseteq V$ be the subspace consisting of eigenvectors with eigenvalue $\lambda_j$.
We have a decomposition $V=V_1\oplus\dots\oplus V_k$ with respect to which we may define
projections $\pi_j:V\to V_j$. Let us say that a collection of vectors $w_1,\dots,w_s\in V_j$
is thick if $s>d_j=\dim V_j$ and for any $1\leq i_1<i_2<\dots<i_{d_j}\leq s$ the vectors
$w_{i_1},\dots,w_{i_{d_j}}$ are linearly independent. Finally, we say that a collection of
vectors $v_1,\dots,v_s\in V$ is thick if for any $j$ the projections
$\pi_j(v_1),\dots,\pi_j(v_s)$ form a thick collection of vectors in $V_j$.
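For example, if $V=V_1=\RR^2$ (so $k=1$ and $d_1=2$), then the vectors
$$v_1=(1,0),\qquad v_2=(0,1),\qquad v_3=(1,1)$$
form a thick collection, since $s=3>2$ and any two of them are linearly independent, whereas replacing $v_3$ by $(2,0)$ destroys thickness, because $v_1$ and $(2,0)$ are proportional.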
Let $G=L_g(q)$.
\begin{lemma}
\label{lemma:thick-free}
Suppose that $v_1,\dots,v_s$ is a thick collection of vectors, and that for
some $g\in G$ there exist real numbers $t_1,\dots,t_s$ satisfying
$gv_j=e^{t_jX}v_j$ for every $j$. Then $g=e^{tX}$ for some real number $t$.
\end{lemma}
\begin{proof}
Consider first the case $k=1$, so that $X$ is a homothecy. Write
$v_{n+1}=a_1v_1+\dots+a_nv_n$. The thickness condition implies that
$a_i\neq 0$ for every $i$. By assumption we have $gv_i=\lambda_iv_i$ for
some real numbers $\lambda_1,\dots,\lambda_s$. In particular,
$$\lambda_{n+1}(a_1v_1+\dots+a_nv_n)=\lambda_1 a_1v_1+\dots+\lambda_n a_nv_n.$$
Taking into account that $v_1,\dots,v_n$ is a basis and equating coefficients we deduce that
$\lambda_{n+1}=\lambda_1=\dots=\lambda_n$, so $g=\lambda_1\Id=e^{t_1X}$. So the case $k=1$ is proved.
The case $k>1$ follows from applying the previous arguments to each $V_j$,
using the fact that every $g\in G$ preserves $V_j$.
\end{proof}
Let $S(V)$ denote the set of orbits of $H=\{e^{tX}\mid t\in\RR\}$ acting on $V\setminus\{0\}$.
The subgroup $H$ is central in $G$, and the action of $G$ on $V$ induces an action of $G/H$ on $S(V)$.
Let $F\subset S(V)^r$ denote the set of tuples $(x_1,\dots,x_r)$ such that, writing
$x_i=Hx_i'$ with $x_i'\in V$ for each $i$, the vectors $x_1',\dots,x_r'$ form a thick collection
(this is independent of the choice of representatives $x_i'$).
\begin{lemma}
\label{lemma:set-F}
\begin{enumerate}
\item $F$ is a dense and open subset of $S(V)^r$;
\item the restricted action of $G/H$ on $F$ is free.
\end{enumerate}
\end{lemma}
\begin{proof}
For (1) note that $r>n$, so the set $F'$ of thick $r$-tuples in $V^r$
can be identified with the complement of finitely many proper subvarieties
(those corresponding to the possible linear relations among projections to each
summand $V_j$ of subsets of the tuple, given by the vanishing of suitable
determinants). Hence $F'$ is a dense open subset of
$V^r$, which implies that $F\subset S(V)^r$ is open and dense. (2) follows from
Lemma \ref{lemma:thick-free}.
\end{proof}
Assume that $q$ is a sink.
Choose a diffeomorphism $h:V\to W_g^s(q)$ making the diagram
(\ref{eq:accio-linealitzada}) commutative, with $p$ replaced by $q$. Then
$h$ induces a diffeomorphism $S(V)\to S_g(q)$, which can be extended componentwise
to a diffeomorphism $S(V)^r\to S_g(q)^r$. Let
$$F_g(q)\subset S_g(q)^r$$
be the image of $F$ under the previous diffeomorphism. The set $F_g(q)$
is independent of the choice of $h$. Indeed, two different choices of
$h$ differ by precomposition with an element of $G$, and the action of
$G$ on $S(V)^r$ preserves $F$. If instead $q$ is a source, consider the
same definition with $W_g^s(q)$ replaced by $W_g^u(q)$.
Lemma \ref{lemma:set-F} and the estimate $\dim L_g(q)\leq\dim\End(T_qM)=n^2$
imply:
\begin{lemma}
\label{lemma:restriction-aA}
\begin{enumerate}
\item If $z\in F_g(q)$ and $\psi\in L_g(q)$ satisfies $\psi
z=z$ then $\psi =e^{t\xX_g(q)}$ for some $t\in\RR$.
\item The restriction of $\aA_g(q)$ to $F$ is a vector bundle
of rank $\dim G-1\leq n^2-1$.
\end{enumerate}
\end{lemma}
\section{The space of metrics $\mM_{1,K}$}
\label{s:mM-2-K}
We recall again that $\dim M>1$.
\subsection{Definition of $\mM_{1,K}$}
\label{ss:def-mM-2} Let $g\in\mM_0$. Let $p\in\EE$ and let $K$
be a natural number. Denote by $\|\cdot\|_g$ the operator norm
in $\End T_pM$ induced by $g$. Denote by $$L_{g,K}(p)\subset
L_g(p)$$ the subset consisting of those $\psi\in L_g(p)$ such
that $\|\psi\|_g\leq K$, $\|\psi^{-1}\|_g\leq K$, and
$$\|\psi-e^{t\xX_g(p)}\gamma\|_g\geq K^{-1} \text{ for every
$t\in\RR$ and $\gamma\in\Gamma_p$}.$$ Clearly $L_{g,K}(p)$ is compact.
Recall that the number
$r$ has been defined in (\ref{eq:def-r}) in Subsection
\ref{ss:singular-distributions} above. For any $\psi\in
L_g(p)$ we denote by
$$\alpha_\psi:S_g(p)^r\to S_g(p)^r$$
the map given by the action of $\psi$.
\begin{definition}
\label{def:mM_{1,K}} Let $p\in\EE$. Define $\mM_{1,K}(p)$ as
the set of all metrics $g\in\mM_0$ such that for any $\psi\in
L_{g,K}(p)$ there exist:
\begin{enumerate}
\item $q,q'\in\EE$ and $z\in\Omega_g(p,q)^r$ satisfying
$$\psi z\in\Omega_g(p,q')^r,\qquad \sigma_g^{p,q}z\in F_g(q),
\qquad \sigma_g^{p,q'}\psi z\in F_g(q'),$$
\item and a vector $u\in\aA_g(q)(\sigma_g^{p,q}z)\subset
T_{\sigma_g^{p,q}z}S_g(q)^r$ such that
\begin{equation}
\label{eq:xi-no-funciona}
D(\sigma_g^{p,q'}\circ \alpha_\psi\circ \sigma_g^{q,p})(u)\notin
\aA_g(q')(\sigma_g^{p,q'}\circ \alpha_\psi(z)).
\end{equation}
\end{enumerate}
\end{definition}
Here $D(\sigma_g^{p,q'}\circ \alpha_\psi\circ \sigma_g^{q,p})$
is the map between tangent spaces given by the differential of
$$\sigma_g^{p,q'}\circ \alpha_\psi\circ \sigma_g^{q,p}:
\sigma_g^{p,q}(\alpha_{\psi}^{-1}(\Omega_g(p,q')^r))\to \Omega_g(q',p)^r,$$
and $\sigma_g^{p,q}(\alpha_{\psi}^{-1}(\Omega_g(p,q')^r))$ is
an open subset of $\Omega_g(q,p)^r$ containing $\sigma_g^{p,q}z$.
Define finally:
$$\mM_{1,K}=\bigcap_{p\in\EE}\mM_{1,K}(p).$$
\subsection{$\mM_{1,K}$ is open and dense in $\mM_0$}
\begin{lemma}
\label{lemma:mM-2-K-open}
$\mM_{1,K}(p)$ is an open subset of $\mM_0$.
\end{lemma}
\begin{proof}
Let $g\in\mM_{1,K}(p)$. If $\psi\in L_{g,K}(p)\subset L_g(p)$
and $(q,q',z,u)$ satisfy (1) and (2) in Definition
\ref{def:mM_{1,K}}, then we say that $(q,q',z,u)$ {\it rules
out} $\psi$. The set of elements in $L_g(p)$ which are ruled
out by any given tuple $(q,q',z,u)$ is open. Since $L_{g,K}(p)$
is compact, it follows that there exist finitely many tuples
$(q_1,q'_1,z_1,u_1),\dots,(q_{\nu},q'_{\nu},z_{\nu},u_{\nu})$
and open subsets $V_1,\dots,V_{\nu}\subset L_g(p)$ such that
$L_{g,K}(p)\subset V_1\cup\dots\cup V_{\nu}$ and such that, for
every $j$, $(q_j,q'_j,z_j,u_j)$ rules out each element of
$V_j$. Choose subsets $V_j'\subset V_j$ with the property that
$L_{g,K}(p)\subset V_1'\cup\dots\cup V_{\nu}'$, and such that
$\ov{V_j'}$ is compact and contained in $V_j$ for each $j$.
Applying Lemma \ref{lemma:C1-families} to $g$ we deduce the
existence of a neighborhood $\uU\subset\mM_0$ of $g$ and
natural smooth identifications $S_g(q)\simeq S_{g'}(q)$ for
every $g'\in\uU$ and $q\in\EE$. Since in the remainder of the
proof we only consider metrics from $\uU$, we denote $S(q)$
instead of $S_{g'}(q)$. We also get for every $g'\in\uU$
natural isomorphisms of groups $L_g(q)\simeq L_{g'}(q)$ which
are compatible with both inclusions of $\Gamma_q$ in $L_g(q)$
and $L_{g'}(q)$ and with the identifications $S_{g}(q)\simeq
S_{g'}(q)$, and for this reason we write $L(q)$ instead of
$L_{g'}(q)$. Now we may view $V_1,\dots,V_{\nu}$ as subsets of
$L(p)$. Shrinking $\uU$ if necessary we may assume that
$L_{g',K}(p)\subset V_1'\cup\dots\cup V_{\nu}'$ for every
$g'\in\uU$.
The sets $F_{g'}(q)\subset S(q)^r$ are independent of $g'$ and
the distributions $\aA_{g'}(q)$ vary continuously with $g'$.
Similarly the subsets $\Omega_{g'}(p,q)\subset S(p)$ vary
continuously with $g'$, meaning that, for any
$z\in\Omega_g(p,q)$, if $g'$ is sufficiently close to $g$ then
$z\in\Omega_{g'}(p,q)$ as well. The maps
$\sigma_{g'}^{p,q}:\Omega_{g'}(p,q)\to\Omega_{g'}(q,p)$ also
depend continuously on $g'$ in the obvious sense.
For any $g'\in\uU$, any $q\in\EE$, and any $z\in F(q)$ we
define $P_{g'}:T_zS(q)^r\to\aA_{g'}(q)(z)$ to be the orthogonal
projection with respect to $g'$ (recall that $\aA_{g'}(q)(z)$
is a vector subspace of $T_zS(q)^r$).
The previous observations imply the following: for every $1\leq
j\leq \nu$ there exists a neighborhood of $g$,
$\uU_j\subset\uU$, such that for every $g'\in \uU_j$ and any
$\psi'\in \overline{V_j'}$, the tuple
$(q_j,q_j',z_j,P_{g'}(u_j))$ rules out $\psi'$. It then follows
that $\uU_1\cap\dots\cap\uU_{\nu}\subset\mM_{1,K}(p)$, and
hence $\mM_{1,K}(p)$ is open.
\end{proof}
\begin{lemma}
\label{lemma:mM-2-K-dense}
$\mM_{1,K}(p)$ is a dense subset of $\mM_0$.
\end{lemma}
\begin{proof}
We will use the following lemma, whose proof
is postponed to the Appendix.
\begin{lemma}
\label{lemma:variacions-metrica-diff} Suppose that
$g\in\mM^{\Gamma}$, $x\in M\setminus\Crit(f)$ and
$y=\Phi^g_t(x)$ for some nonzero $t$. Suppose that the
stabilizer $\Gamma_x$ is trivial. Let $v\in T_xM$ be a nonzero
vector, and let $w=D\Phi^g_t(x)(v)$. Given any $u\in T_v(TM)$
and any $\Gamma$-invariant open subset $U\subset M$ containing
$\Phi^g_{t'}(x)$ for some $t'\in (0,t)$, there exists some
$g'\in\cC^{\infty}(M,S^2T^*M)^{\Gamma}$ supported on $U$ such
that
$$\left.\frac{\partial}{\partial\epsilon}D\Phi^{g+\epsilon g'}_{-t}(w)\right|_{\epsilon=0}=u.$$
\end{lemma}
Fix some $g\in\mM_0$. We assume for concreteness throughout the
proof that $p$ is a sink. The case in which $p$ is a source
follows from the same arguments (or by replacing $f$ by $-f$).
We claim that the set of points in $S_g(p)$ with trivial
stabilizer in $\Gamma$ is open and dense. Indeed, on the one hand the points in
$W_g^s(p)$ with trivial stabilizer form an open and dense
subset, thanks to (2) in Lemma \ref{lemma:linearisation} and
the openness of $W_g^s(p)\subset M$; on the other hand, the
stabilizer of any $z\in W_g^s(p)$ is equal to the stabilizer of
the point in $S_g(p)$ it represents, because the action of
$\Gamma$ preserves $f$ and the restriction of $f$ to the fibers
of the projection $W_g^s(p)\setminus\{p\}\to S_g(p)$ is
injective.
Let $\psi\in L_{g,K}(p)$. Since $\psi$ does not belong to
$\Gamma_p\subset L_g(p)$ and since the union of the sets
$\{\Omega_g(p,q)\}_{q\in\OO}$ is an open and dense subset of
$S_g(p)$, there exist $q,q'\in\OO$ (not necessarily
distinct) and a point $c\in \Omega_g(p,q)$ satisfying $\psi
c\in\Omega_g(p,q')$ and $\psi c\neq \gamma c$ for every
$\gamma\in\Gamma_p$.
By the previous claim, we can also assume that the stabilizer of $c$ is trivial.
Choose a metric on the sphere $S_g(p)$. For any $y\in S_g(p)$ and
any $r>0$ denote by $\ov{B}_{S,r}(y)$ the closed ball in $S_g(p)$ of radius $r$ and center
$y$. Choose $\epsilon>0$ in such a way
that $\gamma \ov{B}_{S,\epsilon}(c) \cap \psi
\ov{B}_{S,\epsilon}(c)=\emptyset$ for every $\gamma\in\Gamma_p
$, and in such a way that $\ov{B}_{S,\epsilon}(c)\subset
\Omega_g(p,q)$ and $\psi \ov{B}_{S,\epsilon}(c)\subset
\Omega_g(p,q')$. Replacing $\epsilon$ by a smaller number if
necessary we can assume that the stabilizers of the points in
$\ov{B}_{S,\epsilon}(c)$ are all trivial.
Take $r$ distinct points $z_{\psi 1},\dots,z_{\psi
r}\in\ov{B}_{S,\epsilon/2}(c)$ and tangent vectors $u_{\psi
i}\in T_{\sigma_g^{p,q}(z_{\psi i})}S_g(q)$ for $i=1,\dots,r$.
Letting $z_{\psi}=(z_{\psi 1},\dots,z_{\psi r})$, we may assume
that $\sigma_g^{p,q}(z_{\psi})\in F_g(q)$ and
$\sigma_g^{p,q'}(\psi z_{\psi})\in F_g(q')$ because $F_g(q)$
(resp. $F_g(q')$) is dense in $S_g(q)^r$ (resp. $S_g(q')^r$).
Let $\oO_{\psi}$ be the set of all elements $\psi'\in L_g(p)$ satisfying
$\gamma \ov{B}_{S,\epsilon/2}(c)\cap
\psi'\ov{B}_{S,\epsilon/2}(c)=\emptyset$ for every
$\gamma\in\Gamma_p$ and $\sigma_g^{p,q'}(\psi' z_{\psi})\in
F_g(q')$.
Clearly $\oO_{\psi}\subset L_g(p)$ is open and contains $\psi$.
Denote the open ball in $M$ with center $x$ and radius $\delta$
by $B_{\delta}(x)$. Take real numbers $a<b<f(p)$ in such a way
that $[a,f(p))$ does not contain any critical value of $f$.
Take $\delta>0$ small enough so that $B_{\delta}(p)$ (resp.
$B_{\delta}(q)$, $B_{\delta}(q')$) is entirely contained in
$W_g^s(p)$ (resp. $W_g^u(q)$ and $W_g^u(q')$) and $\inf
f|_{B_{\delta}(p)}>b$ (resp. $\sup f|_{B_{\delta}(q)}<a$ and
$\sup f|_{B_{\delta}(q')}<a$).
Pick, for each $1\leq i\leq r$, points $x_i\in
B_{\delta}(p)\setminus\{p\}$ and $y_i\in
B_{\delta}(q)\setminus\{q\}$ both representing
$z_{\psi i}\in S_g(p)$,
and a tangent vector $v_i\in T_{x_i}M$ projecting under $\pi_q$ to $u_{\psi
i}\in T_{\sigma_g^{p,q}(z_{\psi i})}S_g(q)$. Define real numbers $t_1,\dots,t_r$ by
the condition that $y_i=\Phi_{t_i}^g(x_i)$, and let
$w_i=D\Phi_{t_i}^g(v_i)$.
Let $U\subset M$ be an open $\Gamma$-invariant subset contained
in $f^{-1}((a,b))\cap W_g^s(p)$ whose projection to $S_g(p)$
contains $z_{\psi 1},\dots,z_{\psi r}$ and is disjoint from
$\psi(\ov{B}_{S,\epsilon/2}(c))$. By Lemma
\ref{lemma:variacions-metrica-diff} one can pick a finite
dimensional vector subspace
$$G_{\psi}\subset\cC^{\infty}(M,S^2T^*M)^{\Gamma},$$
all of whose elements are supported in $U$, with the property that the linear map
\begin{equation}
\label{eq:surjective-map}
G_{\psi}\ni g'\mapsto
\left(\left.\frac{\partial}{\partial\epsilon}D\Phi^{g+\epsilon g'}_{-t_1}(w_1)\right|_{\epsilon=0},\dots,
\left.\frac{\partial}{\partial\epsilon}D\Phi^{g+\epsilon g'}_{-t_r}(w_r)\right|_{\epsilon=0}\right)
\in \bigoplus_{i=1}^r T_{v_i}(TM)
\end{equation}
is surjective.
Choose an open neighborhood $\oO_{\psi}'\subset\oO_{\psi}$ of $\psi$
whose closure in $\oO_{\psi}$ is compact.
Since $L_{g,K}(p)$ is compact there exist
$\psi_1,\dots,\psi_s\in L_{g,K}(p)$ such that
$L_{g,K}(p)\subset\oO'_{\psi_1}\cup\dots\cup\oO'_{\psi_s}$.
Denote $z_i=z_{\psi_i}\in S_g(p)^r$
and $u_i=u_{\psi_i}\in T(S_g(q)^r)$.
Let $G=\sum_iG_{\psi_i}$.
Let $\MMM$ be the set of all $g'\in\mM^{\Gamma}$ satisfying the following conditions:
\begin{enumerate}
\item $g'-g\in G$,
\item $\sigma_{g'}^{p,q}(z_i)\in F_{g'}(q)=F_g(q)$ for
every $i$,
\item $\sigma_{g'}^{p,q'}(\psi z_i)\in F_{g'}(q')
=F_{g}(q')$ for every $i$ and every $\psi\in
\ov{\oO_{\psi_i}'}$.
\end{enumerate}
To explain conditions (2) and (3), note that since
$g'-g\in\sum_iG_{\psi_i}$ and the elements in each $G_{\psi_i}$
are supported away from the critical points, we can canonically
identify $S_g(q)=S_{g'}(q)$ and $S_g(q')=S_{g'}(q')$, and
similarly $F_g(q)=F_{g'}(q)$ and $F_g(q')=F_{g'}(q')$.
Note that $\{g'-g\mid g'\in\MMM\}$ can be identified with an
open subset of $G$ containing $0$, so $\MMM$ has a natural
structure of (finite dimensional) smooth manifold.
Consider, for each $i\in\{1,\dots,s\}$,
$$\VV_i=\{(g',\psi',b)\in \MMM\times \oO_{\psi_i}\times T(S_g(q')^r)\mid b=
D(\sigma_{g'}^{p,q'}\circ \alpha_{\psi'}\circ \sigma_{g'}^{q,p})(u_i)\}$$
and its subvariety
$$\VV_i'=\{(g',\psi',b)\in \MMM\times \oO'_{\psi_i}\times T(S_g(q')^r)\mid b=
D(\sigma_{g'}^{p,q'}\circ \alpha_{\psi'}\circ \sigma_{g'}^{q,p})(u_i)\}.$$
Let also
$$\AA=\MMM\times L_g(p)\times \aA_{g}(q')|_{F_g(q')}.$$
Note that $\VV_i,\,\VV_i',\,\AA$ are subvarieties of $\MMM\times L_g(p)\times T(S_g(q')^r)$.
Let $N_i=\VV_i\cap\AA\cap(\{g\}\times \oO_{\psi_i}\times T(S_g(q')^r))$.
The definition of $G_{\psi_i}$ guarantees that
$\VV_i$ and $\AA$ intersect transversely along $N_i$.
Consequently, there exists a neighborhood of $N_i$,
$$\nN_i\subset \MMM\times \oO_{\psi_i}\times T(S_g(q')^r),$$
such that
the intersection $\VV_i\cap\AA\cap \nN_i$ is a smooth manifold whose dimension satisfies
$$d-\dim(\VV_i\cap\AA\cap \nN_i)=\min\{d+1,(d-\dim\VV_i)+(d-\dim\AA)\},$$
where
$$d=\dim\left(\MMM\times L_g(p)\times T(S_g(q')^r)\right).$$
This formula is consistent with
the convention that a set is empty if and only if its dimension is $-1$.
Consider the projection
$$\pi_i:\VV_i\cap\AA\to \MMM.$$
Since the closure of $\VV_i'$ inside $\VV_i$ is compact, there exists a neighborhood of $g$, $\MMM_i\subset\MMM$, with the property that
$\pi_i^{-1}(\MMM_i)\cap\VV_i'\subset\nN_i$.
Hence,
$\pi_i^{-1}(\MMM_i)\cap\VV_i'$ is a smooth manifold. Let
$$\MMM_i^{\reg}\subset\MMM_i$$
be the set of regular values of $\pi_i$ restricted to $\pi_i^{-1}(\MMM_i)$.
We claim that for every $g'\in\MMM_i^{\reg}$ we have
$\pi_i^{-1}(g')\cap\VV_i'=\emptyset.$
To prove the claim it suffices to check that $\dim \pi_i^{-1}(g')\cap\VV_i'<0$. Now,
\begin{align*}
\dim \pi_i^{-1}(g')\cap\VV_i' &= \dim(\VV_i\cap\AA\cap \nN_i) - \dim\MMM \\
&=d-\min\{d+1,(d-\dim\VV_i)+(d-\dim\AA)\}- \dim\MMM.
\end{align*}
If $(d-\dim\VV_i)+(d-\dim\AA)\geq d+1$ then this is clearly negative. So assume that instead
$(d-\dim\VV_i)+(d-\dim\AA)<d+1$. Since the projection of $\VV_i$ to $\MMM\times\oO_{\psi_i}$
is a diffeomorphism, we have
$$d-\dim\VV_i=d-(\dim \MMM\times\oO_{\psi_i})=d-(\dim\MMM\times L_g(p))=\dim T(S_g(q')^r)=2r(n-1).$$
On the other hand we have, using (2) in Lemma \ref{lemma:restriction-aA},
\begin{align*}
d-\dim\AA &= \dim T(S_g(q')^r)-\dim\aA_g(q')|_{F_g(q')} \\
&\geq 2r(n-1)-(r(n-1)+n^2-1)=r(n-1)-n^2+1.
\end{align*}
Combining both estimates we compute:
\begin{align*}
\dim \pi_i^{-1}(g')\cap\VV_i' &\leq d-2r(n-1)-r(n-1)+n^2-1-\dim\MMM \\
&=\dim\left(L_g(p)\times T(S_g(q')^r)\right)-3r(n-1)+n^2-1 \\
&\leq n^2+2r(n-1)-3r(n-1)+n^2-1 \\
&=2n^2-r(n-1)-1.
\end{align*}
Our choice of $r$, see (\ref{eq:def-r}), gives $r>2n^2/(n-1)$, hence
$r(n-1)>2n^2$ and $2n^2-r(n-1)-1<0$, so the claim is proved.
Finally, let
$$\MMM^{\reg}=\MMM_1^{\reg}\cap\dots\cap\MMM_s^{\reg}.$$
We claim that $\MMM^{\reg}\subset \mM_{1,K}(p)$, after possibly
shrinking $\MMM$ so that $L_{g',K}(p)\subset\oO'_{\psi_1}\cup\dots\cup\oO'_{\psi_s}$
for every $g'\in\MMM$ (which is possible arguing as in the proof of
Lemma \ref{lemma:mM-2-K-open}). Indeed,
suppose that $g'\in\MMM^{\reg}$ and let $\psi\in L_{g',K}(p)$ be
any element. Then $\psi\in\oO'_{\psi_i}$ for some $i$ and we
have, on the one hand,
$$z_i\in \Omega_{g'}(p,q)^r,\quad \psi z_i\in \Omega_{g'}(p,q')^r, \quad
\sigma_{g'}^{p,q}(z_i)\in F_{g'}(q),\quad
\sigma_{g'}^{p,q'}(\psi z_i)\in F_{g'}(q'),$$ and, on the other hand,
the fact that
$\pi_i^{-1}(g')\cap\VV_i'=\emptyset$ implies that
$$D(\sigma_{g'}^{p,q'}\circ \alpha_{\psi}\circ
\sigma_{g'}^{q,p})(u_i)\notin\aA_{g'}(q')(\sigma_{g'}^{p,q'}\circ
\alpha_{\psi}(z_i)).$$ This proves the claim.
Sard's theorem (see e.g. \cite[Chap 3, \S 1.3]{H}) implies that
$\MMM^{\reg}$ is residual in $\MMM$. Hence $\MMM^{\reg}$ is
dense in a neighborhood of $g$ in $\MMM$, so $\mM_{1,K}(p)$ is dense
in a neighborhood of $g$. Since $g\in\mM_0$ was arbitrary, this proves the lemma.
\end{proof}
Recall that $\mM_{1,K}=\bigcap_{p\in\EE}\mM_{1,K}(p)$. The
preceding two lemmas imply:
\begin{lemma}
\label{lemma:mM-2-K-open-dense}
$\mM_{1,K}$ is a dense and open subset of $\mM_0$.
\end{lemma}
\section{Proof of Theorem \ref{thm:main} for $\dim M>1$}
\label{s:proof-thm:main}
Continuing with the notation of the previous sections, let us define
$$\mM_f=\mM_0\cap\bigcap_{K\in\NN}\mM_{1,K}.$$
Since each of the sets appearing in the right hand
side of the equality is open and dense in $\mM^{\Gamma}$ (see Subsection \ref{ss:mM-0} and Lemma \ref{lemma:mM-2-K-open-dense}), $\mM_f$ is a residual subset of
$\mM^{\Gamma}$.
Fix some $g\in\mM_f$ and let $\phi\in\Aut(\nabla^gf)$. We are going to check that there exists
some $\gamma\in\Gamma$ and some $t\in\RR$ such that
$$\phi(x)=\Phi^g_t(\gamma\,x)$$
for every $x\in M$. This will prove Theorem \ref{thm:main}.
\begin{lemma}
\label{lemma:main-M-0} For each $p\in\II$ (resp. $p\in\OO$)
there exists some $\gamma\in\Gamma$ and some $t\in\RR$
such that $\phi(x)=\Phi^g_t(\gamma\,x)$ for every $x\in W_g^s(p)$
(resp. for every $x\in W_g^u(p)$).
\end{lemma}
\begin{proof}
Suppose that $p\in\II$ (the case $p\in\OO$ is dealt with in the same way
with the obvious modifications). By property (C2) in the definition of
$\mM_0$ (see Subsection \ref{ss:mM-0}) there exists some $\gamma\in\Gamma$
such that $\phi(p)=\gamma\,p$. Hence, up to composing $\phi$ with the action of $\gamma$,
we can (and do) suppose that $\phi(p)=p$.
Once we know that $\phi$ fixes $p$, we conclude that it
restricts to a diffeomorphism of $W_g^s(p)$ preserving
$\nabla^gf$, which we identify with an element $\phi_p\in
L_g(p)$ via the isomorphism (\ref{eq:Aut-L}). Next, let us
prove that the action of $\phi_p$ on $S_g(p)$ coincides with
the action of some $\gamma\in\Gamma_p$. If this is not the
case, then $\phi_p\in L_{g,K}(p)$ for some natural $K$ (see
Subsection \ref{ss:def-mM-2}). Since $g\in\mM_{1,K}$, it
follows that there exist sources $q,q'\in\OO$ and
$z\in\Omega(p,q)^r$ satisfying
$$
\phi_p z\in\Omega_g(p,q')^r,\qquad \sigma_g^{p,q}z\in F_g(q),
\qquad \sigma_g^{p,q'}\phi_p z\in F_g(q'),$$ and a vector
$u\in\aA_g(q)(\sigma_g^{p,q}(z))$ satisfying
\begin{equation}
\label{eq:contra-1}
D(\sigma_g^{p,q'}\circ\alpha_{\phi_p}\circ\sigma_g^{q,p})(u)\notin
\aA_g(q')(\sigma_g^{p,q'}\circ\alpha_{\phi_p}(z)).
\end{equation}
By the definition of $\aA_g(q)$, we may write
$u=\yY_{g,s}(\sigma_g^{p,q}z)$ for some $s\in\Lie L_g(q)$.
The fact that $z\in\Omega(p,q)^r$ and $\phi_p
z\in\Omega_g(p,q')^r$ implies that $\phi(q)=q'$, so $\phi$ maps
$W_g^u(q)$ diffeomorphically to $W_g^u(q')$; since $\phi$
preserves $\nabla^gf$, $\phi$ induces by conjugation an
isomorphism
$$\psi:L_g(q)\to L_g(q').$$
The corresponding map at the level of Lie algebras associates
to $s$ an element $\psi(s)\in\Lie L_g(q')$, and in fact we have
\begin{align}
D(\sigma_g^{p,q'}\circ\alpha_{\phi_p}\circ\sigma_g^{q,p})(u) &=
D(\sigma_g^{p,q'}\circ\alpha_{\phi_p}\circ\sigma_g^{q,p})(\yY_{g,s}(\sigma_g^{p,q}z)) \notag \\
&=\yY_{g,\psi(s)}(\sigma_g^{p,q'}(\phi_p z)).
\end{align}
The last expression manifestly belongs to
$\aA_g(q')(\sigma_g^{p,q'}\circ\alpha_{\phi_p}(z))$, and this
contradicts (\ref{eq:contra-1}). So we have proved that there
is some $\gamma\in\Gamma_p$ such that $\gamma^{-1}\phi_p$ acts
trivially on $S_g(p)$. Now statement (1) in Lemma
\ref{lemma:restriction-aA} implies that
$\gamma^{-1}\phi_p=e^{t\xX_g(p)}$ for some $t\in\RR$, so we may
write $\phi_p=\gamma e^{t\xX_g(p)}$ or, equivalently, that
$\phi_p(y)=\Phi_t^g(\gamma\,y)$ for every $y\in W_g^s(p)$.
\end{proof}
For any $p\in\EE$ we denote $W_g(p):=W_g^s(p)$ (resp.
$W_g(p):=W_g^u(p)$) if $p\in\II$ (resp. if $p\in\OO$). Now the
proof of the case $\dim M>1$ in Theorem \ref{thm:main} is
concluded as the proof of the main theorem in \cite{TV}. This
is done in two steps. We know there exist
$\{t_p\in\RR\}_{p\in\EE}$ and $\{\gamma_p\in\Gamma\}_{p\in\EE}$
such that $\phi(x)=\Phi_{t_p}^g(\gamma_p x)$ for every
$p\in\EE$ and $x\in W_g(p)$. The first step consists in proving
that if all $\gamma_p$'s are equal then all $t_p$'s are equal
as well (this is \cite[Lemma 5]{TV}). The second step consists
in reducing the general case to the one covered by the first
step. This is explained in the three paragraphs following
\cite[Lemma 5]{TV}.
State determination through measurements, also called \emph{Tomography}, is very important in Quantum Mechanics. It is also important in
Classical mechanics, but it is considerably more nuanced and involved in quantum theory. We recall here a very deep characterisation of
states in general given by Dirac\cite{diracstate}, which can be invoked even in the absence of any Hilbert space structure; according to him, \emph{states}
are the embodiment of the collection of all possible measurement outcomes. Fortunately, both in classical
as well as in quantum mechanics (described by finite-dimensional Hilbert spaces), it is
sufficient to collect measurement outcomes for a finite number of observables constituting a \emph{complete set}. For an N-dimensional quantum
system, the state is generically represented by a Hermitian, unit trace density matrix requiring $N^2-1$ real numbers for its complete specification. The outcomes of any observable, at least in the projective measurement scheme, are N eigenvalues along with N probabilities for them.
Only the probabilities carry information about the state and since their sum must equal unity, each measurement yields $N-1$ real parameters.
Thus, in order to obtain the required $N^2-1$ real parameters, one has to measure $N+1$ linearly
independent observables. In the case of qubits, for example, one needs three such measurements. These could be, say, measurements of $S_x,S_y,S_z$, or, equally well of ${\vec S}\cdot{\vec n}_i$ along three non-collinear directions ${\vec n}_i$.
While both sets are equally good in terms of state determination, they are not so, according to
Wootters and Fields \cite{woottersmub}, in terms of their accuracies in state determination. Errors are inevitable in measurements. One could
for example take the variance as a measure of this error (reduced by the usual statistical factor of $M^{-1/2}$, with $M$ being the number of
measurements).
Interpreting the expectation values of the complete set of operators $O^\alpha$ as \emph{coordinates} of the space of states, the measurement
errors can be taken to be the extents of an \emph{error parallelepiped} centred at the point representing the
state in which the measurements were carried out. Then, according to Wootters and Fields, a tomography is optimal when the geometrical volume
of the parallelepiped is the smallest.
Variance $\Delta O$ in general depends on both O as well as the state $\rho$ of the system. The volume element
additionally depends on the metric of the state space. Thus in general the error volume has a state dependence.
It is, however, quite meaningless to optimise the error volume for a given state. This is because the
optimality criterion fixes the choice of measurements (in the Wootters-Fields case,
the ${\vec n}_i$), while in tomography the state is a priori \emph{unknown}. Because of this, it
is impossible to choose the suitable observables beforehand. The best that can be done is to use the
\emph{expected errors} for random choices of the state. In other words, one should only work with state averaged error volumes. Wootters and Fields found it convenient to work with
\emph{state averaged
tomographic information} and they found that it is the greatest
when the ${\vec n}_i$ are mutually orthogonal as in $S_x,S_y,S_z$. The important property they highlight for this choice can be expressed
as, for $\alpha\,\ne\,\beta$,
\begin{equation}
\label{eq:qubit-mub}
|\langle\uparrow^\beta|\uparrow^\alpha\rangle|^2\,=
|\langle\uparrow^\beta|\downarrow^\alpha\rangle|^2\,=
|\langle\downarrow^\beta|\downarrow^\alpha\rangle|^2\,=
\,\frac{1}{2}
\end{equation}
Here $|\uparrow^\alpha\rangle,|\downarrow^\alpha\rangle$ refer to the eigenvectors of the observable $O^\alpha$. The three sets of
eigenvectors in this case are said to form \emph{Mutually Unbiased Bases} (MUB) in the sense that the eigenvectors of one operator have
equal probabilities of outcome if that operator is measured on any of the other eigenvectors. They were first introduced by Schwinger
\cite{schwingermub}
who called such bases \emph{complementary}. For the N-dimensional cases, eqn.(\ref{eq:qubit-mub}) generalises to
\begin{equation}
\label{eq:Ndim-mub}
|\langle\,k^\alpha|j^\beta\rangle|^2\,=\,\frac{1}{N}\quad\quad \alpha\,\ne\,\beta
\quad\quad\,|\langle\,k^\alpha|j^\alpha\rangle|^2\,=\,\delta_{jk}
\end{equation}
Thus, the central result of Wootters and Fields \cite{woottersmub} is that measurements with the complete set $O^\alpha$ will be optimal
if their eigenstates $|k^\alpha\rangle$ form a MUB. Mutually unbiased bases have subsequently been seen to play a fundamental role in
diverse contexts \cite{mubdiverse}. Adamson and Steinberg \cite{mubproof} have experimentally vindicated the Wootters-Fields result.
There has been a paradigm shift in quantum measurements with the so called \emph{Weak Measurements} proposed by Aharonov et al.\cite{aharorig}.
There are
many ways to qualitatively understand how weak measurements work; one such is to replace the narrow pointer states
used for the initial state of an apparatus in a von Neumann model of projective(also called strong) measurements with a broad and coherent superposition of such
narrow pointer states\cite{threeweakndh}. We prefer this description as it counters the commonly held view that weak measurements require a weaker \emph{interaction}
between the system and the apparatus as compared to the \emph{strong} measurements. It is the ratio of
the displacement of the mean pointer position of the apparatus to the width of the apparatus state that
is relevant. We also make a clear distinction between such
weak measurements and the so called
\emph{Weak Value Measurements} which are weak measurements followed by \emph{Post-selection} realised through strong measurements. It has
been pointed out that weak values are special cases of the Dirac-Kirkwood quasiprobabilities \cite{dirackirkwood}.
Quite remarkably, weak value measurements offer a radically different and novel means of tomography. While such \emph{weak tomography} has
been proposed for both pure as well as mixed states \cite{lutomo1,lutomo2,swutomo}, we shall restrict ourselves for the moment to
only pure states. Then, unlike the
standard tomography based on complete sets of observables, weak tomography yields complete information for pure states by making measurements
on the $N\,-1\,$ \emph{Projection operators} for \emph{one} given single observable. The $N\,-1\,$ independent \emph{complex} weak values are directly
measurable. This is in the sense that the post-measurement state of the apparatus can generically be described by a gaussian centred around
the weak value. But the widths of such gaussians are also very large.
Unlike a general density matrix which requires $N^2\,-\,1$ real parameters for its complete description, a pure state requires only $2N\,-\,2$
real, or $N\,-\,1$ complex parameters. Therefore, weak value measurements are naturally suited for pure state tomography in arbitrary
dimensions. The weak values can be taken as the complex coordinates for the state space. This turns out to be a stereographic projection of
the state space, which has been experimentally verified by Kobayashi, Nonaka and Shikano
for optical vortex-beams \cite{weakstereo}. For a detailed account of weak measurements and numerous references see \cite{nori,swutomo}.
In this work we have analysed the Wootters-Fields optimality criteria for weak value
tomography of pure states. We point out, without going into details, that other precision criteria have also been studied \cite{zhouhall,vallone,pangfisher}.
We state our main results right away and provide all relevant
details in the following. In standard tomography one can compare measurements done with different complete sets for their optimality.
Since in the case of weak tomography, the observable for measurement is fixed(we consider all
different projection operators as just different aspects of this single observable), one will have to compare different choices of
post-selection for optimality. Our principal result is that weak value measurements are optimal when the post-selected states are mutually
unbiased with respect to the eigenfunctions of the observable under measurement. We show this by explicit calculations for spin-1/2, spin-1 and
spin-3/2 cases. Then we prove the result for \emph{arbitrary} spins. Computing error volumes, and subsequently minimising them
requires the \emph{metric} on the space of states. The state space for pure states is actually a \emph{Projective Hilbert Space} and
these are known to be not only \emph{Complex Manifolds} but also the so called \emph{K\"ahler Manifolds}. Metrics of such spaces are completely
fixed in terms of a single scalar function called the \emph{K\"ahler Potential} \cite{kaehlerbook}. For the above three cases the metric components are calculated
explicitly and the respective K\"ahler potentials determined. The K\"ahler potential for the arbitrary spin case is then deduced by induction.
It should be mentioned that such special post-selection states were already used in \cite{lutomo1,lutomo2,swutomo}. In \cite{lutomo1} post-selected states were chosen that were not only mutually unbiased with respect to the eigenstates
of the measured observable, but whose relative phases were also constant.
This
ensured that the weak values were actually proportional to the pure state to be determined. In \cite{swutomo} too MUB's were used more as a
means of simplifying the analysis rather than as fundamental necessities. In fact tomography can be accomplished without any of these simplifications, i.e., when the post-selected states are not MUB's with respect to the eigenstates of the observables. In this paper we establish the fundamental result that it is
only when the post-selected states are MUB's that the measurements are the most precise, in the sense of the error volumes being the smallest.
\section{\normalsize Optimal Weak Tomography of a Spin-1/2 System}
In this section we give a review of the work done by Hari Dass \cite{threeweakndh} on optimal weak measurement of a spin-1/2 system.
If $|{\pm}\rangle$ are eigenvectors of, say, $S_z$,
any pure state can be written as
\begin{equation}
\label{eq:qubitpure}
|{\psi}\rangle\,=\alpha_+ |{+}\rangle\,+\alpha_- |{-}\rangle\,\quad |\alpha_+|^2\,+\,|\alpha_-|^2\,=\,1
\end{equation}
The weak values for the measurements of the projectors $\Pi_{\pm}\,=\,|{{\pm}}\rangle\langle{{\pm}}|$,
with post-selected state $|{b}\rangle$ are \cite{aharorig}, with
$b_{\pm}=\langle{b}|{{\pm}}\rangle$, and $\phi_0$ the \emph{phase} of $\langle{b}|\psi\rangle$,
\begin{equation}
\label{eq:weakvalues-qubit}
w_{\pm}\,=\,\frac{\langle{b}|{{\pm}}\rangle\langle{{\pm}}|{\psi}\rangle}{\langle{b}|{\psi}\rangle}\rightarrow
\alpha_{\pm}= \frac{w_{\pm}/b_{\pm}}{\sqrt{|\frac{w_+}{b_+}|^2+|\frac{w_-}{b_-}|^2}}\,e^{i\phi_0}\quad\quad w_+\,+\,w_-\,=\,1
\end{equation}
Thus exactly two independent real parameters are left for the
parametrisation of the qubit state. The density matrix for qubits(both pure and mixed) can be represented as
\begin{equation}\label{eq:rhoqubit}
\rho=\frac{I}{2}+\langle S_x \rangle \sigma_x+\langle S_y \rangle\sigma_y+\langle S_z \rangle\sigma_z
\end{equation}
It should be noted that density matrices can be represented in terms of complete sets of observables irrespective of whether the
actual tomography is based on that complete set or not. In terms of $\alpha_{\pm}$, the expectation values occurring above are given by
\begin{equation}
\label{eq:qubitexpectations}
\langle\,S_x\,\rangle\,=\, Re\,\alpha_+^*\,\alpha_-\quad
\langle\,S_y\,\rangle\,=\, Im\,\alpha_+^*\,\alpha_-\quad
\langle\,S_z\,\rangle\,=\, \frac{1}{2}\,\,(|\alpha_+|^2\,-\,|\alpha_-|^2)\,
\end{equation}
While $\alpha_{\pm}$ are merely parametrisations of the density matrices, tomography lies in their determination through
measurements. In weak tomography, $w_{\pm}$ are directly measured and are related to the parameters in the density
matrix, by eqn.(\ref{eq:weakvalues-qubit}).
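For concreteness, here is a minimal numerical sketch of this reconstruction (Python; the function name and the choice of a real, positive $b_-$ are ours, purely for illustration):
\begin{verbatim}
import numpy as np

def reconstruct_qubit(w_plus, b_plus, phi0=0.0):
    # w_- is fixed by the constraint w_+ + w_- = 1
    w_minus = 1.0 - w_plus
    b_minus = np.sqrt(1.0 - abs(b_plus)**2)   # assume b_- real and positive
    z = np.array([w_plus / b_plus, w_minus / b_minus])
    return z / np.linalg.norm(z) * np.exp(1j * phi0)

# round-trip check on a known state and post-selection
alpha = np.array([0.6, 0.8j]); b = np.array([1.0, 1.0]) / np.sqrt(2)
w_plus = b[0].conj() * alpha[0] / (b.conj() @ alpha)
print(reconstruct_qubit(w_plus, b[0]))        # recovers alpha up to a phase
\end{verbatim}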
We define the distance function(hence the metric) on the space of states(density matrices) by
\begin{equation}
\label{eq:metriconrho}
dl^2\,=2\,Tr\,d\rho d\rho
\end{equation}
This coincides with the Fubini-Study metric \cite{crell} for pure states, but differs from it for mixed states. Nevertheless it is an acceptable metric even
for mixed states with the natural isometries inherited from quantum theory.
We get the line element to be
\begin{equation}
\label{eq:qubitmetric}
dl^2\,=\,4\,[(d\langle S_x \rangle)^2\,+\,(d\langle S_y \rangle)^2\,+\,(d\langle S_z \rangle)^2]
\end{equation}
Let us now consider an intermediate step of rewriting $\alpha_{\pm}$ as
\begin{equation}
\label{eq:qubitzpm}
\alpha_{\pm}\,=\,\frac{z_{\pm}}{\sqrt{|z_+|^2\,+\,|z_-|^2}}
\end{equation}
Indeed eqn.(\ref{eq:weakvalues-qubit}) without the constraint $w_+\,+\,w_-\,=\,1$ is of this form with
$z_{\pm}\,=\,\frac{w_{\pm}}{b_{\pm}}\,e^{i\phi_0}$. Without any constraints, $z_{\pm}$
are {\bf four} real parameters, two too many for the qubit-state space. But scaling both z's by a common complex number $\lambda$
changes the $\alpha$'s by a common phase and hence the state, consequently the line element, remain
unchanged.
So the redundancy in the z's is two, which gets exactly
removed by the one complex constraint. So one can first work out the metric in terms of $z_{\pm}$ and then impose the constraint.
Since the line-element does not change, $g_{z_+z_+}=g_{z_-z_-}=0$. The
constraint, rewritten as
\begin{equation}
\label{eq:zconstraint}
w_+\,+\,w_-\,=\,1\rightarrow b_+z_+\,+\,b_-z_-=1\rightarrow b_+dz_+\,+\,b_-dz_-=0
\end{equation}
while reducing the number of parameters by 2 does not mix $dz_{\pm}$ with $d{\bar z}_{\pm}$.
Hence $g_{z_+z_+},g_{z_-z_-}$ continue to be zero even after
the constraint.
Explicit coordinates for such projective spaces are usually chosen by fixing one of the z's to be
a constant, say, unity. In the weak value coordinates case this is done by the weak value constraint
$w_+\,+\,w_-\,=\,1$.
Though algebraically more elaborate, this is a natural choice dictated by the constraint on weak values. But this choice
introduces explicit $b_i$ dependences into the otherwise purely geometrical entities such as the metric and the K\"ahler potential,
and the precise $b_i$ dependences are critical as they determine the optimality criteria.
The line element for qubit pure state space in weak-value coordinates can be worked out (after some tedious algebra) to be (in what follows we shall use $G_{ij}$ for the \emph{complex} metric and $G\,=\,det G_{ij}$ for its determinant, and likewise, $g_{ij}$ for
the metric in real coordinates and $g\,=\,det g_{ij}$ for the determinant of that metric):
\begin{equation}
\label{eq:qubitweakmetric}
dl^2\,=\,\frac{4}{|b_+|^2|b_-|^2\big ( |\frac{w_+}{b_+}|^2+|\frac{w_-}{b_-}|^2\big )^2}
dw_+d{\bar{w_+}}\,=\,G_{w_+{\bar w}_+}\,dw_+\,d{\bar w}_+
\end{equation}
This metric is \emph{conformal} (see also \cite{korotkov}, for other conformal features of qubit state-space). From our general arguments this is just a reflection of the projective nature of this state space. In the qubit case, with just one complex coordinate, this is all there is to it. For the higher-dimensional cases, all the complex metric components of
the type $G_{w_iw_j}$ have to vanish again.
The line element in terms of the real coordinates $x=Re w_+$ and $y=Im w_+$ is given by
\begin{equation}
\label{eq:qubitrealmetric}
dl^2\,=\,\frac{4|b_+|^2|b_-|^2(dx^2+dy^2)}{((x\,-\,|b_+|^2)^2+y^2+|b_+|^2|b_-|^2)^2}\,=\,g_{ij}\,dx^i\,dx^j
\end{equation}
It is well-known that these state-spaces are K\"ahler manifolds \cite{kaehlerbook} for which the nonvanishing components of the metric are given by
\begin{equation}
\label{eq:kahlerpot}
G_{w_i{\bar w}_j}\,=\,\partial_{w_i}\,\partial_{{\bar w}_j}\,K(\{w_i,{\bar w}_j\})
\end{equation}
$K$ is called the K\"ahler potential. It is straightforward to show that for the qubit case
($w_+\,+\,w_-\,=\,1$)
\begin{equation}
\label{eq:qubitkahler}
K_{qubit}(w_+,{\bar w}_+) \,=\,4\,\log\,\{|\frac{w_+}{b_+}|^2\,+\,|\frac{w_-}{b_-}|^2\}
\end{equation}
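Eqn.(\ref{eq:kahlerpot}) with this potential indeed reproduces eqn.(\ref{eq:qubitweakmetric}); a short symbolic check (a sympy sketch, treating $w_+$ and ${\bar w}_+$ as independent variables and writing $c_\pm\,=\,|b_\pm|^{-2}$; all names are ours):
\begin{verbatim}
import sympy as sp

w, wb = sp.symbols('w wb')                    # w_+ and its formal conjugate
cp, cm = sp.symbols('cp cm', positive=True)   # cp = 1/|b_+|^2, cm = 1/|b_-|^2

S = cp*w*wb + cm*(1 - w)*(1 - wb)             # |w_+/b_+|^2 + |w_-/b_-|^2
K = 4*sp.log(S)                               # the Kahler potential
G = sp.diff(K, w, wb)                         # G_{w_+ wbar_+}
print(sp.simplify(G - 4*cp*cm/S**2))          # prints 0: the conformal metric
\end{verbatim}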
A very important relation to notice is that between the K\"ahler potential $K$ and the determinant $g$ ($G$) of the metric:
\begin{equation}
\label{eq:KGrelnqubit}
g_2\,=\,det g_{ij}\,=\,\frac{16}{|b_+|^4|b_-|^4}\,e^{-K}
\end{equation}
Such a relationship is basic to K\"ahler metrics \cite{kaehlerbook}.
These
techniques and results are of wide generality.
The volume element $\sqrt{g_2}\,dxdy$ is then given by
\begin{equation}
\label{eq:qubitvol}
dV_2\,=\,\frac{4|b_+|^2|b_-|^2\,dxdy}{((x\,-\,|b_+|^2)^2+y^2+|b_+|^2|b_-|^2)^2}
\end{equation}
A consistency check is to calculate the total area, which should be \emph{independent} of the $b_i$. An important property of weak value measurements comes into play at this stage, i.e., the
weak values are \emph{unbounded}. Because of this the total area integral can be computed by shifting the x-variable and integrating
both coordinates over $(-\infty,\infty)$. The answer one gets is $A\,=\,4\pi$, which is the area of a sphere, and is indeed independent of
the $b_i$! Thus the weak value coordinates provide
a stereographic projection of the sphere onto a plane \cite{weakstereo,korotkov}.
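The area integral is also easily checked numerically; a quick quadrature sketch (Python with scipy; names ours):
\begin{verbatim}
import numpy as np
from scipy import integrate

def total_area(bp2):                          # bp2 = |b_+|^2
    bm2 = 1.0 - bp2
    f = lambda y, x: 4*bp2*bm2 / ((x - bp2)**2 + y**2 + bp2*bm2)**2
    area, _ = integrate.dblquad(f, -np.inf, np.inf, -np.inf, np.inf)
    return area

print(total_area(0.5), total_area(0.9), 4*np.pi)   # all ~ 12.566
\end{verbatim}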
Now we come to an evaluation of the error area. Again, subtle features of the weak value measurements become crucial. The measurement of
$Re\,w_+$ is done, in the original scheme \cite{aharorig,swutomo,nori}, by momentum measurements. The post-measurement apparatus
state in that case is a gaussian in momentum space centred around $2\,Re\,w$ but with a very large width, say, $\Delta$. On the other hand,
measurement of $Im\,w$ has to be done independently by position measurements. For the same initial apparatus state, the post-measurement
state is now a \emph{narrow} gaussian in position space of width $\simeq\,\frac{1}{\Delta}$, but centred around $2\,Im\,w\,\Delta^{-2}$. Thus the error in $Im\,w$
is also large, i.e., $\Delta$. Strictly speaking the variance in weak measurements is $\sqrt{\Delta^2\,+\,{\Delta_\psi\,S_z}^2}$ \cite{nori},
but the second term is totally negligible. For an ensemble of M measurements each, these are reduced by the usual $\sqrt{M}$ factor giving a
statistical error $\Delta_s$ that can be taken to be small enough to use the \emph{local} volume element( in what follows, we shall take the extents of the error volumes to be $2\Delta_s$ in each direction)
\begin{equation}
\label{eq:qubiterrorarea}
\Delta V_2^{err}\,=\,\frac{16|b_+|^2|b_-|^2\,\Delta_s^2}{((x\,-\,|b_+|^2)^2+y^2+|b_+|^2|b_-|^2)^2}
\end{equation}
It should be noted that when $\Delta_s$ is not small, one will have to express this as a rather complicated integral. But the most important
difference from the Wootters-Fields analysis is that the error $\Delta_s$
is state independent, whereas in their case the errors, being variances in a given state, depend both on the choice of the observable as well
as on the state. On the other hand, the metrics on state space in their cases are essentially flat and do not depend on the state. But in the
weak tomographies the metric is state-dependent.
One could have contemplated minimising $\Delta V_2^{err}$ itself wrt $|b_i|$ for a given state. Even for
a given state, changing $b_i$ would change $w_i$. In fact the relevant form of $\Delta V_2^{err}$ to
consider would be
\begin{equation}
\label{eq:errorvolpsifixed}
\Delta V_2^{err}\,=\,\frac{16\,\Delta_s^2}{|b_+|^2\,|b_-|^2}\,|\langle b|\psi\rangle|^4
\end{equation}
It is indeed possible to find the stationary points, i.e., $|b_{\pm}|\,=\,|\psi_{\mp}|$. But for the tomography of an unknown state, there is no way to post-select accordingly. Therefore in both cases one has to consider state averaged error volumes before
optimising them. The state average of any function $f(x,y)$ on the state space is given by
\begin{equation}
\label{eq:stateaverages}
\langle\,f(x,y)\,\rangle\,=\,\frac{\int\,\sqrt{g}\,dx\,dy\,f(x,y)}{\int\,\sqrt{g}\,dx\,dy}
\end{equation}
Carrying out the state averaging, one finds, for the qubit case,
\begin{equation}
\label{eq:stateaverrqubit}
\langle \Delta V_2^{err} \rangle=\frac{16\Delta_s^2}{3\,|b_+|^2|b_-|^2}
\end{equation}
This state averaged error volume takes its minimal value when $|b_+|^2=|b_-|^2=\frac{1}{2}$(recall that $|b_+|^2\,+\,|b_-|^2\,=\,1$)
i.e., the post-selected state should be mutually unbiased with respect to the eigenstates of the operator being measured.
\section{\normalsize Optimal weak tomography of Spin-1 System}
If $|{i}\rangle$ are eigenvectors of $S_z$ with $i\,=\,+1,0,-1$,
any spin-1 pure state can be written as
\begin{equation}
\label{eq:qutritstate}
|{\psi}\rangle=\sum_{i=1}^3\,\alpha_i\,|{i}\rangle
\end{equation}
The weak values for the projectors $\Pi_i\,=\,|{i}\rangle\langle{i}|\,$ with post-selected state $|{b}\rangle$ are given by
\begin{equation}
\label{eq:weakvalues-qutrit}
w_{i}\,=\,\frac{\langle{b}|{{i}}\rangle\langle{{i}}|{\psi}\rangle}{\langle{b}|{\psi}\rangle}\rightarrow
\alpha_{i}= \frac{w_{i}/b_{i}}{\sqrt{\sum_j\,|\frac{w_j}{b_j}|^2}}\,e^{i\phi_0}\quad\quad w_+\,+\,w_0\,+\,w_-\,=\,1
\end{equation}
We express the $3\times 3$ density matrix, requiring 8 real parameters for its full description, in terms of
the complete set $T_i\,=\,\frac{\Lambda_i}{2},i=1,..,8$, where the $\Lambda_i$ are the
Gell-Mann matrices satisfying the algebra
\begin{equation}
\label{eq:gellmannmatrices}
[\Lambda_i,\Lambda_j]\,=\,2\,i\,f_{ijk}\,\Lambda_k\quad\quad
\{\Lambda_i,\Lambda_j\}\,=\,\frac{4}{3}\,\delta_{ij}\,+2\,d_{ijk}\,\Lambda_k\,\rightarrow\,Tr \Lambda_i\,\Lambda_j\,=\,2\,\delta_{ij}
\end{equation}
The density matrix is then represented as
\begin{equation}
\label{eq:rhoqutrit}
\rho\,=\,\frac{I}{3}\,+\,\langle{T_i}\rangle\,\Lambda_i\quad\quad \langle{T_i}\rangle\,=\,Tr\,\rho\,T_i
\end{equation}
The metric on spin-1 state space(valid for both pure and mixed states) is, then,
\begin{equation}
\label{eq:qutritmetric}
dl^2\,=\,4\,\sum_i\,d\langle{T_i}\rangle\cdot\,d\langle{T_i}\rangle
\end{equation}
For the pure state of eqn.(\ref{eq:qutritstate}), one gets,
\begin{eqnarray}
\label{eq:Lambdaexpectation}
\langle{T_1}\rangle\,&+&\,i\,\langle{T_2}\rangle=\,\alpha_+^*\alpha_0;
\langle{T_4}\rangle\,+\,i\,\langle{T_5}\rangle\,=\,\alpha_+^*\alpha_-;
\langle{T_6}\rangle\,+\,i\,\langle{T_7}\rangle\,=\,\alpha_0^*\alpha_-;\nonumber\\
\langle{T_3}\rangle\,&=&\,\frac{1}{2}\,(|\alpha_+|^2\,-\,|\alpha_0|^2);
\langle{T_8}\rangle\,=\,\frac{1}{2\sqrt{3}}\,(|\alpha_+|^2\,+\,|\alpha_0|^2\,-\,2\,\,|\alpha_-|^2)
\end{eqnarray}
Resulting in the metric
\begin{eqnarray}
\label{eq:qutritmetric2}
D\cdot dl^2 &=&
\bigg [1-w_- -{\bar w}_{-} +\frac{w_{-} {\bar w}_{-}}{|b_-|^2}\bigg ]\frac{dw_+{d{\bar w}}_+}{|b_+|^2}+
\bigg [ \frac{{\bar w}_+{\bar b}_-}{{\bar b}_+}+\frac{w_-b_+}{b_-}-\frac{{\bar w}_+w_-}{{\bar b}_+b_-} \bigg ]
\frac{dw_+{d{\bar w}}_-}{b_+{\bar b}_-}\nonumber\\
&+&
\bigg [\frac{{\bar w}_-{\bar b}_+}{{\bar b}_-}+\frac{w_+b_-}{b_+}-\frac{{\bar w}_-w_+}{{\bar b}_-b_+} \bigg ]\frac{dw_-d{\bar w}_+}{b_-{\bar b}_+}+
\bigg [1-w_+ -{\bar w}_{+} +\frac{w_{+} {\bar w}_{+}}{|b_+|^2}\bigg ]\frac{dw_-d{\bar w}_-}{|b_-|^2}
\end{eqnarray}
where
\begin{equation}
\label{eq:qutritmetricdenom}
D\,=\,\frac{|b_0|^2}{4}\Big (\frac{|w_+|^2}{|b_+|^2}+\frac{|w_0|^2}{|b_0|^2}+\frac{|w_-|^2}{|b_-|^2}\Big )^2
\end{equation}
We have checked these results by choosing another complete set constructed out of the 4 MUB's given in
\cite{kurzynski}
(see also \cite{durtenglert}), taking two independent projectors from each of the 4 sets.
The metric is no longer conformal as in the qubit case but still satisfies $G_{w_iw_j}\,=\,0$.
It is
K\"ahler:
\begin{equation}\label{eq:qutritkahler}
G_{w_i{\bar w}_j}\,=\,\partial_{w_i}\,\partial_{{\bar w}_j}\,K_{qutrit}\quad\quad
K_{qutrit}\,=\,4\,\ln\,(\sum_i\,|\frac{w_i}{b_i}|^2)
\end{equation}
Changing to real coordinates $w_+=x_1+ix_2$, $w_-=x_3+ix_4$,
the determinant $g_3$ of the real metric is given by
\begin{equation}
\label{eq:detgqutrit}
g_3=\frac{256}{|b_+|^4|b_0|^4|b_-|^4 \Big (\frac{|w_+|^2}{|b_+|^2}+\frac{|w_0|^2}{|b_0|^2}+\frac{|w_-|^2}{|b_-|^2}\Big )^6}
\end{equation}
Once again there is a direct relation between $K_{qutrit}$ and $g_3$:
\begin{equation}
\label{eq:KGrelnqutrit}
g_3\,=\,det g_{ij}\,=\,\frac{256}{|b_+|^4|b_0|^4|b_-|^4}\,e^{-\frac{3K_{qutrit}}{2}}
\end{equation}
The volume element is given by $dV_3=\sqrt{g_3}\,dx_1dx_2dx_3dx_4$.
The total volume of the state space turns out to be
$V_3\,=\,\int dV_3\,=\,8\pi^2$.
Note that this is not the surface-volume of a sphere in 5 dimensions! The pure state space is a sphere only for the qubit case.
The error volume is computed along similar lines as in the qubit case and turns out to be
\begin{equation}
\label{eq:qutriterrorvol}
\Delta V_3^{err}= \frac{256 \Delta_s^4 }{|b_+|^2|b_0|^2|b_-|^2 \Big (\frac{|w_+|^2}{|b_+|^2}+\frac{|w_0|^2}{|b_0|^2}+\frac{|w_-|^2}{|b_-|^2}\Big )^4}
\end{equation}
The state averaged error volume is
\begin{equation}
\label{eq:averrorqutrit}
\langle \Delta V_3^{err}\rangle= \frac{128\Delta_s^4}{5(|b_+|^2|b_0|^2|b_-|^2)}\quad\quad |b_+|^2\,+\,|b_0|^2\,+\,|b_-|^2\,=\,1
\end{equation}
The above expression is minimum when $|b_+|^2=|b_0|^2=|b_-|^2=\frac{1}{3}$ and thus the measurement is optimal when the post-selected states
are mutually unbiased with respect to the eigenstates of the observable being measured.
\section{\normalsize Generalisation to arbitrary spins.}
The forms of eqns.(\ref{eq:qubitkahler},\ref{eq:qutritkahler}) strongly suggest
the K\"ahler potential for the general case
\begin{equation}
\label{eq:kahlerpotgen}
K_{N}\,=\,4\,\ln\,(\sum_{i=1}^N\,|\frac{w_i}{b_i}|^2)\quad\quad \sum_{i=1}^N\,w_i\,=\,1\quad\quad \sum_{i=1}^N\,|b_i|^2\,=\,1
\end{equation}
This can also be shown by
\emph{induction},
since when $w_N$ is set to zero one should recover the $(N-1)$-dimensional case, and the
K\"ahler potential is completely symmetric in the variables $z_i\,=\,\frac{w_i}{b_i}$.
We have explicitly verified this for the spin-3/2 (qudit) case
using the SU(4) Gell-Mann matrices
\cite{qudit}. In fact, even for the general case, on taking the observables to be half the SU(N)
Gell-Mann
matrices normalised according to $Tr\,\Lambda_i\,\Lambda_j\,=\,2\,\delta_{ij}$, the analogs of
eqns.(\ref{eq:rhoqubit},\ref{eq:qubitmetric},\ref{eq:rhoqutrit},\ref{eq:qutritmetric}) all turn out to be of identical forms.
Comparing with eqns.(\ref{eq:KGrelnqubit},\ref{eq:detgqutrit}), a suggestive generalisation to arbitrary spin case for the determinant $g_N$ of the metric in
real coordinates($w_i\,=\,x_i\,+\,i\,y_i$ for i=1,..,N-1) is
\begin{equation}
\label{eq:detgN}
g_N\,=\,\frac{4^{2N-2}}{\prod_{i=1}^N\,|b_i|^4}\,\frac{1}{(\sum_{i=1}^N\,|\frac{w_i}{b_i}|^2)^{2N}}
\end{equation}
Actually the K\"ahler potential has all the information one needs and it can be shown that eqn.(\ref{eq:detgN}) can be derived from
eqn.(\ref{eq:kahlerpotgen}). Once again we see a direct relation between $g_N,K_N$:
\begin{equation}
\label{eq:KGrelngen}
g_N\,=\,det g_{ij}\,=\,\frac{4^{2N-2}}{\prod_{i=1}^N\,|b_i|^4}\,e^{-\frac{NK_N}{2}}
\end{equation}
The volume element $dV_N$ in the general case is
\begin{equation}
\label{eq:dVgen}
d\,V_N\,=\,\frac{4^{N-1}}{\prod_{i=1}^N\,|b_i|^2}\,\frac{1}{(\sum_{i=1}^N\,|\frac{w_i}{b_i}|^2)^{N}}\,\prod_{i=1}^{N-1}\,dx_i\,dy_i
\end{equation}
The total volume of the state space is
\begin{equation}
\label{eq:Vgen}
V_N\,=\,\int\,dV_N
\end{equation}
The error volume in the general case is, likewise,
\begin{equation}
\label{eq:dVerrgen}
\Delta\,V_N^{err}\,=\,\frac{4^{N-1}(2\Delta_s)^{2N-2}}{\prod_{i=1}^N\,|b_i|^2}\,\frac{1}{(\sum_{i=1}^N\,|\frac{w_i}{b_i}|^2)^{N}}
\end{equation}
The state averaged error volume is calculated as before. In calculating both the total volume as well as state
averaged error volumes, one has to evaluate integrals of the type
\begin{equation}
\label{eq:weakintegrals}
I_M\,=\,\int\,\prod_{i=1}^{N-1}\,dx_i\,dy_i\,\frac{1}{(\sum_{i=1}^N\,|\frac{w_i}{b_i}|^2)^M}
\end{equation}
In the case of $V_N$, $M=N$, and in the case of $\langle V^{err}_N\rangle$, $M=2N$. Upon eliminating $w_N$ by $w_N\,=\,1-\sum_{i=1}^{N-1}\,w_i$, and
expressing in terms of real coordinates, the
denominator (without the power $M$) can be expressed as
\begin{equation}
\label{eq:genden}
D\,=\,x^T\cdot {\cal M}\cdot x\,+\,y^T\,{\cal M}\,y\,+\,c_N\,-\,2\,c_N\,{\tilde D}^T\cdot x
\end{equation}
with ${\cal M}$ a symmetric $(N-1)\times(N-1)$ matrix and ${\tilde D}$ an $(N-1)$-dimensional column vector given by
\begin{equation}
\label{eq:MtildeD}
{\cal M}_{ij}\,=\,c_i\,\delta_{ij}\,+\,c_N\quad\quad {\tilde D}^T\,=\,\{1,1,\ldots,1\}\quad\quad c_i\,=\,|b_i|^{-2}
\end{equation}
satisfying
\begin{equation}
\label{eq:weakintegral2}
det {\cal M}\,=\,\prod_{i=1}^N\,|b_i|^{-2}\quad\quad c_N\,-\,c_N^2\,{\tilde D}^T\cdot\,{\cal M}^{-1}\,{\tilde D}\,=\,1
\end{equation}
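Both identities in eqn.(\ref{eq:weakintegral2}) are easy to verify numerically; a sketch (Python, with a random choice of the $|b_i|^2$; names ours):
\begin{verbatim}
import numpy as np

N = 5
b2 = np.random.dirichlet(np.ones(N))      # random |b_i|^2 with unit sum
c = 1.0 / b2                              # c_i = |b_i|^{-2}
M = np.diag(c[:-1]) + c[-1]               # M_{ij} = c_i delta_ij + c_N
D = np.ones(N - 1)                        # tilde D = (1, ..., 1)^T

print(np.linalg.det(M), c.prod())                      # the two agree
print(c[-1] - c[-1]**2 * D @ np.linalg.solve(M, D))    # prints 1.0
\end{verbatim}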
Thus
\begin{equation}
\label{eq:weakintegrals2}
I_M\,=\,\prod_{i=1}^N\,|b_i|^2\,\Omega_{2N-2}\,\int\,dR\,\frac {R^{2N\,-\,3}}{(R^2\,+\,1)^M}
\end{equation}
where $\Omega_p\,=\,\frac{2\,\pi^{p/2}}{\Gamma(p/2)}$ is the solid angle in p dimensions.
On using the definite integrals
\begin{equation}
\label{eq:defints}
\int_0^\infty\,dx\,\frac{x^{N-2}}{(1\,+\,x)^N}\,=\,\frac{1}{N\,-\,1}\quad\quad
\int_0^\infty\,dx\,\frac{x^{N-2}}{(1\,+\,x)^{2N}}\,=\,\frac{\Gamma(N\,-\,1)\,\Gamma(N\,+\,1)}{\Gamma(2\,N)}
\end{equation}
the final results for $V_N$ and $\langle\Delta V_N^{err}\rangle$ are evaluated to be
\begin{equation}
\label{eq:resultsNdim}
V_N\,=\,\frac{4^{N\,-\,1}}{N\,-\,1}\,\frac{\pi^{N\,-\,1}}{\Gamma(N\,-\,1)}\quad\quad\,\langle \Delta V_N^{err} \rangle = \frac{4^{2\,N\,-2}\,\Delta_s^{2N-2}}{(|b_1|^2|b_2|^2...|b_N|^2)}\,\frac{\Gamma(N)\,\Gamma(N\,+\,1)}{\Gamma(2\,N)}
\end{equation}
For optimal weak measurement, we have to minimize this error volume. $\langle\,\Delta V_N^{err}\,\rangle$ is smallest when
$|b_1|^2=|b_2|^2=\dots=|b_N|^2=\frac{1}{N}$, and thus the measurement is optimal when the post-selected states are mutually unbiased with respect to the eigenstates of the observable being measured.
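The minimisation itself is an immediate consequence of the AM-GM inequality together with the constraint $\sum_{i=1}^N\,|b_i|^2\,=\,1$:
$$\prod_{i=1}^N\,|b_i|^2\,\leq\,\Big(\frac{1}{N}\,\sum_{i=1}^N\,|b_i|^2\Big)^N\,=\,\frac{1}{N^N}$$
with equality precisely when all the $|b_i|^2$ are equal, so that
$$\langle \Delta V_N^{err} \rangle\,\geq\,4^{2N-2}\,\Delta_s^{2N-2}\,N^N\,\frac{\Gamma(N)\,\Gamma(N+1)}{\Gamma(2N)}.$$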
\section{Proof based on information}
Now we show how to prove this by maximising information as done in \cite{woottersmub}. Following them,
the information is taken to be
\begin{equation}
\label{eq:weakinfo}
{\cal I}\,=\,-\ln\,\Delta V_N^{err}\,=\,-(2\,N\,-\,2)\,\ln\,2\,-\,(2\,N\,-\,2)\,\ln\,(2\Delta_s)\,+\,\sum_{i=1}^N\,\ln\,|b_i|^2\,+\,N\,\ln\,\sum_{i=1}^N\,|\frac{w_i}{b_i}|^2
\end{equation}
The state averaged information is, then,
\begin{equation}
\label{eq:stateavweakinfo}
\langle\,{\cal I}\,\rangle\,=\,-\,(2N\,-\,2)\,\ln\,2\,-\,(2\,N\,-\,2)\,\ln\,(2\Delta_s)\,+\frac{N\,2^{2\,N\,-\,2}\Omega_{2\,N\,-\,2}}{V_N}\,{\tilde I}_N\,+\,\sum_{i=1}^N\,\ln\,|b_i|^2
\end{equation}
Here ${\tilde I}_N$ stands for
\begin{equation}
\label{eq:tildeI}
{\tilde I}_N\,=\,\int\,dR\,\frac{2\,R^{2\,N\,-\,3}\,\ln R}{(R^2\,+\,1)^N}
\end{equation}
It is easy to see that maximising this is equivalent to minimising the state averaged error volume.
The main subtlety is that the log of the average need not equal the average of the log, but in our case their
difference is independent of $b_i$.
\section*{\normalsize Other weak tomography methods.}
As in any post-selection only a fraction $|\langle{b}|\psi\rangle|^2$ of the data is made use of, ways have been
suggested in \cite{lutomo1,swutomo,lutomo2} to overcome this. They consist in performing weak tomography with a
larger set of $|b_i\rangle$, even a complete set of such post-selected states. It is clear that our analyses can be
applied to each post-selected state and the general result that optimal measurements require $|b_i\rangle$ to be
mutually unbiased wrt eigenstates of the observable continues to hold.
In \cite{swutomo} it was shown that for weak tomography of a pure state it is sufficient to do measurements of a
single projector $A_{\phi}=|{\phi}\rangle\langle{\phi}|$ but with a full basis of $|b_j\rangle$,
where $|{\phi}\rangle$ is subject to
$\langle{b_j}|{\phi}\rangle\neq 0$, but otherwise arbitrary. For each $|b_j\rangle$ the measured (complex) weak values are
\begin{equation}
\label{eq:swuphiweakvalue}
W_j\,=\,\frac{\langle{b_j}|\phi\rangle\langle\phi|\psi\rangle}{\langle{b_j}|\psi\rangle}\quad \sum_j\,|\langle\phi|{b_j}\rangle|^2\,=\,1
\end{equation}
Unlike the weak values of the earlier tomography, here
\begin{equation}
\label{eq:weaksumne1}
\sum_j\,W_j\,\ne\,1
\end{equation}
However, we can introduce new complex values ${\tilde w}_j$
\begin{equation}
\label{eq:newphivalues}
{\tilde w}_j\,=\,\frac{|\langle\phi|{b_j}\rangle|^2}{W_j}\,=\,\frac{\langle\phi|b_j\rangle\langle{b_j}|\psi\rangle}{\langle\phi|\psi\rangle}
\end{equation}
Formally, ${\tilde w}_j$ can be thought of as the N complex weak values one would obtain by measuring the
projectors $|b_j\rangle\langle{b_j}|$ with $|\phi\rangle$ as the post-selected state.
The corollary of our results
would be that $|\phi\rangle$ should be mutually unbiased with respect to the basis $\{|b_j\rangle\}$. In other words, the
measurements are optimal when $|\langle\phi|b_j\rangle|^2\,=\,\frac{1}{N}$ for every j. Since for every system
there always exist at least two sets of MUB \cite{durtenglert}, such optimal measurements can be realised in many ways.
\acknowledgments
RK thanks TCIS-Hyderabad for the hospitality during which this work was carried out. NDH thanks Justin Dressel
for many enlightening discussions.
\section{Introduction}\label{sec:Introduction}
Recent work in machine learning and computer vision has demonstrated the advantages of integrating human attention with artificial neural network models, as studies show that many machine vision tasks, e.g., image segmentation, image captioning, object recognition, etc., can benefit from adding human visual attention \cite{liu2018visual}.
\par
\Revision{Visual attention is the ability inherited in biological visual systems to selectively recognize regions or features on scenes relevant to a specific task \cite{borji2012quantitative}, where ``bottom-up'' attention (also called exogenous attention) focuses on physical properties in the visual input that are salient and distinguishable, and ``top-down'' attention (also called endogenous attention) generally refers to mental strategies adopted by the visual systems to accomplish the intended visual tasks \cite{paneri2017top}. Early research on saliency prediction aimed to understand attention triggered by visual features and patterns, and thus ``bottom-up'' attention was the research focus \cite{borji2012quantitative}. More recent attempts, empowered by interdisciplinary efforts, have started to study both ``bottom-up'' and ``top-down'' attentions, and therefore the terms saliency prediction and visual attention prediction are used interchangeably \cite{sun2021visual}. In this paper, we use the term saliency prediction for the prediction of human visual attention allocations when viewing 2D images, containing both ``bottom-up'' and ``top-down'' attentions. A 2D heatmap is usually used to represent the human visual attention distribution. Note that the saliency prediction studied in this paper is different from a neural network's saliency/attention, which can be visualized through class activation mapping (CAM) by \citet{zhou2016learning} and other methods \cite{simonyan2013deep, fu2019multicam,selvaraju2016grad}. With the establishment of several benchmark datasets, data driven approaches demonstrated major advancements in saliency prediction (review in \citet{borji2019saliency} and \citet{wang2019revisiting}). However, saliency prediction for natural scenes is the primary focus, and more needs to be done in the medical domain. Hence, we intend to study the saliency prediction for examining chest X-ray (CXR) images, one of the most common radiology tasks worldwide.}
\par
CXR imaging is commonly used for the diagnosis of cardio and/or respiratory abnormalities; it is capable of identifying multiple conditions through a single shot, e.g., COVID-19, pneumonia, heart enlargement, etc. \cite{ccalli2021deep}. There exist multiple public CXR datasets \cite{irvin2019chexpert,wang2017chestx}. However, the creation of large comprehensive medical datasets is labour intensive, and requires significant medical resources which are usually scarce \cite{castro2020causality}. Consequently, medical datasets are rarely as abundant as those for non-medical fields. Thus, machine learning approaches applied to medical datasets need to address the problem of data scarcity. In this paper, we exploit multi-task learning as a solution.
\par
Multi-task learning is known for its inductive transfer characteristics that can drive strong representation learning and generalization of each component task \cite{caruana1997multitask}. Therefore, multi-task learning methods partially alleviate some of the major shortcomings of deep learning, e.g., high demands for data sufficiency and heavy computation loads \cite{crawshaw2020multi}. However, to apply multi-task learning methods successfully, challenges still exist, such as the proper selection of component tasks, the architecture of the network, the optimization of the training schemes and many others \cite{zhang2021survey,crawshaw2020multi}. This paper investigates the proper configuration of a multi-task learning model that can tackle visual saliency prediction and image classification simultaneously.
\par
The main contributions of this paper are: 1) development of a new deep convolutional neural network (DCNN) architecture for CXR image saliency prediction and classification based on UNet \cite{ronneberger2015u}, and 2) proposal of an optimized multi-task learning scheme that handles overfitting. Our method aims to outperform the state-of-the-art networks dedicated either for saliency prediction or image classification.
\section{Background}
\subsection{Saliency prediction with deep learning}
DCNN is the leading machine learning method applied to saliency prediction \cite{pan2016shallow, kummerer2016deepgaze, jia2020eml, kroner2020contextual}. Besides, transfer learning with pre-trained networks was observed to boost the performance of saliency prediction \cite{oyama2017fully, kummerer2016deepgaze, oyama2018influence}.
A majority of DCNN approaches are for natural scene saliency prediction, and so far, only a few have studied saliency prediction for medical images. In \citet{cai2018multi}, a generative adversarial network is used to predict an expert sonographer's saliency when performing standard fetal head plane detection on ultrasound (US) images. However, the saliency prediction is used as a secondary task to assist the primary detection task, and thus, the saliency prediction performance failed to outperform benchmark prediction methods in several key metrics. Similarly, in \citet{karargyris2021creation}, as a proof-of-concept study, the gaze data is used as an auxiliary task for CXR image classification, and the performance of saliency prediction is not reported in the study.
\subsection{CXR image classification with deep learning}
Public datasets for CXR images enabled data driven approaches for automatic image analysis and diagnosis \cite{serte2020deep,li2020accuracy}. Advancements in standardized image classification networks, i.e., ResNet \cite{he2016deep}, DenseNet \cite{huang2017densely}, and EfficientNet \cite{tan2019efficientnet}, facilitate CXR image classification. Yet, CXR image classification remains challenging, as CXR images are noisy, and may contain subtle features that are difficult to recognize even by experts \cite{ccalli2021deep,khan2021intelligent}.
\section{Multi-task Learning Method}
As stated in Section \ref{sec:Introduction}, component task selection, network architecture design, and training scheme are key factors for multi-task learning. We select the classification task together with the saliency prediction based on the fact that attention patterns are task specific \cite{karessli2017gaze}. \Revision{Radiologists are likely to exhibit distinguishable visual behaviors when different patient conditions are shown on CXR images \cite{mclaughlin2017computing}.} This section introduces our multi-task UNet (MT-UNet) architecture, and derives a better multi-task training scheme for saliency prediction and image classification.
\begin{figure}[t]
\floatconts
{fig:MTL_UNet}
{\caption{MT-UNet architecture. The solid blocks represent 3D tensors, $\mathbf{R}^{F\times H\times W}$, where $F$, $H$, and $W$ denote feature (channel), height and width dimensions, respectively. The solid circles represent 1D tensors. Arrows denote operations on the tensors. Numbers above some of the solid blocks stand for the number of features in the tensors.}}
{\includegraphics[width=0.80\linewidth]{MTL_UNet.pdf}}
\end{figure}
\subsection{Multi-task UNet}
\label{sec:Multi-task UNet}
\Revision{\figureref{fig:MTL_UNet} shows the architecture of the proposed MT-UNet. The network takes CXR images, $\bm{x}\in\mathbf{R}^{1\times H\times W}$, where $H$ and $W$ are image dimensions, as input, and produces two outputs, predicted saliency $\bm{y}_s\in\mathbf{R}^{1\times H\times W}$, and predicted classification $\bm{y}_c\in\mathbf{R}^{C}$, where $C$ is the number of classes.
As the ground truth for $\bm{y}_s$ is human visual attention distribution, represented as a 2D matrix whose elements are non-negative and sum to $1$, $\bm{y}_s$ is normalized by Softmax before output from MT-UNet. Softmax is also applied to $\bm{y}_c$ before output so that the classification outcome can be interpreted as class probability.} For the simplicity of notation, batch dimensions are neglected.
\par
\Revision{The proposed MT-UNet is derived from standard UNet architecture \cite{ronneberger2015u}. As a well-known image-to-image deep learning model, the UNet structure has been adopted for various tasks. For example, the UNet is appended with additional structures for visual scene understanding \cite{jha2020mt}, the features from the bottleneck (middle of the UNet) are extracted for image classification tasks \cite{karargyris2021creation}, and by combining UNet with Pyramid Net \cite{lin2017feature}, features at different depth are aggregated for enhanced segmentation \cite{moradi2019mfp}.
What's more, the encoder-decoder structure of UNet is utilized for multi-task learning, where the encoder structure is used to learn representative features, along with designated decoder structures or classification heads for image reconstruction, segmentation, and/or classification \cite{zhou2021multi, amyar2020multi}.
In our design, we apply classification heads (shaded in light green in \figureref{fig:MTL_UNet}), which are added not only to the bottleneck but also to the ending part of the UNet architecture.} These additional classification-specific structures aggregate middle and higher-level features for classification, exploiting features learnt at different depths. The classification heads perform global average pooling operations on the feature tensors, followed by concatenation, and two linear transforms (dense layers) with dropout (rate=$25\%$) in the middle to produce classification outcomes.
\Revision{The MT-UNet belongs to the hard parameter sharing structure in multi-task learning, where different tasks share the same trainable parameters before branched out to each tasks' specific parameters \cite{vandenhende2021multi}. Having more trainable parameters in task specific structures may improve the performance for that task at a cost of introducing additional parameters and increasing computational load \cite{crawshaw2020multi, vandenhende2021multi}. In our design, we wish to avoid heavy structures with lots of task specific parameters, and therefore, task specific structures are minimized. In \figureref{fig:MTL_UNet}, we use yellow and green shades to denote network structures dedicated for saliency prediction and classification, respectively.}
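As a concrete illustration, a PyTorch sketch of such a classification head follows (the dropout rate of $25\%$ is from the text; the hidden width and all names are our own assumptions):
\begin{verbatim}
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    # GAP the bottleneck and top feature maps, concatenate,
    # then two dense layers with dropout in the middle
    def __init__(self, f_bottleneck, f_top, n_classes=3, hidden=64):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(f_bottleneck + f_top, hidden),
            nn.Dropout(0.25),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, feat_bottleneck, feat_top):
        pooled = torch.cat([feat_bottleneck.mean(dim=(2, 3)),
                            feat_top.mean(dim=(2, 3))], dim=1)
        return self.fc(pooled).softmax(dim=1)   # class probabilities y_c
\end{verbatim}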
\subsection{Multi-task Training Scheme}
Balancing the losses between tasks in a multi-task training process has a direct impact on the training outcome \cite{vandenhende2021multi}. Several multi-task training schemes exist \cite{kendall2018multi, chen2018gradnorm, guo2018dynamic, sener2018multi}, among which we adopt the uncertainty based balancing scheme \cite{kendall2018multi} with the modification proposed in \cite{liebel2018auxiliary}. Hence, the loss function is:
\begin{equation}
\label{eq:loss_1}
\mathcal{\bm{L}} = \frac{1}{\sigma_s^2}L_s+\frac{1}{\sigma_c^2}L_c+\ln(\sigma_s+1)+\ln(\sigma_c+1)
\end{equation}
where $L_s$ and $L_c$ are loss values for $\bm{y}_s$ and $\bm{y}_c$, respectively; $\sigma_s>0$ and $\sigma_c>0$ are trainable scalars estimating the uncertainty of $L_s$ and $L_c$, respectively; $\sigma_s$ and $\sigma_c$ are initialized to $1$; $\ln(\sigma_s+1)$ and $\ln(\sigma_c+1)$ are regularizing terms to avoid arbitrary decrease of $\sigma_s$ and $\sigma_c$.
\Revision{With \equationref{eq:loss_1}, we know that the $\sigma$ values can dynamically weigh losses of different amplitudes during training, and a loss with low uncertainty (small $\sigma$ value) is prioritized in the training process. Note that $\mathcal{\bm{L}}>0$.}
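To make the bookkeeping concrete, a minimal PyTorch sketch of \equationref{eq:loss_1} follows (the parametrisation via $\log\sigma$, which keeps $\sigma>0$ and gives $\sigma=1$ at initialization, is our own choice):
\begin{verbatim}
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    # L = L_s/sigma_s^2 + L_c/sigma_c^2 + ln(sigma_s+1) + ln(sigma_c+1)
    def __init__(self):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(2))  # sigma_s = sigma_c = 1

    def forward(self, L_s, L_c):
        sigma = self.log_sigma.exp()
        return (L_s / sigma[0]**2 + L_c / sigma[1]**2
                + torch.log(sigma + 1.0).sum())
\end{verbatim}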
\Revision{Given $\bm{y}_s$ and $\bm{y}_c$ with their ground truth $\bar{\bm{y}}_s$ and $\bar{\bm{y}}_c$, respectively, the loss functions are:}
\begin{equation}
L_s = H(\bar{\bm{y}}_s, {\bm{y}}_s)-H(\bar{\bm{y}}_s),
\label{eq:L_s}
\end{equation}
\begin{equation}
L_c = H(\bar{\bm{y}}_c, {\bm{y}}_c) \quad \quad \quad \quad
\label{eq:L_c}
\end{equation}
\Revision{where $H(Q,R)=-\sum_{i=1}^nQ_i\ln(R_i)$ stands for the cross entropy of two discrete distributions $Q$ and $R$, both with $n$ elements; $H(Q)=H(Q,Q)$ stands for the entropy, or self cross entropy, of a discrete distribution $Q$. $L_s$ is the Kullback-Leibler divergence (KLD) loss, and $L_c$ is the cross-entropy loss.}
\Revision{
By observing \equationref{eq:L_s} and \equationref{eq:L_c}, we know that only the cross entropy terms, $H(\cdot, \cdot)$, generate gradient when updating network parameters, as the term $-H(\bar{\bm{y}}_s)$ in $L_s$ is a constant and has zero gradient. Therefore, we extend the method in \cite{kendall2018multi}, and use $\frac{1}{\sigma^2}$ to scale a KLD loss ($L_s$) as that for a cross-entropy loss ($L_c$).}
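In code, the two losses of \equationref{eq:L_s,eq:L_c} could look as follows (a sketch; the $\epsilon$ guard against $\log 0$ is ours, and both saliency maps are assumed to be per-image distributions over pixels):
\begin{verbatim}
import torch

def loss_s(y_s, y_bar_s, eps=1e-8):
    # KLD between predicted and ground-truth attention distributions
    p = y_s.flatten(1)
    q = y_bar_s.flatten(1)
    return (q * (torch.log(q + eps) - torch.log(p + eps))).sum(1).mean()

def loss_c(y_c, y_bar_c, eps=1e-8):
    # cross entropy; y_c is already softmax-normalised
    return -(y_bar_c * torch.log(y_c + eps)).sum(1).mean()
\end{verbatim}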
\par
\Revision{
Although the training scheme in \equationref{eq:loss_1} yields many successful applications, overfitting for multi-task networks still can jeopardize the training process, especially for small datasets \cite{wang2020makes}. Multiple factors can cause overfitting, among which the learning rate, $r>0$, shows the most significant impact \cite{li2019research}.
When training MT-UNet, $r$ is moderated by several factors. The first factor is the use of an optimizer. Many optimizers, e.g., Adam \cite{kingma2014adam} and RMSProp \cite{tieleman2012lecture}, deploy the momentum mechanism or its variants, which can adaptively adjust the effective learning rate, $r_e$, during training. As a learning rate scheduler is often used for more efficient training, it is the second factor to influence $r$. The influence on $r$ from a learning rate scheduler can be adaptive, e.g., reduce learning rate on plateau (RLRP), or more arbitrary, e.g., cosine annealing with warm restarts \cite{loshchilov2016sgdr}. By observing \equationref{eq:loss_1}, we know that an uncertainty estimator $\sigma$ for a loss $L$ also serves as a learning rate adaptor for $L$, which is the third factor. More specifically, given a loss value $L$ with learning rate $r$, the effective learning rate for parameters with a scaled loss value $\frac{L}{\sigma^2}$ is $\frac{r}{\sigma^2}$.}
\par
\Revision{
Decreasing $r$ upon overfitting can alleviate its effects \cite{smith2018disciplined, duffner2007online}, but \equationref{eq:loss_1} leads to an increased learning rate upon overfitting, further worsening the training process. This happens because the training loss decreases when overfitting occurs, reducing its variance at the same time. Thus, $\sigma$ decreases accordingly, which increases the effective learning rate, thus creating a vicious circle of overfitting. A more detailed mathematical derivation is presented in Appendix \ref{app:math}. This phenomenon can be observed in \figureref{fig:training}, where changes of losses and $\sigma$ values during a training process following \equationref{eq:loss_1} are presented.
We can see from \figureref{fig:training_losses} that at epoch $40$, after an initial decrease in both the training and validation losses, the training loss starts to decrease at an accelerating rate while the validation loss starts to grow, which is a vicious circle of overfitting. An RLRP scheduler can halt the vicious circle by resetting the model parameters to a former epoch and reducing $r$. Yet, even with reduced $r$, a vicious circle of overfitting can re-emerge in later epochs.
}
\begin{figure}[thbp]
\centering
\subfigure[Losses]{
\includegraphics[width=0.35\textwidth]{training_losses.eps}
\label{fig:training_losses}
}
\hfill
\subfigure[$\sigma$ values]{
\includegraphics[width=0.35\textwidth]{training_sigmas.eps}
\label{fig:training_sigmas}
}
\caption{Training process visualization with \equationref{eq:loss_1}}
\label{fig:training}
\end{figure}
To alleviate overfitting, we propose the use of the following equations to replace \equationref{eq:loss_1}:
\begin{equation}
\mathcal{\bm{L}} = \frac{1}{\sigma_s^2}L_s+L_c+\ln(\sigma_s+1),
\label{eq:loss_2}
\end{equation}
\begin{equation}
\mathcal{\bm{L}} = L_s+\frac{1}{\sigma_c^2}L_c+\ln(\sigma_c+1).
\label{eq:loss_3}
\end{equation}
The essence of \equationref{eq:loss_2,eq:loss_3} is to fix the uncertainty term for one loss in \equationref{eq:loss_1} to $1$, so that the flexibility in changing effective learning rate is reduced. With the uncertainty term fixed for one component loss, \equationref{eq:loss_2,eq:loss_3} demonstrate the ability to alleviate overfitting and stabilize the training processing. It is worth noting that \equationref{eq:loss_2,eq:loss_3} cannot be used interchangeably.
\Revision{We need to test both equations to check which achieves better performances since, depending on the dataset and training process, overfitting of different severity can occur in all component tasks.}
In this study, the training process with \equationref{eq:loss_3} achieves the best performance. An ablation study of this method is presented in \sectionref{sec:Experiment and Result}.
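Relative to the sketch of \equationref{eq:loss_1} above, the change for MTLS3 is a single line, with $\sigma_s$ frozen at $1$:
\begin{verbatim}
import torch

def mtls3_loss(L_s, L_c, log_sigma_c):
    sigma_c = log_sigma_c.exp()      # only sigma_c remains trainable
    return L_s + L_c / sigma_c**2 + torch.log(sigma_c + 1.0)
\end{verbatim}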
\section{Dataset and Evaluation Methods}
We use the ``chest X-ray dataset with eye-tracking and report dictation'' \cite{karargyris2021creation} shared via PhysioNet \cite{moody2000physionet} in this study. The dataset was derived from the MIMIC-CXR dataset \cite{johnson2019mimic, johnson2019mimic2} with additional gaze tracking and dictation from an expert radiologist. $1083$ CXR images are included in the dataset, and accompanying each image, there are tracked gaze data; a diagnostic label (either normal, pneumonia, or enlarged heart); segmentation of lungs, mediastinum, and aortic knob; and radiologist's audio with dictation. The CXR images in the dataset are in resolutions of various sizes, i.e., $3056\times2044$, and we down sample and/or pad each image to $640\times416$. A GP3 gaze tracker by Gazepoint (Vancouver, Canada) was used for the collection of gaze data. The tracker has an accuracy of around \SI{1}{\degree} of visual angle, and has a \SI{60}{\Hz} sampling rate \cite{zhu2019novel}.
\par
Several metrics have been used for the evaluation of saliency prediction performances, and they can be classified into location-based metrics and distribution-based metrics \cite{bylinskii2018different}. Due to the tracking inaccuracy of the GP3 gaze tracker, location-based metrics are not suited for this study. Therefore, in this paper, we follow the suggestions in \cite{bylinskii2018different} and use KLD for performance evaluation. We also include histogram similarity (HS) and Pearson's correlation coefficient (PCC) for reference purposes.
\Revision{For the evaluation of classification performances, we use the area under curve (AUC) metrics for multi-class classifications \cite{hand2001simple, fawcett2006introduction}, and the classification accuracy (ACC) metrics. We also include the AUC metrics for each class: normal, enlarged heart, and pneumonia, denoted as AUC-Y1, AUC-Y2, and AUC-Y3, respectively.}
\Revision{
In this paper, all metric values are presented as median statistics followed by standard deviations behind the $\pm$ sign. Metrics with an up-pointing arrow $\uparrow$ indicate that greater values reflect better performances, and vice versa. Best metrics are emboldened.}
\par
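For reference, the three saliency metrics can be computed as below (a numpy sketch; we take HS to be the usual histogram intersection, and all names are ours):
\begin{verbatim}
import numpy as np

def saliency_metrics(pred, gt, eps=1e-8):
    p = pred.ravel() / pred.sum()    # predicted attention distribution
    q = gt.ravel() / gt.sum()        # ground-truth attention distribution
    kld = np.sum(q * np.log((q + eps) / (p + eps)))   # lower is better
    pcc = np.corrcoef(p, q)[0, 1]                     # higher is better
    hs = np.minimum(p, q).sum()                       # higher is better
    return kld, pcc, hs
\end{verbatim}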
\section{Experiments and Result}
\label{sec:Experiment and Result}
\subsection{Benchmark comparison}
\label{sec:Benchmark comparison}
In this subsection, we compare the performance of MT-UNet with benchmark networks for CXR image classification and saliency prediction. Detailed training settings are presented in Appendix \ref{app:Training settings}.
\par
For CXR image classification, the benchmark networks are chosen from the top performing networks for CXR image classification examined in \cite{el2021automated}, which are ResNet50 \cite{he2016deep} and Inception-ResNet v2 (abbreviated as IRNetV2 in this paper) \cite{szegedy2017inception}. Following \citet{karargyris2021creation}, we also include a state-of-the-art general purpose classification network: EfficientNetV2-S (abbreviated as EffNetV2-S) \cite{tan2021efficientnetv2} for comparison. \Revision{For completeness, classification using a standard UNet with an additional classification head (denoted as UNetC) is included.} Results are presented in \tableref{tab:compare_classification}, and we can see that MT-UNet outperforms the other classification networks.
\par
For CXR image saliency prediction, comparison was conducted with $3$ state-of-the-art saliency prediction models, which are SimpleNet \cite{reddy2020tidying}, MSINet \cite{kroner2020contextual} and VGGSSM \cite{cao2020aggregated}. \Revision{Saliency prediction using standard UNet (denoted as UNetS) is also included for reference.} Table \ref{tab:compare_saliency_result} shows the result, where MT-UNet outperforms the rest. \Revision{Visual comparisons for saliency prediction results are presented through \tableref{tab:cam visual} in Appendix \ref{app:Performance evaluation}.}
\begin{table}[thbp]
\centering
\begin{tabular}{@{}c|ccccc@{}}
\toprule
Metrics &MT-UNet &UNetC &EffNetv2-S &IRNetv2 &ResNet50 \\ \midrule
ACC $\uparrow$ &$\bm{0.670}\pm0.018$ &$0.593\pm0.009$ &$0.640\pm0.037$ &$0.640\pm0.017$ &$0.613\pm0.013$ \\
AUC $\uparrow$ &$\bm{0.843}\pm0.012$ &$0.780\pm0.006$ &$0.826\pm0.015$ &$0.824\pm0.014$ &$0.816\pm0.010$ \\
AUC-Y1 $\uparrow$ &$\bm{0.864}\pm0.014$ &$0.841\pm0.007$ &$0.852\pm0.013$ &$0.862\pm0.016$ &$0.845\pm0.015$ \\
AUC-Y2 $\uparrow$ &$\bm{0.912}\pm0.008$ &$0.840\pm0.003$ &$0.901\pm0.015$ &$0.897\pm0.011$ &$0.896\pm0.015$ \\
AUC-Y3 $\uparrow$ &$\bm{0.711}\pm0.027$ &$0.597\pm0.018$ &$0.653\pm0.017$ &$0.633\pm0.036$ &$0.622\pm0.022$ \\
\bottomrule
\end{tabular}
\caption{\Revision{Performance comparison between classification models.}}
\label{tab:compare_classification}
\end{table}
\begin{table}[thbp]
\centering
\begin{tabular}{@{}c|ccccc@{}}
\toprule
Metrics &MT-UNet &UNetS &SimpleNet &MSINet &VGGSSM \\ \midrule
KLD $\downarrow$ &$\bm{0.726}\pm0.004$ &$0.750\pm0.002$ &$0.758\pm0.009$ &$0.748\pm0.003$ &$0.743\pm0.007$ \\
PCC $\uparrow$ &$\bm{0.569}\pm0.004$ &$0.552\pm0.002$ &$0.545\pm0.008$ &$0.557\pm0.002$ &$0.561\pm0.005$ \\
HS $\uparrow$ &$\bm{0.548}\pm0.001$ &$0.540\pm0.001$ &$0.541\pm0.002$ &$0.545\pm0.001$ &$0.545\pm0.003$ \\ \bottomrule
\end{tabular}
\caption{\Revision{Performance comparison between saliency prediction models.}}
\label{tab:compare_saliency_result}
\end{table}
\subsection{Ablation study}
To validate the modified multi-task learning scheme, an ablation study is performed. The multi-task learning schemes following \equationref{eq:loss_1,eq:loss_2,eq:loss_3} are compared and are denoted as MTLS1, MTLS2, and MTLS3, respectively. Please note that the best-performing MTLS3 is used for the benchmark comparison in \sectionref{sec:Benchmark comparison}. \figureref{fig:compare_scheme} in Appendix \ref{app:Performance evaluation} shows the training processes for MTLS2 and MTLS3. From \figureref{fig:training,fig:compare_scheme}, we can see that overfitting occurs for both MTLS1 and MTLS2, but it is reduced for MTLS3. The training processes shown in \figureref{fig:training,fig:compare_scheme} use optimized hyper-parameters. The resulting performances are compared in \tableref{tab:compare_scheme_result} in Appendix \ref{app:Performance evaluation}. We can see that MTLS3 outperforms the other learning schemes in both classification and saliency prediction.
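Since \equationref{eq:loss_1,eq:loss_2,eq:loss_3} are not reproduced in this section, the following PyTorch-style sketch shows only the general form of a fixed-weight multi-task objective of the kind being compared; the weighting parameter \texttt{alpha} and the specific loss terms are illustrative assumptions and do not correspond to any particular MTLS variant.

\begin{verbatim}
import torch
import torch.nn.functional as F

def multitask_loss(logits, labels, sal_pred, sal_gt, alpha=0.5):
    """Illustrative fixed-weight objective: weighted sum of the
    classification cross-entropy and a saliency KLD term."""
    cls_loss = F.cross_entropy(logits, labels)
    # KL(gt || pred) over flattened, softmax-normalized saliency maps
    log_p = F.log_softmax(sal_pred.flatten(1), dim=1)
    g = F.softmax(sal_gt.flatten(1), dim=1)
    sal_loss = F.kl_div(log_p, g, reduction="batchmean")
    return alpha * cls_loss + (1.0 - alpha) * sal_loss
\end{verbatim}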
\par
\Revision{
To validate the effects of using a classification head that aggregates features from different depths, we create ablated versions of MT-UNet that use features from either the bottleneck or the top layer of the MT-UNet for classification, denoted as MT-UNetB and MT-UNetT, respectively.
Results are presented in \tableref{tab:compare_scheme_result} in Appendix \ref{app:Performance evaluation}.
We can see that MT-UNet generally performs better than MT-UNetT and MT-UNetB.}
\section{Discussion}
In this paper, we build the MT-UNet model and propose a further optimized multi-task learning scheme for saliency prediction and disease classification with CXR images. While a multi-task learning model has the potential to enhance the performance of all component tasks, a proper training scheme is one of the key factors in fully realizing this potential. As shown in \tableref{tab:compare_scheme_result}, MT-UNet with the standard multi-task learning scheme may barely outperform existing models for saliency prediction or image classification.
\par
\Revision{
Several directions of future work could improve this study. The first would be the expansion of gaze-tracking datasets for medical images. So far, only $1083$ CXR images are publicly available with a radiologist's gaze data, limiting extensive studies of gaze-tracking-assisted machine learning methods in the medical field.
Also, more dedicated studies on multi-task learning methods, especially for small datasets, can be helpful for medical machine learning tasks. Overfitting and data deficiency are lingering challenges encountered by many studies. A better multi-task learning method may handle these challenges more easily.}
\midlacknowledgments{We would like to thank physionet.org for providing the open platform for dataset sharing, and we would also like to express our gratitude to the contributors who collected, organized, and published the multi-modal chest X-ray dataset for this research. This research is supported by Compute Canada and the Natural Sciences and Engineering Research Council of Canada (NSERC).}
\section{Introduction}
Baker and Norine defined linear equivalence and ranks of divisors on graphs and showed that they have many properties similar to those of linear equivalence and dimensions of complete linear series on algebraic curves. In particular, they satisfy analogues of the Riemann-Roch and Abel-Jacobi Theorems \cite{BakerNorine07}. These definitions and results extend to metric graphs \cite{GathmannKerber08, MikhalkinZharkov08} and the rank of a divisor on an ordinary graph is equal to its rank on the associated metric graph in which all edges have length 1 \cite{HladkyKralNorine10}. We follow the usual convention from tropical geometry by referring to the first Betti number of a graph as its genus.
Let $\Gamma$ be a metric graph of genus $g$. The group $\Pic_0(\Gamma)$ is a real torus of dimension $g$ parametrizing equivalence classes of divisors of degree zero on $\Gamma$, and $\Pic_d(\Gamma)$ is the torsor over $\Pic_0(\Gamma)$ that parametrizes equivalence classes of divisors of degree $d$. The \textbf{Brill-Noether locus}
\[
W^r_d(\Gamma) \subset \Pic_d(\Gamma)
\]
is the subset parametrizing divisor classes of rank at least $r$. This is a closed polyhedral subset of $\Pic_d(\Gamma)$; see Section~\ref{sec:BNloci}. We consider the dimension of $W^r_d$ as a function on the moduli space of metric graphs of genus $g$, as studied by Culler and Vogtmann \cite{CullerVogtmann86}. This corresponds to the open subset of the moduli space of tropical curves studied in \cite{BrannettiMeloViviani11} where all vertices are marked with zero. See also \cite{GathmannKerberMarkwig09, Kozlov09, Caporaso11, Chan11} for other approaches to moduli of tropical curves and metric graphs.
\begin{thm} \label{thm:notsemicont}
The function taking a metric graph $\Gamma$ to $\dim W^r_d(\Gamma)$ is not upper semicontinuous on the moduli space of metric graphs of genus 4.
\end{thm}
\noindent This differs from the analogous situation in algebraic geometry. There is a universal Brill-Noether locus $W^r_d$ in the universal Jacobian, proper over the moduli space $M_g$ of smooth projective algebraic curves of genus $g$ \cite[Chapter~4]{ACGH} so, by general results on fiber dimension \cite[Theorem~13.1.3]{EGA4.3}, the function $\dim W^r_d$ is upper semicontinuous in the Zariski topology. In particular, there is a dense open subset of $M_g$ where $\dim W^r_d$ achieves its minimum. In the interesting cases, where this minimum is nonnegative but less than $g$, this minimum is the Brill-Noether number
\[
\rho(g,r,d) = g-(r+1)(g-d+r).
\]
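For instance, in the case $(g,r,d) = (4,1,3)$ that plays a central role below,
\[
\rho(4,1,3) = 4 - (1+1)(4-3+1) = 0.
\]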
There are open sets in the moduli space of metric graphs of genus $g$ where $\dim W^r_d$ is strictly larger than $\rho(g,r,d)$ for trivial reasons; for instance, the locus of graphs constructed by taking a trivalent tree with $g$ leaves and attaching a loop to each leaf is open, and all such graphs are hyperelliptic. However, contractions of separating edges induce isomorphisms on Picard groups and Brill-Noether loci \cite{MikhalkinZharkov08, CaporasoViviani10, BakerFaber11}, and the map taking a metric graph to the graph obtained by contracting all of its separating edges is a strong deformation retract onto the moduli space of metric graphs without separating edges. This retraction collapses the open sets of hyperelliptic graphs described above onto a set of high codimension. The relevant question, therefore, is whether the Brill-Noether number is equal to the dimension of the Brill-Noether locus on a dense subset of the moduli space of metric graphs without separating edges. Still, the answer is negative.
\begin{thm} \label{thm:open}
There is an open subset of the moduli space of metric graphs of genus 4 parametrizing graphs $\Gamma$ without separating edges such that $W^1_3(\Gamma)$ has positive dimension.
\end{thm}
\noindent On this open set, the dimension of $W^1_3$ is strictly larger than the Brill-Noether number $\rho(4,1,3)$, which is zero.
The graphs used in the proof of Theorem~\ref{thm:open} are ``loops of loops" of genus 4. More generally, we consider \textbf{loops of loops} of genus $g \geq 3$, which are trivalent metric graphs that have $2g-2$ vertices labeled $v_1, \ldots, v_{g-1}, w_1, \ldots, w_{g-1}$, with a single edge joining $v_i$ to $w_i$, two edges joining $w_i$ to $v_{i+1}$, and two edges joining $w_{g-1}$ to $v_1$, as shown.
\begin{center}
\begin{picture}(200,200)
\put(30,60){\circle{40}}
\put(30,140){\circle{40}}
\put(100,178){\circle{40}}
\put(170,140){\circle{40}}
\put(170,60){\circle{40}}
\qbezier(18,76)(10,100)(18,124)
\qbezier(182,76)(190,100)(182,124)
\qbezier(42,156)(57,172)(80,178)
\qbezier(120,178)(143,172)(158,156)
\qbezier(42,44)(57,26)(80,20)
\qbezier(120,20)(143,26)(158,44)
\put(100,17){\circle*{2}}
\put(90,18){\circle*{2}}
\put(110,18){\circle*{2}}
\put(18,76 ){\circle*{5}}
\put(18,124 ){\circle*{5}}
\put(182,76 ){\circle*{5}}
\put(182,124 ){\circle*{5}}
\put(42,156 ){\circle*{5}}
\put(80,178 ){\circle*{5}}
\put(120,178 ){\circle*{5}}
\put(158,156 ){\circle*{5}}
\put(42,44 ){\circle*{5}}
\put(158,44 ){\circle*{5}}
\put(5,78){$v_1$}
\put(3,121){$w_1$}
\put(35,164){$v_2$}
\put(65,182){$w_2$}
\put(122,182){$v_3$}
\put(155,163){$w_3$}
\put(188, 121){$v_4$}
\put(188,76){$w_4$}
\put(30,30){$w_{g-1}$}
\put(157,32){$v_5$}
\end{picture}
\end{center}
\noindent Our study of ranks of divisors on such graphs depends heavily on Luo's theory of rank determining sets. On an algebraic curve, every set of $g+1$ distinct points is rank determining, and hence every minimal rank determining set on an algebraic curve has size at most $g+1$. Luo showed that every metric graph of genus $g$ has a rank determining set of size $g+1$, and conjectured that every minimal rank determining set should have at most this size \cite[p.~1792]{Luo11}.
\begin{thm} \label{thm:minimal}
Let $\Gamma$ be a loop of loops of genus $g$. Then the set of trivalent vertices $\{v_1, w_1, \ldots, v_{g-1}, w_{g-1}\}$ is a minimal rank determining set on $\Gamma$.
\end{thm}
\noindent This minimal rank determining set has size $2g-2$, which is greater than $g+1$ when $g$ is at least 4, giving counterexamples to Luo's conjecture.
\begin{rem}
These results for loops of loops of genus 4 are moderated by the observation that any genus 4 curve with a regular semistable model where the dual graph of the special fiber is such a loop of loops is Brill-Noether general. This is because a genus 4 curve that is not Brill-Noether general must be hyperelliptic, and hence, by Baker's Specialization Lemma \cite{Baker08}, the dual graph of its special fiber must be a hyperelliptic graph. Such graphs have an involution for which the quotient is a tree \cite{BakerNorine09}, and loops of loops have no such involutions. The following definition and theorems give a framework in which one can understand and generalize this observation without appealing to special facts about hyperelliptic graphs and curves of low genus. See Remark~\ref{rem:recover}.
\end{rem}
We propose the following definition of the Brill-Noether rank $w^r_d(\Gamma)$, as a substitute for $\dim W^r_d(\Gamma)$ when the latter is not well-behaved.
\begin{defn} \label{def:main}
Let $\Gamma$ be a metric graph such that $W^r_d(\Gamma)$ is nonempty. The \textbf{Brill-Noether rank} $w^r_d(\Gamma)$ is the largest integer $\rho$ such that, for every effective divisor $E$ of degree $r + \rho$, there exists a divisor $D$ of degree $d$ and rank at least $r$ on $\Gamma$ such that $D - E$ is effective. If $W^r_d(\Gamma)$ is empty then we define $w^r_d(\Gamma)$ to be $-1$.
\end{defn}
\noindent In many respects, the Brill-Noether rank of a graph is analogous to the dimension of the Brill-Noether locus of an algebraic curve just as the Baker-Norine rank of a divisor on a graph is analogous to the dimension of the complete linear series of a divisor on a curve; see Proposition~\ref{prop:algrank} for the classical side of this analogy. Like Baker-Norine ranks of divisors, these Brill-Noether ranks vary upper semicontinuously in families and satisfy a specialization inequality.
\begin{thm} \label{thm:semicont}
The function taking $\Gamma$ to $w^r_d(\Gamma)$ is upper semicontinuous on the moduli space of metric graphs.
\end{thm}
\begin{thm} \label{thm:specialization}
Let $X$ be a smooth projective curve over a discretely valued field with a regular semistable model whose special fiber has dual graph $\Gamma$. Then
\[
\dim W^r_d(X) \leq w^r_d(\Gamma).
\]
\end{thm}
\noindent Since every graph with integer edge lengths is the dual graph of the special fiber of such a model, and since every metric graph is a limit of dilations of graphs with integer edge lengths, it follows that the Brill-Noether ranks of arbitrary metric graphs are essentially bounded below by the corresponding Brill-Noether numbers.
\begin{cor}
Let $\Gamma$ be a metric graph of genus $g$. Then $w^r_d(\Gamma) \geq \min \{ \rho(g,r,d), g \}$.
\end{cor}
\noindent The paper concludes with a proof that the Brill-Noether rank takes the expected value for loops of loops in the case $(g,r,d) = (4,1,3)$.
\begin{thm} \label{thm:w13}
Let $\Gamma$ be a loop of loops of genus 4. Then $w^1_3(\Gamma) = 0$.
\end{thm}
\begin{rem} \label{rem:recover}
Together, Theorems~\ref{thm:specialization} and \ref{thm:w13} recover the fact that, if $X$ is a smooth projective curve over a discretely valued field with a regular semistable model whose special fiber has dual graph $\Gamma$, then $X$ has only finitely many divisor classes of degree 3 and rank 1.
\end{rem}
\begin{rem}
The failure of upper semicontinuity in Theorem~\ref{thm:notsemicont} also has an analogue for linear series. The complete linear series of a divisor on a metric graph also has a natural polyhedral structure and hence a well-defined dimension \cite{HMY09}, but these dimensions do not vary upper semicontinuously in families. See Example~\ref{ex:yu}.
\end{rem}
\begin{rem}
The specialization inequality stated here as Theorem~\ref{thm:specialization} is a close analogue of Baker's Specialization Lemma \cite[Section~2]{Baker08}, and the basic idea appeared already in \cite{tropicalBN}, where it was used to deduce the classical Brill-Noether Theorem from a ``tropical Brill-Noether Theorem". The tropical Brill-Noether Theorem proved there gives an explicit graph $\Gamma$ for each genus $g$ with the following properties.
\begin{enumerate}
\item If $\rho(g,r,d)$ is negative then $\Gamma$ has no divisors of degree $d$ and rank $r$.
\item If $\rho(g,r,d)$ is zero then $\Gamma$ has exactly the expected number of distinct divisor classes of degree $d$ and rank $r$.
\item If $\rho(g,r,d)$ is nonnegative then $\dim W^r_d(\Gamma) = \min \{\rho(g,r,d), g\}$ and $w^r_d(\Gamma) = \rho(g,r,d)$.
\end{enumerate}
These graphs are chains of $g$ loops, joined by separating vertices of valence 4. Since the Brill-Noether theory of a graph depends only on the Jacobian and the image of the Abel-Jacobi map, and since contractions of separating edges leave these unchanged, the same results hold on trivalent chains of loops where the 4-valent separating vertices are replaced by separating edges. Such graphs correspond to an open subset of the moduli space of metric graphs, and some have speculated that properties (1)-(3) should furthermore hold on a dense open subset of the moduli of metric graphs, just as the analogous properties hold on a dense open subset of the moduli space of curves. Theorem~\ref{thm:open} shows that this is not the case.
Theorems~\ref{thm:open}, \ref{thm:semicont}, and \ref{thm:w13} do show that the locus where the Brill-Noether rank $w^r_d$ is equal to the Brill-Noether number $\rho(g,r,d)$ is open and strictly larger than the locus where (1)-(3) hold, but it remains unclear whether this locus is dense in the moduli space of graphs without separating edges. See also \cite[Conjecture~6.6]{Caporaso11b} for an interesting recent conjecture on the locus of graphs with Brill-Noether general properties.
\end{rem}
\noindent \textbf{Acknowledgments.} We thank L.~Caporaso and Y.~Luo for helpful comments on an earlier draft of this paper and J. Yu for providing Example~\ref{ex:yu}.
\section{Preliminaries}
We briefly recall the basic facts about divisors and Abel-Jacobi theory for metric graphs, following \cite{Mikhalkin06, BakerNorine07, GathmannKerber08, MikhalkinZharkov08, BakerFaber11}, to which we refer the reader for proofs, references, and further details.
\subsection{Divisors} Let $\Gamma$ be a compact connected metric graph. A \textbf{divisor} on $\Gamma$ is a finite ${\mathbb Z}$-linear combination of points of $\Gamma$, and we write $\operatorname{Div}\nolimits(\Gamma)$ for the additive group of all such divisors. The \textbf{degree} of a divisor $D = a_1 v_1 + \cdots + a_r v_r$ is the sum of its coefficients $a_1 + \cdots + a_r$.
Let $f$ be a piecewise linear function with integer slopes on $\Gamma$. For each point $v$ in $\Gamma$, the sum of the incoming slopes of $f$ at $v$ is denoted $\ord_v(f)$. This sum is zero for all but finitely many points of $\Gamma$, so
\[
\divisor(f) = \sum_{v \in \Gamma} \ord_v(f) \cdot v
\]
is a divisor. The divisors of piecewise linear functions with integer slopes are called \textbf{principal} and we write $\operatorname{Prin}(\Gamma)$ for the subgroup of all principal divisors on $\Gamma$. The quotient
\[
\Pic(\Gamma) = \operatorname{Div}\nolimits(\Gamma) / \operatorname{Prin}(\Gamma),
\]
is called the \textbf{Picard group} of $\Gamma$ and elements of $\Pic(\Gamma)$ are called \textbf{divisor classes}. Every principal divisor has degree zero, so the degree of a divisor class is well-defined. We write $\Pic_d(\Gamma)$ for the subset of divisor classes of degree $d$. In particular, $\Pic_0(\Gamma)$ is the subgroup of divisor classes of degree zero.
\subsection{Abel-Jacobi theory} Abel-Jacobi theory for metric graphs identifies $\Pic_0(\Gamma)$ with the \textbf{Jacobian torus}
\[
\Jac(\Gamma) = \Omega^*(\Gamma) / H_1(\Gamma,{\mathbb Z}),
\]
where $\Omega^*(\Gamma)$ is the dual vector space of the real vector space of \textbf{harmonic 1-forms} on $\Gamma$, which assign a real-valued slope to each edge in $\Gamma$ in such a way that the sum of the incoming slopes is zero at every vertex. As in the classical Abel-Jacobi theory of algebraic curves, the homology group $H_1(\Gamma, {\mathbb Z})$ embeds as a lattice in $\Omega^*(\Gamma)$ through integration of 1-forms along 1-cycles.
For each integer $d$, the subset $\Pic_d(\Gamma)$ of divisor classes of degree $d$ is a torsor over the real torus $\Pic_0(\Gamma)$. The \textbf{Abel-Jacobi map}
\[
\Phi : \Gamma \rightarrow \Pic_1(\Gamma)
\]
takes a point $v$ to the divisor class $[v]$. It contracts all separating edges of $\Gamma$ and maps the resulting graph without separating edges homeomorphically onto its image. Furthermore, this map is piecewise linear in the appropriate sense. If we fix a basepoint then $\Pic_1(\Gamma)$ is identified with $\Pic_0(\Gamma)$ and $\Phi$ is identified with the map
\[
\Phi_w : \Gamma \rightarrow \Pic_0(\Gamma)
\]
taking a point $v$ to $[v-w]$. Then $\Omega^*(\Gamma)$ is identified with the universal cover of $\Pic_1(\Gamma)$ and the restriction of the Abel-Jacobi map to any contractible subset $U$ factors through a piecewise-linear map to the real vector space $\Omega^*(\Gamma)$.
\subsection{Linear series}
Let $D = a_1 v_1 + \cdots + a_r v_r$ be a divisor on $\Gamma$. Then $D$ is called \textbf{effective} if each of its coefficients $a_1, \ldots, a_r$ is nonnegative. A divisor $D'$ on $\Gamma$ is \textbf{equivalent} to $D$ if $D - D'$ is a principal divisor, which exactly means that the divisor classes $[D]$ and $[D']$ are equal in $\Pic (\Gamma)$.
The \textbf{complete linear series} $|D|$ is the set of all effective divisors equivalent to $D$; it is naturally identified with the underlying set of a finite, connected polyhedral complex \cite{HMY09}. Certain linear paths in $|D|$ are obtained by \textbf{firing subgraphs}. The boundary of a closed subgraph $\Gamma'$ may be thought of as a divisor in which each boundary point $v_i$ appears with multiplicity equal to its out degree in $\Gamma$. Roughly speaking, firing $\Gamma'$ for a small positive time $\epsilon$ means subtracting the boundary of $\Gamma'$ and adding the boundary of an $\epsilon$ neighborhood of $\Gamma'$. Any two divisors in $|D|$ can be connected by a sequence of firings of subgraphs.
Since $|D|$ is the underlying set of a polyhedral complex which depends only on the class $[D]$, we may consider $\dim |D|$ as a function on $\Pic(\Gamma)$. The complex $|D|$ is not necessarily pure dimensional, but we follow the usual convention that the dimension of a non pure complex is the maximum of the dimensions of its cells. The following example of Josephine Yu shows that $\dim |D|$ is not upper semicontinuous on $\Pic_2$ of a genus 3 graph.
\begin{ex} \label{ex:yu}
Let $\Gamma$ be the graph of genus $3$ with four vertices labeled $v_0$, $v_1$, $w_0$, $w_1$, single edges joining $v_0$ to $v_1$ and $w_0$ to $w_1$, and pairs of edges joining $v_i$ to $w_i$, as shown. All edges have length 1.
\smallskip
\begin{center}
\begin{picture}(200,100)
\put(60,10){\line(1,0){80}}
\put(60,10){\line(0,1){80}}
\put(60,90){\line(1,0){80}}
\put(140,10){\line(0,1){80}}
\qbezier(60,10)(25,50)(60,90)
\qbezier(140,10)(175,50)(140,90)
\put(60.5,10.5){\circle*{5}}
\put(60.5,89.5){\circle*{5}}
\put(139.5,10.5){\circle*{5}}
\put(139.5,89.5){\circle*{5}}
\put(110,10){\circle*{5}}
\put(103,17){$w_{\lambda}$}
\put(55,0){$w_0$}
\put(55,95){$v_0$}
\put(136,0){$w_1$}
\put(136,95){$v_1$}
\end{picture}
\end{center}
\smallskip
For real numbers $0 < \lambda < 1$, let $v_\lambda$ be the point in the segment $[v_0, v_1]$ at distance $\lambda$ from $v_0$. Similarly, let $w_\lambda$ be the point in $[w_0, w_1]$ at distance $\lambda$ from $w_1$. For $0 \leq t < 1$, we consider the divisor
\[
D_t = v_0 + w_{1-t}.
\]
When $t$ is positive, the complete linear series $|D_t|$ is a 1-dimensional segment of length $t$, consisting of the divisors $v_\lambda + w_{1-\lambda}$ for $0 \leq \lambda \leq t$. When $t$ is zero, the complete linear series $|D_0|$ is the single point $v_0 + w_1$. Since $[D_0]$ is the limit as $t$ goes to zero of the classes $[D_t]$, it follows that $\dim |D|$ is not upper semicontinuous on $\Pic_2(\Gamma)$.
\end{ex}
\subsection{Dhar's burning algorithm}
Given a fixed basepoint $v$ and an effective divisor $D$, Dhar's burning algorithm is a canonical and efficient method for finding a subgraph to fire so that the points in $D$ move toward $v$. The algorithm terminates after finitely many steps, arriving at a divisor for which the corresponding subgraph is empty. This divisor is called \textbf{$v$-reduced} and is characterized by the following properties. First, the $v$-reduced divisor is effective away from $v$. Next, among divisors equivalent to $D$ and effective away from $v$, it has the maximal possible multiplicity at $v$. Finally, the set of distances to $v$ from the remaining points is lexicographically minimal. Most importantly for our purposes, by the first two properties, if $D$ is $v$-reduced and does not contain $v$, then no effective divisor equivalent to $D$ contains $v$. This burning algorithm for finding $v$-reduced divisors can be adapted to give an algorithm for determining ranks of divisors, as discussed in the following subsection. See \cite{Dhar90, BakerNorine07, Luo11} for details.
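For ordinary graphs with integer chip counts, the discrete version of the burning algorithm admits a short implementation. The following sketch, which assumes a multigraph given by an integer adjacency matrix and a divisor that is effective away from the basepoint, returns the maximal subset that can be fired; an empty result certifies that the divisor is $q$-reduced. The function name and data layout are illustrative.

\begin{verbatim}
def dhar_burning(adj, chips, q):
    """Dhar's burning algorithm on a finite multigraph.
    adj[u][v]: number of edges between u and v;
    chips[v]: coefficient of v in the divisor, effective away from q.
    Returns the maximal firable set; empty means chips is q-reduced."""
    n = len(adj)
    burnt = {q}
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if v in burnt:
                continue
            # fire spreads along every edge into v from the burnt region
            threats = sum(adj[v][u] for u in burnt)
            if threats > chips[v]:  # not enough chips to block the fire
                burnt.add(v)
                changed = True
    return [v for v in range(n) if v not in burnt]
\end{verbatim}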
\subsection{Ranks of divisors}
Baker and Norine defined the rank of a divisor $D$ as follows.
\begin{defn*}[\cite{BakerNorine07}]
The rank $r(D)$ is the largest integer $r$ such that, for every effective divisor $E$ of degree $r$ on $\Gamma$, there is a divisor $D'$ equivalent to $D$ such that $D' - E$ is effective.
\end{defn*}
\noindent If $D$ is not equivalent to an effective divisor then $r(D)$ is defined to be $-1$. The rank $r(D)$ depends only on the divisor class $[D]$ in $\Pic (\Gamma)$. The locus in $\Pic(\Gamma)$ of divisors of rank at least $r$ is closed, so the rank $r(D)$ is upper semicontinuous on $\Pic(\Gamma)$, unlike $\dim |D|$.
Luo's theory of rank determining sets shows that, in order to determine the rank of a divisor $D$, it is not necessary to test whether $D- E$ is equivalent to an effective divisor for all effective divisors $E$ of degree $r$; it suffices to check this for a finite and relatively small set of divisors. We briefly recall the basic notions of this theory.
For any subset $A \subset \Gamma$, the \textbf{$A$-rank} $r_A(D)$ is the largest integer $r$ such that, for every effective divisor $E$ of degree $r$ with support in $A$, there is a divisor $D'$ that is equivalent to $D$ such that $D' - E$ is effective. Clearly $r_A(D) \geq r(D)$ for all $A$ and $D$.
\begin{defn*}[\cite{Luo11}]
A subset $A \subset \Gamma$ is \textbf{rank determining} if $r_A(D) = r(D)$ for all divisors $D$ on $\Gamma$.
\end{defn*}
\noindent In the same paper where he introduced this notion, Luo proved that every graph $\Gamma$ of genus $g$ has a rank determining set of size $g+1$, showed that rank determining sets are preserved by homeomorphisms, and gave necessary and sufficient topological criteria for a finite subset to be rank determining.
\section{Brill-Noether loci of metric graphs} \label{sec:BNloci}
If $X$ is a smooth projective algebraic curve then, for nonnegative integers $r$ and $d$, the Brill-Noether locus $W^r_d(X) \subset \Pic_d(X)$ is the subset parametrizing divisor classes of degree $d$ whose complete linear series have dimension at least $r$. This subset carries a natural scheme structure, given by its realization as a degeneracy locus of a natural map of vector bundles over $\Pic_d(X)$. See \cite[Chapter~4]{ACGH} for details.
We define the Brill-Noether locus of the metric graph $\Gamma$ as follows.
\begin{defn}
For nonnegative integers $r$ and $d$, the \textbf{Brill-Noether locus}
\[
W^r_d(\Gamma) \subset \Pic_d(\Gamma)
\]
is the set of divisor classes of degree $d$ and rank at least $r$.
\end{defn}
This Brill-Noether locus carries a natural topology as a subspace of the torus torsor $\Pic_d(\Gamma)$. We will show that $W^r_d(\Gamma)$ is, roughly speaking, the underlying set of a closed polyhedral complex. To make this precise, we define polyhedral subsets of $\Pic_d(\Gamma)$ as follows.
Fix a basepoint $w$ in $\Gamma$. Then the map taking a divisor class $[D]$ of degree $d$ to $[D - dw]$ identifies $\Pic_d(\Gamma)$ with $\Pic_0(\Gamma)$. In particular, the choice of basepoint identifies $\Omega^*(\Gamma)$ with the universal cover of $\Pic_d(\Gamma)$.
\begin{defn}
A subset of $\Pic_d(\Gamma)$ is \textbf{polyhedral} if it is the image of a finite union of polytopes in $\Omega^*(\Gamma)$.
\end{defn}
\noindent A different choice of basepoint will change the map from $\Omega^*(\Gamma)$ to $\Pic_d(\Gamma)$ by a translation, so the notion of polyhedral subsets of $\Pic_d(\Gamma)$ is well-defined, independent of the choice of basepoint. Polyhedral subsets are always closed, and the union of any finite number of polyhedral subsets of $\Pic_d(\Gamma)$ is polyhedral.
\begin{ex} \label{ex:AJ}
The restriction of the Abel-Jacobi map to each edge of $\Gamma$ factors through a linear map to $\Omega^*(\Gamma)$. Therefore, the image $\Phi(\Gamma)$ is a polyhedral subset of $\Pic_1(\Gamma)$.
\end{ex}
\begin{lem} \label{lem:int}
The intersection of any finite number of polyhedral subsets of $\Pic_d(\Gamma)$ is polyhedral.
\end{lem}
\begin{proof}
If we fix a basis for $H_1(\Gamma, {\mathbb Z})$ then $\Pic_d(\Gamma)$ is obtained from the unit cube in $\Omega^*(\Gamma)$ with respect to this basis by identifying opposite faces, and a subset of $\Pic_d(\Gamma)$ is polyhedral if and only if its preimage in the unit cube is a finite union of polytopes. The lemma follows, since any finite intersection of finite unions of polytopes is again a finite union of polytopes.
\end{proof}
\begin{lem} \label{lem:sums}
If $S$ and $S'$ are polyhedral subsets of $\Pic_d(\Gamma)$ and $\Pic_{d'}(\Gamma)$ then the sumset $S + S'$ is a polyhedral subset of $\Pic_{d + d'}(\Gamma)$.
\end{lem}
\begin{proof}
Say $S$ and $S'$ are the images of the finite unions of polytopes $P_1 \cup \cdots \cup P_k$ and $P'_1 \cup \cdots \cup P'_\ell$ in $\Omega^*(\Gamma)$, respectively. Then $S + S'$ is the union of the images of the Minkowski sums $P_i + P'_j$.
\end{proof}
For a nonnegative integer $d$, let $\operatorname{Eff}_d(\Gamma) \subset \Pic_d(\Gamma)$ be the set of classes of effective divisors of degree $d$ on $\Gamma$.
\begin{prop} \label{prop:Eff}
The set of effective classes $\operatorname{Eff}_d (\Gamma)$ is a polyhedral subset of dimension $\min\{d,g\}$ in $\Pic_d(\Gamma)$.
\end{prop}
\begin{proof}
The fact that $\operatorname{Eff}_d$ is polyhedral of dimension at most $d$ follows from Example~\ref{ex:AJ} and Lemma~\ref{lem:sums}, since $\operatorname{Eff}_d(\Gamma)$ is the sum of $d$ copies of the 1-dimensional polyhedral image of the Abel-Jacobi map. Since dimensions of sumsets are subadditive and $\operatorname{Eff}_g(\Gamma)$ is the full torus $\Pic_g(\Gamma)$, the dimension of $\operatorname{Eff}_d(\Gamma)$ must be equal to $d$ for $1 \leq d \leq g$.
\end{proof}
\begin{prop} \label{prop:polyhedral}
The Brill-Noether locus $W^r_d(\Gamma)$ is a polyhedral subset of $\Pic_d(\Gamma)$.
\end{prop}
\begin{proof}
Fix a finite rank determining set $A$ for $\Gamma$. Then there are finitely many effective divisors $E_1, \ldots, E_k$ of degree $r$ with support contained in $A$, and $W^r_d(\Gamma)$ is the intersection of the images of the maps
\[
\varphi_i : \operatorname{Eff}_{d-r}(\Gamma) \rightarrow \Pic_d(\Gamma)
\]
taking the class of an effective divisor $D$ to $[D + E_i]$. The image of each $\varphi_i$ is polyhedral, since it is a translation of $\operatorname{Eff}_{d-r}(\Gamma)$, which is polyhedral by Proposition~\ref{prop:Eff}. Then $W^r_d(\Gamma)$ is the intersection of these finitely many polyhedral subsets, and hence polyhedral by Lemma~\ref{lem:int}.
\end{proof}
\begin{rem}
Like the complete linear series of a divisor on a metric graph, the Brill-Noether locus $W^r_d(\Gamma)$ is polyhedral but not necessarily pure dimensional, so $\dim W^r_d(\Gamma)$ refers to the maximum of the dimensions of its polyhedral cells. It is also worth noting that $|D|$ is a contractible complex in a vector space, while $W^r_d(\Gamma)$ often has nontrivial topology and lives in the torus $\Pic_d(\Gamma)$.
\end{rem}
\section{Loops of loops}
Let $\Gamma$ be a loop of loops of genus $g$, as described in the introduction. So $\Gamma$ is a trivalent metric graph whose vertices are labeled $v_1, \ldots, v_{g-1}, w_1, \ldots w_{g-1}$ with a single edge of length $\ell_i$ joining $v_i$ to $w_i$, two edges joining $w_i$ to $v_{i+1}$ for $1 \leq i \leq g-2$ and two edges joining $w_{g-1}$ to $v_1$. The case $g = 4$ is pictured here.
\begin{center}
\begin{picture}(200,165)
\put(60,20){\line(1,0){80}}
\put(12,80){\line(3,4){48}}
\put(188,80){\line(-3,4){48}}
\qbezier(60,144)(100,174)(140,144)
\qbezier(60,144)(100,114)(140,144)
\qbezier(12,80)(11,30)(60,20)
\qbezier(12,80)(61,70)(60,20)
\qbezier(188,80)(189,30)(140,20)
\qbezier(188,80)(139,70)(140,20)
\put(60,20){\circle*{5}}
\put(140,20){\circle*{5}}
\put(12,80){\circle*{5}}
\put(60,144){\circle*{5}}
\put(188,80){\circle*{5}}
\put(140,144){\circle*{5}}
\put(-2,82){$v_1$}
\put(193,82){$w_2$}
\put(43,147){$w_1$}
\put(145,147){$v_2$}
\put(53,9){$w_3$}
\put(138,9){$v_3$}
\put(98,27){$\ell_3$}
\put(43,103){$\ell_1$}
\put(151,103){$\ell_2$}
\end{picture}
\end{center}
We study ranks of divisors on $\Gamma$ using Luo's theory of rank determining sets. This theory builds on Dhar's burning algorithm \cite{Dhar90} and the properties of $v$-reduced divisors developed in \cite[Section~3]{BakerNorine07}. See also \cite[Section~2]{Luo11} for details on these basic notions, which we will use freely, without further mention.
Recall that Luo defines an open subset of a metric graph to be \textbf{special} if it is connected and every connected component of its complement has a vertex with out degree at least two. It follows that the closure of a special open set is a connected subgraph of positive genus. A subset $A \subset \Gamma$ is rank determining if there are no special open sets in the complement of $A$; see Definition~3.2 and Theorem~3.8 in \cite{Luo11}.
\begin{lem}
The set $A = \{ v_1, \ldots, v_{g-1}, w_1, \ldots, w_{g-1} \}$ is rank determining in $\Gamma$.
\end{lem}
\begin{proof}
The closure of any connected component of $\Gamma \smallsetminus A$ is a tree, so the closure of any connected open subset of $\Gamma \smallsetminus A$ has genus zero. It follows that there are no special open sets in the complement of $A$.
\end{proof}
To prove Theorem~\ref{thm:minimal} we must show that no proper subset of $A$ is rank determining. The following example gives an explicit divisor on a loop of loops of genus 4 whose rank is not determined by $A \smallsetminus \{w_3\}$.
\begin{ex}
Suppose $g = 4$, the lengths $\ell_1$ and $\ell_2$ are both 1, and $\ell_3$ is 2. Let $D = v_1 + w_2 + v_3$. Firing the genus 2 subgraph bounded by $v_1$ and $w_2$ shows that $D$ is equivalent to $w_1 + v_2 + v_3$. Since $D$ is equivalent to an effective divisor containing any point of $A' = A \smallsetminus \{w_3\}$, we have
\[
r_{A'}(D) \geq 1.
\]
However, $D$ is also equivalent to $D' = v_1 + v_2 + w$, where $w$ is the midpoint of the edge $[v_3, w_3]$, and it is straightforward to check by Dhar's burning algorithm that $D'$ is $w_3$-reduced. Since $D'$ does not contain $w_3$, it follows that no effective divisor equivalent to $D$ contains $w_3$. Therefore $r(D)= 0$, and $A'$ is not rank determining.
\end{ex}
Since rank determining sets are preserved by homeomorphisms \cite[Theorem~1.10]{Luo11}, and homeomorphisms of $\Gamma$ act transitively on $A$, it follows that no proper subset of $A$ is rank determining for $g = 4$. We now prove the general case.
\begin{proof}[Proof of Theorem~\ref{thm:minimal}]
To show that the rank determining set $A$ is minimal, it will suffice to show that, for each point $v$ of $A$ there is a special open set in $\Gamma$ whose intersection with $A$ is exactly $\{v\}$. See \cite[Proposition~3.26]{Luo11}. Since homeomorphisms of $\Gamma$ act transitively on $A$, it will suffice to consider $v = v_1$. Let $U$ be the connected open neighborhood of $v_1$ bounded by $w_1$ and $w_{g-1}$. The complement of $U$ is connected and $w_1$ has outdegree 2, so $U$ is special. Since the intersection of $U$ with $A$ is exactly $\{v\}$, the theorem follows.
\end{proof}
We now return to the case $g = 4$ and show that there is an open set of loops of loops $\Gamma$ such that the Brill-Noether locus $W^1_3(\Gamma)$ has positive dimension.
\begin{proof}[Proof of Theorem~\ref{thm:open}]
Let $\Gamma$ be a loop of loops of genus 4. Suppose $\ell_1 > \ell_2 > \ell_3$ and $\ell_2 + \ell_3 > \ell_1$. The set of all such graphs is open in the moduli space of metric graphs without separating edges. Therefore, to prove the theorem it will suffice to show that $\dim W^1_3(\Gamma) \geq 1$.
Let $D$ be a divisor of the form $v_1 + w_3 + w$, where $w$ is in the interval $[v_2, w_2]$ at distance at least $\ell_1 - \ell_3$ from $v_2$. We claim that $D$ has rank at least 1. The theorem follows from this claim, since the set of classes of all such divisors in $\Pic_3(\Gamma)$ is an embedded interval.
It remains to show that $D$ has rank at least 1. Firing the genus 2 subgraph of $\Gamma$ bounded by $v_1$ and $w$ shows that $D$ is equivalent to $v' + v_2 + w_3$ with $v'$ in the interval $[v_1, w_1]$ at distance $\ell_1 - d(v_2, w)$ from $v_1$. Similarly, firing the loop bounded by $v_1$ and $w_3$ shows that $D$ is equivalent to
\[
D' = v + w + v_3,
\]
where $v$ is the point in $[v_1, w_1]$ at distance $\ell_3$ from $v_1$.
Now, starting from $D'$ and firing the genus 1 subgraph bounded by $v$ and $w$ shows that $D$ is equivalent to $v_3 + w_2 + v'$, with $v'$ in the segment $[v_1, w_1]$. Similarly, starting from $D'$ and firing the genus 2 subgraph bounded by $v$ and $w$ shows that $D$ is equivalent to $w_1 + w' + v_3$, with $w'$ in the segment $[v_2, w_2]$. Altogether, this shows that $D$ is linearly equivalent to effective divisors that contain each element of $A$. Therefore $r_A(D) \geq 1$ and, since $A$ is rank determining, $r(D) \geq 1$, as required.
\end{proof}
We conclude this section by using limits of loops of loops of genus 4 to show that $\dim W^r_d$ is not upper semicontinuous on the moduli space of bridgeless metric graphs.
\begin{proof}[Proof of Theorem~\ref{thm:notsemicont}]
Let $\Gamma_0$ be the degenerate loop of loops with three vertices $v_1$, $v_2$, and $v_3$, where each pair of distinct vertices is joined by a pair of edges of length 1.
\begin{center}
\begin{picture}(200,120)(0,10)
\put(40,120){\circle*{5}}
\put(160,120){\circle*{5}}
\put(100,20){\circle*{5}}
\qbezier(40,120)(45,55)(100,20)
\qbezier(40,120)(95,85)(100,20)
\qbezier(100,20)(155,55)(160,120)
\qbezier(100,20)(105,85)(160,120)
\qbezier(40,120)(100,148)(160,120)
\qbezier(40,120)(100,92)(160,120)
\put(26,123){$v_1$}
\put(165,123){$v_2$}
\put(96,9){$v_3$}
\end{picture}
\end{center}
Fix positive real numbers $\ell_1$, $\ell_2$, and $\ell_3$ such that $\ell_1 > \ell_2 > \ell_3$ and $\ell_2 + \ell_3 > \ell_1$, as in the proof of Theorem~\ref{thm:open}. Let $\Gamma_t$ be the loop of loops of genus 4 in which $[v_i, w_i]$ has length $t \cdot \ell_i$ and all other edges have length 1. Then $\Gamma_0$ is the limit of $\Gamma_t$ as $t$ goes to zero. In the proof of Theorem~\ref{thm:open} we showed that $W^1_3(\Gamma_t)$ has positive dimension for $t > 0$.
We claim that $W^1_3(\Gamma_0)$ consists of the single rank 1 class $[v_1 + v_2 + v_3]$ and hence is zero dimensional. Indeed, let $D$ be a divisor of degree 3 and rank 1 on $\Gamma_0$. Replacing $D$ by an equivalent divisor, we may assume $D$ is $v_1$-reduced, so $D = v_1 + v + w$ for some points $v$ and $w$ in $\Gamma_0$. Dhar's burning algorithm shows that, since $D$ is $v_1$-reduced, the points $v$ and $w$ cannot both be in the interior of the same edge. Applying Dhar's algorithm again, from $v_2$ and $v_3$, shows that $D$ is $v_2$-reduced and $v_3$-reduced, as well. Since $r(D) = 1$, by hypothesis, and the set $\{v_1, v_2, v_3\}$ is rank determining, it follows that $D$ must contain $v_2$ and $v_3$. Therefore, $[D] = [v_1 + v_2 + v_3]$, as claimed.
We have shown that $\dim W^1_3(\Gamma_t)$ is positive for all positive $t$ and $\dim W^1_3(\Gamma_0)$ is zero. Therefore, since $\Gamma_0$ is the limit of $\Gamma_t$ as $t$ goes to zero, $\dim W^1_3$ is not upper semicontinuous on the moduli space of metric graphs.
\end{proof}
\section{Brill-Noether rank}
In this final section, we show that the ranks of Brill-Noether loci of metric graphs, as defined in the introduction, vary upper semicontinuously in families and are related to dimensions of algebraic Brill-Noether loci by a specialization inequality.
Let $G$ be a connected graph. Label the vertices of $G$ by $v_1, \ldots, v_m$ and the edges by $e_1, \ldots, e_n$. So the genus of a geometric realization is $g = n-m + 1$. We consider
\[
\sigma = {\mathbb R}_{\geq 0}^n
\]
as a parameter space for possibly degenerate metric realizations of $G$; the point $\ell = (\ell_1, \ldots, \ell_n)$ corresponds to the metric graph $\Gamma_\ell$ in which $e_i$ has length $\ell_i$. If $\ell_i = 0$, this produces a degenerate realization in which $e_i$ is contracted.
Over $\sigma$, there is a universal family $\Gamma_\sigma$ of possibly degenerate metric realizations of $G$, obtained by gluing the cones
\[
\gamma_i = \{(\ell_1, \ldots, \ell_n, t) \in {\mathbb R}_{\geq 0}^{n + 1} \ | \ 0 \leq t \leq \ell_i \}.
\]
The gluing depends on the choice of an orientation for each edge, but the resulting metric space is independent of all choices, as is the natural projection to $\sigma$ obtained by forgetting the last coordinate. The fiber over $\ell$ is the metric graph $\Gamma_\ell$, and the intersection of this fiber with $\gamma_i$ is the edge $e_i$ of length $\ell_i$.
We now describe the subspace of $\sigma$ that parametrizes possibly degenerate realizations of $G$ that have genus $g$. For each subset $I \subset \{ 1, \ldots, n \}$, let $G_I$ be the subgraph whose geometric realization is the union of the edges $e_i$ for $i \in I$, and let $g_I$ be the first Betti number of $G_I$. Let $\tau_I$ be the face of $\sigma$ where $\ell_i = 0$ for $i \in I$. If $\ell$ is in the relative interior of $\tau_I$ then $\Gamma_\ell$ has genus $g - g_I$. In particular, the open subset
\[
\sigma^* = \sigma \smallsetminus \bigcup_{g_I > 0} \tau_I
\]
parametrizes possibly degenerate metric realizations of $G$ with genus $g$. The moduli space of metric graphs of genus $g$ is the colimit of a natural diagram of such open cones for all combinatorial graphs of genus $g$. Therefore, to show the Brill-Noether rank is upper semicontinuous it will suffice to prove this on $\sigma^*$.
We write $\Gamma^*$ for the preimage of $\sigma^*$ in $\Gamma_\sigma$. The natural piecewise linear parame\-tri\-zation map $\sigma^* \times \Gamma_{(1, \ldots, 1)} \rightarrow \Gamma^*$ in which the edge $e_i$ is stretched uniformly by a factor of $\ell_i$ in the fiber over $\ell$ is a homotopy equivalence, as is the inclusion of any fiber $\Gamma_\ell \subset \Gamma^*$ for $\ell \in \sigma^*$. The dual of the space of harmonic forms $\Omega^*(\Gamma_\ell)$ is naturally identified with $H_1(\Gamma_\ell, {\mathbb R})$ for each $\ell \in \sigma^*$, and hence with $H_1(\Gamma^*, {\mathbb R})$. We fix the vertex $v_1$ as a basepoint and define the relative Jacobian as
\[
\Pic_0 (\Gamma^*) = \sigma^* \times \big( H_1(\Gamma^*, {\mathbb R}) / H_1(\Gamma^*, {\mathbb Z}) \big) .
\]
The standard arguments in Abel-Jacobi theory for a single graph then produce a piecewise linear relative Abel-Jacobi map over $\sigma^*$
\[
\Phi: \Gamma^* \rightarrow \Pic_0(\Gamma^*),
\]
compatible with projections to $\sigma^*$, whose base change to $\ell \in \sigma^*$ is the usual Abel-Jacobi map for $\Gamma_\ell$.
We claim that the universal family of realizations $\pi:\Gamma^* \rightarrow \sigma^*$ is proper, in the topological sense, meaning that the preimage of any compact set in $\sigma^*$ is compact in $\Gamma^*$. To see this, just note that if $C$ is any subset of $\sigma^*$ then $\pi^{-1}(C)$ is the continuous image of $C \times \Gamma_{(1,\ldots, 1)}$ under the natural piecewise linear parametrization map described above. Therefore, if $C$ is compact then $\pi^{-1}(C)$ is a continuous image of the compact space $C \times \Gamma_{(1, \ldots, 1)}$, and hence is also compact. Since proper maps are universally closed and $\Phi$ commutes with projection to $\sigma^*$, it follows that the image of the universal graph $\Phi(\Gamma^*)$ is closed in the universal Jacobian $\Pic_0(\Gamma^*)$. We consider $\Pic_0(\Gamma^*)$ as a group object in the category of topological spaces over $\sigma^*$, so the set $\operatorname{Eff}_d(\Gamma^*)$ parametrizing graphs $\Gamma_\ell$ with the class of an effective divisor of degree $d$ is the sumset of $d$ copies of $\Phi(\Gamma^*)$, and hence is also closed.
We now show that the function taking a graph $\Gamma$ to $w^r_d(\Gamma)$ is upper semicontinuous.
\begin{proof}[Proof of Theorem~\ref{thm:semicont}]
With the fixed basepoint $v_1$, the torus torsor $\Pic_d(\Gamma_\ell)$ is identified with $\Pic_0(\Gamma_\ell)$ for all $\ell$, so we consider Brill-Noether loci inside the torus $\Pic_0$ instead of inside the torsor $\Pic_d$.
First, we claim that the universal Brill-Noether locus $W^r_d(\Gamma^*) = \bigsqcup_\ell W^r_d(\Gamma_\ell)$ is closed in $\Pic_0(\Gamma^*)$. To see this, let $v_i^*$ be the section of $\Gamma^*$ given by the vertex $v_i$ for $1 \leq i \leq m$. Then, for any tuple of nonnegative integers $a = (a_1, \ldots, a_m)$ such that $a_1 + \cdots + a_m = r$, we have the closed subset
\[
S_a = a_1 v_1^* + \cdots + a_m v_m^* + \operatorname{Eff}_{d-r}(\Gamma^*)
\]
in $\operatorname{Eff}_d(\Gamma^*)$. Since the vertex set $\{v_1, \ldots, v_m\}$ is rank determining on $\Gamma_\ell$ for all $\ell$, by \cite{Luo11}, the universal Brill-Noether locus over $\sigma^*$ is
\[
W^r_d(\Gamma^*) = \bigcap_a S_a,
\]
which is an intersection of closed sets and hence closed.
Now, consider the map
\[
\mu: \operatorname{Eff}_{d-r-\rho}(\Gamma^*) \times \operatorname{Eff}_{r+\rho}(\Gamma^*) \rightarrow \Pic_d(\Gamma^*),
\]
given by adding effective divisor classes. We consider the closed set $\mu^{-1}(W^r_d(\Gamma^*))$ and its image $Z \subset \operatorname{Eff}_{r+\rho}(\Gamma^*)$ under projection to the second factor. This closed set $Z$ parametrizes graphs $\Gamma_\ell$ with the class of an effective divisor of degree $(r + \rho)$ that is contained in an effective divisor of degree $d$ and rank at least $r$. Now, by definition, $w^r_d(\Gamma_\ell)$ is at least $\rho$ if and only if $Z$ contains $\operatorname{Eff}_{r+ \rho}(\Gamma_\ell)$. Therefore, we consider the complement
\[
U = \operatorname{Eff}_{r + \rho}(\Gamma^*) \smallsetminus Z,
\]
which is open in $\operatorname{Eff}_{r+\rho}(\Gamma^*)$ and parametrizes graphs $\Gamma_\ell$ with the class of an effective divisor of degree $(r + \rho)$ that is not contained in any effective divisor of degree $d$ and rank $r$. It remains to show that the image of $U$ is open in $\sigma^*$.
We claim that the projection from $\operatorname{Eff}_{r + \rho}(\Gamma^*)$ to $\sigma^*$ is open. Indeed, the projection $p_2$ from $(\Gamma_{(1, \ldots, 1)})^{r + \rho} \times \sigma^*$ to $\sigma^*$ factors through the natural parametrizing map onto $\operatorname{Eff}_{r + \rho}(\Gamma^*)$, so the image of an open set under the projection is the $p_2$-image of its open preimage under the parametrizing map. Since $p_2$ is an open mapping, this proves the claim; in particular, the image of $U$ is open in $\sigma^*$. We have shown that the set of $\ell$ in $\sigma^*$ such that $w^r_d(\Gamma_\ell)$ is less than $\rho$ is open, for arbitrary $\rho$. Therefore $w^r_d$ is upper semicontinuous, as required.
\end{proof}
We now explore the relationship between Brill-Noether ranks of metric graphs and dimensions of Brill-Noether loci of algebraic curves. Consider a smooth projective algebraic curve $X$ of genus $g$, and suppose the Brill-Noether locus $W^r_d(X)$ is nonempty. Let $D$ be an effective divisor on $X$ whose class is in $W^r_d(X)$.
Roughly speaking, this class being in $W^r_d(X)$ means that $D$ is a configuration of $d$ points on $X$ that can be moved in a family parametrized by the projective line to contain any configuration of $r$ points on $X$. Now, suppose $[D]$ is contained in a positive dimensional component of $W^r_d(X)$ and $E$ is an effective divisor of degree $r$ on $X$. Then $D$ is equivalent to some divisor $D'$ such that $D' - E$ is effective. Moving $[D]$ in a family $[D_t]$ parametrized by a (nonrational) algebraic curve in $W^r_d(X)$ should lead to a family $D'_t$ of divisors of degree $d$ that contain $E$ and whose classes lie in $W^r_d(X)$, and the residual divisors $D'_t - E$ should be configurations of $d - r$ points that move in a family sweeping out all of $X$. Since $E$ is arbitrary, this would mean that any effective divisor of degree $r + 1$ on $X$ is contained in an effective divisor whose complete linear series has dimension at least $r$. The following proposition makes this rough idea precise, and extends it to the case where the Brill-Noether locus has dimension greater than 1.
\begin{prop} \label{prop:algrank}
Let $X$ be a smooth projective curve. Suppose $W^r_d(X)$ is not empty, and let $E$ be an effective divisor of degree $r + \dim W^r_d(X)$ on $X$. Then there is a divisor $D$ whose class is in $W^r_d(X)$ such that $D-E$ is effective.
\end{prop}
\begin{proof}
Consider the subset $S \subset X^d$ consisting of tuples $(x_1, \ldots, x_d)$ such that the divisor class $[x_1 + \cdots + x_d]$ is in $W^r_d(X)$. In other words, $S$ is the preimage of $W^r_d(X)$ under the natural map $\phi: X^d \rightarrow \Pic_d(X)$. The fiber of this map over a divisor class $[D]$ is invariant under the action of the symmetric group on the $d$ factors, and the quotient is the complete linear series $|D|$. If $[D]$ is in $W^r_d(X)$, then the fiber $\phi^{-1}([D])$ surjects onto $X^r$ under the projection to the first $r$ factors. Therefore, $S$ has dimension at least $r + \dim W^r_d(X)$.
Choose $k$ as large as possible such that projection to the first $k$ factors maps $S$ surjectively onto $X^k$. We must show that $k$ is at least $r + \dim W^r_d(X)$.
Suppose $k$ is less than $r + \dim W^r_d(X)$, so the general fiber of the projection $\pi: S \rightarrow X^k$ is positive dimensional. Since $S$ is proper, the fiber dimension is upper semicontinuous, and hence every fiber is positive dimensional. Let $x = (x_1, \ldots, x_k)$ be a point in $X^k$, and consider the projection of $\pi^{-1}(x)$ onto the $i$th coordinate of $X^d$, for $i > k$. Each of these projections has the same image, by symmetry. Therefore, the image must be 1-dimensional, and hence $\pi^{-1}(x)$ surjects onto $X$. Since this holds for all $x$ in $X^k$, it follows that $S$ surjects onto $X^{k+1}$, contradicting the choice of $k$. We conclude that $k$ is at least $r + \dim W^r_d(X)$, and the proposition follows.
\end{proof}
\noindent The proposition gives further justification for the rough idea that the rank of the Brill-Noether locus of a metric graph is an avatar for the dimension of the Brill-Noether locus of an algebraic curve. We now use the proposition to prove the specialization inequality stated in the introduction, which says that the Brill-Noether rank can only go up when specializing from curves to graphs.
\begin{proof}[Proof of Theorem~\ref{thm:specialization}]
Let $X$ be a smooth projective curve of genus $g$ over a discretely valued field with a regular semistable model whose special fiber has dual graph $\Gamma$, and suppose $W^r_d(X)$ is nonempty. We must show that any effective divisor $E$ of degree $r + \dim W^r_d(X)$ on $\Gamma$ is contained in an effective divisor of degree $d$ and rank $r$. First, we prove this in the case where $E$ is rational, i.e. supported at points a rational distance from the vertices of $\Gamma$, and then we prove the general case by rational approximation.
Let $E$ be an effective divisor of degree $r + \dim W^r_d(X)$ on $\Gamma$ that is rational. Then $E$ is the specialization of an effective divisor $E'$ on $X$. By Proposition~\ref{prop:algrank} there is an effective divisor $D'$ of degree $d$ and rank at least $r$ on $X$ that contains $E'$. Then the specialization of $D'$ is an effective divisor of degree $d$ and rank at least $r$ on $\Gamma$ that contains $E$.
Now, let $E$ be an arbitrary, possibly nonrational, effective divisor of degree $r + \dim W^r_d(X)$ on $\Gamma$. Choose a sequence $\{E_1, E_2, \ldots \}$ of rational divisors on $\Gamma$ that converges to $E$. For each $E_i$ let $D_i$ be an effective divisor of degree $d$ and rank at least $r$ that contains $E_i$. Since the $d$th symmetric product of $\Gamma$ is compact, some subsequence of $\{D_1, D_2, \ldots \}$ converges to a divisor $D$. Now, since $W^r_d(\Gamma)$ is closed and contains $[D_i]$ for all $i$, the limit $D$ must have rank at least $r$. Then $D$ is an effective divisor of degree $d$ and rank at least $r$ on $\Gamma$ that contains $E$, and the theorem follows.
\end{proof}
We conclude by showing that the Brill-Noether rank $w^1_3$ takes the expected value $\rho(4,1,3) = 0$ for a loop of loops of genus 4.
\begin{proof}[Proof of Theorem~\ref{thm:w13}]
Let $\Gamma$ be a loop of loops of genus 4. We may assume that $[v_1, w_1]$ is the longest of the three single edges, i.e. that $\ell_1$ is greater than or equal to $\ell_2$ and $\ell_3$. We claim that $v_1 + w_1$ is not contained in any effective divisor of degree 3 and rank at least 1.
Let $D = v_1 + w_1 + w$ be an effective divisor of degree 3 that contains $v_1 + w_1$. The following case-by-case analysis shows that $D$ has rank zero by exhibiting, in each case, a vertex $v$ of $\Gamma$ such that the $v$-reduced divisor equivalent to $D$ does not contain $v$.
\bigskip
\noindent \emph{Case 1:} Suppose $w$ is contained in either $[v_1, w_1]$ or one of the open segments $(w_1, v_2)$, $(w_2, v_3)$, or $(w_3, v_1)$. Then $D$ is $v_2$-reduced, but does not contain $v_2$.
\bigskip
\noindent \emph{Case 2:} Suppose $w$ is contained in the segment $[v_2, w_2]$. Firing the genus 1 subgraph bounded by $v_1$ and $w$ shows that $D$ is equivalent to $D' = v_1 + w' + w_2$, where $w'$ is in the segment $[v_1, w_1]$. Then $D'$ is $v_3$-reduced but does not contain $v_3$.
\bigskip
\noindent \emph{Case 3:} Suppose $w$ is contained in the segment $[v_3, w_3]$. Firing the genus 1 subgraph bounded by $v_1$ and $w$ shows that $D$ is equivalent to $D'' = v' + w_1 + v_3$, where $v'$ is in the segment $[v_1, w_1]$. Then $D''$ is $v_2$-reduced but does not contain $v_2$.
\end{proof}
\bibliographystyle{amsalpha}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
A system that is driven out of equilibrium by a quantum quench will display many interesting, non-equilibrium phenomena (for a review, see Ref.~\onlinecite{Mitra2018}). However, it is generally believed that after some transient period, a generic, infinite (or in practice large) system equilibrates; time-translational invariance is recovered and all local observables are well described by a thermal distribution.\cite{DAlessio2016,Srednicki1999,Neumann1929,Goldstein2010}
A simple way to prevent thermalization is continuous driving of the system. In such a scenario, however, energy is not conserved, and the unique stationary state of a generic, ergodic system is thus an infinite-temperature state.\cite{Hone1997,Lazarides2014, DAlessio2014} Well-known exceptions are integrable models (e.g., non-interacting ones)\cite{Lazarides2014b} as well as many-body localized systems,\cite{Ponte2015} which display emergent integrability.\cite{Ros2015,Imbrie2016, Imbrie2016b,Imbrie2017}
One way to circumvent heating is to consider an infinite (i.e., open) quantum system where an infinite amount of energy can dissipate. Such a setup is attractive from a theoretical perspective: it prevents recurrence, allows for a non-trivial stationary state even without time-dependent driving, and is physically-relevant as some coupling to the environment can never be fully eliminated. Treating an open quantum system, however, presents a major hurdle to many theoretical methods.
It is a priori unclear how long transient dynamics persist and when a stationary state will be reached. Additionally, effects like prethermalization\cite{Berges2004} render the use of time-evolution based methods highly non-trivial. Since it is essential to work with infinite systems, approaches that treat the environment perturbatively might not be able to capture the correct long-time behavior. Due to these difficulties, many theoretical approaches are restricted to small interacting regions, weak coupling to reservoirs, or translationally-invariant systems.
In this paper, we present a method to approximately determine the stationary-state of open, interacting quantum wires. The system is driven out of equilibrium via a coupling to non-interacting reservoirs that initially feature different chemical potentials (i.e., a bias voltage). To be precise, we employ the so-called functional renormalization group (FRG)\cite{Metzner2012} in a Keldysh-contour formulation.\cite{Jakobs2007,Gezzi2007,Karrasch2010,Kennes2012} The FRG treats the two-particle interaction in a perturbative sense -- although it still includes an infinite resummation of terms of arbitrary order -- but accounts for the reservoirs exactly. In contrast to previous approaches to this problem,\cite{Jakobs2007} we incorporate second-order contributions and can therefore describe inelastic processes and heating effects. Due to the significant numerical cost, second-order FRG schemes have so far only been implemented for electronic systems in thermal equilibrium\cite{Karrasch2008,Heyder2014,Sbierski2017,Markhof2018,Weidinger2019} as well as for the single impurity Anderson model out of equilibrium.\cite{Jakobs2010b}
In this paper, we close this gap and implement a second-order Keldysh FRG approximation (using a reservoir cutoff) for quantum wires. Solving the corresponding flow equations is highly demanding and requires advanced numerical techniques. By combining a semi-analytic solution with MPI parallelization, we can treat systems of up to 60 interacting lattice sites.
It is fair to say that there is no `gold standard' available to study the non-equilibrium steady-state of an interacting, open, one-dimensional quantum system. While the density-matrix renormalization group \cite{schmitteckert04,schmitteckert10,fhm1,fhm2} as well as the numerical renormalization group\cite{nrg1,nrg2} are considered to be numerically-exact, they are restricted to short time scales or short chains.
Iterative path integral or quantum Monte Carlo based approaches are generically restricted to small-to-intermediate two-particle interaction strengths or short times, respectively.\cite{iterp,PhysRevB.82.205323,qmc1,qmc2,qmc3} Exact, Bethe-ansatz based methods have been developed to study the out-of-equilibrium steady state,\cite{Bertini2016,Castro-Alvaredo2016} but these powerful tools are limited to closed, integrable systems. Other approaches such as continuous unitary transformations\cite{floweq1} struggle to capture the emergent, collective behavior of one-dimensional systems. For a further discussion, we refer to a recent review article on finite-temperature transport in one-dimensional systems.\cite{bertini2020finitetemperature}
We structure this exposition as follows. We first introduce the general model Hamiltonian (Sec.~\ref{sec:class_of_models}) and give a brief overview of Keldysh Green's functions (Sec.~\ref{sec:greenfunc}). The novel, second-order FRG scheme is discussed in Sec.~\ref{sec:sfrgFinite}; we put a particular emphasis on how to solve the flow equations in an efficient, highly-parallelized way. Results are presented in Sec.~\ref{sec:tb_chains}. First, our method is benchmarked in the equilibrium limit, which is well-understood (Sec.~\ref{ssec:eq_results}). Non-equilibrium is discussed in Sec.~\ref{ssec:bias_results}. We demonstrate that the FRG data depends strongly on the choice of the cutoff scheme. This is particularly severe out-of-equilibrium where a) no physical arguments exist in favor of a certain cutoff, and b) secular higher-order terms appear, which are only partly included in our approach. In a nutshell, a straightforward second-order, reservoir-cutoff FRG framework is highly-demanding yet inadequate to study interacting quantum wires out of equilibrium.
\section{Class of models discussed}
\label{sec:class_of_models}
In this paper, we consider time-independent, fermionic models with a finite number \(N\in\mathbb{N}\) of interacting degrees of freedom:
\begin{equation}\label{eq:ham_gen}
H_\mathrm{chain}=\sum_{i,j=1}^N h_{ij} c_i^\dagger c_j +\frac{1}{4}\sum_{i,j,k,l=1}^N v_{ijkl} c_i^\dagger c_j^\dagger c_l c_k.
\end{equation}
Later on, we will discuss the concrete example of a tight-binding chain. With this application in mind, we refer to Eq.~(\ref{eq:ham_gen}) as the \emph{chain}.
In order to devise a numerically-efficient FRG scheme, it will be crucial that \(v\) is short-ranged. An example, and the focus of this paper, is a nearest-neighbor interaction. Importantly, we impose no restrictions on \(h\). The chain is quadratically coupled to a finite number \(N_\mathrm{res}\) of infinite, non-interacting reservoirs:
\begin{equation}
\begin{split}
H^\nu_\mathrm{res}&=\sum_k \epsilon^\nu_k a_{k,\nu}^\dagger a_{k,\nu}^\vdag,\\
H^\nu_\mathrm{coup}&=\sum_{i,k} t^\nu_{i,k} c_i^\dag a_{k,\nu}^\vdag\ +\ \mathrm{h.c.},\\
H_\mathrm{tot}&=H_\mathrm{chain}+\sum_{\nu=1}^{N_\mathrm{res}} \left(H^\nu_\mathrm{res}+H^\nu_\mathrm{coup}\right).
\end{split}
\end{equation}
For the discussion of the computational complexity of our scheme, we assume \(N_\mathrm{res}\ll N\).
The system is initially prepared in a product state of an arbitrary quadratic state within the chain and thermal equilibrium of the (decoupled) reservoirs; the latter is fully characterized by temperatures $T_\nu$ and chemical potentials \(\mu_\nu\). The influence of a reservoir on the chain can be described by the following retarded and Keldysh hybridization functions:
\begin{equation}\label{eq:self_hybrid}\begin{split}
\Gamma^\textnormal{ret/K}(\omega)&=\sum_{\nu=1}^{N_\textnormal{res}}\Gamma^{\nu,\textnormal{ret/K}}(\omega),\\
\Gamma^{\nu,\textnormal{ret}}_{ij}(\omega) & = \sum_k t^\nu_{i,k}t^{\nu*}_{j,k} \frac{1}{\omega-\epsilon^\nu_k+\I 0^+},\\
\Gamma^{\nu,\textnormal{K}}_{ij}(\omega)&= [1-2n^\nu(\omega)]\, 2\I\, \textnormal{Im }\Gamma^{\nu,\mathrm{ret}}_{ij}(\omega),
\end{split}\end{equation}
where $n^\nu(\omega)$ is the Fermi function:
\begin{equation}
n^\nu(\omega) = \frac{1}{1+\exp[(\omega-\mu_\nu)/T_\nu]}.
\end{equation}
We assume that every part of the chain features a decay channel into at least one of the reservoirs. This is essential in order to obtain a well-defined steady state which is independent of the initial preparation of the chain.
In this paper, we exclusively work with reservoirs that feature a flat density of states; this so-called wide-band limit is justified if the bandwidth of the reservoirs exceeds all other energy scales. Moreover, we assume that the reservoirs are either at zero or infinite temperature. Eq.~(\ref{eq:self_hybrid}) then takes the simpler form
\begin{equation}\begin{split}
\label{eq:restrRes}
\Gamma^{\nu,\mathrm{ret}}(\omega)& =-\I \Gamma^\nu,\\
\Gamma^{\nu,\mathrm{K}}(\omega)& =-2\I[1-2n^\nu(\omega)]\Gamma^\nu\\& = -2\I\begin{cases}\sgn(\omega-\mu_\nu)\Gamma^\nu & T_\nu=0 \\ 0 & T_\nu=\infty,\end{cases}
\end{split}\end{equation}
where \(\Gamma^\nu\in\mathbb{C}^{N\times N}\) are positive, hermitian matrices characterizing the coupling to the individual reservoirs. Note that infinite-temperature reservoirs do not contribute to the Keldysh component.
\section{Green's functions}
\label{sec:greenfunc}
Out of equilibrium, the natural language to describe correlation functions is
the Keldysh formalism. We assume familiarity and refer the reader to other works for a thorough
introduction.\cite{Rammer1986} To make this paper self-contained, however, we will briefly introduce our notation and recapitulate some key concepts.
The single-particle Green's functions in the stationary state take the form
\begin{equation}
G(\omega)=\begin{pmatrix}
G^{11}(\omega) & G^{12}(\omega)\\
G^{21}(\omega) & G^{22}(\omega)
\end{pmatrix}=\begin{pmatrix}
G^\mathrm{ret}(\omega) & G^\mathrm{K}(\omega)\\
0 & G^\mathrm{adv}(\omega)
\end{pmatrix}.
\end{equation}
The retarded component reads
\begin{equation}\label{eq:gr_from_self}
\begin{split}
G^\mathrm{ret}_{ij}(t,t')&=G^\mathrm{ret}_{ij}(t-t')=-\I\theta(t-t')\left\langle \left[c_j^\dagger(t'),c_i(t)\right]_+\right\rangle,\\
G^\mathrm{ret}_{ij}(\omega)&=\int_{-\infty}^\infty \mathrm{d}t \mathrm{e}^{\I\omega t} G^\mathrm{ret}_{ij}(t)
=G^\mathrm{adv}_{ji}(\omega)^*,
\end{split}
\end{equation}
and is related to the non-interacting retarded Green's function $g^\textnormal{ret}(\omega)$ via the Dyson equation:
\begin{equation}\label{eq:dysonret}\begin{split}
G^\mathrm{ret}(\omega) & =\frac{1}{g^\mathrm{ret}(\omega)^{-1}-\Sigma^\mathrm{ret}(\omega)}, \\
g^\mathrm{ret}(\omega)&=\frac{1}{\omega-h- \Gamma^{\mathrm{ret}}(\omega)},
\end{split}\end{equation}
where the self-energy $\Sigma^\textnormal{ret}$ is associated with the two-particle interaction $v_{ijkl}$. The Keldysh component is given by
\begin{equation}
\begin{split}
G^\mathrm{K}_{ij}(t-t')&=\I\left[ \left\langle c_{j}^\dagger(t') c_i(t)\right\rangle-\left\langle c_i(t) c_{j}^\dagger(t') \right\rangle\right],\\
G^\mathrm{K}(\omega)&=\int_{-\infty}^\infty \mathrm{d} t \mathrm{e}^{i\omega t} G^\mathrm{K}(t),
\end{split}
\end{equation}
and the corresponding Dyson equation takes the form
\begin{equation}\label{eq:gk_from_self}
\begin{split}
G^\mathrm{K}& = G^\mathrm{ret}[ (g^\mathrm{ret})^{-1} g^\mathrm{K}(g^\mathrm{adv})^{-1}+ \Sigma^\mathrm{K}] G^\mathrm{adv}\\
& = G^\mathrm{ret}\Big[\Gamma^{\mathrm{K}} + \Sigma^\mathrm{K}\Big] G^\mathrm{adv},
\end{split}
\end{equation}
where we have exploited that
\begin{equation}\label{eq:gk}
g^\mathrm{K}=g^\mathrm{ret}\Gamma^{\mathrm{K}} g^\mathrm{adv}.
\end{equation}
All quantities in Eqs.~(\ref{eq:gr_from_self}) and (\ref{eq:gk_from_self}) are matrices defined by two single-particle indices as well as a single frequency. To simplify the notation, we will frequently employ multi-indices \(1=(i_1, \alpha_1)\) that include both this single-particle index $i_1$ as well as the Keldysh index $\alpha_1\in\{1,2\}$. The frequency-dependence will be denoted separately.
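To make the structure of Eqs.~(\ref{eq:dysonret}) and (\ref{eq:gk_from_self}) concrete, the following minimal \texttt{numpy} sketch evaluates $G^\mathrm{ret}(\omega)$ and $G^\mathrm{K}(\omega)$ for a toy chain that is end-coupled to two zero-temperature wide-band reservoirs; all parameter values as well as the vanishing self-energy are purely illustrative and not part of the formalism:
\begin{verbatim}
import numpy as np

# Illustrative parameters: N-site chain, end-coupled to two
# zero-temperature wide-band reservoirs at chemical potentials muL, muR.
N, t, Gam, muL, muR = 4, 1.0, 0.2, 0.5, -0.5

h = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)
GamL = np.zeros((N, N)); GamL[0, 0] = Gam
GamR = np.zeros((N, N)); GamR[-1, -1] = Gam

def G_ret(w, Sig_ret):
    # retarded Dyson equation; wide band: Gamma^ret = -i (GamL + GamR)
    return np.linalg.inv(w * np.eye(N) - h + 1j * (GamL + GamR) - Sig_ret)

def G_K(w, Sig_ret, Sig_K):
    # Keldysh Dyson equation; G^adv = (G^ret)^dagger
    Gam_K = -2j * (np.sign(w - muL) * GamL + np.sign(w - muR) * GamR)
    Gr = G_ret(w, Sig_ret)
    return Gr @ (Gam_K + Sig_K) @ Gr.conj().T

# non-interacting consistency check: G^K(w) is anti-hermitian
zero = np.zeros((N, N))
GK = G_K(0.3, zero, zero)
assert np.allclose(GK, -GK.conj().T)
\end{verbatim}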
If the entire system is in an equilibrium configuration ($T_\nu=T$, $\mu_\nu=\mu$), the Green's functions fulfill the fluctuation-dissipation theorem (FDT):
\begin{equation}\label{eq:fluc_dis}
G^\mathrm{K}(\omega)=\left[1-2n(\omega)\right]\left[ G^\mathrm{ret}(\omega)-G^\mathrm{adv}(\omega) \right].
\end{equation}
Out of equilibrium, this no longer holds true, but the Keldysh Green's function
can always be expressed via an effective distribution function
\(n^\mathrm{eff}(\omega)\in\mathbb{C}^{N\times N}\):
\begin{equation}\label{eq:fluc_dis_eff}
G^\mathrm{K}(\omega)=
G^\mathrm{ret}(\omega)\left[1-2n^\mathrm{eff}(\omega) \right]
-\left[1-2n^\mathrm{eff}(\omega) \right]G^\mathrm{adv}(\omega) .
\end{equation}
At constant \(\omega\), this is a \emph{Sylvester equation}, which can be solved
if \(G^\mathrm{ret}(\omega)\) and \(G^\mathrm{adv}(\omega)\) have no common eigenvalues.
In equilibrium, the distribution function \(n^\mathrm{eff}(\omega)= n(\omega)\mathbbm{1}\) becomes diagonal and one recovers the fluctuation-dissipation theorem. Out of equilibrium, \(n^\mathrm{eff}(\omega)\) provides an intuitive extension of the equilibrium distribution function.
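In practice, $n^\mathrm{eff}(\omega)$ can be obtained at each frequency with a standard Sylvester solver. A minimal sketch (toy chain in equilibrium at $T=0$; all parameters are again illustrative) recovers the Fermi-Dirac form dictated by the fluctuation-dissipation theorem:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

# Extract n_eff(omega) from G^K = G^ret (1-2n) - (1-2n) G^adv,
# i.e. A X + X B = Q with A = G^ret, B = -G^adv, X = 1 - 2 n_eff, Q = G^K.
N, t, Gam, mu, w = 4, 1.0, 0.2, 0.0, 0.3
h = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)
Gam_mat = np.zeros((N, N)); Gam_mat[0, 0] = Gam_mat[-1, -1] = Gam

Gr = np.linalg.inv(w * np.eye(N) - h + 1j * Gam_mat)
Ga = Gr.conj().T
GK = Gr @ (-2j * np.sign(w - mu) * Gam_mat) @ Ga  # both reservoirs at mu

X = solve_sylvester(Gr, -Ga, GK)                  # X = 1 - 2 n_eff
n_eff = (np.eye(N) - X) / 2

# FDT check: n_eff = theta(mu - w) * identity in equilibrium
assert np.allclose(n_eff, (w < mu) * np.eye(N), atol=1e-10)
\end{verbatim}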
\section{Second order fRG formulation}
\label{sec:sfrgFinite}
The functional renormalization group is an implementation of the RG idea on the level of single-particle correlation functions.
One starts by introducing a low-energy cutoff $\Lambda$ into the non-interacting Green's function, $g\to g^\Lambda$. By virtue of this replacement, all vertex functions (such as the self-energy) acquire a $\Lambda$-dependence; taking the derivative w.r.t.~\(\Lambda\) yields an infinite hierarchy of coupled differential (flow) equations that describe the changes of the vertex functions when the cutoff scale is altered. The flow equations are arranged in powers of the interaction strength $v_{ijkl}$. If one truncates them at a given order, one obtains a controlled approximation while still including an infinite resummation of higher-order contributions. An introduction to this method can be found in Refs.~\onlinecite{Metzner2012,kopietzBook}.
While the systems described in Sec.~\ref{sec:class_of_models} can easily be treated in a first-order scheme, such an approximation only produces frequency-independent corrections to the retarded self-energy.\cite{Jakobs2007,Gezzi2007,Karrasch2010,Kennes2012} Contributions to the Keldysh component of the self-energy are, however, expected to be essential in order to describe heating. Such effects may fundamentally change the phenomenology, especially in systems that are only weakly coupled to the environment. In this work, we aim to account for all second order terms. This has so far only been achieved in thermal equilibrium\cite{Karrasch2008,Heyder2014,Sbierski2017,Markhof2018,Weidinger2019} as well as for the single impurity Anderson model out of equilibrium.\cite{Jakobs2010b}
As a guide to the reader, we will now summarize the main characteristics of our second-order Keldysh FRG scheme. Auxiliary wide-band reservoirs are attached to all sites of the chain and serve as the cutoff.\cite{Karrasch2010,Jakobs2010a} The flow of the three-particle vertex is neglected. The key approximation of our approach is to modify the rhs of the flow equation for the two-particle vertex $\gamma^\Lambda$ by dropping both its own feedback (i.e., replacing $\gamma^\Lambda$ by the initial, bare interaction) as well as the feedback of the self-energy. The solution to the flow equation for $\gamma^\Lambda$ is then nothing but second-order perturbation-theory in the presence of the additional reservoirs (i.e., at the scale $\Lambda$). In contrast, the self-energy flow equation is solved in full and is not further approximated.
Our FRG scheme is correct to second order in the interaction but still contains an infinite number of higher-order terms. Despite the seemingly crude approximation to the (vertex) flow equations, their solution requires elaborate numerical techniques. By performing the calculation on several hundreds of computing nodes via MPI parallelization, we can access systems of up to 60 lattice sites.
\subsection{Choice of the cutoff}
\label{ssec:cutoff_finite}
When choosing the cutoff, we have two main goals: (i) after truncation, we want to preserve as many symmetries as possible while (ii) aiming for a numerically efficient algorithm. It is not straightforward to achieve both of these goals simultaneously; hence, our approach is a compromise, and we cannot rule out that a different cutoff might yield different, and potentially better, results.
The physical system that we want to study is only weakly coupled to reservoirs. This results in a sharply peaked density of states, which poses a significant numerical problem. Thus, it is advantageous to employ a cutoff which introduces additional scattering. During the flow, physical decay processes are expected to be generated, which should guarantee sufficient smoothing as the cutoff scale is successively lowered. In order to preserve the fluctuation-dissipation theorem in the equilibrium limit (artificially breaking the FDT would lead, e.g., to anomalous heating), we refrain from employing a cutoff which modifies the distribution function.
For these reasons, we use a reservoir cutoff scheme where additional, auxiliary wide-band reservoirs are attached to all sites of the chain. \cite{Karrasch2010,Jakobs2010a} These reservoirs are characterized by a hybridization \(\Lambda\), and their initial state is an equilibrium one governed by a temperature $T_\textnormal{cut}$ as well as a chemical potential $\mu_\textnormal{cut}$. In this paper, we will exclusively use $T_\textnormal{cut}=0$ or $T_\textnormal{cut}=\infty$. To be precise, we replace $\Gamma^\textnormal{ret}\to\Gamma^{\textnormal{ret},\Lambda}$ as well as $\Gamma^\textnormal{K}\to\Gamma^{\textnormal{K},\Lambda}$ and employ Eq.~(\ref{eq:restrRes}) to obtain
\begin{equation}\begin{split}
\label{eq:restrResLa}
\Gamma^{\mathrm{ret},\Lambda}(\omega)& =\Gamma^\textnormal{ret}(\omega)-\I\Lambda\mathbbm{1},\\
\Gamma^{\mathrm{K},\Lambda}(\omega) & =\Gamma^\textnormal{K}(\omega) -2\I [1-2n^\textnormal{cut}(\omega)]\Lambda\mathbbm{1}.
\end{split}\end{equation}
The contribution of the physical reservoirs is given by Eq.~(\ref{eq:restrRes}). The low-frequency properties of the system are suppressed via Eq.~(\ref{eq:restrResLa}), which thus acts as an infrared cutoff. When the auxiliary reservoirs are decoupled ($\Lambda=0$), the original, physical system is recovered. This cutoff has the advantage that the Green's functions at finite flow parameter have the same form as physical Green's functions. This allows us to simplify the flow equations significantly and moreover guarantees that causality is conserved automatically.\cite{Jakobs2010a} The same holds true for the fluctuation-dissipation theorem in the equilibrium limit if the temperature and chemical potential of the auxiliary reservoirs are chosen identical to the physical ones, $T_\nu=T_\textnormal{cut}=T$, $\mu_\nu=\mu_\textnormal{cut}=\mu$.
\subsection{Self-energy flow equation}
The flow of the self-energy is given by\cite{Metzner2012,kopietzBook}
\begin{equation}
\label{eq:fin_fo_flow}
\begin{split}
&\partial_\Lambda \Sigma_{1'1}^\Lambda(\omega)
=-\frac{\I}{2\pi} \int d\Omega \sum_{22'}\gamma^\Lambda_{1'2'12}(\omega, \Omega,\omega, \Omega) S^\Lambda_{22'}(\Omega),
\end{split}
\end{equation}
where \(\gamma^\Lambda\) denotes the one-particle irreducible two-particle vertex function, and $S^\Lambda$ is the single-scale propagator:
\begin{equation}\label{eq:single_scale}
S^\Lambda(\omega) =-G^\Lambda(\omega) \{\partial_\Lambda [g^{\Lambda}(\omega)^{-1}]\}G^\Lambda(\omega) = \partial_\Lambda^* G^\Lambda(\omega).
\end{equation}
Here, $\partial_\Lambda^*$ indicates a derivative that acts only on the explicit $\Lambda$-dependence of the cutoff (but not on $\Sigma^\Lambda$). As a reminder, we note that the multi-indices \(1',2',\dots\) contain the single-particle as well as the Keldysh indices. The retarded part of $S^\Lambda$ takes the form
\begin{equation}
S^{\textnormal{ret},\Lambda} = -\I G^{\textnormal{ret},\Lambda}G^{\textnormal{ret},\Lambda},
\end{equation}
where we have used Eqs.~(\ref{eq:dysonret}) and (\ref{eq:restrResLa}). The Keldysh component is given by
\begin{equation}\begin{split}
S^{\textnormal{K},\Lambda} = &\,S^{\textnormal{ret},\Lambda}\Big[\Gamma^{\textnormal{K},\Lambda}+\Sigma^{\textnormal{K},\Lambda}\Big] G^{\textnormal{adv},\Lambda} \\
+ &\, G^{\textnormal{ret},\Lambda}\Big[\Gamma^{\textnormal{K},\Lambda}+\Sigma^{\textnormal{K},\Lambda}\Big] S^{\textnormal{adv},\Lambda} \\
-& 2\I [1-2n^\textnormal{cut}(\omega)]G^{\textnormal{ret},\Lambda} G^{\textnormal{adv},\Lambda},
\end{split}\end{equation}
where we have employed Eqs.~(\ref{eq:gk_from_self}) and (\ref{eq:restrResLa}).
\subsection{Vertex flow equation}
Our goal is to extend the functional renormalization group beyond leading order in a way that includes all second order contributions but that is still numerically feasible. This will allow us to analyze the effect of inelastic processes and to understand how they modify the first-order behavior.
Time-translational invariance enforces energy conservation, and we can parametrize the frequency-dependence of the two-particle vertex \(\gamma^\Lambda(\omega_{1'},\omega_{2'},\omega_1,\omega_2)=\gamma^\Lambda(\Pi,X,\Delta)\) via the variables:
\begin{equation}
\label{eq:freq_trafo}
\begin{split}
\Pi& =\omega_1+\omega_2=\omega_{1'}+\omega_{2'},\\
X& =\omega_{2'}-\omega_1=\omega_{2}-\omega_{1'},\\
\Delta &=\omega_{1'}-\omega_1=\omega_{2}-\omega_{2'}.
\end{split}
\end{equation}
The flow-equation for the two-particle vertex function then reads:
\begin{widetext}
\begin{equation}
\label{eq:sord}
\begin{split}
\partial_\Lambda \gamma^\Lambda_{1'2'12}(\Pi,X,\Delta)=&\frac{\I}{2\pi}\int d\Omega\sum_{33'44'}\\
&\hspace{-2cm}\gamma^\Lambda_{1'2'34}\left(\Pi, \Omega+\frac{X-\Delta}{2}, \Omega-\frac{X-\Delta}{2}\right)S^\Lambda_{33'}\left(\frac{\Pi}{2}-\Omega\right)G^\Lambda_{44'}\left(\frac{\Pi}{2}+\Omega\right)\gamma^\Lambda_{3'4'12}\left(\Pi,\frac{X+\Delta}{2}+\Omega, \frac{X+\Delta}{2}-\Omega\right)\\
&\hspace{-2.4cm}+\gamma^\Lambda_{1'4'32}\left(\frac{\Pi+\Delta}{2}+\Omega, X, \frac{\Pi+\Delta}{2}-\Omega\right)\biggl[S^\Lambda_{33'}\left(\Omega-\frac{X}{2}\right)G^\Lambda_{44'}\left(\Omega+\frac{X}{2}\right)+\\
&\hspace{3.86cm}G^\Lambda_{33'}\left(\Omega-\frac{X}{2}\right)S^\Lambda_{44'}\left(\Omega+\frac{X}{2}\right)\biggr]\gamma^\Lambda_{3'2'14}\left(\Omega+\frac{\Pi-\Delta}{2}, X, \Omega- \frac{\Pi-\Delta}{2}\right)\\
&\hspace{-2.4cm}-\gamma^\Lambda_{1'3'14}\left(\Omega+\frac{\Pi-X}{2}, \Omega-\frac{\Pi-X}{2}, \Delta\right)\biggl[S^\Lambda_{33'}\left(\Omega-\frac{\Delta}{2}\right)G^\Lambda_{44'}\left(\Omega+\frac{\Delta}{2}\right)+\\
&\hspace{3.91cm}G^\Lambda_{33'}\left(\Omega-\frac{\Delta}{2}\right)S^\Lambda_{44'}\left(\Omega+\frac{\Delta}{2}\right)\biggr]\gamma^\Lambda_{4'2'32}\left(\frac{\Pi+X}{2}+\Omega, \frac{\Pi+X}{2}-\Omega,\Delta\right)\\
&+\mathcal{O}(U^3),
\end{split}
\end{equation}
\end{widetext}
where we already truncated the otherwise infinite hierarchy of differential equations by neglecting the flow of the three-particle vertex. This approximation is controlled in a perturbative sense and all terms neglected are at least of third order in the interaction $v_{ijkl}$, which we symbolically denote as \(\mathcal{O}\left(U^3\right)\).
If the frequency space is discretized via a grid with \(N_\Omega\) points, the resulting two-particle vertex has $\ord{N^4 N_\Omega^3}$ non-vanishing entries. Computing the rhs of an individual element is associated with a cost of \(\ord{N^4N_\Omega}\),\footnote{In this argument, we assume the grid is chosen fine enough to approximate the integration.} resulting in a complexity class of
\begin{equation}
\text{full 2nd order fRG}\in \ord{N^8 N_\Omega^4}
\end{equation}
to compute the flow of the two-particle vertex (the flow of the self-energy is significantly cheaper computationally). This is impractical even for small systems, and further approximations need to be devised. This will be the subject of Sec.~\ref{eq:sec_vertexsimple}.
\subsection{Initial condition}
When the coupling to the reservoirs is large ($\Lambda\to\infty$), the vertex functions can be obtained analytically:
\begin{equation}
\label{eq:ini_cond}
\begin{split}
\Sigma^{\mathrm{ret}, \Lambda\to\infty}_{i'i}&=\frac{1}{2}\sum_j v_{i'jij},~
\Sigma^{\mathrm{K},\Lambda\to\infty}_{i'i}=0,~
\gamma^{\Lambda\to\infty}_{1'2'12}=\bar v_{1'2'12},
\end{split}
\end{equation}
where we introduced the Keldysh-space version of the two-particle interaction
\begin{equation}
\bar v_{1'2'12}=\begin{cases} \frac{1}{2} v_{i_{1'} i_{2'} i_1 i_2} & \alpha_{1'}+\alpha_{2'}+\alpha_1+\alpha_2\ \text{odd}\\
0 & \text{otherwise.}\end{cases}
\end{equation}
The initial value of the retarded self-energy is frequency independent and can therefore be absorbed into the non-interacting Hamiltonian $h$.
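For orientation, the initial condition (\ref{eq:ini_cond}) is easily set up explicitly. The sketch below assumes an antisymmetrized $v_{ijkl}$ corresponding to a nearest-neighbor density-density interaction of strength $U$ (the Keldysh indices $\alpha=1,2$ are represented by $0,1$, which leaves the parity condition unchanged):
\begin{verbatim}
import numpy as np

N, U = 4, 1.0

# Antisymmetrized interaction for U * sum_n n_n n_{n+1} in the convention
# H = 1/4 sum v c+ c+ c c (single-particle shifts are ignored here).
v = np.zeros((N, N, N, N))
for i in range(N):
    for j in range(N):
        if abs(i - j) == 1:
            v[i, j, i, j] += U   # direct term
            v[i, j, j, i] -= U   # exchange term

# Keldysh-space bare vertex: vbar = v/2 for odd Keldysh index sums.
vbar = np.zeros((N, 2, N, 2, N, 2, N, 2))
for a in np.ndindex(2, 2, 2, 2):
    if sum(a) % 2 == 1:
        vbar[:, a[0], :, a[1], :, a[2], :, a[3]] = v / 2

# initial retarded self-energy: Sigma_{i'i} = 1/2 sum_j v_{i'jij}
Sig = 0.5 * np.einsum('ijkj->ik', v)
expected = np.diag([0.5 * U * (1 if i in (0, N - 1) else 2)
                    for i in range(N)])
assert np.allclose(Sig, expected)
\end{verbatim}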
\subsection{Simplification of the vertex flow equation}
\label{eq:sec_vertexsimple}
As a first step to reduce the complexity of the vertex flow equation (\ref{eq:sord}), we replace \(\gamma^\Lambda\) with its initial value \(\bar v\) on the rhs (i.e., we remove the feedback of the two-particle vertex into its own flow equation). As \(\gamma^\Lambda=\bar v+\ord{U^2}\), this only generates an error of \(\ord{U^3}\). The flow equation can then naturally be split up into three independent terms, \(\gamma^\Lambda=\bar v+\gamma^{\text{p},\Lambda}(\Pi)+\gamma^{\text{x},\Lambda}(X)+\gamma^{\text{d},\Lambda}(\Delta)\):
\begin{equation}
\label{eq:chan_decomp_flow}
\begin{split}
\partial_\Lambda \gamma^{\mathrm{p},\Lambda}_{1'2'12}(\Pi)=&\frac{\I}{2\pi}\int d\Omega\sum_{33'44'}\bar v_{1'2'34}S^\Lambda_{33'}G^\Lambda_{44'}\bar v_{3'4'12},\\
\partial_\Lambda \gamma^{\mathrm{x},\Lambda}_{1'2'12}(X)=&\frac{\I}{2\pi}\int d\Omega\sum_{33'44'}\bar v_{1'4'32}\Big[S^\Lambda_{33'}G^\Lambda_{44'}\\ & \hspace*{2.95cm}+G^\Lambda_{33'}S^\Lambda_{44'}\Big]\bar v_{3'2'14},\\
\partial_\Lambda \gamma^{\mathrm{d},\Lambda}_{1'2'12}(\Delta)=&\frac{-\I}{2\pi}\int d\Omega\sum_{33'44'}\bar v_{1'3'14}\Big[S^\Lambda_{33'}G^\Lambda_{44'}\\&\hspace*{2.95cm}+G^\Lambda_{33'}S^\Lambda_{44'}\Big]\bar v_{4'2'32},
\end{split}
\end{equation}
with the initial condition being $\gamma^{\alpha,\Lambda\to\infty}=0$, $\alpha=\textnormal{p,x,d}$. We have omitted the frequency arguments of $S^\Lambda$ and $G^\Lambda$; they are given explicitly in Eq.~(\ref{eq:sord}). The flow equation of the self-energy, which is not subject to any further approximations, can be decomposed correspondingly:
\begin{equation}\label{eq:flow_self_channel_ssfinite}
\begin{split}
&\partial_\Lambda\Sigma^\Lambda_{1'1}(\omega) =-\frac{\I}{2\pi} \int d\Omega\sum_{22'} S^\Lambda_{22'}(\Omega)\times\\
&\Big[\bar v_{1'2'12} + \gamma^{\mathrm{p},\Lambda}_{1'2'12}(\Omega+\omega)+\gamma^{\mathrm{x},\Lambda}_{1'2'12}(\Omega-\omega)+\gamma^{\mathrm{d},\Lambda}_{1'2'12}(0)\Big].
\end{split}\end{equation}
Note that \(\gamma^{\mathrm{d},\Lambda}(\Delta)\) is only needed at \(\Delta=0\).
Secondly, we remove the self-energy feedback on the rhs of the flow equation for the two-particle vertex by replacing \(G^\Lambda\to g^\Lambda\) as well as $S^\Lambda\to s^\Lambda= \partial_\Lambda g^\Lambda$, which again only introduces errors of \(\ord{U^3}\). At first sight, this might appear to make the problem more complicated from a numerical perspective as without feeding back the inelastic contributions of the self-energy, the Green's functions might be sharply peaked and evaluating the integrals might become more difficult. However, our approximation allows us to analytically integrate the flow equations. By exploiting that \(\partial_\Lambda g_{33'}^\Lambda g_{44'}^\Lambda= s_{33'}^\Lambda g_{44'}^\Lambda+g_{33'}^\Lambda s_{44'}^\Lambda\) as well as \(\bar v_{1'2'12}=-\bar v_{2'1'12}=-\bar v_{1'2'21}\) and by renaming $\Omega\to-\Omega$ in the case of $\gamma^{\mathrm{p},\Lambda}$, we obtain
\begin{equation}
\label{eq:chan_decomp_flow2}
\begin{split}
\gamma^{\mathrm{p},\Lambda}_{1'2'12}(\Pi)=&\frac{\I}{4\pi}\int d\Omega\sum_{33'44'}g^\Lambda_{33'}\left(\frac{\Pi}{2}-\Omega\right)g^\Lambda_{44'}\left(\frac{\Pi}{2}+\Omega\right)\\&\hspace*{3cm}\times \bar v_{1'2'34}\bar v_{3'4'12},\\[1ex]
\gamma^{\mathrm{x},\Lambda}_{1'2'12}(X)=&\frac{\I}{2\pi}\int d\Omega\sum_{33'44'}g^\Lambda_{33'}\left(\Omega-\frac{X}{2}\right)g^\Lambda_{44'}\left(\Omega+\frac{X}{2}\right)\\& \hspace*{3cm}\times \bar v_{1'4'32}\bar v_{3'2'14},\\[1ex]
\gamma^{\mathrm{d},\Lambda}_{1'2'12}(\Delta)=&\frac{-\I}{2\pi}\int d\Omega\sum_{33'44'}g^\Lambda_{33'}\left(\Omega-\frac{\Delta}{2}\right)g^\Lambda_{44'}\left(\Omega+\frac{\Delta}{2}\right)\\& \hspace*{3cm}\times \bar v_{1'3'14}\bar v_{4'2'32}.
\end{split}
\end{equation}
Eq.~(\ref{eq:chan_decomp_flow2}) is nothing but the perturbation-theory result for the two-particle vertex in the presence of a finite flow parameter \(\Lambda\). The cutoff enters in the bare Green's functions $g^\Lambda$.
While at \(T=0\) some components of the Green's functions and single-scale propagators are discontinuous, this is not true for the vertex functions, which one can understand as follows: The rhs of the flow equations (\ref{eq:chan_decomp_flow2}) is governed by a convolution of two functions $g^\Lambda$ that decay sufficiently quickly for large frequencies; this yields a continuous function.
The new flow equations (\ref{eq:chan_decomp_flow2}) each depend on a single frequency (and not on three frequencies), which drastically simplifies calculations. In addition, the dependence on the single-particle indices is reduced:
\begin{equation}
\label{eq:sup_chan_two_part}
\begin{split}
v_{i_{1'}i_{2'}\bullet\bullet}=0 \lor v_{\bullet\bullet i_{1}i_{2}}=0~~&\Rightarrow~\gamma^{\text{p},\Lambda}_{1'2'12}(\Pi)=0,\\
v_{i_{1'}\bullet\bullet i_{2}}=0 \lor v_{\bullet i_{2'}i_{1}\bullet}=0~~&\Rightarrow~\gamma^{\text{x},\Lambda}_{1'2'12}(X)=0,\\
v_{i_{1'}\bullet i_{1}\bullet}=0 \lor v_{\bullet i_{2'} \bullet i_{2}}=0~~&\Rightarrow~\gamma^{\text{d},\Lambda}_{1'2'12}(\Delta)=0.
\end{split}
\end{equation}
This follows directly from Eq.~(\ref{eq:chan_decomp_flow2}); e.g., $\gamma^{\textnormal{p},\Lambda}_{1'2'12}$ contains a term $v_{1'2'34}$ and thus vanishes for those indices $1',2'$ where $v_{1'2'34}=0$. For a nearest-neighbor interaction, Eq.~(\ref{eq:sup_chan_two_part}) simplifies to (other short-ranged interactions follow similarly)
\begin{equation}
\begin{split}
\gamma^{\text{p},\Lambda}_{1'2'12}(\Pi )&=0\hspace{0.5cm} \forall\, |i_{1'}-i_{2'}|\neq 1 \lor |i_{1}-i_{2}|\neq 1,\\
\gamma^{\text{x},\Lambda}_{1'2'12}(X )&=0\hspace{0.5cm} \forall\, |i_{1'}-i_{2 }| > 1 \lor |i_{2'}-i_{1}|>1,\\
\gamma^{\text{d},\Lambda}_{1'2'12}(\Delta)&=0\hspace{0.5cm} \forall\, |i_{1'}-i_{1 }| > 1 \lor |i_{2'}-i_{2}|>1.
\end{split}\label{eq:sparseTwoPart}
\end{equation}
By virtue of Eqs.~(\ref{eq:sup_chan_two_part}) and (\ref{eq:sparseTwoPart}), only $\ord{N^2}$ terms are generated in each channel $\gamma^{\alpha,\Lambda}$, $\alpha=\textnormal{p,x,d}$ of the vertex flow equation (\ref{eq:chan_decomp_flow2}). Moreover, the summation over the single-particle indices $i_3$, $i_{3'}$, $i_4$, and $i_{4'}$ in Eq.~(\ref{eq:chan_decomp_flow2}) only involves a limited number of terms and does not scale with $N$. The numerical cost of computing $\gamma^\Lambda$ at a given $\Lambda$ is then given by
\begin{equation}\label{eq:frg_scaling1}
\underbrace{\ord{N^2 N_\Omega}}_{\text{\#components}}\underbrace{\ord{N_\Omega}}_{\text{integration}}.
\end{equation}
This does not include the cost of computing the Green's functions $g^\Lambda$, which is expected to scale like \(\ord{N^3N_\Omega}\) and which should be done beforehand. While Eqs.~(\ref{eq:chan_decomp_flow2}) can be solved numerically, it turns out to be more efficient to employ a semi-analytical solution, which we will discuss in the next section.
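The counting that underlies Eq.~(\ref{eq:frg_scaling1}) can be checked explicitly; a small sketch for the p-channel condition of Eq.~(\ref{eq:sparseTwoPart}) confirms the $\ord{N^2}$ number of non-vanishing single-particle components (the constant factor from the Keldysh indices is omitted):
\begin{verbatim}
# count non-vanishing single-particle components of gamma^p for a
# nearest-neighbor interaction, Eq. (sparseTwoPart)
def count_p_channel(N):
    return sum(1 for i1p in range(N) for i2p in range(N)
                 for i1 in range(N) for i2 in range(N)
                 if abs(i1p - i2p) == 1 and abs(i1 - i2) == 1)

for N in (4, 8, 16):
    assert count_p_channel(N) == (2 * (N - 1)) ** 2   # ~ 4 N^2
\end{verbatim}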
\subsection{Analytically computing the perturbative two-particle vertex}
\label{sec:fin_ala_vert}
We will now derive a semi-analytic way to determine the rhs of Eq.~(\ref{eq:chan_decomp_flow2}) at a given value of the flow parameter $\Lambda$. To improve readability, we introduce the shorthand notation for the effective (retarded) Hamiltonian
\begin{equation}
\bar h=h+\Gamma^{\textnormal{ret},\Lambda},
\end{equation}
where $\Gamma^{\textnormal{ret},\Lambda}$ has been defined in Eq.~(\ref{eq:restrResLa}) and is frequency-independent for the case of physical wide-band reservoirs that we focus on exclusively [see Eq.~(\ref{eq:restrRes})]. As \(\bar h\) is not hermitian, it has separate left and right eigensystems:
\begin{equation}\label{eq:barheig}
\begin{split}
\bar h\left| q \right\rangle = \lambda_q \left| q \right\rangle,~~~
\left\langle \bar q \right| \bar h = \left\langle \bar q \right| \lambda_q.
\end{split}
\end{equation}
The positivity of $-\textnormal{Im }\Gamma^{\textnormal{ret},\Lambda}$ ensures that \(\Im(\lambda_q)<0\ \forall q\).
We can now rewrite the non-interacting retarded and advanced Green's functions as:
\begin{equation}
\label{eq:grDecomp}
\begin{split}
g^{\textnormal{ret},\Lambda}(\omega)&=\frac{1}{\omega-\bar h}=\sum_q \frac{1}{\omega-\lambda_q}\ket{q}\bra{\bar{q}}=\sum_q \frac{1}{\omega-\lambda_q}Q_q,\\
g^{\textnormal{adv},\Lambda}(\omega)&=\frac{1}{\omega-\bar h^\dagger}=\sum_q \frac{1}{\omega-\lambda^*_q}\ket{\bar q}\bra{q}=\sum_q \frac{1}{\omega-\lambda^*_q}Q^\dagger_q,
\end{split}
\end{equation}
where we introduced the matrix \(Q_q=\ket{q}\bra{\bar q}\). Next, we simplify the Keldysh Green's function $g^{\textnormal{K},\Lambda}(\omega)$. By virtue of Eq.~(\ref{eq:fluc_dis_eff}), $g^{\textnormal{K},\Lambda}(\omega)$ can be expressed in terms of an effective distribution function $n^\textnormal{eff}(\omega)$, which can be related to the hybridization $\Gamma^{\textnormal{K},\Lambda}$ via the Dyson equation (\ref{eq:gk}):
\begin{equation}\label{eq:sylvester2}\begin{split}
g^{\textnormal{K},\Lambda}& =g^{\textnormal{ret},\Lambda}\Gamma^{\textnormal{K},\Lambda}g^{\textnormal{adv},\Lambda}\\ &=
g^{\textnormal{ret},\Lambda}(1-2n^\textnormal{eff}) - (1-2n^\textnormal{eff})g^{\textnormal{adv},\Lambda}\\[1ex]
&\Leftrightarrow \Gamma^{\textnormal{K},\Lambda} = \bar h (1-2n^\textnormal{eff}) - (1-2n^\textnormal{eff})\bar h^\dagger.
\end{split}\end{equation}
All the reservoirs contribute additively to $n^\textnormal{eff}$, and the only frequency-dependence stems from the Fermi functions $n^\nu(\omega)$ and $n^\textnormal{cut}(\omega)$. The effective distribution function can thus be expressed in terms of frequency-independent operators $\eta_\nu$ and $\eta_\textnormal{cut}$:
\begin{equation}\begin{split}\label{eq:sylvesterIndiRes}
1-2n^\textnormal{eff}(\omega) & = \sum_{\alpha=\nu,\textnormal{cut}} \eta_\alpha \left[1-2n^\alpha(\omega)\right]\\& = \sum_{\substack{\alpha=\nu,\textnormal{cut}\\ T_\alpha=0}}\eta_\alpha\sgn(\omega-\mu_\alpha),
\end{split}\end{equation}
where we have used that each of the reservoirs is held at either zero or infinite temperature. By comparing Eqs.~(\ref{eq:restrRes}), (\ref{eq:sylvester2}), and (\ref{eq:sylvesterIndiRes}), we obtain
\begin{equation}\begin{split}
-2\I\Gamma^\nu&=\bar h \eta_\nu- \eta_\nu\bar h^\dagger,~~ -2\I\Lambda\mathbbm{1}=\bar h \eta_\textnormal{cut}- \eta_\textnormal{cut}\bar h^\dagger.
\end{split}\end{equation}
The resulting equations are of a Sylvester form and can be solved via the Bartels-Stewart algorithm.\cite{Bartels1972} Note that a unique solution for $n^\textnormal{eff}$ exists if and only if \(\bar h\) has no real eigenvalues, which is equivalent to the statement that all degrees of freedom have a decay channel into one of the reservoirs.
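The following sketch illustrates Eqs.~(\ref{eq:barheig}) and (\ref{eq:grDecomp}) together with the Sylvester step in one place; \texttt{scipy}'s \texttt{solve\_sylvester} implements the Bartels-Stewart algorithm, and all chain parameters are again purely illustrative:
\begin{verbatim}
import numpy as np
from scipy.linalg import eig, solve_sylvester

# Toy effective Hamiltonian hbar = h + Gamma^{ret,Lambda} at scale Lambda.
N, t, Gam, Lam = 4, 1.0, 0.2, 0.5
h = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)
GamL = np.zeros((N, N)); GamL[0, 0] = Gam
GamR = np.zeros((N, N)); GamR[-1, -1] = Gam
hbar = h - 1j * (GamL + GamR + Lam * np.eye(N))

# left/right eigensystem of the non-hermitian hbar, biorthonormalized
# such that <qbar|q'> = delta_{qq'}
lam, VL, VR = eig(hbar, left=True, right=True)
VL = VL / np.diag(VL.conj().T @ VR).conj()
Q = [np.outer(VR[:, q], VL[:, q].conj()) for q in range(N)]  # |q><qbar|

# spectral representation of g^ret, Eq. (grDecomp)
w = 0.3
g_ret = sum(Qq / (w - lq) for Qq, lq in zip(Q, lam))
assert np.allclose(g_ret, np.linalg.inv(w * np.eye(N) - hbar))

# frequency-independent eta_nu from the Sylvester equation
# hbar eta - eta hbar^dagger = -2i Gamma^nu (Bartels-Stewart)
eta_L = solve_sylvester(hbar, -hbar.conj().T, -2j * GamL)
assert np.allclose(hbar @ eta_L - eta_L @ hbar.conj().T, -2j * GamL)
\end{verbatim}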
Using Eqs.~(\ref{eq:grDecomp}) and (\ref{eq:sylvester2}), we can now express all the terms appearing on the rhs of Eq.~(\ref{eq:chan_decomp_flow2}) via complex-valued integrals. A specific example is given by
\begin{equation}
\label{eq:exampleTwoGF}
\begin{split}
&\int \dOp \Omega\ g^{\textnormal{ret},\Lambda}_{i_3i_{3'}}(\pm\Omega) g^{\textnormal{K},\Lambda}_{i_4i_{4'}}(\Omega+\omega)\\
&=\int \dOp\Omega \sum_{q_1} \frac{1}{\pm\Omega-\lambda_{q_1}} \left(Q_{q_1}\right)_{i_3i_{3'}} \sum_{q_2}\sum_{\alpha}\sgn(\Omega+\omega-\mu_\alpha)\\
&\times\biggl[\frac{1}{\Omega+\omega-\lambda_{q_2}} \left( Q_{q_2}\eta_\alpha\right)_{i_4i_{4'}}-\frac{1}{\Omega+\omega-\lambda_{q_2}^*} \left(\eta_\alpha Q^\dag_{q_2}\right)_{i_4i_{4'}}\biggr]\\[1ex]
&=\pm\sum_{q_1q_2}\sum_{\alpha} \left(Q_{q_1}\otimes Q_{q_2} \eta_\alpha\right)_{i_3i_{3'}i_4i_{4'}}f_1(\pm \lambda_{q_1}, \lambda_{q_2}-\omega,\mu_\alpha)\\
&\hspace{1.5cm}-\left(Q_{q_1}\otimes \eta_\alpha Q^\dag_{q_2} \right)_{i_3i_{3'}i_4i_{4'}}f_1(\pm \lambda_{q_1}, \lambda_{q_2}^*-\omega,\mu_\alpha),
\end{split}
\end{equation}
where $\omega\in\{\Pi,X,\Delta\}$, we have shifted the integration variable $\Omega$, and introduced
\begin{equation}
\begin{split}
f_1(a,b,\mu)&=\int d\Omega \frac{1}{\Omega-a} \frac{1}{\Omega-b} \sgn(\Omega-\mu).
\end{split}
\end{equation}
Importantly, the frequency integrals $f_1$ in Eq.~(\ref{eq:exampleTwoGF}) do not depend on the single-particle indices $i_3,i_{3'},i_4,i_{4'}$ and can be computed analytically. A detailed account of this and of how to treat all other terms appearing on the rhs of Eq.~(\ref{eq:chan_decomp_flow2}) can be found in Appendix~\ref{ch:twoGF}.
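For orientation, the definition above admits the compact closed form
\begin{equation*}
f_1(a,b,\mu)=\frac{2\ln(\mu-b)-2\ln(\mu-a)-\I\pi\left[\sgn(\Im a)-\sgn(\Im b)\right]}{a-b},
\end{equation*}
valid on the principal branch of the logarithm for \(a\neq b\) and \(\Im a,\Im b\neq 0\). We stress that this particular way of organizing the result is merely a sketch (which we verified against direct quadrature, see below); the systematic bookkeeping of all integrals is relegated to Appendix~\ref{ch:twoGF}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def f1_closed(a, b, mu):
    # principal branch; assumes a != b and Im(a), Im(b) != 0
    return (2 * np.log(mu - b) - 2 * np.log(mu - a)
            - 1j * np.pi * (np.sign(a.imag) - np.sign(b.imag))) / (a - b)

def f1_quad(a, b, mu):
    g = lambda x: 1.0 / ((x - a) * (x - b))
    hi = (quad(lambda x: g(x).real, mu, np.inf)[0]
          + 1j * quad(lambda x: g(x).imag, mu, np.inf)[0])
    lo = (quad(lambda x: g(x).real, -np.inf, mu)[0]
          + 1j * quad(lambda x: g(x).imag, -np.inf, mu)[0])
    return hi - lo

for (a, b) in [(0.7 - 0.3j, -0.2 - 0.5j), (-0.4 + 2j, 0.1 - 1j)]:
    assert abs(f1_closed(a, b, 0.1) - f1_quad(a, b, 0.1)) < 1e-6
\end{verbatim}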
The complexity of calculating the two-particle vertex for a given value of $\Lambda$ using this strategy has a drastically different, and in some cases favorable, scaling:
\begin{equation}\label{eq:frg_scalingana}
\underbrace{\ord{N^2 N_\Omega}}_{\text{\#components}}~~\underbrace{\ord{N^2}}_{q_{1,2}\text{-summation}}.
\end{equation}
The main advantage compared to Eq.~(\ref{eq:frg_scaling1}) is the reduction from \(\ord{N_\Omega^2}\) to \(\ord{N_\Omega}\) by avoiding the numerical calculation of the integrals; this is achieved at the cost of an additional internal summation over \(N^2\) entries. Note that to obtain this complexity class, it is essential to first compute and store all \(Q_q, \eta_\alpha, Q_q\eta_\alpha, \eta_\alpha Q_q\) for a given $\Lambda$.
\subsection{Frequency integrations}
\label{ssec:freg_integ}
\paragraph{Discretization of the self-energy}
We just illustrated that the frequency integration on the rhs of Eq.~(\ref{eq:chan_decomp_flow2}) can be carried out analytically. The integral that appears in the self-energy flow equation (\ref{eq:flow_self_channel_ssfinite}), however, needs to be performed numerically. It is thus necessary to discretize the frequency argument of $\Sigma^\Lambda(\omega)$. In this work, we employ a fixed, equidistant frequency grid and assume that the self-energy is step-wise constant between grid points. We explicitly tested that our results are converged w.r.t.~the grid parameters such as its number of elements $N_\Omega$, its spacing, and its largest frequency.
At the beginning (end) of the flow, the single-scale propagator as well as the two-particle vertex decay on a scale given by the coupling $\Lambda$ to the auxiliary reservoir (by the physical bandwidth); hence, they need to be evaluated for large frequencies (for frequencies on the scale of the bandwidth).\footnote{Note that in order to evaluate observables at the end of the flow, the self-energy is required for all frequencies within the support of the Green's functions. For that reason, the largest frequency in our grid always has to be much larger than the physical bandwidth.} In order to avoid having to adapt the grid during the solution of the flow equations, we note that the first-order contribution to the self-energy is frequency independent and thus \(\Sigma_{1'1}^\Lambda(\omega)-\Sigma_{1'1}^\Lambda(\omega')\sim \ord{U^2}\). We can therefore always approximate the self-energy at arbitrarily large frequencies by its value at the largest frequency in our grid; this only leads to errors in \(\ord{U^3}\).
\paragraph{Indefinite integrals}
After discretizing the self-energy, the integral in Eq.~(\ref{eq:flow_self_channel_ssfinite}) can be performed numerically; we employ the \texttt{runge\_kutta\_cash\_karp54} implementation provided by \texttt{boost}.\cite{boost} The indefinite integral is recast as
\begin{equation}
\int_{-\infty}^\infty \dOp \omega f(\omega)=\int_{-A}^A \dOp \omega f(\omega)+\int_{-\frac{1}{A}}^\frac{1}{A} \frac{\dOp \eta}{\eta^2}f\left(\frac{1}{\eta}\right),
\label{eq:indefInteg}
\end{equation}
where we substituted \(\eta=\frac{1}{\omega}\) in the last term. Those terms in Eq.~(\ref{eq:flow_self_channel_ssfinite}) that involve a Keldysh Green's function feature discontinuities at every chemical potential $\mu_\nu$ and $\mu_\textnormal{cut}$ of any of the zero-temperature reservoirs. It is most efficient to split up the corresponding integrals such that none of them contains a discontinuity.
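A minimal implementation of this strategy might look as follows; the quadrature routine and the two test integrands are illustrative stand-ins for the adaptive routine employed in practice:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def integrate_real_line(f, A=2.0, breaks=()):
    # Eq. (indefInteg): finite part on [-A, A] plus compactified tails,
    # eta = 1/omega; 'breaks' lists discontinuities such as the mu_nu
    pts = sorted(p for p in breaks if -A < p < A)
    inner = sum(quad(f, lo, hi)[0]
                for lo, hi in zip([-A] + pts, pts + [A]))
    tail = quad(lambda eta: f(1.0 / eta) / eta**2,
                -1.0 / A, 1.0 / A, points=[0.0])[0]
    return inner + tail

# checks: int dw/(w^2+1) = pi, int dw sgn(w-mu)/(w^2+1) = -2 arctan(mu)
assert abs(integrate_real_line(lambda w: 1 / (w**2 + 1)) - np.pi) < 1e-6
mu = 0.5
val = integrate_real_line(lambda w: np.sign(w - mu) / (w**2 + 1),
                          breaks=[mu])
assert abs(val + 2 * np.arctan(mu)) < 1e-6
\end{verbatim}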
\paragraph{Lookup tables for vertex functions}
The two-particle vertex functions $\gamma^{\textnormal{p},\Lambda}(\Omega+\omega)$ and $\gamma^{\textnormal{x},\Lambda}(\Omega-\omega)$ that appear on the rhs of the self-energy flow equation (\ref{eq:flow_self_channel_ssfinite}) need to be evaluated for $N_\Omega^2$ different frequency arguments in order to perform the convolution. Since the analytical approach outlined in Sec.~\ref{sec:fin_ala_vert} is numerically expensive, it is favorable to compute a lookup table for $\gamma^{\textnormal{p},\Lambda}$ and $\gamma^{\textnormal{x},\Lambda}$ on a given set of frequencies beforehand; it is efficient to (ab)use an integration routine to determine an optimal grid. In between grid points, the vertex functions are determined using a cubic spline interpolation. We made sure that our results are independent of the grid parameters.
It is important to point out that the number of grid points needed to faithfully approximate the vertex functions does not necessarily scale with the system size $N$. We will come back to this issue in Sec.~\ref{sec:tb_chains}.
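Schematically, the lookup-table strategy amounts to the following (the Lorentzian stand-in for a vertex component and the grid parameters are purely illustrative):
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def gamma_component(w):     # hypothetical, expensive vertex component
    return 1.0 / (w**2 + 0.1)

grid = np.linspace(-10.0, 10.0, 2001)   # tabulated once per Lambda step
table = CubicSpline(grid, gamma_component(grid))

# cheap interpolated evaluations inside the self-energy integrand
w = 0.237
assert abs(table(w) - gamma_component(w)) < 1e-4
\end{verbatim}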
\subsection{Numerical Details, Parallelization}
\label{ssec:paralleli}
There are two reasons to devise a parallelized scheme to solve the FRG flow equations. First, the computational effort to treat large systems of \(\ord{50}\) sites is significant, and the wall-time performance can be enhanced greatly using parallelization. Secondly, the lookup table for the entire vertex function takes up large amounts of memory and cannot be stored on a single machine.\footnote{For a system of \(\ord{50}\) sites with a nearest-neighbor interaction, each $\gamma^{\alpha,\Lambda}$ can be stored in a 1D-array of size \(\ord{10\mathrm{MB}}\) per frequency.} We aim at using MPI parallelization over hundreds of computing nodes, and it is essential to minimize the necessary communication between different machines.
The main idea is to calculate the rhs of $\partial_\Lambda\Sigma^\Lambda_{1'1}(\omega)$ in parallel by splitting up the indices $1',1$ into appropriate sets, which are handled by individual nodes. To be precise, we proceed as follows:
\begin{enumerate}
\item At a given $\Lambda$, calculate the eigenvalues $\lambda_q$ and eigenvectors $\ket{q}$, $\bra{\bar q}$ of $\bar h$ [see Eq.~(\ref{eq:barheig})], determine $\eta_\alpha$ via Eqs.~(\ref{eq:sylvester2}) and (\ref{eq:sylvesterIndiRes}), and compute the products $\eta_\alpha Q_q$ and $Q_q\eta_\alpha$ $\forall q,\alpha$, $Q_q=\ket{q}\bra{\bar q}$.
\item Split up the indices $1',1$ into disjoint sets (see below). On each node, compute the rhs of the self-energy flow equation (\ref{eq:flow_self_channel_ssfinite}) for a given set and for all frequencies $\omega$. To this end, perform the following steps:
\item[3a.] Start a loop over the indices $2$ and $2'$ that appear on the rhs of Eq.~(\ref{eq:flow_self_channel_ssfinite}); these loops only contain a finite (small) number of terms $\gamma^{\textnormal{p},\Lambda}_{1'2'12}$ and $\gamma^{\textnormal{x},\Lambda}_{1'2'12}$ via Eqs.~(\ref{eq:sup_chan_two_part}) and (\ref{eq:sparseTwoPart}).
\item[3b.] Perform the $\Omega$-integrals that involve $\gamma^{\textnormal{p/x},\Lambda}_{1'2'12}(\Omega\pm\omega)$ numerically; $\gamma^{\textnormal{d},\Lambda}_{1'2'12}$ is only needed at zero frequency. To this end, first calculate a frequency lookup table for $\gamma^{\textnormal{p/x},\Lambda}_{1'2'12}(\tilde\omega)$. For a fixed $\tilde\omega$, proceed as follows:
\item[3c.] Evaluate Eq.~(\ref{eq:chan_decomp_flow2}) in the spirit of Eq.~(\ref{eq:exampleTwoGF}); see also the appendix. In particular,\\[1ex]
\hspace*{0.5cm} $\bullet$ loop over $q_{1,2}$ and $\alpha$,\\[1ex]
\hspace*{1cm} $\bullet$ compute $f_{0,1,2}$ analytically,\\[1ex]
\hspace*{1cm} $\bullet$ sum over $3,3',4,4'$, restrict single-particle\\\hspace*{1cm} $\phantom{\bullet}$ sums via Eqs.~(\ref{eq:sup_chan_two_part}) and (\ref{eq:sparseTwoPart}).
\end{enumerate}
The computational bottleneck of the FRG algorithm is the calculation of the two-particle vertex [see Eq.~(\ref{eq:frg_scalingana})], i.e., establishing the lookup table for $\gamma^{\textnormal{p/x},\Lambda}_{1'2'12}(\tilde\omega)$ in step 3c. If the indices $1',1$ are grouped such that within one set the single-particle parts $i_{1'}$ and $i_1$ are chosen from the same spatial region, Eqs.~(\ref{eq:sup_chan_two_part}) and (\ref{eq:sparseTwoPart}) restrict $i_{2'}$ and $i_2$ to (roughly) the same region. Thus, each node needs to compute the lookup table for (roughly) disjoint sets of indices $1',2',1,2$, rendering parallelization highly-efficient. Note that while \(\gamma^{\mathrm{d},\Lambda}_{1'2'12}\) is required for \(\ord{N}\) values of \(2',2\) on every machine, it only enters at zero frequency.
The only quantity that needs to be sent across nodes is the self-energy, which feeds back into its own flow equation (\ref{eq:flow_self_channel_ssfinite}). Note that the corresponding amount of data is small, and MPI parallelization is highly-efficient.
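A heavily simplified \texttt{mpi4py} sketch of this layout reads as follows; the helper \texttt{rhs\_block}, the dummy numerics inside it, and all array shapes are mere placeholders for steps 3a--3c:
\begin{verbatim}
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N, N_omega = 48, 200
Sigma = np.zeros((N, N, N_omega), dtype=complex)     # Sigma_{1'1}(omega)
my_sites = np.array_split(np.arange(N), size)[rank]  # contiguous block

def rhs_block(Sig, sites):
    # placeholder for steps 3a-3c: build the vertex lookup table and
    # perform the Omega integration for the owned indices only
    out = np.zeros_like(Sig)
    out[sites] = 1e-3 * Sig[sites]                   # dummy numerics
    return out

# one explicit Euler step of the flow; only Sigma is communicated
local = rhs_block(Sigma, my_sites)
total = np.empty_like(local)
comm.Allreduce(local, total, op=MPI.SUM)
Sigma += -0.1 * total                                # dLambda = -0.1
\end{verbatim}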
\begin{figure*}
\begin{tikzpicture}
\coordinate(A0) at (0.6cm,0cm);
\foreach \a in {1,2,...,7}{
\if\a4%
\node[circle, minimum size=0.6cm, right=1.1cm*\a of A0, thick](A\a){$\dots$};
\else
\node[draw, circle, minimum size=0.6cm, right=1.1cm*\a of A0, thick](A\a){};
\fi;
}
\foreach[evaluate=\a as \an using int(\a+1)] \a in {1,2,...,6}
\path[]
(A\a.north east) edge [
text=black,
thick,
shorten <=2pt,
shorten >=2pt,
bend left=50
] node[above]{\(t\)} (A\an.north west);
\foreach[evaluate=\a as \an using int(\a+1)] \a in {1,2,...,6}
\path[]
(A\a.south east) edge [
text=black,
thick,
shorten <=2pt,
shorten >=2pt,
bend right=50,
dashed
] node[below]{\(U\)} (A\an.south west);
\coordinate[left=2cm of A1](r1);
\draw[thick, shading=axis, left color=white, right color=custBlue] (r1)++(-90:1.5 and 0.5) arc(-90:90:1.5 and 0.5);
\coordinate[right=2cm of A7](r2);
\draw[thick, shading=axis, right color=white, left color=custBlue] (r2)++(270:1.5 and 0.5) arc(270:90:1.5 and 0.5);
\path[]
(r1)++(35:1.5 and 0.5) edge [
text=black,
thick,
shorten <=2pt,
shorten >=2pt,
bend left=50
] node[above]{\(\Gamma\)} (A1.north west);
\path[]
(r2)++(145:1.5 and 0.5) edge [
text=black,
thick,
shorten <=2pt,
shorten >=2pt,
bend right=50
] node[above]{\(\Gamma\)} (A7.north east);
\end{tikzpicture}
\caption{Pictorial representation of the system employed in Sec.~\ref{sec:tb_chains}. We consider a tight-binding chain ($N$ sites) of spinless fermions with a nearest-neighbor hopping $t$ and a nearest-neighbor interaction $U$. The chain is end-coupled to wide-band reservoirs. The reservoirs are characterized by a hybridization \(\Gamma\) and are initially in thermal equilibrium at zero temperature and with chemical potentials $\mu_\text{L,R}$. }
\label{fig:sys_finite_chain}
\end{figure*}%
\subsection{Perturbation theory}
\label{sec:pt}
Within the FRG scheme used in this work, the two-particle vertex is computed in second-order perturbation theory in $\bar v$. This is achieved by neglecting the feedback of both the self-energy and the two-particle vertex within the original flow equation (\ref{eq:sord}); the result is given in Eq.~(\ref{eq:chan_decomp_flow2}). As a reference, we now illustrate how to calculate the self-energy within perturbation theory in an easy fashion using the existing FRG formalism.
The first-order contribution to the self-energy can be obtained by replacing the single-scale propagator as well as the two-body vertex on the rhs of Eq.~\eqref{eq:flow_self_channel_ssfinite} by their lowest-order expansion (\(s^\Lambda\) and \(\bar v\), respectively) and by integrating the resulting equation:\cite{christophdr}
\begin{equation}\label{eq:pt_first}
\begin{split}
&\Sigma^{\textnormal{1PT},\Lambda}_{1'1}(\omega) =-\frac{\I}{2\pi} \int d\Omega\sum_{22'} g^\Lambda_{22'}(\Omega)\bar v_{1'2'12}.
\end{split}\end{equation}
In order to generalize this to second order, we compute the leading-order expansion of the single-scale propagator:
\begin{equation}\label{eq:pt_s}\begin{split}
S^\Lambda&=\partial_\Lambda^* G^\Lambda =\partial_\Lambda^*\left[g^\Lambda+g^\Lambda\Sigma^{\textnormal{1PT},\Lambda}g^\Lambda+\ord{U^2}\right] \\&=s^\Lambda+g^\Lambda\Sigma^{\textnormal{1PT},\Lambda} s^\Lambda +s^\Lambda\Sigma^{\textnormal{1PT},\Lambda} g^\Lambda+\ord{U^2} \\
&=s^\Lambda +\ord{U}.
\end{split}\end{equation}
Since \(\gamma^{\mathrm{p/x/d},\Lambda}\sim U^2\), the second-order contribution to the rhs of Eq.~\eqref{eq:flow_self_channel_ssfinite} that is associated with the x- and p-channel is given by
\begin{equation}\label{eq:flow_pt_px}
\begin{split}
&-\frac{\I}{2\pi} \int d\Omega\sum_{22'} s^\Lambda_{22'}(\Omega)\Big[\gamma^{\mathrm{p},\Lambda}_{1'2'12}(\Omega+\omega)+\gamma^{\mathrm{x},\Lambda}_{1'2'12}(\Omega-\omega)\Big]\\
=&-\partial_\Lambda \frac{\I}{2\pi} \int d\Omega\sum_{22'} g^\Lambda_{22'}(\Omega)\Big[\gamma^{\mathrm{p},\Lambda}_{1'2'12}(\Omega+\omega)\Big].
\end{split}\end{equation}
The derivative in the last line acts on $g^\Lambda$ as well as on $\gamma^{\textnormal{p},\Lambda}$, which yields the first and second term in the first line, respectively. The latter becomes clear if we use Eq.~(\ref{eq:chan_decomp_flow2}) and rename indices as well as the integration variables. Next, we discuss the second-order contributions to the rhs of Eq.~(\ref{eq:flow_self_channel_ssfinite}) that are attributed to the d-channel as well as to the single-scale propagator:
\begin{equation}\label{eq:flow_pt_d}
\begin{split}
&-\frac{\I}{2\pi} \int d\Omega\sum_{22'} s^\Lambda_{22'}(\Omega)\gamma^{\mathrm{d},\Lambda}_{1'2'12}(0)\\
&+\left\{s^\Lambda+g^\Lambda \Sigma^{\textnormal{1PT},\Lambda} s^\Lambda+s^\Lambda \Sigma^{\textnormal{1PT},\Lambda} g^\Lambda\right\}_{22'}(\Omega) \bar v_{1'2'12}\\
=&-\partial_\Lambda\frac{\I}{2\pi} \int d\Omega\sum_{22'} g^\Lambda_{22'}(\Omega)\Big[\bar v_{1'2'12} +\gamma^{\mathrm{d},\Lambda}_{1'2'12}(0)\Big],
\end{split}\end{equation}
where we have plugged in Eq.~(\ref{eq:pt_first}); the terms $g^\Lambda \Sigma^{\textnormal{1PT},\Lambda} s^\Lambda\bar v$ can be identified with the term $g^\Lambda \partial_\Lambda\gamma^{\mathrm{d},\Lambda}$ via Eq.~(\ref{eq:chan_decomp_flow2}).
The second-order perturbation theory result for the self-energy can now be obtained by modifying the rhs of Eq.~(\ref{eq:flow_self_channel_ssfinite}) according to Eqs.~(\ref{eq:flow_pt_px}) and (\ref{eq:flow_pt_d}) and by integrating w.r.t.~$\Lambda$:
\begin{equation}\label{eq:flow_pt}
\begin{split}
&\Sigma^{\textnormal{2PT},\Lambda}_{1'1}(\omega) =-\frac{\I}{2\pi} \int d\Omega\sum_{22'} g^\Lambda_{22'}(\Omega)\times\\
&\Big[\bar v_{1'2'12} + \gamma^{\mathrm{p},\Lambda}_{1'2'12}(\Omega+\omega)+\gamma^{\mathrm{d},\Lambda}_{1'2'12}(0)\Big].
\end{split}\end{equation}
We reiterate that \(\gamma^{\textnormal{p/d},\Lambda}\) denote the two-particle vertices within second-order perturbation theory [see Eq.~\eqref{eq:chan_decomp_flow2}]; \(\gamma^{\mathrm{x},\Lambda}\) does not appear separately. Note that Eq.~(\ref{eq:flow_pt}) is completely general and holds even if the first-order contribution $\Sigma^{\textnormal{1PT},\Lambda}$ to the self-energy does not vanish.\cite{christophdr}
After plugging in the analytic expressions for \(\gamma^{\textnormal{p/d},\Lambda}\) derived in Sec.~\ref{sec:fin_ala_vert}, the remaining frequency integral can be performed analytically using the techniques of Appendix~\ref{ch:twoGF}. While the ensuing expressions are lengthy, they are particularly helpful for large systems where the bare Green's functions are sharply-peaked and numerical integrations become demanding.
\section{Application to 1D chains}
\label{sec:tb_chains}
Here, we want to study one-dimensional metallic systems driven out of their equilibrium state. These systems are interesting even within equilibrium as a Luttinger liquid state emerges at finite interactions, which replaces the non-interacting Fermi-liquid picture and signals the onset of critical, collective behavior.\cite{giamarchi,Mastropietro2013} The so-called Tomonaga-Luttinger model -- obtained by linearizing the dispersion of the electrons around the Fermi-points -- is the standard paradigm to understand the low-energy behavior of such systems.\cite{Tomonaga1950,Luttinger1963,Schoenhammer1997} However, motivating the Tomonaga-Luttinger model outside of the equilibrium (low-energy) realm becomes ambiguous. Recently, studies tried to extend its predictive power to out-of-equilibrium setups by taking into account non-linear contributions due to the band-curvature. This phenomenology was coined non-linear Luttinger liquids, and interesting predictions with respect to non-equilibrium critical behavior have been made.\cite{Imambekov2009,Imambekov2012} However, due to the ambiguity of the low-energy assumption outside of equilibrium, these studies require a firm benchmark based on microscopic model calculations -- a goal that our novel FRG algorithm was set up to contribute to.
\subsection{Model}
We consider a one-dimensional lattice of spinless fermions end-coupled to leads, which act as particle reservoirs.
We will restrict ourselves to the simplest case described by the Hamiltonian:
\begin{equation}
\begin{split}
H_\mathrm{chain}&=H_\mathrm{tb}+H_\mathrm{int},\\
H_\mathrm{tb}&=t\sum_{n=1}^{N-1} c_n^\dag c^\vdag_{n+1} +\mathrm{h.c.},\\
H_\mathrm{int}&=U\sum_{n=1}^{N-1} \left(c_n^\dag c^\vdag_n -\frac{1}{2}\right)\left(c_{n+1}^\dag c^\vdag_{n+1} -\frac{1}{2}\right),
\end{split}
\end{equation}
where \(N\) denotes the number of sites in the interacting chain, and $t$ and \(U\) are the strength of the nearest-neighbor hopping and interaction, respectively. Unless mentioned otherwise, we always set $t=1$.
Two wide-band reservoirs ($N_{\rm res}=2$) that we refer to as \emph{left} ($1=\mathrm{L}$) and \emph{right} ($2=\mathrm{R}$) are coupled to the ends of the chain and are characterized by the hybridization function
\begin{equation}\begin{split}
\Gamma^{1,\mathrm{ret}}_{ij}&= -\I\Gamma^\mathrm{L}_{ij}=-\I\Gamma \delta_{i,1}\delta_{j,1},\\\Gamma^{2,\mathrm{ret}}_{ij}&=-\I\Gamma^\mathrm{R}_{ij}=-\I\Gamma\delta_{i,N}\delta_{j,N}.
\end{split}\end{equation}
Initially, the reservoirs are prepared in thermal equilibrium at zero temperature (\(T_\nu=0\)) and with chemical potentials \(\mu_1=\mu_\text{L}\), \(\mu_2=\mu_\text{R}\). A pictorial representation of the system is shown in Fig.~\ref{fig:sys_finite_chain}.
Before we discuss the results, we briefly turn to a peculiarity of this model which is relevant for the numerical efficiency of our algorithm.
In the absence of interactions, the system is perfectly coherent in the extended region between the reservoirs; the typical coherence time grows with \(N\). This is reflected in observables such as the local density of states (LDOS)
\begin{equation}
\rho_i(\omega)=-\frac{1}{\pi}\Im[G^\mathrm{ret}_{ii}(\omega)],
\end{equation}
shown in Fig.~\ref{fig:smoothingTwoPartVert}. It features \(\ord{N}\) peaks of width \(\ord{\Gamma/N}\) and requires \(\ord{N}\) frequency points to be faithfully represented. This, however, does not imply that the number of discretization points $N_\Omega$ that appear within our algorithm needs to scale with \(N\), since the discretizations discussed in Sec.~\ref{ssec:freg_integ} are only used to represent the vertex functions, not the Green's functions themselves.
In contrast to the Green's functions, the two-particle vertex functions feature \(\ord{N^2}\) peaks of width \(\ord{\Gamma/N}\), making them smooth in the limit \(N\to \infty\). This is explicitly demonstrated in Fig.~\ref{fig:smoothingTwoPartVert}. A similar argument holds for the self-energy.
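This can be made explicit in a few lines; the following sketch evaluates the non-interacting boundary LDOS for the parameters of Fig.~\ref{fig:smoothingTwoPartVert} (the frequency window is an illustrative choice):
\begin{verbatim}
import numpy as np

N, t, Gam = 24, 1.0, 0.2
h = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)
Gam_mat = np.zeros((N, N)); Gam_mat[0, 0] = Gam_mat[-1, -1] = Gam

def ldos(i, w):   # rho_i(w) = -Im G^ret_ii(w) / pi, non-interacting
    Gr = np.linalg.inv(w * np.eye(N) - h + 1j * Gam_mat)
    return -Gr[i, i].imag / np.pi

ws = np.linspace(-2.5, 2.5, 4001)
rho = np.array([ldos(0, w) for w in ws])

# the boundary LDOS integrates to ~1 and features O(N) sharp peaks
print(rho.sum() * (ws[1] - ws[0]))
\end{verbatim}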
\begin{figure}[t]
\includegraphics[width=\columnwidth]{./fig2.pdf}
\caption{ \emph{Left panel}: The local density of states of a non-interacting chain ($U=0$) of length \(N=4,8,\dots,24\) (bottom to top) with \(\Gamma=0.2\) in thermal equilibrium ($\mu_\text{L}=\mu_\text{R}=0$). It features \(N\) peaks of width \(\sim \Gamma/N\). \emph{Right panel}: The \(\left(\gamma^\mathrm{p}\right)^{2221}_{1221}\)--component of the two-particle vertex (\ref{eq:chan_decomp_flow2}) at $\Lambda=0$ for \(\Gamma=0.2\) in thermal equilibrium ($\mu_\text{L}=\mu_\text{R}=0$). In contrast to the LDOS, the two-particle vertex features \(\ord{N^2}\) peaks and converges to a smooth function for $N\to\infty$.}
\label{fig:smoothingTwoPartVert}
\end{figure}%
\subsection{Results in equilibrium}
\label{ssec:eq_results}
We will now test the second-order FRG approach in thermal equilibrium. Since this limit is well understood (in marked contrast to the non-equilibrium case), it provides a natural benchmark for our algorithm. In particular, we investigate the influence of the different choices of the temperature $T_\textnormal{cut}$ and chemical potential $\mu_\textnormal{cut}$ associated with the FRG cutoff, which yield different results due to the truncation of the flow equation hierarchy. Investigating the cutoff-dependence thus provides a way to study the reliability of our approximation scheme. We emphasize that the physical reservoirs are always held at zero temperature $T_\textnormal{L}=T_\textnormal{R}=0$.
In thermal equilibrium, it is natural to work with a cutoff whose temperature $T_\textnormal{cut}$ and chemical potential $\mu_\textnormal{cut}$ equal those of the physical reservoirs $\mu_\textnormal{L}=\mu_\textnormal{R}$, $T_\textnormal{L}=T_\textnormal{R}=0$. Out-of-equilibrium, however, it is a priori unclear what cutoff scheme one should employ; results are only meaningful if they do not depend on the particular choice of the cutoff. We will now illustrate that the equilibrium data displays a strong cutoff-dependence. This is a first hint towards the inadequacy of our second-order FRG scheme in treating finite, interacting systems out of equilibrium.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{./fig3.pdf}
\caption{ \emph{Left panel}: Self-energy of the single impurity Anderson model in equilibrium ($N=2, t=0,\mu_\mathrm{L,R,cut}=0$). In the limit of small $U$, perturbation theory [solid lines, see Eq.~(\ref{eq:siam})] and FRG data (dashed lines) successively approach each other. \emph{Right panel}: If the FRG flow equations are modified according to Sec.~\ref{sec:pt}, perturbation theory is reproduced exactly.}
\label{fig:siam}
\end{figure}%
\subsubsection{Single Impurity Anderson Model}
We first perform an instructive comparison to test the validity of our numerics. For a two-site system ($N=2$) with zero hopping ($t=0$) and $\mu_\textnormal{L}=\mu_\textnormal{R}=0$, one obtains a version of the single-impurity Anderson model. The self-energy can be computed in perturbation theory, and the result in equilibrium reads\cite{Hamiltonian1984}
\begin{equation}\label{eq:siam}\begin{split}
\Sigma(&\omega)=U^2\int d\omega_1 d\omega_2 d\omega_3 \frac{\rho_0(\omega_1)\rho_0(\omega_2) \rho_0(\omega_3)}{\omega-\omega_1-\omega_2+\omega_3}\\ &\times\big[ n(\omega_1)n(\omega_2) n(-\omega_3) + n(-\omega_1)n(-\omega_2) n(\omega_3) \big],
\end{split}\end{equation}
where $n(\omega)=n^\textnormal{L}(\omega)=n^\textnormal{R}(\omega)$ is the Fermi function, and $\rho_0(\omega)$ denotes the non-interacting density of states at one of the sites. In the limit of small $U$, the FRG data successively approaches this result (see Fig.~\ref{fig:siam}, left panel). Moreover, one can exactly reproduce perturbation theory by modifying the flow equation according to Sec.~\ref{sec:pt} (right panel of the figure).
\subsubsection{Fluctuation-dissipation theorem}
In equilibrium, the fluctuation-dissipation theorem holds [see Eq.~\eqref{eq:fluc_dis}]. The effective (non-equilibrium) distribution function defined in Eq.~\eqref{eq:fluc_dis_eff} should therefore reduce to the zero-temperature Fermi-Dirac distribution function,
\begin{equation}\label{eq:fluc_dis2}
n^\text{eff}_{ij}(\omega)=n(\omega)\delta_{i,j}=\theta(-\omega)\delta_{i,j}.
\end{equation}
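As a minimal illustration of this reduction, the following Python sketch computes the effective distribution of a single non-interacting level coupled to two wide-band reservoirs; the relation $n^\text{eff}=[1-G^\mathrm{K}/(G^\mathrm{ret}-G^\mathrm{adv})]/2$ employed below is our paraphrase of Eq.~\eqref{eq:fluc_dis_eff}, and all parameter values are illustrative.
\begin{verbatim}
# Sketch: effective distribution of a single non-interacting level
# coupled to two wide-band reservoirs. In equilibrium it must reduce
# to the zero-temperature Fermi function. Parameters illustrative.
import numpy as np

eps, GL, GR = 0.0, 0.1, 0.1
muL = muR = 0.0                      # equilibrium

def nF(w, mu):                       # zero-temperature Fermi function
    return (w < mu).astype(float)

w = np.linspace(-2.0, 2.0, 401)
G_ret = 1.0 / (w - eps + 1j * (GL + GR))
G_adv = G_ret.conj()
Sig_K = -2j * (GL * (1 - 2 * nF(w, muL)) + GR * (1 - 2 * nF(w, muR)))
G_K = G_ret * Sig_K * G_adv

n_eff = 0.5 * (1.0 - (G_K / (G_ret - G_adv)).real)
assert np.allclose(n_eff, nF(w, muL))   # FDT: reproduces the step
\end{verbatim}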
We now test how well the FDT is preserved within our approximate FRG approach; the results are summarized in Fig.~\ref{fig:noBiasCutoffComp}. The left panel shows the \((1,1)\)-component of the effective distribution function. We find that a zero-temperature cutoff (\(T_\mathrm{cut}=\mu_\mathrm{cut}=0\)) preserves the FDT (up to numerical errors associated with integration routines). An infinite-temperature cutoff (\(T_\mathrm{cut}=\infty\)), however, introduces artificial heating: it leads to a significant population of states above, and depopulation of states below, the Fermi level and severely violates the FDT even at the end of the flow. In Fig.~\ref{fig:noBiasVarGamComp}, we illustrate how this artificial heating depends on the reservoir coupling $\Gamma$ and on the distance to the boundary. A strong coupling to the physical zero-temperature reservoirs reduces this artificial heating substantially; the distribution function evolves towards a Fermi-Dirac distribution as $\Gamma$ is increased. This `cooling' effect, however, is local and almost absent in the bulk (see the right panel of Fig.~\ref{fig:noBiasVarGamComp}).
\begin{figure}[t]
\includegraphics[width=\linewidth,clip]{./fig4.pdf}
\caption{FRG results in equilibrium with \(\mu_\mathrm{L,R,cut}=0\), reservoir couplings \(\Gamma=0.2\), and various values of the interaction $U$.
\emph{Left panel}: The effective distribution at the boundary of a system of \(N=12\) sites. The upper and lower panel show data obtained using an infinite- and zero-temperature cutoff scheme, respectively. Only the latter reproduces the correct equilibrium distribution function (\ref{eq:fluc_dis2}) stipulated by the fluctuation-dissipation theorem; the former yields artificial heating in the form of a decreased discontinuity at the Fermi surface. \emph{Right panel}: Imaginary part of the self-energy at the boundary, which serves as a measure of the magnitude of inelastic processes at a given energy \(\omega\). In equilibrium, scattering at the Fermi surface is suppressed;\cite{giamarchi2003quantum,Samokhin1998} while data obtained with \(T_\mathrm{cut}=0\) reproduces this correctly, the infinite-temperature cutoff introduces unphysical scattering.
}
\label{fig:noBiasCutoffComp}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{./fig5.pdf}
\caption{
The effective distribution function at the boundary (left panel) and in the bulk (right panel) in equilibrium ($\mu_\mathrm{L,R}=0$) for $U=1$, $N=12$, and different reservoir couplings $\Gamma$. The data was computed using an infinite-temperature cutoff scheme. The exact result is given by Eq.~(\ref{eq:fluc_dis2}). The physical reservoirs `cool' the system towards zero temperature only at strong couplings and close to the boundary. }
\label{fig:noBiasVarGamComp}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{./fig6.pdf}
\caption{
The local density of states at the boundary (left panel) and in the bulk (right panel) in equilibrium ($\mu_\mathrm{L,R,cut}=0$) for \(\Gamma=0.2\) and \(U=1\). The various lines show data obtained for different cutoff temperatures $T_\textnormal{cut}$ and system sizes $N$.
In a Luttinger liquid, the local density of states is expected to vanish at the Fermi surface. In contrast, our infinite-temperature cutoff introduces inelastic scattering and yields a smooth LDOS (see also Fig.~\ref{fig:noBiasCutoffComp}).
}
\label{fig:noBiasCutoffCompDOS}
\end{figure}
\subsubsection{Scattering induced by hot reservoirs}
Next, we investigate the imaginary part of the self-energy \(\Im(\Sigma^\mathrm{adv}_{ii})(\omega)\). This quantity roughly measures the amount of inelastic scattering generated at site $i$. Results obtained for $T_\textnormal{cut}=0$ as well as $T_\textnormal{cut}=\infty$ are shown in the right panel of Fig.~\ref{fig:noBiasCutoffComp}. In equilibrium, it is well understood\cite{giamarchi2003quantum,Samokhin1998} that no additional inelastic scattering should be generated close to the Fermi edge, \(\Im(\Sigma^\mathrm{adv}_{ii})(\omega=0)=0\). The zero-temperature cutoff reproduces this result. The infinite-temperature cutoff, however, artificially introduces such processes via the flow. This problem is exacerbated for larger systems as the influence of the physical coupling on the center of the chain decreases.
\subsubsection{Density of states}
Finally, we study the local density of states (see Fig.~\ref{fig:noBiasCutoffCompDOS}). While we cannot reproduce hallmarks of Luttinger liquid physics such as critical power laws\cite{giamarchi2003quantum} for small systems, we clearly find that an infinite-temperature cutoff yields a smooth LDOS even for \(N=12\). This again relates to the unphysical generation of an inelastic scattering length scale (even at the Fermi surface) that is comparable to (or smaller than) the system size. The zero-temperature cutoff does not yield a smooth LDOS. This is another demonstration of the cutoff-dependence in a physical quantity within our FRG approach.
\subsubsection{Summary}
We have demonstrated that our FRG results feature a strong cutoff-dependence in equilibrium. Since it is a priori unclear what cutoff scheme to employ away from this limit, our approach seems unsuitable in its present form. One loophole, however, remains: thermal equilibrium is (counter-intuitively) the most challenging situation, both from a numerical perspective (integrals become sharply peaked) and potentially on fundamental grounds (e.g., a delicate interplay of collective phenomena leads to the suppression of scattering around the Fermi level). Therefore, we will now analyze whether or not the detrimental cutoff-dependence still shows up in non-equilibrium.
\subsection{Results at finite bias}
\label{ssec:bias_results}
Throughout this section, we drive the system out of equilibrium by applying a finite bias voltage \(\mu_\mathrm{L}=-\mu_\mathrm{R}=1\). We will compare data obtained using three different cutoff schemes: i) $T_\textnormal{cut}=\mu_\textnormal{cut}=0$, ii) $T_\textnormal{cut}=0,\mu_\textnormal{cut}=\mu_\textnormal{R}=-1$, and iii) $T_\textnormal{cut}=\infty$ (the choice of the chemical potential $\mu_\textnormal{cut}$ is irrelevant in the latter case). We again emphasize that the physical reservoirs are always held at zero temperature $T_\textnormal{L}=T_\textnormal{R}=0$.
\begin{figure}[t]
\includegraphics[width=\columnwidth,clip]{./fig7.pdf}
\caption{FRG results for a chain of $N=24$ sites with \(\Gamma=0.2\) and \(U=1\) obtained using different cutoff schemes. The system is driven out of equilibrium by a bias voltage $\mu_\text{L,R}=\pm1$. The second-order perturbation-theory result is shown for comparison (PT). \emph{Left panel}: The effective distribution function \eqref{eq:fluc_dis_eff} at the boundary. For small $\Gamma$ and $U$, one expects a piecewise-constant function with two steps of height \(1/2\) at the chemical potentials of the reservoirs.\cite{severindr}
\emph{Center panel}: The local density of states at the boundary. \emph{Right panel}: The occupation of the individual sites (solid) and the local current (dashed).
}
\label{fig:biasCutoffComp}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\columnwidth,clip]{./fig8.pdf}
\caption{FRG results for the imaginary part of self-energy at the boundary of a chain with $\Gamma=0.2$, \(\mu_\mathrm{L,R}=\pm 1\), and for various values of $U$. The upper (lower) row shows data obtained using an infinite-temperature (zero-temperature) cutoff with $\mu_\mathrm{cut}=0$. The columns contain different system sizes $N=6,12,24,48$ (the dashed gray line in the bottom-right panel was calculated for $U=1$ with \(N=60\)). With increasing $N$, the dependence of the results on the cutoff becomes more pronounced. This illustrates that our FRG scheme is insufficient to reliably address the out-of-equilibrium properties of large, interacting systems.}
\label{fig:biasCutoffCompSig}
\end{figure}
\subsubsection{Cutoff-dependence of physical observables}
In Fig.~\ref{fig:biasCutoffComp}, we show the effective distribution function, the local density of states, the local occupation number, and the local current for a system with \(N=24\) sites. FRG results were obtained using three different cutoff schemes; second-order perturbation-theory is included for comparison.
We find that all cutoffs yield an effective distribution function \(n_{11}^\mathrm{eff}(\omega)\) that breaks inversion symmetry, implying that the distribution function is not uniform throughout the chain (see the upper left panel of Fig.~\ref{fig:biasCutoffComp}).
This is in line with general expectations and remedies a shortcoming of a first-order FRG approach.\cite{Jakobs2007} Unfortunately, the distribution function generally features a strong cutoff-dependence.
The local density of states at the boundary is shown in the upper right panel of Fig.~\ref{fig:biasCutoffComp}.
To reduce finite-size effects, we introduce an artificial broadening via
\begin{equation}
\rho_i(\omega)=-\frac{1}{\pi}\Im\left\{ \frac{1}{\left[ G^\mathrm{ret}(\omega)\right]^{-1} +\I \Gamma_\mathrm{smear}} \right\}_{ii},\ \Gamma_\mathrm{smear}=0.2.
\end{equation}
For a cutoff with \(T_{\rm cut}=\mu_{\rm cut}=0\), this quantity starts to show a cusp at \(\omega=0\) as well as shoulders at \(\omega=\mu_\mathrm{L}\) and \(\omega=\mu_\mathrm{R}\). Such features are expected in the ground state of a Luttinger liquid at \(\mu=0\), \(\mu_\mathrm{L}\), and \(\mu_\mathrm{R}\), respectively. They hint towards the survival of some of the ground-state Luttinger liquid physics even at finite bias. However, these features are not present in the data obtained using the other cutoff schemes; their appearance is uncontrolled.
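For orientation, a minimal non-interacting ($U=0$) Python implementation of this broadening prescription reads as follows; the wide-band reservoir self-energy $-\I\Gamma$ on the end sites and all parameter values are illustrative.
\begin{verbatim}
# Sketch: broadened LDOS of the non-interacting chain (U = 0) with
# wide-band reservoir self-energies -i*Gamma on the end sites.
import numpy as np

N, t, Gamma, G_smear = 24, 1.0, 0.2, 0.2
H = -t * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))

def ldos(w, i):
    Ginv = (w * np.eye(N) - H).astype(complex)   # inverse retarded GF
    Ginv[0, 0] += 1j * Gamma
    Ginv[-1, -1] += 1j * Gamma
    # artificial broadening: shift the inverse GF by i*Gamma_smear
    G = np.linalg.inv(Ginv + 1j * G_smear * np.eye(N))
    return -G[i, i].imag / np.pi

print(ldos(0.0, 0))   # boundary LDOS at omega = 0
\end{verbatim}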
The occupations (see the lower panel of Fig.~\ref{fig:biasCutoffComp})
\begin{equation}
n_i=\left\langle c_i^\dagger c_i\right\rangle=\frac{1}{2}-\frac{\I}{2}\int\frac{\dOp\omega}{2\pi} G^\mathrm{K}_{ii}(\omega)
\end{equation}
show a gradient-type behavior within the infinite-temperature cutoff, while in the other schemes Friedel oscillations dominate. The stationary-state current
\begin{equation}
I_i=\Re \int \frac{\dOp \omega}{2\pi} G^\mathrm{K}_{i(i+1)}(\omega)
\end{equation}
is conserved within perturbation theory, while the FRG schemes violate particle-number conservation to $\mathcal{O}(U^3)$. This is especially severe for \(T_\mathrm{cut}=\infty\), which leads to a strong suppression of the current in the middle of the chain.
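For the non-interacting chain ($U=0$), both quantities follow directly from $G^\mathrm{K}=G^\mathrm{ret}\Sigma^\mathrm{K}G^\mathrm{adv}$; the Python sketch below evaluates the two expressions above literally (the wide-band form of $\Sigma^\mathrm{K}$ and all parameter values are illustrative).
\begin{verbatim}
# Sketch: occupations and local current of the non-interacting chain
# (U = 0) at finite bias via G^K = G^ret Sigma^K G^adv.
import numpy as np

N, t, Gamma, muL, muR = 12, 1.0, 0.2, 1.0, -1.0
H = -t * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))

def GK(w):
    Ginv = (w * np.eye(N) - H).astype(complex)
    Ginv[0, 0] += 1j * Gamma
    Ginv[-1, -1] += 1j * Gamma
    G_ret = np.linalg.inv(Ginv)
    Sig_K = np.zeros((N, N), complex)        # wide-band reservoirs
    Sig_K[0, 0] = -2j * Gamma * (1 - 2 * float(w < muL))
    Sig_K[-1, -1] = -2j * Gamma * (1 - 2 * float(w < muR))
    return G_ret @ Sig_K @ G_ret.conj().T

ws = np.linspace(-4.0, 4.0, 2001)
GK_int = sum(GK(w) for w in ws) * (ws[1] - ws[0]) / (2 * np.pi)
n_i = (0.5 - 0.5j * np.diag(GK_int)).real    # occupations
I_i = np.real(np.diag(GK_int, 1))            # current between i, i+1
\end{verbatim}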
In a nutshell, the FRG results for physical quantities feature a strong cutoff-dependence out of equilibrium.
\subsubsection{Inelastic scattering and scaling}
As in equilibrium, it is insightful to analyze the imaginary part of the self-energy. Results are shown in Fig.~\ref{fig:biasCutoffCompSig} for two different cutoff schemes; perturbation theory is included for comparison. We find that a cutoff with \(T_\mathrm{cut}=\mu_\mathrm{cut}=0\) (with $T_\mathrm{cut}=\infty$) consistently underestimates (overestimates) the amount of inelastic processes compared to perturbation theory.
As such, this is not necessarily problematic. However, the difference between perturbation theory and
the FRG data increases with increasing $N$. In that sense, the system does not behave perturbatively, as `secular' terms in the system size $N$, such as $U^3N$, arise. Note that this strong cutoff-dependence does not only occur in the bulk (i.e., far from the cold physical reservoirs) but also right at the boundary of the interacting system. Thus, our second-order FRG scheme is not suited to study systems out of thermal equilibrium, at least with the (reservoir) cutoffs employed.
\begin{table*}[t]
\begin{tabular}{l||p{1cm}|p{4.5cm}|p{7cm}}
& ret & adv & K\\
\hline
\hline
ret & \(0\) & \(\pm Q_{q_1}\otimes Q^\dagger_{q_2}f_0(\pm\lambda_{q_1}, \lambda^*_{q_2}-\omega)\) & \(\pm Q_{q_1} \otimes Q_{q_2} \eta_\alpha f_1(\pm\lambda_{q_1},\lambda_{q_2}-\omega, \mu_\alpha)\)\newline\(\mp Q_{q_1} \otimes\eta_\alpha Q^\dag_{q_2} f_1(\pm\lambda_{q_1},\lambda^*_{q_2}-\omega, \mu_\alpha)\) \\
\hline
adv & & \(0\) &\(\pm Q^\dag_{q_1} \otimes Q_{q_2} \eta_\alpha f_1(\pm\lambda^*_{q_1},\lambda_{q_2}-\omega, \mu_\alpha)\)\newline\(\mp Q^\dag_{q_1} \otimes\eta_\alpha Q^\dag_{q_2} f_1(\pm\lambda^*_{q_1},\lambda^*_{q_2}-\omega, \mu_\alpha)\) \\
\hline
K& & &
\(\pm Q_{q_1}\eta_{\alpha_1} \otimes Q_{q_2} \eta_{\alpha_2} f_2(\pm\lambda_{q_1},\lambda_{q_2}-\omega, \mu_{\alpha_1}, \mu_{\alpha_2})\)
\newline
\(\mp Q_{q_1}\eta_{\alpha_1} \otimes \eta_{\alpha_2}Q^\dagger_{q_2} f_2(\pm\lambda_{q_1},\lambda^*_{q_2}-\omega, \mu_{\alpha_1}, \mu_{\alpha_2})\)
\newline
\(\mp \eta_{\alpha_1}Q^\dagger_{q_1} \otimes Q_{q_2} \eta_{\alpha_2} f_2(\pm\lambda^*_{q_1},\lambda_{q_2}-\omega, \mu_{\alpha_1}, \mu_{\alpha_2})\)
\newline
\(\pm \eta_{\alpha_1}Q^\dagger_{q_1} \otimes \eta_{\alpha_2}Q^\dagger_{q_2} f_2(\pm\lambda^*_{q_1},\lambda^*_{q_2}-\omega, \mu_{\alpha_1}, \mu_{\alpha_2})\)
\end{tabular}
\caption{Analytical expressions for \(\int \dOp \Omega g^\mathrm{row}(\pm\Omega) g^\mathrm{col}(\Omega+\omega)\). For readability, we omit all summations as well as the single-particle indices. Both are to be understood in analogy to Eq.~\eqref{eq:exampleTwoGF}. The missing entries of the table can be obtained by using \(\int \dOp \Omega g^\mathrm{row}(\pm\Omega) g^\mathrm{col}(\Omega+\omega)=\int \dOp \Omega g^\mathrm{col}(\pm\Omega)g^\mathrm{row}(\Omega\mp \omega) \).}
\label{tab:twgf}
\end{table*}
\section{Conclusion}
We developed a second-order implementation of the Keldysh functional renormalization group to study out-of-equilibrium quantum wires attached to non-interacting reservoirs. Our key idea is to simplify the flow equation of the two-particle vertex by neglecting its own feedback as well as the feedback of the self-energy. This approach is correct to second order in the interaction but still contains an infinite resummation of higher-order terms (since the flow of the self-energy is solved in full). By combining semi-analytic solution techniques with massive MPI parallelization, we treated systems of up to 60 lattice sites.
Within the FRG, we employed a so-called reservoir cutoff, which is physical, easy to implement, numerically-efficient, and which has proven to provide good results in other setups.\cite{Karrasch2010,Jakobs2010a,Jakobs2010b,Kennes2012} Since one can vary the temperature and the chemical potential of the auxiliary reservoirs, our approach in fact encompasses a whole class of cutoffs. This has the distinct advantage that one can explicitly analyze whether or not the results are indeed independent of the particular choice of the RG procedure.
As a prototypical model, we studied a one-dimensional tight-binding chain with nearest-neighbor interactions that is weakly coupled to left and right reservoirs. We computed effective distribution functions, the local density of states, and the steady-state current and demonstrated that all of these quantities depend strongly on the choice of the cutoff. Exact results (such as the fluctuation-dissipation theorem) are available in equilibrium and can serve as a benchmark for the FRG data. In non-equilibrium, there is no physically-motivated cutoff; moreover, secular higher-order terms appear which are only partly included in our approach. This demonstrates that our second-order FRG scheme is numerically demanding but still inadequate to study interacting quantum wires out of equilibrium.
A different cutoff scheme or, if possible, a more thorough treatment of the higher-order vertices might yield better results and are intriguing avenues of future research. Furthermore, for systems where physical inelastic processes limit the coherence length, our method is still expected to provide insights on how small interactions modify their behavior through heating.
\section*{Acknowledgments} DMK was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769. We acknowledge support from the Max Planck-New York City Center for Non-Equilibrium Quantum Phenomena. CKa and CKl acknowledge support by the Deutsche Forschungsgemeinschaft through the Emmy Noether program (KA 3360/2-2). CKa acknowledges support by the `Niedersächsisches Vorab' through `Quantum- and Nano-Metrology (QUANOMET)' initiative within the project P-1.
\section{Introduction}
\label{sec01}
The response to
external electric and magnetic fields provides a fundamental tool for studying and altering
the properties of materials, with numerous attendant applications. In particular, higher-order
responses allow for `manipulating light with light.' Thus, there is considerable
interest in identifying molecular systems with large non-linear responses.
One approach in this direction is based on push-pull systems, i.e., chain-like
molecules with an
electron-donor group at one end and an electron-acceptor group
at the other (see Fig.\ \ref{fig01}). When the backbone is a $\pi$-conjugated oligomer
the $\pi$ electrons of the backbone may respond easily to perturbations like those of the
substituents and/or
external fields. Due to the donor and acceptor groups a large electron transfer,
and, accordingly, a large dipole moment can occur and one may hope for large
responses of the
dipole moment to external fields. For these $\pi$-conjugated systems, each circle in
Fig.\ \ref{fig01} could be, for example, a vinylene group, a phenylene group, a methinimine
group, or combinations of those.
If the push-pull system is sufficiently large, we may split it into three parts, i.e., a left
(L), a central (C), and a right (R) part as shown in Fig.\ \ref{fig01}. Electrons of the central
part are assumed to be so far from the terminations that they do not feel the latter (or,
more precisely, the effects of the terminations are exponentially decaying in the central part).
The dipole moment, $\vec\mu$, is useful in quantifying the response of the system to an
external
electric field,
\begin{eqnarray}
\mu_i(\omega)&=& \mu_i^{(0)}(\omega) + \sum_j\sum_{\omega_1}\alpha_{ij}(\omega;\pm\omega_1)
\cdot E_j(\omega_1)\nonumber\\
&&+ \frac{1}{2}\sum_{jk}\sum_{\omega_1,\omega_2}\beta_{ijk}(\omega;\pm\omega_1,\pm\omega_2)\cdot
E_j(\omega_1) E_k(\omega_2)\nonumber\\
&&+\frac{1}{6}\sum_{jkl}\sum_{\omega_1,\omega_2,\omega_3}\gamma_{ijkl}(\omega;\pm\omega_1,
\pm\omega_2,\pm\omega_3)
\cdot E_j(\omega_1) E_k(\omega_2) E_l(\omega_3) + \cdots.
\label{eqn01}
\end{eqnarray}
Here, $E_m(\omega_s)$ is the $m$th component (i.e., $x$, $y$, or $z$)
of the external field with the frequency $\omega_s$ and $\omega$ is the frequency of the response of
the molecule to the field. The $\omega_n$ summations go over all
the frequencies of the applied field. $\mu_i^{(0)}(\omega)$ is the dipole moment
in the absence of the field, which vanishes for $\omega\ne 0$. Moreover, $\alpha_{ij}(\omega;\pm\omega_1)$ is
the linear polarizability, and $\beta_{ijk}(\omega;\pm\omega_1,\pm\omega_2)$,
$\gamma_{ijkl}(\omega;\pm\omega_1,\pm\omega_2,
\pm\omega_3)$, $\dots$ are the first, second, $\dots$ hyperpolarizabilities. Sum rules
require that these quantities can be non-zero only if the frequency of the
response, $\omega$, equals the sum of the frequencies (eventually multiplied by $-1$), i.e.,
for $\gamma_{ijkl}(\omega;\pm\omega_1,\pm\omega_2,\pm
\omega_3)$ we require $\omega=\pm\omega_1\pm\omega_2\pm\omega_3$.
In the present paper we focus on static external fields, in which case
$\omega_i=0$. Furthermore, we shall study a neutral system, although our arguments are also valid
for charged systems as long as the extra charge is localized to the terminations.
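Since the fields considered here are static, Eq.\ (\ref{eqn01}) reduces to a Taylor series, $\mu_z(E)=\mu_z^{(0)}+\alpha E+\frac{1}{2}\beta E^2+\frac{1}{6}\gamma E^3+\cdots$, so the response coefficients can be extracted from finite-field data through a polynomial fit, as we do in Sec.\ \ref{sec03}. A minimal Python sketch (with purely illustrative stand-in data) reads:
\begin{verbatim}
# Sketch: extracting static response coefficients from finite-field
# dipole data, mu(E) = mu0 + alpha E + beta E^2/2 + gamma E^3/6 + ...
# The sample data below is a purely illustrative stand-in.
import numpy as np

E = np.linspace(-0.01, 0.01, 9)
mu = 0.32 + 1.68 * E + 9.1 * E**2 + 101.2 * E**3

c = np.polynomial.polynomial.polyfit(E, mu, deg=3)  # c[k] multiplies E^k
mu0, alpha, beta, gamma = c[0], c[1], 2 * c[2], 6 * c[3]
\end{verbatim}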
We let $\rho(\vec r)$ be the (field-dependent) total charge density (i.e., the sum of the nuclear and
electronic charge densities), and choose the long axis to be $z$. Then the
component of the total dipole moment that is of interest here, namely $z$, is given by
(omitting its argument, $\omega$)
\begin{equation}
\mu_z = \int \rho(\vec r)z d\vec r
=\int_L\rho(\vec r)z d\vec r+\int_C\rho(\vec r)z d\vec r+\int_R\rho(\vec r)z d\vec r,
\label{eqn02}
\end{equation}
where we have split the integral into contributions from the left, central,
and right regions of the chain. The central region consists of identical neutral units.
We can, therefore, write
\begin{equation}
\int_C\rho(\vec r)z d\vec r=K_C \mu_C,
\label{eqn03}
\end{equation}
where $K_C$ is the number of units in C and $\mu_C$ is the $z$ component of the dipole
moment of one of these units. In order to evaluate the other two contributions to the total
dipole moment in Eq.\ (\ref{eqn02}) we define a `typical' center for each term,
i.e., $\vec R_R$ and $\vec R_L$ (these could, e.g., be the center of mass
of the right and left parts, respectively), and let $Z_R$ and $Z_L$ be the $z$
components of these vectors. Since the chain is neutral we, then, obtain
\begin{equation}
\int_L\rho(\vec r)z d\vec r+\int_R\rho(\vec r)z d\vec r
=(Z_R-Z_L)\int_R\rho(\vec r)d\vec r + \int_L\rho(\vec r)(z-Z_L)d\vec r+
\int_R\rho(\vec r)(z-Z_R)d\vec r.
\label{eqn04}
\end{equation}
The first term on the right hand side describes the contribution to the dipole
moment associated with electron transfer from one end to the other. This term grows linearly
with chain length (due to $Z_R-Z_L$) as does the term in Eq.\ (\ref{eqn03}).
On the other hand, the last two terms in Eq.\ (\ref{eqn04}) describe local dipole
moments that arise from the electron distributions within the two terminal
regions and they are independent of the chain length.
This discussion suggests that donor/acceptor (=D/A) substitution at the ends of
long chains may change the charge distribution in R and L so as to strongly
enhance the dipole moment and, consequently, produce a particularly large
change in the dipole moment when the system is exposed to an external electric field.
Therefore, numerous studies have been devoted to push-pull systems as a function
of increasing length (see, e.g.,
[\onlinecite{mbz92,ty92,mgmpbbp94,sldzfss94,mtwc94,gpbfldbtncm94,hfmmzl95,m95,vtvg96,lmrfg96,bggpjm96,cjak97,zl99,stcm00,cpjgbsrk00,kcb00,bcta04,smrlls04}]).
Not only the electrons but also the
structure (phonons) will respond to a static electric field. We will
demonstrate that, for sufficiently long chains, the electronic response per unit of a push-pull
system (with structural relaxation taken into account) becomes
independent of the donor and acceptor groups, implying that the materials properties cannot be
improved upon substitution. Our mathematical arguments for this finding are presented in the
next section, and in Sec.\ \ref{sec03} we illustrate and analyse the
results through calculations on a model system. The particular case of inversion symmetry
is discussed in Sec.\ \ref{sec04} where we also make a comparison with previous results.
Finally, a summary is provided in Sec.\ \ref{sec05}.
The arguments we present are related to those originally given by Vanderbilt
and King-Smith for an extended system in the absence of an external field. They argued that
the permanent polarization (i.e.\ dipole moment per unit length)
is a bulk property.\cite{vks93} Very recently, Kudin {\it et al.}\cite{kcr07}
proved that the permanent polarization is quantized for D/A substituted systems.
Neither of these works considered the induced polarization or the
structural relaxation due to an external field. Finally, in a recent paper we presented
some of the arguments behind the present work but did not analyze the predictions
as we do here using a model system.\cite{gut}
\section{Changes in the Charge Distribution upon Substitution}
\label{sec02}
By replacing some (groups of) atoms with others at the chain ends,
the electronic orbitals with components near the ends will change.
Since the set of electronic orbitals is orthonormal, all other orbitals
will change as well. Accordingly, the charge distribution may change everywhere
due to the substitutions.
When an electrostatic field is applied as well, each orbital will respond to
the field. Since the orbitals will have changed due to the substitution, so will
their responses to the field. Furthermore, the structural responses due to the field will also depend
on the substitution at the ends. Therefore, the dipole moment can depend upon both the
substitution and the field. From these arguments there is no reason to believe that
$\mu^{(0)}/N$, $\alpha/N$, $\beta/N$, $\gamma/N$, $\dots$ (with $N$ being the number of repeated units)
will be independent of the substitution. However, we shall argue here that the charge
\begin{equation}
q = \int_R\rho(\vec r)d\vec r
\label{eqn05}
\end{equation}
in Eq.\ (\ref{eqn04}) can change, at most, by an integral number of elementary
units for different D/A substitutions at fixed external static
field. Our proof is a generalization
of arguments due to Vanderbilt and King-Smith\cite{vks93} (see also [\onlinecite{kcr07}]),
and was previously proposed by the present authors.\cite{gut} It will be verified
here by calculations on a model system and given a thorough analysis on that
basis.
For a given system (with specified geometry) and value of the external field, we transform the set of occupied
orbitals into a set of orthonormal, localized functions. Those functions
ascribed to C will be similar to the Wannier functions of the infinite
periodic system. The localized orbitals will be centered
in one region, but may have tails sticking into another region.
We assume that the terminal regions are large enough so that any functions centered
therein, which differ from those of C, are exponentially vanishing in C.
On the other hand, those functions ascribed to C, but centered on
units next to L or R,
will likely have tails extending into those regions.
The density matrix can then
be written in block-diagonal form with three blocks, one for each of the three regions.
Since the density matrix is idempotent, each block will be so, too, and
there will be an integral number of electrons
associated with each of the three sets of functions. That is to say, the number of
electrons associated with the functions centered in the two end regions is integral.
Accordingly, any non-integral part of $q$ is associated with the tails of the
functions in C that extend into R, which, per construction, is independent of the
terminations, i.e., also of D/A substitution.
We conclude that, for different terminations, $q$ can change only by an integer. This is valid
for long chains and all fields. Therefore, the electronic response per unit of the chains to
the field, with or without nuclear response, is independent of termination. The only possible
change for different terminations is that $q$ may jump by an integer for different field
strengths. In fact, our numerical studies on a H\"uckel-type model will confirm this
prediction. Of course, in ab initio calculations, there may also be a jump due to changing the
basis set or the method (e.g.\ Hartree-Fock vs. Kohn-Sham DFT).
\section{Illustrating and Analyzing the Result}
\label{sec03}
In order to explore in detail the predictions from above, we studied a H\"uckel-like model for
long, finite (AB)$_{2K+1}$ chains. In our model, we use a basis set of orthonormal
atomic orbitals (AOs) with one AO per atom. The system has one electron per atom, and
the nuclei are given charges of $+1$ whereas the electronic charge is set equal to $-1$. (All
quantities are expressed in atomic units in this paper.)
Given that $\chi_n$ is the AO of the $n$th atom ($n=1,2,\dots,4K+2$) and $\hat h$ is
the Kohn-Sham or Fock single-electron hamiltonian we assume that only $\langle \chi_j\vert\hat h\vert \chi_j\rangle$,
$\langle \chi_j\vert\hat h\vert \chi_{j\pm1}\rangle$, and
$\langle \chi_j\vert\hat h\vert \chi_{j\pm2}\rangle$ are non-vanishing with values
\begin{eqnarray}
\langle \chi_{2p+1}\vert\hat h\vert \chi_{2p+1}\rangle&=&\epsilon_0\nonumber\\
\langle \chi_{2p}\vert\hat h\vert \chi_{2p}\rangle&=&-\epsilon_0\nonumber\\
\langle \chi_j\vert\hat h\vert \chi_{j+1}\rangle&=& -[t_1-\alpha_1(z_{j+1}-z_j)]\nonumber\\
\langle \chi_j\vert\hat h\vert \chi_{j+2}\rangle&=& -[t_2-\alpha_2(z_{j+2}-z_j)].
\label{eqn06}
\end{eqnarray}
Here $z_j$ is the position of the $j$th atom. Different donor and acceptor groups are
modeled by modifying the on-site energies of the terminating atoms and/or the terminating
hopping integrals,
\begin{eqnarray}
\langle \chi_1\vert\hat h\vert \chi_1\rangle&=&\epsilon_0+\epsilon_L\nonumber\\
\langle \chi_{4K+2}\vert\hat h\vert \chi_{4K+2}\rangle&=&-\epsilon_0+\epsilon_R\nonumber\\
\langle \chi_1\vert\hat h\vert \chi_2\rangle&=& -[t_1-\alpha_1(z_2-z_1)]+t_L\nonumber\\
\langle \chi_{4K+1}\vert\hat h\vert \chi_{4K+2}\rangle&=& -[t_1-\alpha_1(z_{4K+2}-z_{4K+1})]+t_R.
\label{eqn07}
\end{eqnarray}
Finally, we assume that
\begin{equation}
\langle\chi_j\vert z\vert\chi_k\rangle=\delta_{j,k}z_j.
\label{eqn07a}
\end{equation}
In order to analyse the results we, first, define a reference structure for which the position of the
$n$th atom is
\begin{equation}
z_n^{(0)}=\frac{a}{2}\left(n-2K-\frac{3}{2}\right)-(-1)^nu_0.
\label{eqn08}
\end{equation}
Here $a$ is the length of the unit cell for an infinite, periodic system with
the same electronic interactions and no external field.
Subsequently, we define for each atom
\begin{eqnarray}
u_n&=&z_n-\frac{a}{2}\left(n-2K-\frac{3}{2}\right)\nonumber\\
\Delta z_n &=& z_n-z_n^{(0)}.
\label{eqn09}
\end{eqnarray}
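For concreteness, the following Python sketch assembles and diagonalizes the single-particle Hamiltonian of Eqs.\ (\ref{eqn06})--(\ref{eqn07a}) for the reference geometry of Eq.\ (\ref{eqn08}); the sign of the field coupling on the diagonal (appropriate for electrons of charge $-1$) and all parameter values are our illustrative assumptions.
\begin{verbatim}
# Sketch: Hueckel Hamiltonian defined above (on-site energies,
# nearest- and next-nearest-neighbour hoppings, terminations) for
# the reference geometry. Parameter values are illustrative, and the
# sign convention of the field term is our assumption.
import numpy as np

K = 10; M = 4 * K + 2
eps0, t1, t2, a1, a2 = 1.0, 1.2, 0.1, 0.3, 0.05
epsL, epsR, tL, tR = 0.2, -0.2, 0.0, 0.0   # donor/acceptor termination
E_DC, a, u0 = -0.015, 2.5, 0.05

nvec = np.arange(1, M + 1)                 # atom labels n = 1, ..., 4K+2
z = 0.5 * a * (nvec - 2 * K - 1.5) - (-1.0) ** nvec * u0  # reference

h = np.diag(np.where(nvec % 2 == 1, eps0, -eps0))
for j in range(M - 1):
    h[j, j + 1] = h[j + 1, j] = -(t1 - a1 * (z[j + 1] - z[j]))
for j in range(M - 2):
    h[j, j + 2] = h[j + 2, j] = -(t2 - a2 * (z[j + 2] - z[j]))
h[0, 0] += epsL;  h[-1, -1] += epsR        # termination corrections
h[0, 1] += tL;    h[1, 0] += tL
h[-2, -1] += tR;  h[-1, -2] += tR
h += E_DC * np.diag(z)     # field term for charge -1 (sign assumed)

orb_e, C = np.linalg.eigh(h)               # C[n, i] = C_{ni}
\end{verbatim}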
The total energy is written as the sum over occupied orbital energies (multiplied by 2 due to spin degeneracy)
augmented by a harmonic term in the nearest- and the next-nearest-neighbour bond lengths,
\begin{equation}
E_{\rm tot}=2\sum_{i=1}^{\rm occ}\epsilon_i +
\frac{k_1}{2}\sum_{p=1}^{4K+1}(z_{p+1}-z_p)^2+
\frac{k_2}{2}\sum_{p=1}^{4K}(z_{p+2}-z_p)^2-E_{\rm DC}\sum_{p=1}^{4K+2} z_p.
\label{eqn09a}
\end{equation}
$E_{\rm DC}$ is the strength of the electrostatic field.
For the infinite, periodic chain without an external field, the lowest total energy
corresponds to a certain lattice constant $a$ and
\begin{equation}
u_n=(-1)^{n+1} u_0.
\label{eqn09b}
\end{equation}
The force constants $k_1$ and $k_2$ are determined so that $a$ and $u_0$ take certain
chosen values.
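A Python sketch of the corresponding structural relaxation, assuming the Hamiltonian construction of the previous sketch is wrapped into a function {\tt hueckel(z)} and reusing the variables defined there, reads:
\begin{verbatim}
# Sketch: relax the geometry by minimizing the total energy over the
# atomic positions z_p. hueckel(z) is assumed to build the matrix of
# the previous sketch for positions z; k1, k2 are illustrative.
from scipy.optimize import minimize
import numpy as np

k1, k2, occ = 2.0, 0.5, 2 * K + 1          # doubly occupied orbitals

def total_energy(z):
    z = z - z.mean()           # pin the flat rigid-translation mode
    E_el = 2.0 * np.linalg.eigvalsh(hueckel(z))[:occ].sum()
    E_lat = (0.5 * k1 * np.sum((z[1:] - z[:-1]) ** 2)
             + 0.5 * k2 * np.sum((z[2:] - z[:-2]) ** 2))
    return E_el + E_lat - E_DC * z.sum()

z_relaxed = minimize(total_energy, z, method="BFGS").x
\end{verbatim}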
With
\begin{equation}
\psi_i = \sum_{n=1}^{4K+2} C_{ni} \chi_n
\label{eqn10}
\end{equation}
being the $i$th orbital (ordered according to increasing orbital energy) we calculate
the Mulliken charge on the $n$th atom for field $E_{\rm DC}$ as
\begin{equation}
q_n(E_{\rm DC}) = 1 -2\sum_{i=1}^{2K+1} \vert C_{ni}\vert^2
\label{eqn11}
\end{equation}
which leads to the dipole moment
\begin{equation}
\mu_z = \sum_{n=1}^{4K+2} z_n q_n(E_{\rm DC}).
\label{eqn12}
\end{equation}
The charge transfer is given through
\begin{equation}
q = \sum_{n=2K+2}^{4K+2} q_n(E_{\rm DC}).
\label{eqn13}
\end{equation}
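Given the eigenvectors from the diagonalization sketch above, these three quantities are obtained in a few lines (again only a sketch, reusing the variables {\tt z}, {\tt K}, {\tt occ}, and {\tt C} defined there):
\begin{verbatim}
# Sketch: Mulliken charges, dipole moment, and transferred charge,
# reusing z, K, occ, and the eigenvector matrix C from above.
q_n = 1.0 - 2.0 * np.sum(np.abs(C[:, :occ]) ** 2, axis=1)   # charges
mu_z = np.sum(z * q_n)                                      # dipole
q = np.sum(q_n[2 * K + 1:])                                 # right half
\end{verbatim}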
We also define
\begin{eqnarray}
\Delta_1 q_n(E_{\rm DC}) &=& q_n(E_{\rm DC})-\tilde q_n(0)\nonumber\\
\Delta_2 q_n(E_{\rm DC}) &=& q_n(E_{\rm DC})-q_n(0).
\label{eqn14}
\end{eqnarray}
where $\tilde q_n(0)$ is the charge for the infinite, periodic chain in the absence of
the field. $\Delta_2 q_n(E_{\rm DC})$ quantifies the effects on the charge distribution of
the push-pull chain due to
including the field, whereas $\Delta_1 q_n(E_{\rm DC})$ includes effects both from the field
and from the terminations. Note that $\Delta_1 q_n(E_{\rm DC})-\Delta_2 q_n(E_{\rm DC})$ gives the
field-independent effect of the terminations. Finally, it turns out to be useful to define
the center and width of the $i$th orbital according to
\begin{eqnarray}
\zeta_i &=& \sum_{n=1}^{4K+2}z_n \vert C_{ni}\vert^2\nonumber\\
\Delta\zeta_i &=& \left[\sum_{n=1}^{4K+2}(z_n-\zeta_i)^2 \vert C_{ni}\vert^2\right]^{1/2},
\label{eqn15}
\end{eqnarray}
which is consistent with Eq.\ (\ref{eqn07a}).
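In the same vein, the centers and widths are obtained by a short sketch reusing {\tt z} and {\tt C} from above:
\begin{verbatim}
# Sketch: orbital centres and widths, reusing z and C from above.
w_ni = np.abs(C) ** 2
zeta = z @ w_ni                                 # centre of orbital i
dzeta = np.sqrt(((z[:, None] - zeta) ** 2 * w_ni).sum(axis=0))
\end{verbatim}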
We performed calculations for six different
terminations specified by $(\epsilon_L,\epsilon_R,t_L,t_R)$.
The results are summarized in Figs.\ \ref{fig02}, \ref{fig02a}, \ref{fig04}, \ref{fig05}, and \ref{fig06}.
Since our model is that of a finite chain with two different types of atoms, A and B, the
Mulliken charges in the central region take two values. This is clearly recognized in the
presentation of $q_n$ in Fig.\ \ref{fig02} for $E_{\rm DC}=-0.015$.
In Fig.\ \ref{fig02} it is also seen that near the ends,
the Mulliken charges differ from the values of the inner part and, moreover, these charges
depend sensitively on the terminations. For the field strength $E_{\rm DC}=-0.015$
these findings are only marginally modified compared to those of a vanishing field
(not shown). From $\Delta_1 q_n$ we see that the combination of electrostatic
field and termination leads to an internal polarization of each unit in C.
Actually, $\Delta_1 q_n$ shows a reduced internal polarization compared to $\Delta_2 q_n$. Thus,
terminating the chain reduces the effect of the field in that regard. Whereas $\Delta_2 q_n$
contains information about the field-induced charge
redistributions, $\Delta_1 q_n$ contains additional information about the (field-dependent)
effects of the terminations. For $E_{\rm DC}=-0.015$
the field-induced charge redistributions are smaller near the terminations than
in the central parts.
For the larger field, $E_{\rm DC}=-0.03$, in Fig.\ \ref{fig02a}
the identification of the central region becomes much more difficult and, as
we shall see below, electrons are transferred from one end to the other. Moreover,
in this case the field perturbs the system so strongly that the effects of the field are stronger
than those of the terminations. This can be seen from the fact that $\Delta_1 q_n$ and
$\Delta_2 q_n$ are very similar.
The structure also depends upon the termination. For the intermediate field
of Fig.\ \ref{fig02} (and for zero field as well) the atomic coordinate $u_n$ is nearly
constant in C but varies considerably near the ends where its value depends on the
termination, as was the case for the atomic charges.
For the higher field in Fig.\ \ref{fig02a} it appears as if no
central region can be identified from this parameter. However, the fact that $\Delta z_n$ is
essentially linear for the innermost atoms implies that there is a well-defined,
repeating structure in C with a lattice constant differing from that of the field-free
case.
Fig.\ \ref{fig04} shows that the charge transfer, $q$, is independent of termination (though
not independent of the field), with the
exception of jumps by (even) integers. (The integers are even because we have not allowed
for spin polarization.) However, the charge distribution inside
R or L does depend on the terminations and, as a consequence, the
dipole moment does as well. On the other hand, the variation of
$\mu_z/N$ as a function of $E_{\rm DC}$ for different terminations follows parallel curves,
implying that the (hyper)polarizabilities are independent of the terminations. In fact,
a least squares fit yields the values (including maximum deviations): $\mu_0/N = 0.3245\pm
0.0023$, $\alpha/N = 1.677\pm 0.013$, $\beta/N = 18.17 \pm 0.20$, and
$\gamma/N = 606.9 \pm9.1$ for all six terminations.
As a function of field $\mu_z$ is discontinuous and the power series expansion
is valid only up to the field where the discontinuity occurs. Once such a
discontinuity has been passed, the dipole moment depends more strongly on the field. This
means that the only way of increasing the responses of long push-pull systems to DC
fields is to design chains for which the integral electron transfers occur at low fields.
At a given field the size of the chain for which jumps in the charge $q$ (i.e.\ Zener
tunneling) take place depends on the terminations (cf.\ Fig.\ \ref{fig05}). In the
shortest chains, for which Zener tunneling does not occur, $\mu_z$ follows parallel
curves as a function of chain length, $N=2K+1$, for different
terminations. This means that the
dipole moment and (hyper)polarizabilities per unit become
independent of termination. However, as seen in Fig.\ \ref{fig05}, the slope of these curves
increases after Zener tunneling has taken place, implying that the dipole moment
increases. Assuming that the field-dependence of the dipole moment likewise increases, this suggests
that the polarizability and/or hyperpolarizabilities per unit may increase for D/A
substituted systems after an integral number of electrons has been transferred from one end to
the other.
In Fig.\ \ref{fig06} we show an example of what happens to the molecular orbitals
when the jumps take place. Calculations were performed for field strengths between
$-0.0340$ and $-0.0485$ in steps of $-0.0005$, but in the figure we only show the results
for fields where Zener tunneling occurs. In all cases, the curves vary smoothly as a function
of field strength.
At the lowest two fields, the occupied orbitals closest to the Fermi level have a center
in the left part ($\zeta_i<0$), whereas the unoccupied orbitals closest to the Fermi level
are centered in the right part. At the field $E_{\rm DC} \simeq -0.0375$, two
electrons (one per spin direction) are
transferred from one side to the other, which again happens at a larger field
($E_{\rm DC} \simeq -0.0475$).
In the first case, we observe the occurrence of two new, very localized, orbitals
close to
(but not at) the Fermi level. The energetically lower (i.e occupied) one is
localized towards to the chain end on the right side while the other (unoccupied)
is localized towards the chain end on the left side. Accompanying this interchange is
a similar interchange of two rather delocalized
orbitals, both of which are further away from the Fermi level and centered closer to the
middle of the chain.
Again, at the second electron transfer a pair of new, rather localized, orbitals
near (even closer to) the Fermi level show up towards the chain ends,
and also this transfer is accompanied by some reorganization of the other orbitals.
Finally, Fig.\ \ref{fig06} also shows an example of a reorganization of the orbitals, i.e.,
for a field around $E_{\rm DC} = -0.0430$. Here, one localized, occupied orbital interchanges
order with an adjacent (in energy) more delocalized orbital, but otherwise no further
significant changes are observed.
\section{Inversion symmetry and comparison with previous results}
\label{sec04}
Before proceeding to compare with previous results we develop an interesting consequence
of our findings with regard to inversion symmetry. The same arguments can be applied for
a system containing a mirror plane perpendicular to the
chain axis, but here we shall for the sake of simplicity restrict ourselves to the
case of inversion symmetry. Suppose the
long oligomer of interest contains a central region made up of units with inversion
symmetry. Even if the central part does not have inversion symmetry, it may be
possible to create such symmetry with the addition of appropriate terminating groups. This
is, for example, the case for oligomers of thienyleneethynylenes and thienylenevinylenes that were
studied by Geisler {\it et al.}\cite{gpbfldbtncm94} Many of the systems of interest fall
into one of these two categories. Since, according to our findings, D/A substitution
cannot affect the (hyper)polarizabilities per unit, the latter must vanish even if the symmetry is not
preserved. For instance, modifying the terminations of the systems of Geisler {\it et al.} so
that inversion symmetry no longer exists cannot result in a non-vanishing $\beta/N$ if the chains
are sufficiently long.
A large fraction of previous observations are for systems of the type
described in the preceding paragraph. Some of these cases are discussed below along
with others pertinent to our findings herein. We now briefly consider, in particular,
the works mentioned in the Introduction.
In their combined experimental and theoretical study
on some push-pull oligoenes, Meyers {\it et al.}\cite{mbz92} observed a `negligible charge
transfer all the way from the donor to the acceptor', which implies that $q$ is independent
of the termination. On the other hand, in their theoretical study Tsunekawa and
Yamaguchi\cite{ty92} examined shorter, nitrogen-containing push-pull oligomers. They noted that
these systems are interesting from the perspective of maximizing $\beta$, but our results
establish that, for such to be true, the systems must be short enough so that our approach
is inapplicable. This serves to highlight the point that apparent, but not real,
discrepancies can occur due to shortness of the chain length.
Marder {\it et al.}\cite{mgmpbbp94} presented an approach for unifying the description of linear
and nonlinear polarization in organic polymethine dyes. It has since been shown that their
analysis is invalid if phonons are taken into account.\cite{kcb00} Here, however, we
emphasize that the conclusions they draw regarding $\beta$ can, again, hold only for
systems that are too short for our treatment to apply.
Clearly, the chain length required for validity of the treatment given here is an important
issue. In Fig.\ \ref{fig05} the dipole moment is converged for chains with some 20 units.
However, this may be an artifact of our simple H\"uckel model. In an experimental
study\cite{sldzfss94} and in several computational studies,\cite{mtwc94,vtvg96,lmrfg96,stcm00,ktrh95}
the second hyperpolarizability per unit was found to converge considerably slower which,
in fact, agrees with our own earlier findings.\cite{gut} Thus, when focusing on
higher-order non-linear responses quite large chains may be required for
the results of the present work to be relevant. In
shorter push-pull systems (for instance those considered
by Geisler {\it et al.}\cite{gpbfldbtncm94,bggpjm96} or by
Morley {\it et al.}\cite{hfmmzl95,m95}) D/A substitution can
have an influence on the response.
As shown numerically by Champagne {\it et al.},\cite{cjak97} $\beta/N$ also
converges relatively slowly as a function of size. They considered D/A
substituted oligomers of polymethineimine [also called polycarbonitrile, (CHN)$_x$]. This
system has a zigzag backbone of alternating C and N atoms with alternating bond lengths.
Without the bond length alternation it would, at least hypothetically, be possible to choose
donor and acceptor groups so that the overall system is centrosymmetric.
Even if chemical arguments imply that this structure is unrealistic, a non-zero value of
$\beta/N$ for long chains should be ascribed, strictly speaking, to the bond length
alternation.
Polyphenylenes and polypyridines have been studied by
Zhang and Lu.\cite{zl99} They focused on $\alpha$ and $\gamma$ as a function of
the length of a closed
ring for each system and applied a finite-field approach in their calculations.
Unfortunately, as we have shown earlier (see, e.g., [\onlinecite{gut}]), this approach will never
converge to the results for the infinite, periodic chain. Nevertheless, although $\beta/N$
will vanish for the polyphenylenes, we predict that a non-zero value will occur
for both short and long oligomers of the polypyridines.
For the D/A substituted polyenes studied by Champagne {\it et al.}\cite{cpjgbsrk00} our
analysis confirms their findings, i.e., that
$\beta/N$ will vanish for sufficiently large chains. Their numerical results indicate that
$\beta/N$ goes through a maximum and that convergence to the infinite chain result for
larger $N$ is slow.
Even the polarizability, $\alpha/N$, and the permanent dipole moment, $\mu_z^{(0)}/N$,
may converge more slowly, as a function of chain length, than
predicted by our simple model. This is, for example, the case for the systems
investigated by Smith {\it et al.}\cite{smrlls04} and by Kudin {\it et al.}\cite{kcr07}
In a recent study, Botek {\it et al.}\cite{bcta04} compared finite oligomers of [$N$]helicenes
and [$N$]phenylenes that possess a helical structure for $N$ larger than roughly 6. By making
explicit use of the helical symmetry of the central region we predict that, when those
systems are sufficiently long, D/A substitution will not be able to modify the electronic
responses to static fields. The fact that
Botek {\it et al.} find changes upon D/A substitution implies that the
chains of their study are not converged to the long chain limit.
\section{Summary}
\label{sec05}
As long as the applied field is not so strong that an integral number of electrons is
transferred from one end to the other, the answer to the question of
the title is clearly: there can be no change. This comes from our mathematical
analysis in Sec.\ \ref{sec02}, which generalizes treatments presented previously by Vanderbilt
and King-Smith\cite{vks93} and by Kudin {\it et al.},\cite{kcr07} who considered only
electronic polarization in the absence of an external electrostatic field. It is also in
agreement with our own earlier prediction.\cite{gut}
Calculations on a model system confirm the basic result and shed light on the nature of
the end-to-end charge transfer. Although the end charges, permanent dipole moment, and
structure depend sensitively on the terminations, neither the amount of charge transferred nor the
(hyper)polarizabilities per unit do so. The field and/or chain length at which the charge jumps take
place also depend on the terminations. Each jump is associated with an interchange of
occupied and unoccupied molecular orbitals that are well-localized in the chain end
region. These orbitals are close to but not at the Fermi level. There is also an
accompanying orbital reorganization.
One consequence of our finding is that long unsubstituted chains which have inversion or mirror
symmetry, or can be made symmetric by substitution, must have a vanishing first
hyperpolarizability per unit. Experimental and theoretical determinations are consistent with this
fact, although apparent contradictions can occur for short chains.
\begin{acknowledgments}
This work was supported by the German Research Council (DFG) through project
Sp439/20 within the SPP 1145. Moreover, one of the authors (MS) is very grateful to the
International Center for Materials Research, University of California,
Santa Barbara, for generous hospitality.
\end{acknowledgments}
\section{Introduction: file preparation and submission}
The \verb"iopart" \LaTeXe\ article class file is provided to help authors prepare articles for submission to IOP Publishing journals.
This document gives advice on preparing your submission, and specific instructions on how to use \verb"iopart.cls" to follow this advice. You
do not have to use \verb"iopart.cls"; articles prepared using any other common class and style files can also be submitted.
It is not necessary to mimic the appearance of a published article.
The advice
on \LaTeX\ file preparation in this document applies to
the journals listed in table~\ref{jlab1}. If your journal is not listed please go to the journal website via \verb"http://iopscience.iop.org/journals" for specific
submission instructions.
\begin{table}
\caption{\label{jlab1}Journals to which this document applies, and macros for the abbreviated journal names in {\tt iopart.cls}. Macros for other journal titles are listed in appendix\,A.}
\footnotesize
\begin{tabular}{@{}llll}
\br
Short form of journal title&Macro name&Short form of journal title&Macro name\\
\mr
2D Mater.&\verb"\TDM"&Mater. Res. Express&\verb"\MRE"\\
Biofabrication&\verb"\BF"&Meas. Sci. Technol.$^c$&\verb"\MST"\\
Bioinspir. Biomim.&\verb"\BB"&Methods Appl. Fluoresc.&\verb"\MAF"\\
Biomed. Mater.&\verb"\BMM"&Modelling Simul. Mater. Sci. Eng.&\verb"\MSMSE"\\
Class. Quantum Grav.&\verb"\CQG"&Nucl. Fusion&\verb"\NF"\\
Comput. Sci. Disc.&\verb"\CSD"&New J. Phys.&\verb"\NJP"\\
Environ. Res. Lett.&\verb"\ERL"&Nonlinearity$^{a,b}$&\verb"\NL"\\
Eur. J. Phys.&\verb"\EJP"&Nanotechnology&\verb"\NT"\\
Inverse Problems&\verb"\IP"&Phys. Biol.$^c$&\verb"\PB"\\
J. Breath Res.&\verb"\JBR"&Phys. Educ.$^a$&\verb"\PED"\\
J. Geophys. Eng.$^d$&\verb"\JGE"&Physiol. Meas.$^{c,d,e}$&\verb"\PM"\\
J. Micromech. Microeng.&\verb"\JMM"&Phys. Med. Biol.$^{c,d,e}$&\verb"\PMB"\\
J. Neural Eng.$^c$&\verb"\JNE"&Plasma Phys. Control. Fusion&\verb"\PPCF"\\
J. Opt.&\verb"\JOPT"&Phys. Scr.&\verb"\PS"\\
J. Phys. A: Math. Theor.&\verb"\jpa"&Plasma Sources Sci. Technol.&\verb"\PSST"\\
J. Phys. B: At. Mol. Opt. Phys.&\verb"\jpb"&Rep. Prog. Phys.$^{e}$&\verb"\RPP"\\
J. Phys: Condens. Matter&\verb"\JPCM"&Semicond. Sci. Technol.&\verb"\SST"\\
J. Phys. D: Appl. Phys.&\verb"\JPD"&Smart Mater. Struct.&\verb"\SMS"\\
J. Phys. G: Nucl. Part. Phys.&\verb"\jpg"&Supercond. Sci. Technol.&\verb"\SUST"\\
J. Radiol. Prot.$^a$&\verb"\JRP"&Surf. Topogr.: Metrol. Prop.&\verb"\STMP"\\
Metrologia&\verb"\MET"&Transl. Mater. Res.&\verb"\TMR"\\
\br
\end{tabular}\\
$^{a}$UK spelling is required; $^{b}$MSC classification numbers are required; $^{c}$titles of articles are required in journal references; $^{d}$Harvard-style references must be used (see section \ref{except}); $^{e}$final page numbers of articles are required in journal references.
\end{table}
\normalsize
Any special submission requirements for the journals are indicated with footnotes in table~\ref{jlab1}.
Journals which require references in a particular format will need special care if you are using BibTeX, and you might need to use a \verb".bst" file
that gives slightly non-standard output in order to supply any extra information required. It is not
necessary to give references in the exact style of references used in published articles, as long as all of
the required information is present.
Also note that there is an incompatibility
between \verb"amsmath.sty" and \verb"iopart.cls" which cannot be completely worked around. If your article relies
on commands in \verb"amsmath.sty" that are not available in \verb"iopart.cls", you may wish to consider using a different
class file.
Whatever journal you are submitting to, please look at recent published articles (preferably
articles in your subject area) to familiarize yourself with the features of the journal. We do not demand
that your \LaTeX\ file closely resembles a published article---a generic `preprint' appearance of the sort
commonly seen on \verb"arXiv.org" is fine---but your submission should be presented
in a way that makes it easy for the referees to form an opinion of whether it is suitable for the journal.
The generic advice in this document---on what to include in an abstract, how best to present complicated
mathematical expressions, and so on---applies whatever class file you are using.
\subsection{What you will need to supply}
Submissions to our journals are handled via the ScholarOne web-based submission system. When you submit
a new article to us you need only submit a PDF of your article. When you submit a revised version,
we ask you to submit the source files as well. Upon acceptance for publication we will use the source files to produce a proof of your article in the journal style.
\subsubsection{Text.}When you send us the source files for a revised version of your submission,
you should send us the \LaTeX\ source code of your paper with all figures read in by
the source code (see section \ref{figinc}). Articles can be prepared using almost any version of \TeX\ or \LaTeX{},
not just \LaTeX\ with the class file \verb"iopart.cls". You may split your \LaTeX\ file into several parts, but please show
which is the `master' \LaTeX\ file that reads in all of the other ones by naming it appropriately. The `master'
\LaTeX\ file must read in all other \LaTeX\ and figure files from the current directory. {\it Do not read in files from a different directory, e.g. \verb"\includegraphics{/figures/figure1.eps}" or
\verb"\include{../usr/home/smith/myfiles/macros.tex}"---we store submitted files
all together in a single directory with no subdirectories}.
\begin{itemize}
\item {\bf Using \LaTeX\ packages.} Most \LaTeXe\ packages can be used if they are
available in common distributions of \LaTeXe; however, if it is essential to use
a non-standard package then any extra files needed to process the article must
also be supplied. Try to avoid using any packages that manipulate or change the standard
\LaTeX\ fonts: published articles use fonts in the Times family, but we prefer that you
use \LaTeX\ default Computer Modern fonts in your submission. The use of \LaTeX\ 2.09, and of plain
\TeX\ and variants such as AMSTeX is acceptable, but a complete PDF of your submission should be supplied in these cases.
\end{itemize}
\subsubsection{Figures.} Figures should ideally be included in an article as encapsulated PostScript files
(see section \ref{figinc}) or created using standard \LaTeX\ drawing commands.
Please name all figure files using the guidelines in section \ref{fname}.
We accept submissions that use pdf\TeX\ to include
PDF or bitmap figures, but please ensure that you send us a PDF that uses PDF version 1.4 or lower
(to avoid problems in the ScholarOne system).
You can do this by putting \verb"\pdfminorversion=4" at the very start of your TeX file.
\label{fig1}All figures should be included within the body of the text
at an appropriate point or grouped together with their captions at the end of the article. A standard graphics inclusion package such as \verb"graphicx" should be used for figure inclusion, and the package should be declared in the usual
way, for example with \verb"\usepackage{graphicx}", after the \verb"\documentclass" command.
Authors should avoid using special effects generated by including verbatim
PostScript code in the submitted \LaTeX\ file. Wherever possible, please try to use standard \LaTeX\ tools
and packages.
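For example, a figure supplied as \verb"figure1.eps" might be included as follows (the file name, width and caption text below are placeholders):
\small\begin{verbatim}
\usepackage{graphicx} % in the preamble
...
\begin{figure}
\begin{center}
\includegraphics[width=0.75\textwidth]{figure1.eps}
\end{center}
\caption{\label{fig1}Description of what the figure shows.}
\end{figure}
\end{verbatim}
\normalsize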
\subsubsection{References.\label{bibby}}
You can produce your bibliography in the standard \LaTeX\ way using the \verb"\bibitem" command. Alternatively
you can use BibTeX: our preferred \verb".bst" styles are:
\begin{itemize}
\item For the numerical (Vancouver) reference style we recommend that authors use
\verb"unsrt.bst"; this does not quite follow the style of published articles in our
journals but this is not a problem. Alternatively \verb"iopart-num.bst" created by Mark A Caprio
produces a reference style that closely matches that in published articles. The file is available from
\verb"http://ctan.org/tex-archive/biblio/bibtex/contrib/iopart-num/" .
\item For alphabetical (Harvard) style references we recommend that authors use the \verb"harvard.sty"
in conjunction with the \verb"jphysicsB.bst" BibTeX style file. These, and accompanying documentation, can be downloaded
from \penalty-10000 \verb"http://www.ctan.org/tex-archive/macros/latex/contrib/harvard/".
Note that the \verb"jphysicsB.bst" bibliography style does not include article titles
in references to journal articles.
To include the titles of journal articles you can use the style \verb"dcu.bst" which is included
in the \verb"harvard.sty" package. The output differs a little from the final journal reference
style, but all of the necessary information is present and the reference list will be formatted
into journal house style as part of the production process if your article is accepted for publication.
\end{itemize}
\noindent Please make sure that you include your \verb".bib" bibliographic database file(s) and any
\verb".bst" style file(s) you have used.
\subsection{\label{copyright}Copyrighted material and ethical policy} If you wish to make use of previously published material for which you do not own the copyright then you must seek permission from the copyright holder, usually both the author and the publisher. It is your responsibility to obtain copyright permissions and this should be done prior to submitting your article. If you have obtained permission, please provide full details of the permission granted---for example, copies of the text of any e-mails or a copy of any letters you may have received. Figure captions must include an acknowledgment of the original source of the material even when permission to reuse has been obtained. Please read our ethical policy before writing your article.
\subsection{Naming your files}
\subsubsection{General.}
Please name all your files, both figures and text, as follows:
\begin{itemize}
\item Use only characters from the set a to z, A to Z, 0 to 9 and underscore (\_).
\item Do not use spaces or punctuation characters in file names.
\item Do not use any accented characters such as
\'a, \^e, \~n, \"o.
\item Include an extension to indicate the file type (e.g., \verb".tex", \verb".eps", \verb".txt", etc).
\item Use consistent upper and lower case in filenames and in your \LaTeX\ file.
If your \LaTeX\ file contains the line \verb"\includegraphics{fig1.eps}" the figure file must be called
\verb"fig1.eps" and not \verb"Fig1.eps" or \verb"fig1.EPS". If you are on a Unix system, please ensure that
there are no pairs of figures whose names differ only in capitalization, such as \verb"fig_2a.eps" and \verb"fig_2A.eps",
as Windows systems will be unable to keep the two files in the same directory.
\end{itemize}
When you submit your article files, they are manipulated
and copied many times across multiple databases and file systems. Including non-standard
characters in your filenames will cause problems when processing your article.
\subsubsection{\label{fname}Naming your figure files.} In addition to the above points, please give each figure file a name which indicates the number of the figure it contains; for example, \verb"figure1.eps", \verb"figure2a.eps", etc. If the figure file contains a figure with multiple parts, for example figure 2(a) to 2(e), give it a name such as \verb"figure2a_2e.eps", and so forth.
\subsection{How to send your files}
Please send your submission via the ScholarOne submission system. Go to the journal home
page, and use the `Submit an article' link on the right-hand side.
\section{Preparing your article}
\subsection{Sample coding for the start of an article}
\label{startsample}
The code for the start of a title page of a typical paper in the \verb"iopart.cls" style might read:
\small\begin{verbatim}
\documentclass[12pt]{iopart}
\begin{document}
\title[The anomalous magnetic moment of the
neutrino]{The anomalous magnetic moment of the
neutrino and its relation to the solar neutrino problem}
\author{P J Smith$^1$, T M Collins$^2$,
R J Jones$^3$\footnote{Present address:
Department of Physics, University of Bristol, Tyndalls Park Road,
Bristol BS8 1TS, UK.} and Janet Williams$^3$}
\address{$^1$ Mathematics Faculty, Open University,
Milton Keynes MK7~6AA, UK}
\address{$^2$ Department of Mathematics,
Imperial College, Prince Consort Road, London SW7~2BZ, UK}
\address{$^3$ Department of Computer Science,
University College London, Gower Street, London WC1E~6BT, UK}
\ead{[email protected]}
\begin{abstract}
...
\end{abstract}
\keywords{magnetic moment, solar neutrinos, astrophysics}
\submitto{\jpg}
\maketitle
\end{verbatim}
\normalsize
At the start of the \LaTeX\ source code please include
commented material to identify the journal, author, and (if you are sending a revised
version or a resubmission) the reference number that the journal
has given to the submission. The first non-commented line should be
\verb"\documentclass[12pt]{iopart}" to load the preprint class
file. The normal text will be in the Computer Modern 12pt font.
It is possible to specify 10pt font size by passing the option \verb"[10pt]" to the class file.
Although it is possible to choose a font other than Computer Modern by loading external packages, this is not recommended.
The article text begins after \verb"\begin{document}".
Authors of very long articles may find it convenient to separate
their article into a series of \LaTeX\ files each containing one section, and each of which is called
in turn by the primary file. The files for each section should be read in from the current directory;
please name the primary file clearly so that we know to run \LaTeX\ on this file.
Authors may use any common \LaTeX\ \verb".sty" files.
Authors may also define their own macros and definitions either in the main article \LaTeX\ file
or in a separate \verb".tex" or \verb".sty" file that is read in by the
main file, provided they do not overwrite existing definitions.
It is helpful to the production staff if complicated author-defined macros are explained in a \LaTeX\ comment.
The article class \verb"iopart.cls" can be used with other package files such
as those loading the AMS extension fonts
\verb"msam" and \verb"msbm", which provide the
blackboard bold alphabet and various extra maths symbols as well as symbols useful in figure
captions. An extra style file \verb"iopams.sty" is provided to load these
packages and provide extra definitions for bold Greek letters.
\subsection{\label{dblcol}Double-column layout}
The \verb"iopart.cls" class file produces single-column output by default, but a two-column layout can be obtained by
using \verb"\documentclass[10pt]" at the start of the file and \verb"\ioptwocol" after the \verb"\maketitle" command. Two-column output will begin
on a new page (unlike in published double-column articles, where the two-column material
starts on the same page as the abstract).
In general we prefer to receive submissions in single-column format even for journals
published in double-column style; however, the \verb"\ioptwocol" option may be useful to test figure sizes
and equation breaks for these journals. When setting material
in two columns you can use the asterisked versions of \LaTeX\ commands such as \verb"\begin{figure*} ... \end{figure*}"
to set figures and tables across two columns. If you have any problems or any queries about producing two-column output, please contact us at \verb"[email protected]".
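A minimal sketch of a two-column article opening, under the conventions just described (title, author and address are placeholders), might be:
\small\begin{verbatim}
\documentclass[10pt]{iopart}
\begin{document}
\title{An example set in two columns}
\author{A N Other}
\address{Sample University, Sample City, UK}
\begin{abstract}
...
\end{abstract}
\maketitle
\ioptwocol
Text set in two columns starts here ...
\end{document}
\end{verbatim}\normalsize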
\section{The title and abstract page}
If you use \verb"iopart.cls", the code for setting the title page information is slightly different from
the normal default in \LaTeX. If you are using a different class file, you do not need to mimic the appearance of
an \verb"iopart.cls" title page, but please ensure that all of the necessary information is present.
\subsection{Titles and article types}
The title is set using the command
\verb"\title{#1}", where \verb"#1" is the title of the article. The
first letter
of the title should be capitalized with the rest in lower case.
The title appears in bold case, but mathematical expressions within the title may be left in light-face type.
If the title is too long to use as a running head at the top of each page (apart from the
first) a short
form can be provided as an optional argument (in square brackets)
before the full title, i.e.\ \verb"\title[Short title]{Full title}".
For article types other than papers, \verb"iopart.cls"
has a generic heading \verb"\article[Short title]{TYPE}{Full title}"
and some specific definitions given in table~\ref{arttype}. In each case (apart from Letters
to the Editor and Fast Track Communications) an
optional argument can be used immediately after the control sequence name
to specify the short title; where no short title is given, the full title
will be used as the running head. Not every article type has its own macro---use \verb"\article" for
any not listed. A full list of the types of articles published by a journal is given
in the submission information available via the journal home page.
The generic heading could be used for
articles such as those presented at a conference or workshop, e.g.
\small\begin{verbatim}
\article[Short title]{Workshop on High-Energy Physics}{Title}
\end{verbatim}\normalsize
Footnotes to titles may be given by using \verb"\footnote{Text of footnote.}" immediately after the title.
Acknowledgment of funding should be included in the acknowledgments section rather than in a footnote.
\begin{table}
\caption{\label{arttype}Types of article defined in the {\tt iopart.cls}
class file.}
\footnotesize\rm
\begin{tabular*}{\textwidth}{@{}l*{15}{@{\extracolsep{0pt plus12pt}}l}}
\br
Command& Article type\\
\mr
\verb"\title{#1}"&Paper (no surtitle on first page)\\
\verb"\ftc{#1}"&Fast Track Communication\\
\verb"\review{#1}"&Review\\
\verb"\topical{#1}"&Topical Review\\
\verb"\comment{#1}"&Comment\\
\verb"\note{#1}"&Note\\
\verb"\paper{#1}"&Paper (no surtitle on first page)\\
\verb"\prelim{#1}"&Preliminary Communication\\
\verb"\rapid{#1}"&Rapid Communication\\
\verb"\letter{#1}"&Letter to the Editor\\
\verb"\article{#1}{#2}"&Other articles\\\ & (use this for any other type of article; surtitle is whatever is entered as {\tt
\#1})\\
\br
\end{tabular*}
\end{table}
\subsection{Authors' names and addresses}
For the authors' names type \verb"\author{#1}",
where \verb"#1" is the
list of all authors' names. Western-style names should be written as initials then
family name, with a comma after all but the last
two names, which are separated by `and'. Initials should {\it not} be followed by full stops. First (given) names may be used if
desired. Names in Chinese, Japanese and Korean styles should be written as you want them to appear in the published article. Authors in all IOP Publishing journals have the option to include their names in Chinese, Japanese or Korean characters in addition to the English name: see appendix B for details.
If the authors are at different addresses a superscripted number, e.g. $^1$, \verb"$^1$", should be used after each
name to reference the author to his/her address.
If an author has additional information to appear as a footnote, such as
a permanent address, a normal \LaTeX\ footnote command
should be given after the family name and address marker
with this extra information.
The authors' affiliations follow the list of authors.
Each address is set by using
\verb"\address{#1}" with the address as the single parameter in braces.
If there is more
than one address then the appropriate superscripted number, followed by a space, should come at the start of
the address.
E-mail addresses are added by inserting the
command \verb"\ead{#1}" after the postal address(es) where \verb"#1" is the e-mail address.
See section~\ref{startsample} for sample coding. For more than one e-mail address, please use the command
\verb"\eads{\mailto{#1}, \mailto{#2}}" with \verb"\mailto" surrounding each e-mail address. Please ensure
that, at the very least, you state the e-mail address of the corresponding author.
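For instance, two e-mail addresses (both hypothetical) would be coded as:
\small\begin{verbatim}
\eads{\mailto{[email protected]}, \mailto{[email protected]}}
\end{verbatim}\normalsize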
\subsection{The abstract}
The abstract follows the addresses and
should give readers concise information about the content
of the article and indicate the main results obtained and conclusions
drawn. It should be self-contained---there should be no references to
figures, tables, equations, bibliographic references etc. It should be enclosed between \verb"\begin{abstract}"
and \verb"\end{abstract}" commands. The abstract should normally be restricted
to a single paragraph of around 200 words.
\subsection{Subject classification numbers}
We no longer ask authors to supply Physics and Astronomy Classification System (PACS)
classification numbers. For submissions to {\it Nonlinearity}\/ we ask that you
supply Mathematics Subject Classification (MSC) codes. MSC numbers are included after the abstract
using \verb"\ams{#1}".
The command
\verb"\submitto{#1}" can be inserted, where \verb"#1" is the journal name written in full or the appropriate control sequence as
given in table~\ref{jlab1}. This command is not essential to the running of the file and can be omitted.
\subsection{Keywords}
Keywords are required for all submissions. Authors should supply a minimum of three (maximum seven) keywords appropriate to their article as a new paragraph starting \verb"\noindent{\it Keywords\/}:" after the end of the abstract.
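For example (the keywords themselves are placeholders):
\small\begin{verbatim}
\noindent{\it Keywords\/}: quantum transport, full counting
statistics, Fredholm determinants
\end{verbatim}\normalsize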
\subsection{Making a separate title page}
To keep the header material on a separate page from the
body of the text insert \verb"\maketitle" (or \verb"\newpage") before the start of the text.
If \verb"\maketitle" is not included the text of the
article will start immediately after the abstract.
\section{The text}
\subsection{Sections, subsections and subsubsections}
The text of articles may be divided into sections, subsections and, where necessary,
subsubsections. To start a new section, end the previous paragraph and
then include \verb"\section" followed by the section heading within braces.
Numbering of sections is done {\it automatically} in the headings:
sections will be numbered 1, 2, 3, etc, subsections will be numbered
2.1, 2.2, 3.1, etc, and subsubsections will be numbered 2.3.1, 2.3.2,
etc. Cross references to other sections in the text should, where
possible, be made using
labels (see section~\ref{xrefs}) but can also
be made manually. See section~\ref{eqnum} for information on the numbering of displayed equations. Subsections and subsubsections are
similar to sections but
the commands are \verb"\subsection" and \verb"\subsubsection" respectively.
Sections have a bold heading, subsections an italic heading and
subsubsections an italic heading with the text following on directly.
\small\begin{verbatim}
\section{This is the section title}
\subsection{This is the subsection title}
\end{verbatim}\normalsize
The first section is normally an introduction, which should state clearly
the object of the work, its scope and the main advances reported, with
brief references to relevant results by other workers. In long papers it is
helpful to indicate the way in which the paper is arranged and the results
presented.
Footnotes should be avoided whenever possible and can often be included in the text as phrases or sentences in parentheses. If required, they should be used only for brief notes that do not fit conveniently into the text. The use of
displayed mathematics in footnotes should be avoided wherever possible and no equations within a footnote should be numbered.
The standard \LaTeX\ macro \verb"\footnote" should be used. Note that in \verb"iopart.cls" the \verb"\footnote" command
produces footnotes indexed by a variety of different symbols,
whereas in published articles we use numbered footnotes. This
is not a problem: we will convert symbol-indexed footnotes to numbered ones during the production process.
\subsection{Acknowledgments}
Authors wishing to acknowledge assistance or encouragement from
colleagues, special work by technical staff or financial support from
organizations should do so in an unnumbered `Acknowledgments' section
immediately following the last numbered section of the paper. In \verb"iopart.cls" the
command \verb"\ack" sets the acknowledgments heading as an unnumbered
section.
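For example (the name and grant number are placeholders):
\small\begin{verbatim}
\ack
We thank A N Other for helpful discussions.
This work was supported by Grant No XYZ-123.
\end{verbatim}\normalsize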
Please ensure that you include all of the sources of funding and the funding contract reference numbers that you are contractually obliged to acknowledge. We often receive requests to add such information very late in the production process, or even after the article is published, and we cannot always do this. Please collect all of the necessary information from your co-authors and sponsors as early as possible.
\subsection{Appendices}
Technical detail that it is necessary to include, but that interrupts
the flow of the article, may be consigned to an appendix.
Any appendices should be included at the end of the main text of the paper, after the acknowledgments section (if any) but before the reference list.
If there are
two or more appendices they should be called Appendix A, Appendix B, etc.
Numbered equations will be in the form (A.1), (A.2), etc,
figures will appear as figure A1, figure B1, etc and tables as table A1,
table B1, etc.
The command \verb"
\section{Introduction}
The Landauer--B\"uttiker formalism lies in the heart of mesoscopic physics \cite{Landauer_1957,doi:10.1080/14786437008238472,PhysRevLett.57.1761}.
It directly allows one to express the conductance in terms of the transmission matrix, this way relating transport and quantum properties \cite{Landauer_1992,Imry_1999}.
Historically, the substantiation of this formalism via linear response theory was connected with certain controversies ({\it cf} \cite{PhysRevLett.46.618,PhysRevB.23.6851} and \cite{PhysRevB.22.3519,PhysRevLett.47.972,PhysRevB.24.2978,PhysRevB.24.1151}). The original Landauer formulas proved to be sensitive to the proper formulation of the physical problem, in particular, to the proper definition of leads, electron reservoirs, and self-consistency of linear response (for review see \cite{Stone_1988}).
The controversies were finally resolved by B\"uttiker in \cite{PhysRevLett.57.1761}, where the general formulas for multi-terminal mesoscopic conductance were obtained.
Even though, according to the elementary theory of tunneling, the transmission probability is defined in a stationary setup, much attention has been devoted
to non-equilibrium approaches to transport \cite{Caroli_1971,PhysRevB.22.5887}.
Powerful analytic approaches involving Keldysh Green's function techniques were developed in \cite{Stefanucci2004,PhysRevB.69.195318,KOHLER2005,Ridley2022}, along with efficient numerical methods \cite{Gaury2016,PhysRevB.93.134506,Kloss2021}; these
allow one not only to describe the formation of the asymptotic currents and address their properties beyond the linear-response regime, but also to explore generic time-dependent quantum transport \cite{PhysRevB.66.205320,Moskalets2011,PhysRevB.103.L041405}.
From the point of view of one-dimensional integrable models, attention to similar problems was renewed in the context of quantum quenches, understood specifically
as the evolution of an isolated quantum system initialized in a highly non-equilibrium state, created either via a rapid change of the Hamiltonian or by macroscopic
spatial inhomogeneities \cite{Calabrese_2007,Sotiriadis2008,Polkovnikov2011,Calabrese_2016,Eisert_2015}. The latter case is more pertinent to the quantum transport setup and is dubbed the partition approach \cite{Caroli_1971,PhysRevB.69.195318}.
The large-time behavior of such systems can be described by the generalized hydrodynamics
\cite{Bertini_2016,Castro_Alvaredo_2016}, which allows one to get analytic treatment of the non-equilibrium steady currents,
describe anomalous diffusion, and address the correlation functions (for a review see the special issue \cite{Bastianello_2022}).
Transport in translationally invariant systems of free fermions and their spin analogs has attracted a lot of attention due to the possibility of obtaining analytic answers for
the average number of particles and its variance \cite{antal1999transport,Antal_2008,lancaster2010quantum,Viti_2016} (see also a numerical study in
\cite{PhysRevA.90.023624}).
Other aspects of the evolution of the bipartite system were studied in \cite{Perfetto2017,Jin2021}.
More delicate observables such as the Loschmidt echo and the Full Counting Statistics (FCS) were addressed in \cite{Viti_2016,St_phan_2017,PhysRevLett.110.060602,Sasamoto}, where a connection to random matrix theory was established
and the FCS was expressed in terms of Fredholm determinants.
Other connections of one-dimensional fermions at equilibrium in external potentials and random matrix theory are reviewed in \cite{Dean2019}.
The simplest case, in which translational invariance is broken by a local defect, often also allows for analytic treatment.
Among others we would like to emphasize research that studies entropy evolution \cite{eisler2009entanglement,eisler2012on_entanglement,Dubail_2017},
transport properties within the interacting resonant level model \cite{Bransch_del_2010,PhysRevB.82.205414,Bidzhiev_2017,Bidzhiev_2019}, as well as non-integrable Ising chain \cite{PhysRevB.99.180302}.
The inclusion of the defect in the generalized hydrodynamic approach was performed in \cite{Bertini2016}, the peculiarities of thermalization via the defect were discussed in
\cite{10.21468/SciPostPhys.12.2.060}, and the effects of an attractive local potential quench were studied in \cite{Rossi2021}.
Ref. \cite{10.21468/SciPostPhys.6.1.004} deals with the exact evaluation of the current and charge distribution for the bipartite scenario
when the left part of the system is prepared in the fully decorrelated state (infinite temperature) and is connected via the defect with the empty right part.
Further, this type of quench was considered for the hopping defect with arbitrary initial distributions in \cite{Gamayun2020}, where the FCS, the Loschmidt echo, and the entanglement entropy were computed.
In \cite{Schehr2022} analytic answers for the particle and energy currents as well as the full density distribution were obtained for the continuous system with a delta impurity.
In this paper, we study the continuous bipartite system with an \textit{arbitrary} defect localized around the middle of the system.
We consider a bipartite quench protocol, in which initially the ``right'' part of the system is empty and the ``left'' part is filled up to some energy with fermions subjected
to the local short-range potential $V_0(x)$, or distributed according to some probability (to model, for instance, the thermal initial state).
After that, the dynamics of the whole system is governed by the Hamiltonian with the local potential $V(x)$, which may, in principle, be different from $V_0(x)$. We compute the FCS of the number of particles in the right part of the system.
We derive an expression for the FCS in the form of a Fredholm determinant expressed via the Jost functions that correspond to the potentials $V$ and $V_0$.
This expression is exact in the thermodynamic limit and describes both the transient dynamics and the formation of the non-equilibrium steady state.
We argue that in the absence of the bound states in the potential $V(x)$, the leading terms in the FCS are defined via the transmission coefficient of the potential $V(x)$ and are given by the Levitov--Lesovik formula \cite{LL,Levitov_1996,Sch_nhammer_2007} (with logarithmic corrections for zero temperature states).
If two or more bound states are present in the system they affect even the properties of the steady state by introducing persistent oscillations with a frequency equal to the difference of energies between the bound states. Moreover, the amplitude of these oscillations depends on the Jost functions of the potential $V_0(x)$, this way retaining the memory of the initial state. This phenomenon can be observed already on the level of the current, where even for the constant bias the persistent oscillations are present on top of the constant Landauer--B\"uttiker contribution.
Similar dependencies of the initial correlation in the case when bound states are present in the system were observed in \cite{Khosravi2008,PhysRevB.92.165403}.
This effect seems to have been overlooked in the traditional approach (see, for instance, footnote 54 in \cite{PhysRevB.46.12485}).
The paper is organized as follows. In Section~\ref{sec2} we recall the definitions of the scattering data and the Jost states, and fix notation for one-dimensional systems.
In Section~\ref{quenchSec} we formulate the problem and present the main results.
The outline of the derivation of the main results is presented in Sections \ref{HardWall} and
\ref{secKernel}. In Section~\ref{HardWall} we describe a construction of the wave functions in the finite system, and in Section~\ref{secKernel} we discuss how to obtain the kernel for the Fredholm determinant. Section~\ref{current} contains the derivation of the Landauer--B\"uttiker expression for the current and its modification in the case when multiple bound states are present in the system. A short summary and outlook are presented in Section~\ref{summ}. The appendices deal with some details of the derivations and contain the scattering data for a few exemplary potentials.
\section{General properties of scattering}\label{sec2}
In this section we briefly recall some general notions of one-dimensional scattering on a local potential $V(x)$.
The eigenvalue problem is the Schr\"odinger equation
\begin{equation}
H_V\Psi= \left(-\frac{d^2}{dx^2} +V(x)\right)\Psi = E \Psi.
\end{equation}
The locality means that the potential vanishes fast enough as $|x|\to \infty$. For all practical purposes we assume that the potential is nonzero only in the finite domain
$|x|<\xi$. This way, for $|x|>\xi$ the wave functions that correspond to the energy $E=k^2$ are the plane waves $e^{\pm i k x}$.
So for every real $k \neq 0$ there exists a two-dimensional space of solutions.
The typical basis in this space can be conveniently described by the Jost states $\psi_k$, $\varphi_k$ defined by their asymptotic behavior, namely
\begin{equation}\label{eigenvalue}
\psi_k(x) = e^{-ikx} + o(1),\qquad x\to +\infty,
\end{equation}
\begin{equation}
\varphi_k(x)= e^{-ikx} + o(1),\qquad x\to -\infty.
\end{equation}
For a real potential these states are connected to their complex conjugated counterparts as $\psi_{-k}(x) = \bar{\psi}_k(x)$,
$\varphi_{-k}(x) = \bar{\varphi}_k(x)$.
If additionally the potential is symmetric $V(x) = V(-x)$, then $\psi_k(-x)$ and $\varphi_k(-x)$ are still eigenfunctions. Considering the asymptotic behavior one can conclude that in this case $ \psi_k(-x) = \bar{\varphi}_k(x)$.
Using \eref{eigenvalue} we see that the Jost solutions satisfy the following integral equations
\begin{equation}\label{psiint}
\psi_k(x) = e^{-ik x} - \int\limits_x^\infty \frac{\sin(k(x-y))}{k}V(y) \psi_k(y) dy,
\end{equation}
\begin{equation}\label{phiint}
\varphi_k(x) = e^{-ik x} + \int\limits^x_{-\infty} \frac{\sin(k(x-y))}{k}V(y) \varphi_k(y) dy.
\end{equation}
As the pairs of Jost solutions $(\varphi_k,\bar{\varphi}_k)$ and $(\psi_k,\bar{\psi}_k)$ each form a basis, they are connected by a linear transformation, the transfer matrix,
\begin{equation}\label{transfer}
\left(
\begin{array}{c}
\varphi_k(x) \\
\bar{\varphi}_k(x)
\end{array}
\right) =
\mathcal{T}(k)
\left(
\begin{array}{c}
\psi_k(x) \\
\bar{\psi}_k(x)
\end{array}
\right),\qquad
\mathcal{T}(k) = \left(
\begin{array}{cc}
a_k & b_k\\
\bar{b}_k & \bar{a}_k
\end{array}
\right).
\end{equation}
Note that for a real potential $a_{-k}=\bar{a}_k$, $b_{-k}=\bar{b}_k$, while for a symmetric potential $b_k$ is purely imaginary.
Considering the Wronskian of the eigenvalue problem \eref{eigenvalue} we conclude that the transfer matrix is unimodular
\begin{equation}\label{uni}
\det \mathcal{T}(k) =|a_k|^2-|b_k|^2= 1.
\end{equation}
The transfer matrix $\mathcal{T}$ can be repacked into the $S$-matrix \cite{Newton1982} as follows
\begin{equation}
S = \frac{1}{a_k}\left(
\begin{array}{cc}
-\bar{b}_k & 1\\
1 & b_k
\end{array}
\right).
\end{equation}
The unimodularity condition \eref{uni} translates into the unitarity of the $S$-matrix, $SS^+ =1$.
The transmission and the reflection coefficients are defined as the squared absolute values of the off-diagonal and diagonal components of the S-matrix, respectively,
\begin{equation}\label{tran}
T(E) = \frac{1}{|a_k|^2},\qquad R(E) = \frac{|b_k|^2}{|a_k|^2}.
\end{equation}
Here we present them as the functions of energy $E = k^2$. The unitarity \eref{uni} guarantees that $T(E) + R(E) = 1$.
The coefficient $a_k$ can be analytically continued to the upper half plane, where it might have zeroes that correspond to the bound states. These zeroes are purely imaginary, $k=i\varkappa$, so the corresponding energy is negative, $E = -\varkappa^2$. In fact, the analytic properties allow one to write (see, for instance, \cite{Novikov})
\begin{equation}
\label{a}
a_k = \prod\limits_{n=1}^{N} \frac{k-i\varkappa_n}{k+i\varkappa_n}
\exp\left(\frac{1}{2\pi i}\int\limits_{-\infty}^{\infty}\frac{\log (1+|b_q|^2)}{q-k-i0}dq\right).
\end{equation}
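As a simple illustration, consider the delta barrier $V(x)=g\,\delta(x)$, which is also used in figure~\ref{FigFCS} below. Matching the Jost solutions across $x=0$ gives, in the conventions of \eref{transfer},
\[
a_k = 1+\frac{ig}{2k},\qquad b_k = -\frac{ig}{2k},\qquad T(E)=\frac{1}{|a_k|^2}=\frac{E}{E+g^2/4}.
\]
One readily checks \eref{uni}, $|a_k|^2-|b_k|^2=1$; moreover, $b_k$ is purely imaginary, as it must be for a symmetric potential, and for $g<0$ the coefficient $a_k$ has a single zero at $k=i|g|/2$ in the upper half plane, i.e. one bound state with $\varkappa=|g|/2$ and energy $E=-g^2/4$.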
To describe the wave function of a bound state we can use either $\varphi_k(x)$ or $\bar{\psi}_k(x)$, as both of these functions can be analytically continued to the upper half plane. In fact, it turns out that they are proportional, $\varphi_{i\varkappa}(x) = b_\varkappa \bar{\psi}_{i\varkappa}(x)$. Taking into account the definition of the transfer matrix \eref{transfer}, this relation is hardly surprising, and $b_{\varkappa}$ can be considered as an analytic continuation of $b_k$; however, contrary to $a_k$, such a continuation is not always possible, and the coefficient $b_\varkappa$ should be considered as additional scattering data.
Finally, let us comment on the normalization conditions of the continuous spectrum. Similar to \cite{Novikov} we conclude that
\begin{equation}
\int\limits_{-\infty}^\infty dx \varphi_k(x) \bar{\psi}_q(x) = a_q \delta(k-q).
\end{equation}
Therefore the Green's function $G(x,y,t)$, defined as a solution of the Schr\"odinger equation in the variable $x$ with the initial condition $G(x,y,t=0) = \delta(x-y)$, can be presented as
\begin{equation}\label{Gsimple}
G(x,y,t) = \int_C\frac{dk}{2\pi} \frac{\varphi_k(x) \bar{\psi}_k(y)}{a_k} e^{-itE_k}.
\end{equation}
For the continuous spectrum alone, the contour $C$ runs along the real line. We notice, however, that the integrand can be analytically continued to the upper half plane. Moreover, in this form we can easily take into account contributions from the bound states as well. To do so, the contour $C$ should run above all zeroes of $a_k$ in the upper half plane (see figure~\ref{FigContours} below).
Below we re-derive this presentation using wave functions in the box (hard-wall boundary conditions), and demonstrate how to express full counting statistics via the scattering data and Jost solutions.
\section{Quench protocol} \label{quenchSec}
The scattering states introduced in the previous section describe an infinite system. To correctly formulate the transport problem we impose
open (hard-wall) boundary conditions at $x=\pm R$, perform computations at finite $R$, and send $R\to\infty$ at the end of the computation.
At the initial moment of time only the left part of the system $x<0$ is filled, meaning that the single-particle wave functions $\Lambda_q$ are non-zero only in the interval $x\in [-R,0]$; more formally,
\begin{equation}\label{eq1}
- \frac{d^2\Lambda_q}{dx^2}+V_0(x)\Lambda_q = q^2\Lambda_q,\qquad\qquad \Lambda_q(0) = \Lambda_q(-R) = 0.
\end{equation}
The post-quench wave functions satisfy
\begin{equation}\label{eq2}
- \frac{d^2\chi_k}{dx^2}+V(x)\chi_k = k^2\chi_k,\qquad\qquad \chi_k(-R) = \chi_k(R) = 0.
\end{equation}
The initial $N$-particle state of the system $|{\rm in}\rangle$ is given in a Fock space by an ordered set of momenta $q_1<q_2<\dots< q_N$. Formally, it can be presented as a wedge product
\begin{equation}\label{vac}
|{\rm in}\rangle = \Lambda_{q_1}\bigwedge \Lambda_{q_2} \dots \bigwedge\Lambda_{q_N},
\end{equation}
which in the coordinate space corresponds to a single Slater determinant. The case of the statistical ensemble in the $N\to \infty$ limit can be described by taking the typical distribution of $q_i$.
To characterize the many-body dynamics we consider the full counting statistics (FCS). It can be written as
\begin{equation}\label{FCS2}
\mathcal{F}(\lambda,t) = \langle {\rm in}| e^{itH} e^{\lambda N_R} e^{-itH} |{\rm in} \rangle = \langle {\rm in}| e^{\lambda \int\limits_0^t d\tau J(\tau)} |{\rm in} \rangle,
\end{equation}
where $N_R$ is the number of particles in the right part of the system and $J(\tau)$ is the current through the point $x=0$.
Introducing the resolution of unity, we can formally present the FCS as a form factor series
\begin{equation}\label{ff1}
\mathcal{F}(\lambda,t) = \sum_{\textbf{k},\textbf{p}}\langle {\rm in}| \textbf{k} \rangle \langle \textbf{k} | e^{\lambda N_R} |\textbf{p}\rangle\langle \textbf{p} |{\rm in} \rangle
e^{it(E_{\textbf{k}}-E_{\textbf{p}})}.
\end{equation}
Here $|\textbf{k}\rangle$ and $|\textbf{p}\rangle$ are many-body states of the form \eqref{vac}.
Therefore the overlaps and the matrix elements are determinants of Cauchy-type matrices.
Due to the free fermionic structure of the initial state \eref{vac} the FCS can be presented as
\begin{equation}\label{Fdet}
\mathcal{F}(\lambda,t) = \det X_{ab},
\end{equation}
with indices $a$ and $b$ corresponding to the momenta in the initial state $|{\rm in}\rangle$, and the matrix elements are
\begin{equation}
X_{ab} =\delta_{ab}+ (e^\lambda-1)\sum_{k,p} \frac{(\Lambda_a,\chi_k)(\chi_k, P_>\chi_p)(\chi_p,\Lambda_b)}{\sqrt{(\Lambda_a,\Lambda_a)}(\chi_k,\chi_k)(\chi_p,\chi_p)\sqrt{(\Lambda_b,\Lambda_b)}} e^{it(E_k-E_p)}.
\label{Xab}
\end{equation}
Here $P_>$ is a projector on the right part of the system i.e. $x\in [0, R)$.
This formula can be obtained from \eref{ff1} using a variant of the Cauchy--Binet formula (the product of determinants is the determinant of the product of matrices).
Our goal is to present \eref{Fdet} in the thermodynamic limit as a Fredholm determinant of some trace-class operator. Namely, we present
\begin{equation}\label{kk}
X_{ab} =\delta_{ab}+ \frac{\pi}{R} K(q_a,q_b)+ o(1/R)
\end{equation}
so that the FCS
in the thermodynamic limit $R\to\infty$ transforms into a Fredholm determinant
\begin{equation}\label{Ftd}
\mathcal{F} (\lambda,t) = \det X \to \det \left(1 + \rho \hat{K}\right),
\end{equation}
where $\rho$ is the density of the initial state and the operator $\hat{K}$ acts on square-integrable functions $L^2(\mathds{R})$ via convolution with the kernel $K(q,q')$, namely
\begin{equation}
\hat{K}f(q) = \int K(q,q')f(q')dq'.
\end{equation}
We compute this kernel in Section~\ref{secKernel}. It can be presented as
\begin{equation}
K(q,q') = K_0(q,q') + \delta K(q,q'),
\end{equation}
where
\begin{equation}\label{K0}
K_0(q,q') = \frac{e^\lambda-1}{\pi} \sigma(q,q') \frac{\sin \frac{t(E_q -E_{q'})}{2}}{E_q -E_{q'}}
\end{equation}
with
\begin{equation}
\sigma(q,q') = \frac{i |\Phi_q(0)||\Phi_{q'}(0)|}{ \Phi_q(0) \Phi_{q'}(0) \bar{a}_qa_{q'}} \left(
\bar{\psi}_{q'}(0) \partial_x \psi_q(0) - \psi_q(0) \partial_x \bar{\psi}_{q'}(0)
\right).
\end{equation}
Here $\psi_k$ are the Jost solutions defined by equation~\eqref{psiint}, and by $\Phi_k(x)$ we denote the Jost solution of equation~\eref{phiint} for the potential $V_0$. The expression for $\delta K$ can be found in Section~\ref{secKernel}. It contains, in particular, contributions from the bound states if they are present in the spectrum of $V(x)$.
We see that the kernels are expressed via the scattering data and the Jost solutions.
The separation into $K_0$ and $\delta K$ is done to facilitate the large-$t$ asymptotic analysis.
Namely, in this limit $\delta K$ contains only oscillating terms, while $K_0$ formally tends to a delta function. For this reason we can heuristically argue that the leading contribution to the FCS is given by $K_0$, while $\delta K$ results in a smooth prefactor for the FCS.
For a specific lattice system this effect was observed in \cite{Gamayun2020}.
Moreover, since $\sigma(q,q')$ is a smooth function, we can replace it with its diagonal values, $\sigma(q,q')\to \sigma(q,q)$.
Further, the Wronskian $\bar{\psi}_{q}(x) \partial_x \psi_q(x) - \psi_q(x) \partial_x \bar{\psi}_{q}(x)$ does not depend on $x$, as can be checked by direct
differentiation. Evaluating it at $x\to-\infty$, we conclude that $\sigma(q,q)=2q/|a_q|^2$.
This allows us to transform the kernel to act on the energy space instead of momentum. This way, we obtain a Fredholm determinant of the generalized sine-kernel type
\begin{equation}
\mathcal{F}(\lambda,t) \approx \tilde{C}(\lambda,t) \det \left(1 + \frac{e^\lambda-1}{\pi}\rho(E)T(E)\frac{\sin \frac{t(E-E')}{2}}{E-E'} \right).
\end{equation}
Here, inside the determinant, we have written the kernel of the corresponding integral operator. The prefactor $\tilde{C}(\lambda,t)$ appears due to discarding $\delta K$.
Notice that in this form all information about the Jost function disappears and only the transmission coefficient $T(E)$ for the post-quench potential remains.
The large-$t$ asymptotic behavior of the Fredholm determinant can be easily found either by solving the corresponding Riemann--Hilbert problem \cite{Kitanine_2009,Slavnov_2010,Kozlowski_2011aa} or by using the effective form factors
\cite{GIZ,chernowitz2021dynamics,PhysRevB.105.085145}. For a smooth distribution $\rho(E)$ the result reads
\begin{equation}
\mathcal{F}(\lambda,t) \approx C(\lambda,t) \mathcal{F}_s(\lambda,t)
\end{equation}
with
\begin{equation} \label{fcsLL}
\log \mathcal{F}_s(\lambda,t) = \frac{t}{2\pi }\int \log (1 + (e^\lambda-1) \rho(E) T(E)) dE \equiv i t \int \nu_\lambda(E) dE.
\end{equation}
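In particular, the counting function defined here is $\nu_\lambda(E) = \frac{1}{2\pi i}\log\left(1+(e^\lambda-1)\rho(E)T(E)\right)$, and differentiating \eref{fcsLL} at $\lambda=0$ gives the average number of transferred particles
\[
\langle N_R(t)\rangle \approx \partial_\lambda \log\mathcal{F}_s(\lambda,t)\Big|_{\lambda=0} = \frac{t}{2\pi}\int \rho(E)\,T(E)\,dE,
\]
i.e. the Landauer--B\"uttiker current in the units adopted here.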
\begin{figure}
\centering
\includegraphics[width=\linewidth]{FigFCS.pdf}
\caption{
Ratio of the FCS $\mathcal{F}(\lambda,t)$ \eqref{Ftd} to the large-$t$ asymptotic formula $\mathcal{F}_s(\lambda,t)$ given by \eref{fcsLL2}; the initial state is characterized by $k_F=1$, $E_F=k_F^2=1$, $\rho(E) = \theta(E_F-E)$:
(a) delta barrier $V(x)= g\delta(x)$, $g=-0.3$ (one bound state), $\lambda=0.3$;
(b) symmetric double delta barrier potential \eref{symdoubledelta} with $d=2.3$, $g=-1.3$ (two bound states), $\lambda=1.3$.
}
\label{FigFCS}
\end{figure}
The prefactor $C(\lambda,t)$ contains both $\tilde{C}(\lambda,t)$ and the constant prefactors from the asymptotic expression for the
Fredholm determinant.
When bound states are absent from the spectrum, or there is only one bound state, we expect only a decaying transient time dependence, $C(\lambda,t) \approx C(\lambda)$,
see figure~\ref{FigFCS}(a).
This way, in equation \eqref{fcsLL} we recover the prediction for the FCS known as the Levitov--Lesovik formula
\cite{LL,Levitov_1996,Sch_nhammer_2007}. The large deviation theory perspective on this formula can be found in \cite{RevModPhys.81.1665},
while the generalized hydrodynamic point of view is presented in \cite{Doyon2019}.
When the function $\rho(E)$ has sharp jumps, as happens, for instance, at zero temperature, $\rho(E) = \theta(E_F - E)$, or in non-equilibrium
setups \cite{PhysRevB.81.085436,Gutman_2011}, then in addition to the smooth time dependence in $C(\lambda,t)$ we also obtain power-law dependencies, with the corresponding exponents
defined by the value of the function $\nu_\lambda(E)$ at the jump points.
In particular, for the zero-temperature case the modified expression reads
\begin{equation}\label{fcsLL2}
\log \mathcal{F}_s(\lambda,t) =- \left(\nu_\lambda(0)^2+ \nu_\lambda(E_F)^2\right) \log t +
\frac{t}{2\pi }\int\limits_0^{E_F} \log (1 + (e^\lambda-1) T(E)) dE .
\end{equation}
Notice that $\nu_\lambda(0)=0$ for a generic barrier, since $T(E=0)=0$. However, for special potentials with $T(E=0)\ne 0$ (e.g. reflectionless potentials),
the non-vanishing $\nu_\lambda(0)$ also contributes to \eref{fcsLL2}.
Finally, when there are two or more bound states in the spectrum, then $C(\lambda,t)$ contains
persistent oscillatory contributions with the frequency equal to the difference of energies of the bound states, see figure~\ref{FigFCS}(b).
Notice that after a few periods the oscillations are described by a single harmonic with a constant amplitude.
For a specific defect in a lattice model this was demonstrated in \cite{Gamayun2020}.
\subsection{Entanglement Entropy}
Let us also mention that one can relate the entanglement entropy $\mathcal{S}(t)$,
obtained after tracing out the left part of the system, to the FCS by a simple integral
\cite{Klich_2009,Klich_2009a,Song_2011,Song_2012}.
We express this relation in a simple and convenient form as
\begin{equation}\label{ee concise}
{\cal S}(t) = \frac{1}{4} \int\limits_{-\infty}^\infty \frac{\log\mathcal{F}(\lambda,t) }{\sinh^2 (\lambda/2)}d \lambda,
\end{equation}
where the integral at $\lambda=0$ should be treated in the principal value sense.
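The asymptotics below rely on the elementary identity (with the integral at $\lambda=0$ again understood in the principal value sense)
\[
\frac{1}{4}\int\limits_{-\infty}^{\infty} \frac{\log\left(1+(e^\lambda-1)x\right)}{\sinh^2(\lambda/2)}\,d\lambda = -x\log x - (1-x)\log(1-x),\qquad 0\le x\le 1,
\]
applied pointwise in $E$ with $x=\rho(E)T(E)$.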
Substituting for the complete $\mathcal{F}$ its asymptotic expression $\mathcal{F}_s$, for instance for the zero-temperature case \eqref{fcsLL2}, we obtain, as $t\rightarrow \infty$,
\begin{equation}\label{S asymptotic}
{\cal S}(t) \approx t \int\limits_{0}^{E_F} \frac{dE}{2\pi} \Big(-T(E) \log T(E) - R(E)\log R(E)\Big) - \frac{\log t }{4} \int\limits_{-\infty}^\infty \frac{\nu_\lambda(0)^2+\nu_\lambda(E_F)^2}
{\sinh^2 (\lambda/2)}d \lambda .
\end{equation}
Here $R(E) \equiv 1 -T(E)$. The linear-in-time part of this formula is generic for one-dimensional systems \cite{Calabrese_2005E},
and in this case it has the form of a classical Shannon entropy (see also \cite{eisler2012on_entanglement} and \cite{Bidzhiev_2017});
a suitable generalization to interacting systems was obtained in \cite{Alba7947}. The logarithmic growth becomes important when the defect is absent, or for a
reflectionless potential,
when the linear part disappears. The coefficient in front of the logarithm is compatible with predictions from conformal field theories \cite{calabrese2009entanglement,Peschel_2009,eisler2012on_entanglement}
\begin{equation}\label{S asymptotic epsilon=1}
{\cal S}(t) = \frac{c}{6} \log t+O(1) ,~~~ t\rightarrow \infty.
\end{equation}
In our case, for $T(E) =1$, we get $c=2$ after computing the integral in the last line of \eqref{S asymptotic}. Notice that, when the linear part is present, the coefficient in front of the logarithmic correction can be non-universal, similarly to \cite{eisler2012on_entanglement}.
\section{Hard-wall wave functions}
\label{HardWall}
The key step in deriving explicit expressions for the kernels is an explicit presentation
of the hard-wall wave functions \eqref{eq1}, \eqref{eq2} in terms of the Jost functions and the scattering data.
We start with $\chi_k$. Assuming that the range of the potential $\xi$ is much smaller than $R$, the wave function can be presented as
\begin{equation}\label{chikk}
\chi_k(x) = {\rm Im} \left[e^{ikR}\psi_k(x)\right],
\end{equation}
where $\psi_k$ is a Jost function that corresponds to the potential $V(x)$ (see \eref{psiint}).
This way the condition $\chi_k(R) = 0 $ is satisfied automatically, while for large negative $x$ the behavior reads
\begin{equation}
\chi_k(x) = {\rm Im} \left[e^{ikR}(\bar{a}_k e^{-ikx} -b_k e^{ikx})\right].
\end{equation}
Here the scattering data correspond to the potential $V(x)$. Demanding $\chi_k(-R)=0$ provides the spectrum condition, which can be resolved as
\begin{equation}\label{sp23}
e^{2i k R} = \frac{i {\rm Im}\,b_k+\sqrt{1+({\rm Re}\,b_k)^2}}{\bar{a}_k} \equiv e^{-2i\delta(k)}.
\end{equation}
Here we have introduced the scattering phase $\delta(k)$. We have to take into account two possible solutions, corresponding to the two branches of the square root;
thus, in fact, we have two different scattering phases. For both of them $\delta(k) = - \delta(-k)$,
meaning that if $k$ is a solution then $-k$ is a solution as well, with the same energy $E_k = k^2$.
However, they describe the same state, as is clearly seen from \eqref{chikk}.
Therefore, we restrict ourselves to the positive $k$ solutions of \eref{sp23}.
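For instance, in the free case $V=0$ we have $a_k=1$ and $b_k=0$, the two branches of \eref{sp23} give $e^{2ikR}=\pm 1$, and the resulting spectrum $k_n=\pi n/(2R)$ with positive integer $n$ reproduces the standard hard-wall levels of the interval $[-R,R]$, with $\chi_k(x)=\sin(k(R-x))$.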
Let us also discuss the normalization of the wave function.
To this end we notice that the $k$ derivative of the $\chi_k$ satisfies
\begin{equation}
\left(
-\partial_x^2+ V(x) - k^2
\right)\partial_k\chi_k = 2k \chi_k,\qquad
\left(
-\partial_x^2+ V(x) - k^2
\right)\chi_k =0.
\end{equation}
So we can write
\begin{multline}
2k (\chi_k,\chi_k) = \int\limits_{-R}^{R} dx \left[
-\frac{d^2\partial_k\chi_k }{dx^2}\chi_k(x) + \partial_k\chi_k \frac{d^2\chi_k(x)}{dx^2}
\right] \\
= \left[
-\frac{d\partial_k\chi_k }{dx}\chi_k(x) + \partial_k\chi_k \frac{d\chi_k(x)}{dx}
\right] \Big|_{-R}^{R}.\label{norm0}
\end{multline}
This allows us to present
\begin{equation}\label{norm}
(\chi_k,\chi_k) = ({\rm Re}\, b_k + \sqrt{1 + ({\rm Re}\, b_k)^2})\sqrt{1 + ({\rm Re}\, b_k)^2} (R + \delta'(k)).
\end{equation}
Here $\delta'(k)$ means the momentum derivative.
Similarly, we can describe the matrix elements $(\chi_k, P_>\chi_p)=\int\limits_{0}^R dx \chi_k(x) \chi_p(x) $ of the projector in \eref{Xab} as
\begin{multline}\label{chi2}
(E_k-E_p)(\chi_k, P_>\chi_p) = \\
=\int\limits_{0}^R dx \left(\left[\left(-\partial_x^2+ V(x) \right)\chi_k(x)\right] \chi_p(x) -
\chi_k(x)\left(-\partial_x^2+ V(x) \right)\chi_p(x)\right)
\\ =\int\limits_{0}^R dx \partial_x\left(
-\chi_p(x) \partial_x\chi_k(x)+ \chi_k(x)\partial_x\chi_p(x)
\right) = \chi_p(0) \partial_x\chi_k(0)-\chi_k(0) \partial_x\chi_p(0).
\end{multline}
To describe the bound states that might be present in the system, one can argue that, due to the finite range of the potential, the corresponding wave functions are localized around $x=0$ and decay exponentially for large $|x|$. Therefore the boundary conditions are satisfied automatically with exponential precision, and we may put
\begin{equation}
\chi_k^{\rm bound} (x) \approx \varphi_{i\varkappa}(x),\qquad k = i\varkappa.
\end{equation}
Its normalization can be found in a similar manner taking into account the identification $\varphi_{i\varkappa}(x) = b_\varkappa \bar{\psi}_{i\varkappa}(x)$ discussed
in Section~\ref{sec2}.
Indeed, using the fact that, as $x\to+\infty$, the leading term in the momentum derivative of the wave function behaves as $a'_{i\varkappa } e^{\varkappa x}$, we obtain
\begin{equation}\label{NormBound}
(\varphi_{i\varkappa},\varphi_{i\varkappa}) = i a'_{i\varkappa} b_\varkappa.
\end{equation}
Similarly, we can find the pre-quench wave function $\Lambda_q$. In this case it is more convenient to use the Jost solution \eref{phiint} for the potential $V_0$, which we denote as
$\Phi_q(x)$. In this notation we propose the following formula
\begin{equation}\label{lambda1}
\Lambda_q(x) = {\rm Im}\frac{\Phi_q(x)}{\Phi_q(0)}.
\end{equation}
Notice that in this form the boundary condition $\Lambda_q(0)=0$ is satisfied automatically, while the condition $\Lambda_q(-R)=0$ defines the spectrum
and the scattering phase $\eta(q)$
\begin{equation}\label{sp44}
e^{2iqR} = \frac{\Phi_q(0)}{\bar{\Phi}_q(0)} \equiv e^{-2i\eta(q)}.
\end{equation}
The normalization now reads
\begin{equation}\label{LambdaOver}
(\Lambda_q,\Lambda_q) = \frac{R +\eta'(q)}{2|\Phi_q(0)|^2}.
\end{equation}
Finally, the computation of the overlaps between the pre- and post-quench wave functions in \eref{Xab} can be avoided completely and replaced by the corresponding
overlaps with the Jost functions. Namely, as follows from \eref{chi2}, the time derivative of \eref{Xab} can be expressed via
the (conjugated) time evolution of the wave function $\Lambda_q(y,t)$
defined as
\begin{equation}\label{L0}
\Lambda_q(y,t) \equiv \sum_k \frac{(\Lambda_q,\chi_k)\chi_k(y)}{(\chi_k,\chi_k)}e^{itE_k} = \int\limits_{-R}^0 dx \Lambda_q(x) G^*(x,y,t).
\end{equation}
Here we have used the following presentation of the Green's function
\begin{equation}\label{Gstar}
G^*(x,y,t) \equiv \sum\limits_k \frac{\chi_k(x)\chi_k(y)}{(\chi_k,\chi_k)} e^{itE_k}.
\end{equation}
The summation is taken over all spectral points \eref{sp23}. We perform this summation explicitly in \ref{appG}, keeping the genuine discrete degrees of freedom, and take the thermodynamic limit only at the very end. The computation is straightforward but a bit tedious.
However, the obtained result can be easily explained heuristically. Namely, one can argue that in the thermodynamic limit instead of function \eref{Gstar}
one can use \eref{Gsimple}.
This way, we can find a presentation only with the Jost solutions introduced in the previous section
\begin{equation}
\Lambda_q(y,t) = \int_C \frac{dk}{2\pi} \frac{(\Lambda_q,\varphi_k)\bar{\psi}_k(y)}{a_k}e^{itE_k}.
\end{equation}
The integration path $C$ runs from $-\infty$ to $+\infty$ in the upper half plane above all positions of zeroes of $a_k$, see figure~\ref{FigContours}.
The overlap $(\Lambda_q,\varphi_k)$ can be computed using the same trick as in \eref{norm0} and \eref{chi2}.
Indeed, if we introduce the function
\begin{equation}\label{Xiqk}
\Xi_{q,k} =\Lambda_q'(0)\varphi_k(0)- \int\limits_{-\infty}^0 dx \Lambda_q(x) (V_0(x) - V(x))\varphi_k(x),
\end{equation}
we can present
\begin{equation}\label{overlll}
(E_k -E_q) \int\limits_{-R}^0 dx \Lambda_q(x) \varphi_k(x) = \Xi_{q,k} - \Lambda_q'(-R)\varphi_k(-R).
\end{equation}
Here we have used that, due to the finite range of the potentials, the lower limit of integration in \eref{Xiqk} can be taken as either $-R$ or $-\infty$.
Taking into account that for $k\in C$ the last term vanishes exponentially, $\varphi_k(-R)\sim e^{ikR}$, we finally obtain
\begin{equation}\label{L1}
\Lambda_q(y,t) = \int_C \frac{dk}{2\pi} \frac{\Xi_{q,k} \bar{\psi}_k(y)}{(k^2-q^2)a_k}e^{itE_k}.
\end{equation}
This is the final answer in the thermodynamic limit.
Notice that $\Xi_{q,k}$ is a regular function and can be continued from the discrete spectrum to the upper half plane of the variable $k$.
In the next section we will evaluate the large-time asymptotic behavior of the kernel, which is mostly defined by $\Xi_{q,-q}$. The latter can be
computed from \eref{overlll} along with the asymptotic behavior $\Lambda_q'(-R)\sim -q e^{iqR}/\Phi_q(0)$ for large $R$ (see \eref{lambda1})
\begin{equation}\label{Xiqmq}
\Xi_{q,-q}=-\frac{q}{\Phi_q(0)}.
\end{equation}
This expression can be directly obtained from the definition \eref{Xiqk} already in the thermodynamic limit. We demonstrate it in \ref{pp}.
The direct computation of $\Lambda_q(0,t)$ and its derivative in the finite system is given in \ref{appF}.
\section{Kernel} \label{secKernel}
To compute the kernel $K(q,q')$ for the Fredholm determinant of the FCS \eref{Ftd}, we start by considering its time derivative.
Using the explicit presentation \eref{Xab} and \eref{chi2}, along with the definition \eref{L0}, we arrive at
\begin{equation}\label{dK}
\frac{dK(q,q')}{dt} = \frac{2i(e^\lambda-1)}{\pi}
|\Phi_q(0)| \left( f^{(1)}_q(t) \bar{f}^{(0)}_{q'}(t) - f^{(0)}_q(t)\bar{f}^{(1)}_{q'}(t)\right)|\Phi_{q'}(0)|,
\end{equation}
where we have denoted
\begin{equation}\label{fa2m}
f^{(\alpha)}_q (t) = \partial^\alpha_x \Lambda_q(x,t)\Big|_{x=0}=
\int\limits_C \frac{dk}{2\pi}
\frac{\Xi_{q,k}\partial_x^\alpha\bar\psi_k(0)}{a_k}
\frac{e^{itk^2}}{k^2-q^2}, \qquad \alpha=0,1.
\end{equation}
The contour $C$ runs as is shown in figure~\ref{FigContours}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{FigContours.png}
\caption{Integration contours $C$ and $C'$ in the complex plane of $k$ for the integral presentation of $f^{(\alpha)}_q $ given by \eref{fa2m}.
The contours $C$ and $C'$ are the initial and transformed contours of integration, respectively.
Blue dots on the imaginary axis correspond to the bound states, red dots correspond to poles at $k=\pm q$
in \eref{fa2m}.
The shaded areas show the regions of exponential decay (I, III quadrants, light blue) and exponential growth (II, IV quadrants, pink) of $\exp(itk^2)$ for $t\to+\infty$.
}
\label{FigContours}
\end{figure}
Using presentation \eref{L1} we can directly integrate \eref{dK}. However, in order to assess the long-time asymptotic behavior more easily, we first identically transform $f^{(\alpha)}_q$ to highlight the most relevant terms as $t\to+\infty$. To do so we notice that the exponential $e^{itk^2}$ decays in the first and third quadrants of the complex plane of $k$ (see figure~\ref{FigContours}).
So we deform the contour $C$ into $C'$ by pulling it towards the negative real axis and crossing it.
By doing so we inevitably encircle all positions of the bound states and the pole $k=-q$.
The obtained deformation reads
\begin{equation}\label{faCp}
f^{(\alpha)}_q(t) = \frac{i\Xi_{q,-q}\partial_x^\alpha\bar\psi_{-q}(0)}{a_{-q}} \frac{e^{itq^2}}{2q}+ \sum_{n=1}^{N^\mathrm{b}}
\frac{i \Xi_{q,i\varkappa_n}\partial_x^\alpha\bar\psi_{i\varkappa_n}(0)}{a'_{i\varkappa_n}}
\frac{e^{-it\varkappa_n^2}}{\varkappa_n^2+q^2}
+
\int\limits_{C'} \frac{dk}{2\pi}
\frac{\Xi_{q,k}\partial_x^\alpha\bar\psi_k(0)}{a_k}
\frac{e^{itk^2}}{k^2-q^2}.
\end{equation}
The ``leading'' coefficient $\Xi_{q,-q}$ was computed in \eqref{Xiqmq}.
Further we use the symmetry $k\to -k$ to fold the full contour $C'$ and consider integration only with $\mathrm{Re}\,k>0$, namely
\begin{equation}\label{faRe}
f^{(\alpha)}_q(t) = \sum_{n=1}^{N^\mathrm{b}}
B_{n,q}^{(\alpha)} e^{-it\varkappa_n^2}
+F^{(\alpha)}_q e^{itq^2} +
\int\limits_{0}^{\infty} \frac{dk}{\pi}
\Omega^{(\alpha)}_{q,k}
\frac{e^{itk^2}}{(k+i0)^2-q^2},
\end{equation}
\begin{equation}\label{BFOm}
B_{n,q}^{(\alpha)}=
\frac{i \Xi_{q,i\varkappa_n}\partial_x^\alpha\bar \psi_{i\varkappa_n}(0)}
{a'_{i\varkappa_n}(\varkappa_n^2+q^2)},
\qquad
F_q^{(\alpha)}=-i \frac{\partial_x^\alpha\psi_{q}(0)}{2\Phi_q(0)a_{-q}},
\qquad
\Omega^{(\alpha)}_{q,k}=
\mathrm{Re}\,
\frac{\Xi_{q,k}\partial_x^\alpha\bar\psi_k(0)}{a_k}.
\end{equation}
This form of $f^{(\alpha)}_q(t)$ is convenient for the large-$t$ asymptotic analysis. The
first two terms give persistent oscillations, while the integral in \eref{faRe} decays as a power law in $t$.
This can be deduced from the stationary phase method, considering the saddle point at $k=0$. The corresponding exponent of the power-law decay depends on the behavior of $\Omega^{(\alpha)}_{q,k}$ at $k=0$. For generic potentials, $a_k$ has a first-order pole at $k=0$, while
$\Xi_{q,k}$ and $\partial_x^\alpha\psi_k(0)$ are regular at $k=0$.
Therefore $\Omega^{(\alpha)}_{q,k}$ has at least a first-order zero at $k=0$,
which implies that the entire integral can be estimated as $O(t^{-1})$.
For some special potentials (for example, reflectionless potentials) $a_k$ is regular at $k=0$. For such potentials the integral decays
as $t^{-1/2}$:
\begin{equation}\label{intreg}
\int\limits_{0}^{\infty} \frac{dk}{\pi}
\Omega^{(\alpha)}_{q,k}
\frac{e^{itk^2}}{(k+i0)^2-q^2}=\frac{I^{(\alpha)}_{q}}{\sqrt{t}}+ O(t^{-1}),
\end{equation}
\begin{equation}\label{Iq}
I^{(\alpha)}_{q}=- \frac{\sqrt{\pi} e^{i\pi/4} \Xi_{q,0}\partial_x^\alpha\psi_0(0)}{2 a_0 q^2}.
\end{equation}
To compute the kernel we substitute $f^{(\alpha)}(t)$ in the form \eref{faRe} into~\eref{dK} and integrate over $t$.
Additionally, we perform conjugation with diagonal matrices
\begin{equation}
K(q,q') \to K(q,q')e^{-it(E_q -E_{q'})/2}.
\end{equation}
This operation does not change the determinant, so for the transformed kernel we obtain
\begin{equation}\label{Kqqp}
K(q,q') =K_0(q,q')+ \delta K (q,q').
\end{equation}
Here $K_0(q,q')$ is given by
\begin{equation}
K_0(q,q') = \frac{4i(e^\lambda-1)}{\pi} |\Phi_q(0)| (F_q^{(1)} \bar F_{q'}^{(0)}-F_q^{(0)} \bar F_{q'}^{(1)})|\Phi_{q'}(0)|
\frac{\sin t(E_q - E_{q'})/2}{E_q - E_{q'}}.
\end{equation}
Using definition \eqref{BFOm} it can be equivalently presented as \eqref{K0}.
The rest of the
kernel can be presented as
\begin{equation}
\delta K(q,q')= \frac{2i(e^\lambda-1)}{\pi}
|\Phi_q(0)| \left( M_{qq'}(t)- \bar{M}_{q'q}(t) \right)|\Phi_{q'}(0)|
\end{equation}
with
\begin{equation}
M_{qq'}(t) = e^{-it(E_q -E_{q'})/2}\sum\limits_{i=1}^4 \left[K^{(i)}(q,q',t)-K^{(i)}(q,q',0)\right].
\end{equation}
Here the different kernels have different physical meanings. The kernel $K^{(1)}$ is responsible for the contribution of the
bound states only. It is given by
\begin{equation}
K^{(1)}(q,q',t)= \sum_{m<n}^{N^\mathrm{b}} (B_{mq}^{(1)}B_{nq'}^{(0)}-B_{mq}^{(0)} B_{nq'}^{(1)})
\frac{e^{it(\varkappa_n^2-\varkappa_m^2)}}{i(\varkappa_n^2-\varkappa_m^2)} .
\end{equation}
The kernel $K^{(2)}$ is responsible for the contribution of the continuous spectrum only:
\begin{multline}
K^{(2)}(q,q',t)= \int_0^\infty \frac{dk}{\pi} \frac{e^{it(E_k-E_{q'})}}{i(E^+_k-E_{q'})}
\frac{ \Omega_{qk}^{(1)}\bar F_{q'}^{(0)}- \Omega_{qk}^{(0)}\bar F_{q'}^{(1)}}{E^+_k-E_q} \\
+\frac12 \int\limits_0^\infty \frac{dk}{\pi}\int\limits_0^\infty \frac{dp}{\pi} \frac{e^{it(E_k-E_p)}}{i(E^+_k-E^-_p)}
\frac{\Omega_{qk}^{(1)} \Omega_{q'p}^{(0)}-\Omega_{qk}^{(0)} \Omega_{q'p}^{(1)}}
{(E^+_k-E_q)(E^-_p-E_{q'})},
\end{multline}
here $E_k=k^2$ and $E^\pm_k=(k\pm i0)^2$.
Finally the kernels $K^{(3)}$ and $K^{(4)}$ give the mixed contribution from the bound states and the continuous spectrum
\begin{equation}
K^{(3)}(q,q',t)=\sum_{n=1}^{N^\mathrm{b}}
\int_0^\infty \frac{dk}{\pi} \frac{e^{it(E_k+\varkappa_n^2)}}{i(E^+_k+\varkappa_n^2)}
\frac{ \Omega_{qk}^{(1)} B_{nq'}^{(0)}- \Omega_{qk}^{(0)} B_{nq'}^{(1)}}{E^+_k-E_q},
\end{equation}
\begin{equation}
K^{(4)}(q,q',t)=\sum_{n=1}^{N^\mathrm{b}} (B_{nq'}^{(0)} F_{q}^{(1)}-B_{nq'}^{(1)} F_{q}^{(0)})
\frac{e^{it(\varkappa_n^2+E_{q})}}{i(E_{q}+\varkappa_n^2)}.
\end{equation}
Integrals in $K^{(2)}$ and $K^{(3)}$ decay for large $t$ because of averaging of rapid oscillations as in the integral \eqref{faRe}.
Special care has to be taken for reflectionless potentials. At first glance, in this case relations \eref{intreg},
\eref{Iq} might produce a logarithmic growth at large $t$ in the double integral in $K^{(2)}$.
This growth is, however, absent because of the relation
\begin{equation}\label{resreg}
I^{(1)}_{q}\bar I^{(0)}_{q'}-I^{(0)}_{q}\bar I^{(1)}_{q'}=0.
\end{equation}
There are also potential singularities at small $q\lesssim t^{-1/2}$, where a slightly different asymptotic analysis of \eref{intreg} is needed. Indeed, \eref{Iq} shows a singular behavior for small $q$, which is in fact absent, since in the asymptotic analysis of \eref{intreg} we have assumed that the pole at $k=q$ is far from the stationary point $k=0$. We have performed such an analysis for the current and showed that small $q$ gives only subleading contributions.
Apart from the decaying terms, $\delta K$ also contains time-independent terms $K^{(i)}(t=0)$, highly oscillating terms like $K^{(4)}(t)$,
and terms that oscillate with frequencies given by the energies of the bound states, $K^{(1)}(t)$. The latter come in the form of finite-rank operators
and can appear in the final expression for the determinant only linearly.
As we have discussed in Section~\ref{quenchSec}, we expect that
the contribution of the kernel $\delta K$ to the asymptotics of the Fredholm determinant $\det (1 + \hat{K})$
enters only as a smooth overall prefactor, which has a non-vanishing time dependence only if there are two or more bound states in the spectrum.
\subsection{FCS for perfect lead attachment}
\label{perf}
There are well-developed methods for the asymptotic analysis of Fredholm determinants of so-called integrable kernels \cite{Deift_1997,Bogoliubov1997}.
As we have shown above, for generic potentials $V_0(x)$ and $V(x)$ the FCS kernel $K(q,q')$ is not an integrable one.
In this subsection we consider a special case of the quench setup in which the obtained kernel is integrable even at finite times.
We call this situation the \textit{perfect lead attachment} because it corresponds to the scenario when $V_0(x) = V(x)$ for $x<0$.
In this case, due to the integral presentation \eref{phiint}, the corresponding Jost functions coincide for negative $x$: $\varphi_q(x) =\Phi_q(x)$ for $x \le 0$. From the presentation \eref{Xiqk} we observe the factorization
\begin{equation}\label{XiqkPLA}
\Xi_{q,k} = \Lambda'_q(0) \varphi_k(0),
\end{equation}
which implies a similar factorization ${f}^{(\alpha)}_q(t)= \Lambda'_q(0) g^{(\alpha)}_q(t)$ for ${f}^{(\alpha)}_q(t)$ given by \eref{fa2m},
where
\begin{equation}\label{fa22}
g^{(\alpha)}_q(t) = \int\limits_C \frac{dk}{2\pi} \omega_k^{(\alpha)}
\frac{e^{itk^2}}{k^2-q^2} ,\qquad
\omega_k^{(\alpha)} \equiv \frac {\varphi_{k}(0)\partial_x^\alpha\bar\psi_k(0)}{a_k}.
\end{equation}
Comparing \eref{XiqkPLA} at $k=-q$ with \eref{Xiqmq}, and using $\varphi_{-q}(0)=\bar\varphi_q(0)$, we conclude that
$\Lambda'_q(0)= -q/|\varphi_q(0)|^2 $.
Therefore now \eref{dK} reads
\begin{equation}\label{dKqqp}
\frac{dK(q,q')}{dt} = \frac{2i(e^\lambda-1)qq'}{\pi |\varphi_q(0)| |\varphi_{q'}(0)|}
\left( g^{(1)}_q(t) \bar{g}^{(0)}_{q'}(t) - g^{(0)}_q(t)\bar{g}^{(1)}_{q'}(t)\right).
\end{equation}
Integrating in $t$ we can present the kernel $K(q,q')$ in the integrable form
\begin{equation}\label{Kqqpl}
K(q,q') = \frac{2(e^\lambda-1)qq'}{\pi|\varphi_q(0)| |\varphi_{q'}(0)|} \frac{g^{(1)}_q(t) \bar{g}^{(0)}_{q'}(t) - g^{(0)}_q(t)\bar{g}^{(1)}_{q'}(t)+
\bar{D}_q(t)-D_{q'}(t)}{E_q - E_{q'}},
\end{equation}
where
\begin{equation}
D_{q}(t) =i \int\limits_0^t d\tau \int\limits_C \frac{dk}{2\pi} e^{i\tau k^2}
\left[ \omega_k^{(1)}\bar{g}^{(0)}_{q}(\tau)-\omega_k^{(0)}\bar{g}^{(1)}_{q}(\tau) \right] .
\end{equation}
To check the correctness of \eref{Kqqpl}, one compares its derivative in $t$ with \eref{dKqqp} using
\begin{equation}
\frac{d}{dt}g^{(\alpha)}_q(t) =iq^2 g^{(\alpha)}_q(t)+ i \int\limits_C \frac{dk}{2\pi} \omega_k^{(\alpha)}e^{i t k^2}.
\end{equation}
One also has to check that $K(q,q')=0$ at $t=0$. This is ensured by the property $g^{(0)}_q(0)=0$, which follows from the analyticity of $\omega_k^{(0)}$ in the upper half-plane of $k$.
The integrable form of the kernel $K(q,q')$ allows one to replace the evaluation of the Fredholm determinant by the solution of a Riemann--Hilbert problem \cite{Deift_1997,Bogoliubov1997}.
This approach is especially useful for the asymptotic analysis at large time $t\to + \infty$.
In this case, however, if we follow the standard procedure outlined in \cite{Bogoliubov1997}, the corresponding jump matrix will have size $4\times4$.
Therefore, we postpone the full analysis to a separate publication.
The asymptotic behavior of $g^{(\alpha)}_q(t)$ can be found similarly to \eqref{faRe}, where one can neglect the last integral.
To find the large-time asymptotic behavior of ${D}_{q}(t)$
we present it identically as
\begin{multline}
{D}_{q}(t) = \int\limits_{C} \frac{dk}{2\pi} \int\limits_{C^*} \frac{dp}{2\pi}
\frac{e^{it(k^2-p^2)}-1}{k^2-p^2} \frac{\bar{\omega}_p^{(0)}\omega_k^{(1)}-\bar{\omega}_p^{(1)}\omega_k^{(0)}}{p^2-q^2} \\ \approx
-\int\limits_{C} \frac{dk}{2\pi} \int\limits_{C^*} \frac{dp}{2\pi}
\frac{1}{k^2-p^2+i0} \frac{\bar{\omega}_p^{(0)}\omega_k^{(1)}-\bar{\omega}_p^{(1)}\omega_k^{(0)}}{p^2-q^2}.
\end{multline}
Here $C^*$ is a contour conjugated to $C$.
Moreover, for a symmetric potential the function $g^{(1)}_q(t)$ simplifies significantly, and the integral can be dropped even at finite times; namely, we can write
\begin{equation}
g^{(1)}_q(t) = \frac{e^{itq^2}}{2 \bar{a}_q}.
\end{equation}
Here we used that for an arbitrary even potential $V(-x)=V(x)$ the Jost solutions are related as $\psi_{-k}(x)=\varphi_k(-x)$,
which leads to
\begin{equation} \label{sym}
\omega^{(1)}_k = \frac{\varphi_k(0)\partial_x \psi_{-k}(0)}{a_k} = ik.
\end{equation}
Indeed, taking into account that the Wronskian $\varphi_k(x) \partial_x \psi_{-k}(x) - \psi_{-k}(x) \partial_x \varphi_k(x)$ does not depend on $x$, and calculating it at $x\to -\infty $ and $x=0$,
we obtain the relation \eqref{sym}.
Thus, the integral in \eqref{faRe} vanishes identically, since it depends only on the real part of \eqref{sym}.
Furthermore, the bound-state contribution vanishes because the wave functions are either odd or even, so that either the value at zero or the value of the derivative at zero vanishes, leading to $\varphi_{i\varkappa_n}(0)\partial_x \bar{\psi}_{i\varkappa_n}(0)=0$.
\section{The current}\label{current}
Let us also discuss the full current $J(t)$ of the particles flowing through the point $x=0$ to the right part of the system.
It can be evaluated from the FCS \eref{Ftd} as follows:
\begin{equation}\label{J}
J(t) =\frac{d}{dt} \frac{d\mathcal{F}(\lambda,t)}{d\lambda}\Big|_{\lambda=0}
= \mathrm{Tr}\,\left(\rho \frac{d}{dt} \frac{d\hat K}{d\lambda}\Big|_{\lambda=0}\right) =
-\int_0^\infty dq \rho(q)
\frac{4|\Phi_q(0)|^2 }{\pi}
{\rm Im}\, f^{(1)}_q(t) \bar{f}^{(0)}_{q}(t) ,
\end{equation}
where at the last step we used the explicit presentation \eref{dK} to compute the trace.
As we discuss in Section \ref{secKernel}, the integral in \eref{faRe} may be dropped in the calculation of the current at large $t$, since it
vanishes as $t\to \infty$, and we can approximate
\begin{equation}\label{faap}
f^{(\alpha)}_q(t) \approx F^{(\alpha)}_q e^{itq^2} + \sum_{n=1}^{N^\mathrm{b}}
B_{n,q}^{(\alpha)} e^{-it\varkappa_n^2}.
\end{equation}
Substituting this expression into \eref{J} we obtain three types of contributions to the current
\begin{equation}
J(t)\approx J_\mathrm{LB} + J^\mathrm{b} + \delta J,
\end{equation}
where $J_\mathrm{LB}$ comes from the first term in \eref{faap},
$J^\mathrm{b}$ comes from the terms that involve the bound states only, and
$\delta J$ describes the mixing of the first term with the bound states.
To calculate $J_\mathrm{LB}$ we use ${\rm Im}\, \psi_q'(0)\bar{\psi}_q(0) = -q$
and \eref{tran}
\begin{equation}
J_\mathrm{LB}=\int \limits_{0}^\infty \frac{dq }{\pi} \frac{q\rho(q)}{|a_q|^2} =
\int \frac{dE}{2\pi} \rho(E) T(E) .
\end{equation}
This is the well-known Landauer--B\"uttiker formula for the current.
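As a simple illustration (not part of the original derivation), this integral is straightforward to evaluate numerically. The minimal Python sketch below uses the textbook delta-barrier transmission coefficient $T(E)=E/(E+g^2/4)$ (cf. the single-delta example considered later) and the zero-temperature distribution $\rho(E)=\theta(E_F-E)$; the closed form of the integral serves as a cross-check.
\begin{verbatim}
# Numerical check of the Landauer-Buttiker current (illustration only).
# For a delta barrier V(x) = g*delta(x) the transmission coefficient is
# T(E) = E/(E + g^2/4); with rho(E) = theta(E_F - E) the integral also
# has a closed form, used here as a cross-check.
import numpy as np
from scipy.integrate import quad

g, E_F = 1.3, 1.0
a = g**2 / 4                       # convenient shorthand

J_num, _ = quad(lambda E: (E / (E + a)) / (2 * np.pi), 0.0, E_F)
J_exact = (E_F - a * np.log(1 + E_F / a)) / (2 * np.pi)
print(J_num, J_exact)              # agree to quadrature accuracy
\end{verbatim}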
The contribution of bound states to the current is
\begin{equation}
J^\mathrm{b}=\sum_{m<n}
A_{mn} \sin t(\varkappa_m^2-\varkappa_n^2),
\end{equation}
where
\begin{equation}\label{Amn}
A_{mn} =\frac{ 4 \left(\bar\psi_{i\varkappa_n}'(0)\bar\psi_{i\varkappa_m}(0)-\bar\psi_{i\varkappa_m}'(0)\bar\psi_{i\varkappa_n}(0)\right) }{a'_{i\varkappa_m}a'_{i\varkappa_n}}
\int_0^\infty \frac{dq}{\pi} \rho(q)
|\Phi_q(0)|^2
\frac{ \Xi_{q,i\varkappa_m}\Xi_{q,i\varkappa_n}}
{(\varkappa_m^2+q^2)(\varkappa_n^2+q^2)} .
\end{equation}
For the symmetric potential $V(x)$, the bound states are either even functions with $\bar\psi_{i\varkappa_n}'(0)=0$ or odd functions with
$\bar\psi_{i\varkappa_n}(0)=0$. Therefore, in this case, a nontrivial contribution to the current may arise only from pairs of odd-even states. Furthermore, in the case of perfect lead attachment, $V(x)=V_0(x)$, we have $\Xi_{q,i\varkappa_n}=0$ for odd bound states
$\bar\psi_{i\varkappa_n}(x)$
and therefore there is no contribution at all to the current from bound states in the case of perfect lead attachment with an even potential.
The integral in $q$ for $\delta J$ can be estimated by the
contribution at $q=0$ via the method of stationary phase, and it can be shown that $\delta J$
decays for large $t$ at least as $t^{-1/2}$ and therefore does not give a leading contribution to the current.
Finally, we arrive at the following expression for the large-time asymptotic current:
\begin{equation}\label{Jtot}
J(t)\approx \int \frac{dE}{2\pi} \rho(E) T(E) +
\sum_{m<n}
A_{mn} \sin t(\varkappa_m^2-\varkappa_n^2).
\end{equation}
We see that in addition to the constant Landauer--B\"uttiker current
(the first term), there are also oscillating terms connected with the presence of the multiple bound states.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{FigJ.pdf}
\caption{
Current through the point $x=0$ and its asymptotic behavior; the initial state is characterized by $k_F=1$, $E_F=k_F^2=1$, $\rho(E) = \theta(E_F-E)$:
(a) the reflectionless potential $V(x)= - 2/\cosh^2 x$ (one bound state); the current (black) oscillates with an amplitude decaying as $\sim t^{-1/2}$ around the constant Landauer--B\"uttiker current $J_{LB} = E_F/(2\pi)$ (red).
(b) the symmetric double delta barrier potential \eref{symdoubledelta} with $d=2.3$, $g=-1.3$ (two bound states);
the current (black dots) shows the asymptotic oscillating behavior \eref{Jtot2} with a fixed amplitude (red curve) around the constant Landauer--B\"uttiker current (green line).
}
\label{FigJ}
\end{figure}
To illustrate this formula we consider an example of the reflectionless potential $V(x)= - 2/\cosh^2 x$. For this potential $T(E)=1$, hence the name.
The Jost functions and the functions $f^{(\alpha)}_q(t)$ can be easily computed, and the results are presented in \ref{refLp}.
The exact expression \eqref{J} for the current then reads
\begin{equation}\label{J2}
J(t) = \int_0^\infty \frac{dq}{\pi} \rho(q)\left(q+\sin[(1+q^2)t]+2(1+q^2)\mathrm{Im}\int\limits_0^\infty \frac{dk}{\pi} \frac{k^2}{1+k^2} \frac{e^{it(k^2-q^2)}}{(k+i0)^2-q^2}\right).
\end{equation}
We plot this expression for $\rho(E) = \theta(E_F-E)$ in figure~\ref{FigJ}(a) against the Landauer--B\"uttiker expression $J_{LB} = E_F/(2\pi)$.
Notice that even though a bound state is present in the spectrum, it produces only oscillations that vanish with time.
To demonstrate the persistent oscillations we consider the symmetric double delta barrier potential
\begin{equation} \label{symdoubledelta}
V(x) = g \delta(x-d/2)+ g\delta(x+d/2).
\end{equation}
The corresponding scattering data can be computed explicitly (for the details see \ref{asff})
\begin{equation}\label{Tdelta2a1}
a_k = \frac{g^2e^{2 i k d}+(2 k+i g)^2}{4 k^2}, \quad
b_k = \frac{g e^{ -i d k} (g-2 i k)-g e^{ i d k} (g+2 i k)}{4 k^2}.
\end{equation}
The bound-state momenta follow from the relation $a_{i\varkappa}=0$, which, if we define $u=2\varkappa/|g|$ and $D=|g|d$, can be written as
\begin{equation}\label{bound2d1}
(u-1)^2 -e^{-u D}=0.
\end{equation}
For negative couplings this equation has two solutions for $D>2$ and one solution for $0\le D \le 2$. Note that $a_k$ has a simple pole at $k=0$ if $D\ne 2$.
The case $D=2$ describes the situation when a bound state just starts to appear from (or disappear into) the continuous spectrum, which is formally reflected
in $a_k$ being regular at $k=0$. Notice that the same behavior is inherent to reflectionless potentials, while for generic potentials $a_k$ has a simple pole at $k=0$.
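For concreteness (an illustrative sketch, not part of the original text), the roots of \eref{bound2d1} are easily found numerically by bracketed root finding; below we use the parameters of figure~\ref{FigJ}(b), for which $D=|g|d\approx 2.99>2$. Note that $u=0$ always solves \eref{bound2d1} trivially and does not correspond to a bound state.
\begin{verbatim}
# Bound states of the symmetric double delta well, from
# (u - 1)^2 - exp(-u*D) = 0 with u = 2*kappa/|g|, D = |g|*d.
import numpy as np
from scipy.optimize import brentq

g, d = -1.3, 2.3
D = abs(g) * d                     # D ~ 2.99 > 2: two bound states

f = lambda u: (u - 1.0)**2 - np.exp(-u * D)

u1 = brentq(f, 0.1, 1.0)           # f(0.1) > 0, f(1) = -exp(-D) < 0
u2 = brentq(f, 1.0, 2.0)           # f(2) = 1 - exp(-2D) > 0

kappa = 0.5 * abs(g) * np.array([u1, u2])
print("kappa =", kappa, " E =", -kappa**2)
\end{verbatim}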
The formula for the asymptotic current \eqref{Jtot} is now given by
\begin{equation}\label{Jtot2}
J(t)\approx \int \frac{dE}{2\pi} \rho(E) T(E) +
A_{12} \sin t(E_2-E_1),
\end{equation}
where $T(E)=|a_k|^{-2}$ is the transmission coefficient; the energies of bound states $E_j=-\varkappa_j^2$ are defined via solutions $\varkappa_j$ of the equation \eref{bound2d1};
the amplitude $A_{12}$ follows from \eqref{Amn} and is presented explicitly in \eqref{A12d2}.
In figure~\ref{FigJ}(b) we compare the asymptotic current \eqref{Jtot2} with the exact expression \eqref{J}
computed numerically using $f^{(\alpha)}_q(t)$ given in \ref{asff}. We observe that the asymptotic regime is established after a few oscillations.
\section{Summary and Outlook}\label{summ}
To summarize, we have presented a first-principles derivation of the full counting statistics for one-dimensional transport through an arbitrary defect.
The derivation in the main part is based on the effective presentation of the Green's function in the thermodynamic limit.
The procedure of taking this limit (replacing the summation over the quantized quasimomenta by an integral) is not absolutely rigorous, so in the appendix we have presented an exact summation over the quantized momenta with the subsequent thermodynamic limit.
The final answer can be expressed via the Fredholm determinant whose numerical evaluation is straightforward.
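For instance (a minimal sketch, not the code used for the figures in this paper), one can follow Bornemann's quadrature recipe: sample the kernel at Gauss--Legendre nodes $E_i$ with weights $w_i$ and replace $\det(1+\hat K)$ by the ordinary determinant of $\delta_{ij}+\sqrt{w_i}\,K(E_i,E_j)\sqrt{w_j}$. Below this is applied to a sine-kernel-type operator of the form discussed in the next paragraph, with the delta-barrier transmission $T(E)=E/(E+g^2/4)$ as an illustrative input.
\begin{verbatim}
# Nystrom-type evaluation of a Fredholm determinant det(1 + K) on [a, b]
# via Gauss-Legendre quadrature (illustration only).
import numpy as np

def fredholm_det(kernel, a, b, n=200):
    x, w = np.polynomial.legendre.leggauss(n)     # nodes on [-1, 1]
    E = 0.5 * (b - a) * x + 0.5 * (b + a)         # map to [a, b]
    w = 0.5 * (b - a) * w
    sw = np.sqrt(w)
    M = np.eye(n) + sw[:, None] * kernel(E[:, None], E[None, :]) * sw[None, :]
    return np.linalg.det(M)

lam, t, g, E_F = 0.4, 30.0, 1.3, 1.0

def K(E, Ep):                                     # generalized sine kernel
    T = E / (E + g**2 / 4)                        # delta-barrier T(E)
    sine = 0.5 * t * np.sinc(t * (E - Ep) / (2 * np.pi))  # sin(t dE/2)/dE
    return (np.exp(lam) - 1) / np.pi * T * sine

print(fredholm_det(K, 0.0, E_F))
\end{verbatim}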
We speculate that the large-time asymptotic behavior of the obtained Fredholm determinant could be deduced after certain approximations of the kernels, which render
the determinant of the sine-kernel type.
In this form, the answer depends only on the transmission coefficient of the post-quench potential, while the correlations of the original state enter only through the energy distribution. After these approximations, the Fredholm determinant can be analyzed either by the non-linear steepest descent method for the corresponding Riemann--Hilbert
problem or by the application of effective form factors. This way we were able to recover the Levitov--Lesovik formula and its modification by logarithmic corrections in the case of
discontinuous initial distributions.
As for future directions, one can turn to the special quench of \textit{perfect lead attachment}, for which the obtained exact kernel is an integrable one
and the Riemann--Hilbert problem appears without any approximations (see Section~\ref{perf}).
It would also be interesting to develop effective form factor methods to find the large-time asymptotic behavior directly from the series \eqref{ff1}.
Besides, these methods could be used to describe the situation when the Levitov--Lesovik formula is not applicable, i.e., when there are
two or more bound states present in the spectrum of the post-quench potential and the FCS acquires persistent oscillatory behavior even for a constant potential bias.
We plan to clarify how the amplitudes of these oscillations depend on the initial conditions and whether some memory effects of the pre-quench potentials are present.
In this manuscript, we have not considered the case when there are bound states present in the pre-quench potential, but this case can be easily addressed in our formalism.
A much more involved improvement of the formalism would be needed to tackle more general initial states (in particular, when there are some particles on the right-hand side of the system,
$\langle N_R(0)\rangle\neq 0$), to describe spinful electrons and superconducting setups, and to explore the case of a driven system, i.e., when the defect depends on time (for example, the harmonically driven conformal defect \cite{PhysRevB.103.L041405}).
\ack
We are grateful to Jakub Tworzydło and Artur Slobodeniuk for useful discussions and for careful reading of the manuscript.
The authors acknowledge support by the National Research Foundation of Ukraine grant 2020.02/0296.
Y.Z. and N.I. were partially supported by NAS of Ukraine (project No. 0122U000888).
O.G. also acknowledges support from the Polish National Agency for Academic
Exchange (NAWA) through the Grant No. PPN/ULM/2020/1/00247. O.G. is grateful to
Galileo Galilei Institute for hospitality and support during the
scientific program on “Randomness, Integrability, and
Universality”, where part of this work was done.
\section{Introduction}
The Landauer--B\"uttiker formalism lies in the heart of mesoscopic physics \cite{Landauer_1957,doi:10.1080/14786437008238472,PhysRevLett.57.1761}.
It allows one to express the conductance directly in terms of the transmission matrix, thereby relating transport and quantum properties \cite{Landauer_1992,Imry_1999}.
Historically, the substantiation of this formalism via linear response theory was connected with certain controversies ({\it cf} \cite{PhysRevLett.46.618,PhysRevB.23.6851} and \cite{PhysRevB.22.3519,PhysRevLett.47.972,PhysRevB.24.2978,PhysRevB.24.1151}). The original Landauer formulas proved to be sensitive to the proper formulation of the physical problem, in particular, to the proper definition of leads, electron reservoirs and self-consistency of linear response (for review see \cite{Stone_1988}).
The controversies were finally resolved by B\"uttiker in Ref. \cite{PhysRevLett.57.1761}, where the general formulas for multi-terminal mesoscopic conductance were obtained.
Even though, according to the elementary theory of tunnelling, the transmission probability is defined in a stationary setup, a lot of attention has been devoted
to the non-equilibrium approach to transport \cite{Caroli_1971,PhysRevB.22.5887}.
Powerful analytic approaches involving Keldysh Green's function techniques were developed in Refs. \cite{Stefanucci2004,PhysRevB.69.195318,KOHLER2005,Moskalets2011}, along with powerful numerical methods \cite{Gaury2016,PhysRevB.93.134506,Kloss2021}, which
allow one not only to describe the formation of the asymptotic currents and address their properties beyond the linear-response regime, but also to explore generic time-dependent quantum transport.
From the point of view of one-dimensional integrable models, the attention to similar problems was renewed in the context of quantum quenches, which are, specifically,
understood as the evolution of an isolated quantum system initialized in a highly non-equilibrium state created either via a rapid change of the Hamiltonian or containing macroscopic
spatial inhomogeneities \cite{Calabrese_2007,Sotiriadis2008,Polkovnikov2011,Calabrese_2016,Eisert_2015}. The latter is more pertinent to the quantum transport setup and is dubbed the partition approach \cite{Caroli_1971,PhysRevB.69.195318}.
The large-time behavior of such systems can be described by generalized hydrodynamics
\cite{Bertini_2016,Castro_Alvaredo_2016}, which allows one to obtain an analytic treatment of the non-equilibrium steady currents,
describe anomalous diffusion, and address the correlation functions (for a review see the special issue \cite{Bastianello_2022}).
Transport in translation-invariant systems of free fermions and their spin analogues has attracted a lot of attention due to the possibility of obtaining analytic answers for
the average number of particles and its variance \cite{antal1999transport,Antal_2008,lancaster2010quantum,Viti_2016} (see also a numerical study in
\cite{PhysRevA.90.023624}).
Other aspects of the evolution of the bipartite system were studied in \cite{Perfetto2017,Jin2021}.
More delicate observables, such as the Loschmidt echo and the full counting statistics (FCS), were addressed in \cite{Viti_2016,St_phan_2017,PhysRevLett.110.060602,Sasamoto}, where the connection to random matrix theory was established
and the FCS was expressed in terms of Fredholm determinants.
Other connections of one-dimensional fermions at equilibrium in external potentials to random matrix theory are reviewed in \cite{Dean2019}.
The simplest case, when translational invariance is broken by a local defect, often also allows for an analytic treatment.
Among others, we would like to emphasize research that studies entropy evolution \cite{eisler2009entanglement,eisler2012on_entanglement,Dubail_2017},
transport properties within the interacting resonant level model \cite{Bransch_del_2010,PhysRevB.82.205414,Bidzhiev_2017,Bidzhiev_2019}, as well as the non-integrable Ising chain \cite{PhysRevB.99.180302}.
The inclusion of the defect in the generalized hydrodynamic approach was performed in \cite{Bertini2016} and the peculiarities of the thermalization via the defect were discussed in
\cite{10.21468/SciPostPhys.12.2.060}. Ref. \cite{10.21468/SciPostPhys.6.1.004} deals with the exact evaluation of the current and charge distribution for the bipartite scenario
when the left part of the system is prepared in the fully decorrelated state (infinite temperature) and is connected via the defect with the empty right part.
Further, this type of quench was considered for the hopping defect with arbitrary initial distributions in \cite{Gamayun2020}, where the FCS, the Loschmidt echo, and the entanglement entropy were computed.
In \cite{Schehr2022}, analytic answers for the particle and energy currents, as well as the full density distribution, were obtained for the continuous system with a delta impurity.
In this paper we study a continuous bipartite system with an arbitrary defect.
We consider a bipartite protocol that corresponds to the following non-equilibrium initial setup.
Namely, we consider two closed one-dimensional systems, to which we refer as "left" and "right", connected by a junction that corresponds to potential scattering.
Initially, the left system contains free fermions subjected to a local potential.
The energy levels are filled up to some chemical potential or occupied according to some probability distribution (to model, for instance, a thermal initial state).
The right part of the system is empty. At some moment the two systems are brought into contact, and the subsequent evolution is governed by the modified potential.
\section{General properties of scattering}
In this section we briefly recall some general notions of one-dimensional scattering on a local potential $V(x)$.
Locality means that the potential vanishes fast enough as $|x|\to \infty$. For all practical purposes we assume that the potential is nonzero only in the finite domain
$|x|<\xi$. This way, for $|x|>\xi$ the wave functions that correspond to the energy $E=k^2$ are the plane waves $e^{\pm i k x}$. With the chosen units of mass, the stationary Schr\"odinger equation reads
\begin{equation}\label{eigenvalue}
- \frac{d^2\Psi}{dx^2}+V(x)\Psi = k^2\Psi
\end{equation}
For every real $k \neq 0$ there exists a two-dimensional space of solutions (corresponding to $k$ and $-k$).
The typical basis in this space can be conveniently described by the Jost states $\psi_k$, $\varphi_k$ defined by their asymptotic behavior, namely
\begin{equation}
\psi_k(x) = e^{-ikx} + o(1),\,\,\,\,\, x\to +\infty
\end{equation}
\begin{equation}
\varphi_k(x)= e^{-ikx} + o(1),\,\,\,\,\, x\to -\infty.
\end{equation}
For a real potential these states are connected to their complex conjugated counterparts as $\psi_{-k}(x) = \bar{\psi}_k(x)$,
$\varphi_{-k}(x) = \bar{\varphi}_k(x)$.
If additionally the potential is symmetric $V(x) = V(-x)$, then $\psi_k(-x)$ and $\varphi_k(-x)$ are still eigenfunctions. Considering the asymptotic behavior one can conclude that in this case $
\psi_k(-x) = \bar{\varphi}_k(x)$.
Using Eq. \eqref{eigenvalue} we can see that the Jost solutions satisfy the following integral equations
\begin{equation}\label{psiint}
\psi_k(x) = e^{-ik x} - \int\limits_x^\infty \frac{\sin(k(x-y))}{k}V(y) \psi_k(y) dy,
\end{equation}
\begin{equation}\label{phiint}
\varphi_k(x) = e^{-ik x} + \int\limits^x_{-\infty} \frac{\sin(k(x-y))}{k}V(y) \varphi_k(y) dy.
\end{equation}
As both pairs of Jost solutions form bases, they are connected by a linear transformation, the transfer matrix:
\begin{equation}\label{transfer}
\left(
\begin{array}{c}
\varphi_k(x) \\
\bar{\varphi}_k(x)
\end{array}
\right) =
\mathcal{T}(k)
\left(
\begin{array}{c}
\psi_k(x) \\
\bar{\psi}_k(x)
\end{array}
\right),\,\,\,\,\,\,\,\,\, \mathcal{T}(k) = \left(
\begin{array}{cc}
a_k & b_k\\
\bar{b}_k & \bar{a}_k
\end{array}
\right).
\end{equation}
Note that for a real potential $a_{-k}=\bar{a}_k$, $b_{-k}=\bar{b}_k$, while for a symmetric potential $b_k$ is purely imaginary.
Considering the Wronskian of the eigenvalue problem \eqref{eigenvalue} we conclude that the transfer matrix is unimodular
\begin{equation}\label{uni}
\det \mathcal{T}(k) =|a_k|^2-|b_k|^2= 1.
\end{equation}
The transfer matrix $\mathcal{T}$ can be repacked into the $S$-matrix \cite{Newton1982} as follows
\begin{equation}
S = \frac{1}{a_k}\left(
\begin{array}{cc}
-\bar{b}_k & 1\\
1 & b_k
\end{array}
\right).
\end{equation}
The unimodularity condition \eqref{uni} implies the unitarity of the $S$-matrix, $SS^+ =1$.
The transmission and reflection coefficients are defined as the squared absolute values of the off-diagonal and diagonal components of the $S$-matrix, respectively:
\begin{equation}\label{tran}
T(E) = \frac{1}{|a_k|^2},\qquad R(E) = \frac{|b_k|^2}{|a_k|^2}.
\end{equation}
Here we present them as functions of the energy $E = k^2$. The condition \eqref{uni} guarantees that $T(E) + R(E) = 1$.
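These relations are easy to verify numerically. A minimal Python check (an illustration, not part of the original text), using the double-delta scattering data \eqref{Tdelta2a}--\eqref{Tdelta2b} derived in the Examples section, reads:
\begin{verbatim}
# Check of unimodularity |a_k|^2 - |b_k|^2 = 1 and of T(E) + R(E) = 1
# for the double delta potential (illustration only).
import numpy as np

g1, g2, d1, d2 = 0.7, -1.1, -0.4, 0.9

def ab(k):
    a = (g1 * g2 * np.exp(-2j * k * (d1 - d2))
         + (2 * k + 1j * g1) * (2 * k + 1j * g2)) / (4 * k**2)
    b = (g2 * np.exp(-2j * d2 * k) * (g1 - 2j * k)
         - g1 * np.exp(-2j * d1 * k) * (g2 + 2j * k)) / (4 * k**2)
    return a, b

for k in [0.3, 1.0, 2.5]:
    a, b = ab(k)
    T, R = 1 / abs(a)**2, abs(b)**2 / abs(a)**2
    print(k, abs(a)**2 - abs(b)**2, T + R)        # both -> 1
\end{verbatim}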
The coefficient $a_k$ can be analytically continued to the upper half plane, where it might have zeroes that correspond to the bound states. They are purely imaginary, $k=i\varkappa$, so the corresponding energy is negative, $E = -\varkappa^2$. In fact, the analytic properties allow one to present (see, for instance, \cite{Novikov})
\begin{equation}
\label{a}
a_k = \prod\limits_{n=1}^{N} \frac{k-i\varkappa_n}{k+i\varkappa_n}
\exp\left(\frac{1}{2\pi i}\int\limits_{-\infty}^{\infty}\frac{\log (1+|b_q|^2)}{q-k-i0}dq\right)
\end{equation}
To describe the wave function of a bound state we can use either $\varphi_k(x)$ or $\bar{\psi}_k(x)$, as both these functions can be analytically continued to the upper half plane. In fact, it turns out that they are proportional: $\varphi_{i\varkappa}(x) = b_\varkappa \bar{\psi}_{i\varkappa}(x)$. Taking into account the definition of the transfer matrix \eqref{transfer}, this relation is hardly surprising, and $b_{\varkappa}$ can be considered as an analytic continuation of $b_k$. However, contrary to $a_k$, such a continuation is not always possible, and the coefficient $b_\varkappa$ should be regarded as additional scattering data.
Finally, let us comment on the normalization conditions of the continuous spectrum. Similarly to \cite{Novikov}, we conclude that
\begin{equation}
\int\limits_{-\infty}^\infty dx \varphi_k(x) \bar{\psi}_q(x) = 2\pi a_q \delta(k-q).
\end{equation}
Therefore, the Green's function $G(x,y,t)$, defined as the solution of the Schr\"odinger equation in the $x$ variable with the initial condition $G(x,y,t=0) = \delta(x-y)$, can be presented as
\begin{equation}\label{Gsimple}
G(x,y,t) = \int_C\frac{dk}{2\pi} \frac{\varphi_k(x) \bar{\psi}_k(y)}{a_k} e^{-itE_k}.
\end{equation}
The contour $C$ originally goes along the real line. We notice, however, that the integrand can be analytically continued into the upper half plane. Moreover, in this form we can easily take into account also the contributions from the bound states. To do so, the contour $C$ should run above all positions of the zeroes of $a_k$ in the upper half plane.
Below we re-derive this presentation using the wave functions in a box (hard-wall boundary conditions) and demonstrate how to express the full counting statistics via the scattering data and the Jost solutions.
\section{Quench protocol and hard-wall wave functions}
The scattering states introduced in the previous section describe an infinite system. To correctly formulate the transport problem, we consider
open (hard-wall) boundary conditions placed at $x=\pm R$, perform computations at finite $R$, and send $R\to\infty$ at the end of the computation.
At the initial moment of time only the left part of the system, $x<0$, is filled, meaning that the single-particle wave functions $\Lambda_k$ are non-zero only in the interval $x\in [-R,0]$; more formally,
\begin{equation}\label{eq1}
- \frac{d^2\Lambda_k}{dx^2}+V_0(x)\Lambda_k = k^2\Lambda_k,\qquad\qquad \Lambda_k(0) = \Lambda_k(-R) = 0.
\end{equation}
The post-quench wave functions satisfy
\begin{equation}\label{eq2}
- \frac{d^2\chi_k}{dx^2}+V(x)\chi_k = k^2\chi_k,\qquad\qquad \chi_k(-R) = \chi_k(R) = 0.
\end{equation}
The initial $N$-particle state of the system $|{\rm in}\rangle$ is given in a Fock space by an ordered set of momenta $q_1<q_2<\dots< q_N$. Formally, it can be presented as a wedge product
\begin{equation}\label{vac}
|{\rm in}\rangle = \Lambda_{q_1}\bigwedge \Lambda_{q_2} \dots \bigwedge\Lambda_{q_N},
\end{equation}
which in the coordinate space corresponds to a single Slater determinant. The case of the statistical ensemble in the $N\to \infty$ limit can be described by taking the typical distribution of $q_i$.
To characterize the many-body dynamics we consider the full counting statistics (FCS). It can be written as
\begin{equation}
\mathcal{F}(\lambda,t) = \langle {\rm in}| e^{itH} e^{\lambda N_R} e^{-itH} |{\rm in} \rangle = \langle {\rm in}| e^{\lambda \int\limits_0^t d\tau J(\tau)} |{\rm in} \rangle,
\end{equation}
where $N_R$ is the number of particles in the right part of the system and $J(\tau)$ is the current through the point $x=0$. Due to the free-fermionic structure of the initial state \eqref{vac}, the FCS can be presented as
\begin{equation}
\mathcal{F}(\lambda,t) = \det X_{ab},
\end{equation}
where the indices $a$ and $b$ correspond to the momenta in the initial state $|{\rm in}\rangle$, and the matrix elements are
\begin{equation}
X_{ab} =\delta_{ab}+ (e^\lambda-1)\sum_{k,p} \frac{(\Lambda_a,\chi_k)(\chi_k, P_>\chi_p)(\chi_p,\Lambda_b)}{\sqrt{(\Lambda_a,\Lambda_a)}(\chi_k,\chi_k)(\chi_p,\chi_p)\sqrt{(\Lambda_b,\Lambda_b)}} e^{it(E_k-E_p)}.
\label{Xab}
\end{equation}
Here $P_>$ is the projector on the right part of the system, i.e., $x\in [0, R)$. Our goal is to present this expression in the thermodynamic limit such that the
FCS can be written as the Fredholm determinant of some trace-class operator. Namely, we present
\begin{equation}\label{kk}
X_{ab} =\delta_{ab}+ \frac{\pi}{R} K(q_a,q_b)+ o(1/R)
\end{equation}
so that the FCS
in the thermodynamic limit $R\to \infty$ transforms into a Fredholm determinant
\begin{equation}\label{Ftd}
\mathcal{F} (\lambda,t) = \det X \to \det \left(1 + \rho \hat{K}\right)
\end{equation}
where $\rho$ is the density of the initial state and the operator $\hat{K}$ acts on integrable functions via convolution with the kernel $K(q,q')$.
We compute this kernel in Sec. ??, while the full answer is given in Eq. ??.
In the rest of this section we give an explicit description of the hard-wall wave functions in terms of the Jost functions and scattering data.
We start with $\chi_k$. Assuming that the range of the potential $\xi$ is much smaller than $R$, the wave function can be presented as
\begin{eqnarray}\label{chikk}
\chi_k(x) = {\rm Im} \left[e^{ikR}\psi_k(x)\right]
\end{eqnarray}
where $\psi_k$ is a Jost function that corresponds to the potential $V(x)$ (see Eq. \eqref{psiint}).
This way the condition $\chi_k(R) = 0 $ is satisfied automatically, while for large negative $x$ the behavior reads
\begin{equation}
\chi_k(x) = {\rm Im} \left[e^{ikR}(\bar{a}_k e^{-ikx} -b_k e^{ikx})\right].
\end{equation}
Here the scattering data correspond to the potential $V(x)$. Demanding $\chi_k(-R)=0$ provides us with the spectrum condition, which can be resolved as
\begin{equation}\label{sp23}
e^{2i k R} = \frac{i {\rm Im}\,b_k+\sqrt{1+({\rm Re}\,b_k)^2}}{\bar{a}_k} \equiv e^{-2i\delta(k)}.
\end{equation}
Here we have introduced the scattering phase $\delta(k)$. We have to take into account two possible solutions that correspond to the two different branches of the square root.
Therefore, in fact, we have two different scattering phases. For both of them we have $\delta(k) = - \delta(-k)$; this fact, together with $E_k = k^2$, allows us to keep only the positive solutions of Eq.~\eqref{sp23}.
Let us also discuss the normalization of the wave function.
To this end we notice that the $k$ derivative of $\chi_k$ satisfies
\begin{equation}
\left(
-\partial_x^2+ V(x) - k^2
\right)\partial_k\chi_k = 2k \chi_k,\qquad
\left(
-\partial_x^2+ V(x) - k^2
\right)\chi_k =0.
\end{equation}
So we can write
\begin{equation}
2k (\chi_k,\chi_k) = \int\limits_{-R}^{R} dx \left[
-\frac{d^2\partial_k\chi_k }{dx^2}\chi_k(x) + \partial_k\chi_k \frac{d^2\chi_k(x)}{dx^2}
\right] = \left[
-\frac{d\partial_k\chi_k }{dx}\chi_k(x) + \partial_k\chi_k \frac{d\chi_k(x)}{dx}
\right] \Big|_{-R}^{R}\label{norm0}
\end{equation}
This allows us to present
\begin{equation}\label{norm}
(\chi_k,\chi_k) = ({\rm Re}\, b_k + \sqrt{1 + ({\rm Re}\, b_k)^2})\sqrt{1 + ({\rm Re}\, b_k)^2} (R + \delta'(k)).
\end{equation}
Here $\delta'(k)$ means the momentum derivative.
Similarly, we can describe the matrix elements $(\chi_k, P_>\chi_p)=\int\limits_{0}^R dx \chi_k(x) \chi_p(x) $ of the projector in Eq. \eqref{Xab}:
\begin{multline}\label{chi2}
(E_k-E_p)(\chi_k, P_>\chi_p) = \int\limits_{0}^R dx \left[\left(-\partial_x^2+ V(x) \right)\chi_k(x)\right] \chi_p(x) -
\int\limits_{0}^R dx \chi_k(x)\left(-\partial_x^2+ V(x) \right)\chi_p(x)
\\ =\int\limits_{0}^R dx \partial_x\left(
-\chi_p(x) \partial_x\chi_k(x)+ \chi_k(x)\partial_x\chi_p(x)
\right) = \chi_p(0) \partial_x\chi_k(0)-\chi_k(0) \partial_x\chi_p(0).
\end{multline}
To describe the bound states that might be present in the system, one can argue that, due to the finite range of the potential, the corresponding wave functions will be localized around $x=0$ and decay exponentially for large $x$. Therefore, the boundary conditions are satisfied automatically with exponential precision, and we may put
\begin{equation}
\chi_k^{\rm bound} (x) \approx \varphi_{i\varkappa}(x),\qquad k = i\varkappa.
\end{equation}
Its normalization can be found in a similar manner, taking into account the identification $\varphi_{i\varkappa}(x) = b_\varkappa \bar{\psi}_{i\varkappa}(x)$ discussed in the previous section.
Indeed, using the fact that at $x\to+\infty$ the momentum derivative of the wave function behaves as $a'_{i\varkappa } e^{\varkappa x}$, we obtain
\begin{equation}\label{NormBound}
(\varphi_{i\varkappa},\varphi_{i\varkappa}) = i a'_{i\varkappa} b_\varkappa
\end{equation}
Similarly, we can find the pre-quench wave function $\Lambda_k$. In this case it is more convenient to use the Jost solution \eqref{phiint} for the potential $V_0$, which we denote as $\Phi_k(x)$. In this notation we propose the following formula:
\begin{equation}\label{lambda1}
\Lambda_k(x) = {\rm Im}\frac{\Phi_k(x)}{\Phi_k(0)}
\end{equation}
Notice that in this form the boundary condition $\Lambda_k(0)=0$ is satisfied automatically, while the condition $\Lambda_k(-R)=0$ defines the spectrum
and the scattering phase $\eta(k)$:
\begin{equation}\label{sp44}
e^{2ikR} = \frac{\Phi_k(0)}{\bar{\Phi}_k(0)} \equiv e^{-2i\eta(k)}.
\end{equation}
The normalization now reads
\begin{equation}\label{LambdaOver}
(\Lambda_k,\Lambda_k) = \frac{R +\eta'(k)}{2|\Phi_k(0)|^2}.
\end{equation}
Finally, the computation of the overlaps between the pre- and post-quench wave functions in \eqref{Xab} can be avoided completely and replaced by the corresponding
overlaps with the Jost functions. Namely, as follows from Eq. \eqref{chi2}, the time derivative of \eqref{Xab} can be expressed via
the (conjugated) time evolution of the wave function $\Lambda_q(x,t)$
defined as
\begin{equation}\label{L0}
\Lambda_q(y,t) \equiv \sum_k \frac{(\Lambda_q,\chi_k)\chi_k(y)}{(\chi_k,\chi_k)}e^{itE_k} = \int\limits_{-R}^0 dx \Lambda_q(x) G^*(x,y,t).
\end{equation}
Here we have used the following presentation of the Green's function
\begin{equation}\label{Gstar}
G^*(x,y,t) \equiv \sum\limits_k \frac{\chi_k(x)\chi_k(y)}{(\chi_k,\chi_k)} e^{itE_k},
\end{equation}
The summation is taken over all spectral points \eqref{sp23}. We perform this summation explicitly in Appendix \eqref{appG} with the genuine discrete degrees of freedom, and taking the thermodynamic limit only the very end. The computation is straightforward but a bit tedious.
However, the obtained result can be easily explained heuristically. Namely, one can argue that in the thermodynamic limit instead of function \eqref{Gstar}
one can use \eqref{Gsimple}.
This way, we can find a presentation only with the Jost solutions introduced in the previous section
\begin{equation}
\Lambda_q(y,t) = \int_C \frac{dk}{2\pi} \frac{(\Lambda_q,\varphi_k)\bar{\psi}_k(y)}{a_k}e^{itE_k}.
\end{equation}
The integration path $C$, similarly to Eq. \eqref{Gsimple}, runs from $-\infty$ to $+\infty$ in the upper half plane, above all positions of the zeroes of $a_k$.
The overlap $(\Lambda_q,\varphi_k)$ can be computed using the same trick as in Eq. \eqref{norm0} and Eq. \eqref{chi2}.
Indeed, if we introduce the function
\begin{equation}\label{Xiqk}
\Xi_{q,k} =\Lambda_q'(0)\varphi_k(0)- \int\limits_{-\infty}^0 dx \Lambda_q(x) (V_0(x) - V(x))\varphi_k(x),
\end{equation}
we can present
\begin{equation}\label{overlll}
(E_k -E_q) \int\limits_{-R}^0 dx \Lambda_q(x) \varphi_k(x) = \Xi_{q,k} - \Lambda_q'(-R)\varphi_k(-R).
\end{equation}
Here we have used that, due to the finite range of the potentials, the lower limit of integration in \eqref{Xiqk} can be taken as either $-R$ or $-\infty$.
Taking into account that for $k\in C$ the last term vanishes exponentially, $\varphi_k(-R)\sim e^{ikR}$, we finally present
\begin{equation}\label{L1}
\Lambda_q(y,t) = \int_C \frac{dk}{2\pi} \frac{\Xi_{q,k} \bar{\psi}_k(y)}{(k^2-q^2)a_k}e^{itE_k}
\end{equation}
This is the final answer in the thermodynamic limit.
Notice that $\Xi_{q,k}$ is a regular function and can be continued from the discrete spectrum to the upper half plane of the variable $k$.
The direct computation of $\Lambda_q(0,t)$ and its derivative in the finite system is given in Appendix \ref{appF}.
In the next section we use the presentation \eqref{L1} to compute the kernel $K(q,q')$ in \eqref{kk} and \eqref{Ftd}.
\section{Kernel}
To compute the kernel $K(q,q')$ for the Fredholm determinant of the Full Counting Statistics \eqref{Ftd}, we start by considering its time derivative.
Using the explicit presentations \eqref{Xab} and \eqref{chi2}, along with the definition \eqref{L0}, we arrive at
\begin{equation}\label{dK}
\frac{dK(q,q')}{dt} = \frac{2i(e^\lambda-1)}{\pi}
|\Phi_q(0)| \left( f^{(1)}_q(t) \bar{f}^{(0)}_{q'}(t) - f^{(0)}_q(t)\bar{f}^{(1)}_{q'}(t)\right)|\Phi_{q'}(0)|.
\end{equation}
where we have denoted
\begin{equation}\label{fa2m}
f^{(\alpha)}_q (t) = \partial^\alpha_y \Lambda_q(y,t)\Big|_{y=0}=
\int\limits_C \frac{dk}{2\pi}
\frac{\Xi_{q,k}\partial_x^\alpha\bar\psi_k(0)}{a_k}
\frac{e^{itk^2}}{k^2-q^2}, \qquad \alpha=0,1.
\end{equation}
Using the presentation \eqref{L1} we can directly integrate Eq. \eqref{dK}. However, in order to more easily assess the long-time asymptotics, we first identically transform $f^{(\alpha)}_q $ to highlight the most relevant terms as $t\to+\infty$. To do so, we notice that the exponential $e^{itk^2}$ is decaying in the first and third quadrants of the complex plane of $k$ (see Fig. ???).
So we deform the contour $C$ into $C'$ by pulling it towards the negative real line and crossing it.
By doing so we inevitably encircle all positions of the bound states and the pole $k=-q$.
The obtained deformation reads
\begin{equation}\label{faCp}
f^{(\alpha)}_q(t) = i \frac{\Xi_{q,-q}\partial_x^\alpha\bar\psi_{-q}(0)}{a_{-q}} \frac{e^{itq^2}}{2q}+ \sum_{n=1}^{N^\mathrm{b}}
\frac{i \Xi_{q,i\varkappa_n}\partial_x^\alpha\bar\psi_{i\varkappa_n}(0)}{a'_{i\varkappa_n}}
\frac{e^{-it\varkappa_n^2}}{\varkappa_n^2+q^2}
+
\int\limits_{C'} \frac{dk}{2\pi}
\frac{\Xi_{q,k}\partial_x^\alpha\bar\psi_k(0)}{a_k}
\frac{e^{itk^2}}{k^2-q^2}.
\end{equation}
Notice that we can use \eqref{overlll} along with the asymptotic behavior $\Lambda_q'(-R)\sim -q e^{iqR}/\Phi_q(0)$ for large $R$ (see \eqref{lambda1}) to obtain
\begin{equation}\label{Xiqmq}
\Xi_{q,-q}=-\frac{q}{\Phi_q(0)}.
\end{equation}
The direct proof of this expression from the definition \eqref{Xiqk} is given in Appendix \ref{pp}.
Further, we use the symmetry $k\to -k$ to fold the full contour $C'$ and consider the integration only with $\mathrm{Re}\,k>0$, namely
\begin{equation}\label{faRe}
f^{(\alpha)}_q(t) = \sum_{n=1}^{N^\mathrm{b}}
B_{n,q}^{(\alpha)} e^{-it\varkappa_n^2}
+F^{(\alpha)}_q e^{itq^2} +
\int\limits_{0}^{\infty} \frac{dk}{\pi}
\Omega^{(\alpha)}_{q,k}
\frac{e^{itk^2}}{(k+i0)^2-q^2},
\end{equation}
\begin{equation}\label{BFOm}
B_{n,q}^{(\alpha)}=
\frac{i \Xi_{q,i\varkappa_n}\partial_x^\alpha\bar \psi_{i\varkappa_n}(0)}
{a'_{i\varkappa_n}(\varkappa_n^2+q^2)},
\qquad
F_q^{(\alpha)}=-i \frac{\partial_x^\alpha\psi_{q}(0)}{2\Phi_q(0)a_{-q}},
\qquad
\Omega^{(\alpha)}_{q,k}=
\mathrm{Re}\,
\frac{\Xi_{q,k}\partial_x^\alpha\bar\psi_k(0)}{a_k}.
\end{equation}
Such a form of $f^{(\alpha)}_q(t)$ is convenient for the large-$t$ asymptotic analysis. The main contributions come from the poles corresponding to the bound states of $H$ and from the pole at $k=-q$.
They give an oscillatory behaviour in $t$. The integral in Eq.~\eqref{faRe} can be estimated by the method of stationary phase with a saddle point at $k=0$, producing a power-like decay for large $t$. The exponent of this power law in $t$ depends on the behaviour of $\Omega^{(\alpha)}_{q,k}$ at $k=0$. In the case of generic potentials, $a_k$ has a first-order pole at $k=0$ while
$\Xi_{q,k}$ and $\partial_x^\alpha\psi_k(0)$ are regular at $k=0$.
Therefore, $\Omega^{(\alpha)}_{q,k}$ has at least a first-order zero at $k=0$,
which implies a power-law decay $t^{-1}$ (or even faster) of the integral
in Eq.~\eqref{faRe} for large $t$.
For some special potentials (for example reflectionless potentials), $a_k$ is regular at $k=0$. For such potentials the integral decays
as $t^{-1/2}$:
\begin{equation}\label{intreg}
\int\limits_{0}^{\infty} \frac{dk}{\pi}
\Omega^{(\alpha)}_{q,k}
\frac{e^{itk^2}}{(k+i0)^2-q^2}=\frac{I^{(\alpha)}_{q}}{\sqrt{t}}+ O(t^{-1}),
\end{equation}
\begin{equation}\label{Iq}
I^{(\alpha)}_{q}=- \frac{\sqrt{\pi} e^{i\pi/4} \Xi_{q,0}\partial_x^\alpha\psi_0(0)}{2 a_0 q^2}.
\end{equation}
In what follows we will need the relation
\begin{equation}\label{resreg}
I^{(1)}_{q}\bar I^{(0)}_{q'}-I^{(0)}_{q}\bar I^{(1)}_{q'}=0.
\end{equation}
This relation follows from Eq.~\eqref{Iq}: apart from the overall phase $e^{i\pi/4}$, all the quantities entering it are real, so the two products $I^{(1)}_{q}\bar I^{(0)}_{q'}$ and $I^{(0)}_{q}\bar I^{(1)}_{q'}$ coincide.
Note that Eq.~\eqref{Iq} shows a singular behaviour for small $q$.
In fact, there is no such singularity, because in the asymptotic analysis of Eq.~\eqref{intreg} we assumed
that the pole at $k=q$ is far from the stationary point $k=0$. This assumption is incorrect for small
$q\lesssim t^{-1/2}$, where a different asymptotic analysis of Eq.~\eqref{intreg} is needed.
Such an analysis was done for the current, and it was shown that the contribution of small $q$ changes only its subleading behaviour. We believe that small $q$ does not change the asymptotic behaviour of the FCS either.
Using the notations from the previous section, we can explicitly write down the kernel of the FCS in
Eq.~\eqref{Xab}. After presenting this kernel in the form $X_{ab} =\delta_{ab}+ \frac{\pi}{R} K(q_a,q_b)+ o(1/R)$,
the full counting statistics in the thermodynamic limit $R\to \infty$ transforms into a Fredholm determinant
\begin{equation}\label{FCSdet}
\mathcal{F} (\lambda,t) = \det X \to \det \left(1 + \rho \hat{K}\right)
\end{equation}
where $\rho$ is the density of the initial state and the operator $\hat{K}$ acts on integrable functions via convolution with the kernel $K(q,q')$.
To compute this kernel we integrate Eq.~\eqref{dK} in $t$ with Eq.~\eqref{faRe} substituted.
This way the kernel can be written as
\begin{equation}\label{Kqqp}
K(q,q') =\frac{2i(e^\lambda-1)}{\pi}
|\Phi_q(0)| \left( S_{qq'}(t) + M_{qq'}(t)- \bar M_{q'q}(t) \right)|\Phi_{q'}(0)|
\end{equation}
\begin{equation}
S_{qq'}(t) = (F_q^{(1)} \bar F_{q'}^{(0)}-F_q^{(0)} \bar F_{q'}^{(1)})
\frac{e^{it(E_q - E_{q'})}-1}{i(E_q - E_{q'})},
\end{equation}
\begin{equation}
M_{qq'}(t)=X_{qq'}(t)+Y_{qq'}(t)-Y_{qq'}(0),
\end{equation}
\begin{equation}
X_{qq'}(t)=\sum_{n=1}^{N^\mathrm{b}} (B_{nq'}^{(0)} F_{q}^{(1)}-B_{nq'}^{(1)} F_{q}^{(0)})
\frac{e^{it(\varkappa_n^2+E_{q})}-1}{i(E_{q}+\varkappa_n^2)}\\
+\sum_{m<n}^{N^\mathrm{b}} (B_{mq}^{(1)}B_{nq'}^{(0)}-B_{mq}^{(0)} B_{nq'}^{(1)})
\frac{e^{it(\varkappa_n^2-\varkappa_m^2)}-1}{i(\varkappa_n^2-\varkappa_m^2)},
\end{equation}
\begin{multline}\label{Yqqp}
Y_{qq'}(t) =
\int_0^\infty \frac{dk}{\pi} \frac{e^{it(E_k-E_{q'})}}{i(E^+_k-E_{q'})}
\frac{ \Omega_{qk}^{(1)}\bar F_{q'}^{(0)}- \Omega_{qk}^{(0)}\bar F_{q'}^{(1)}}{E^+_k-E_q}
+\frac12 \int\limits_0^\infty \frac{dk}{\pi}\int\limits_0^\infty \frac{dp}{\pi} \frac{e^{it(E_k-E_p)}}{i(E^+_k-E^-_p)}
\frac{\Omega_{qk}^{(1)} \Omega_{q'p}^{(0)}-\Omega_{qk}^{(0)} \Omega_{q'p}^{(1)}}
{(E^+_k-E_q)(E^-_p-E_{q'})}\\
+ \sum_{n=1}^{N^\mathrm{b}}
\int_0^\infty \frac{dk}{\pi} \frac{e^{it(E_k+\varkappa_n^2)}}{i(E^+_k+\varkappa_n^2)}
\frac{ \Omega_{qk}^{(1)} B_{nq'}^{(0)}- \Omega_{qk}^{(0)} B_{nq'}^{(1)}}{E^+_k-E_q} .
\end{multline}
In the limit of large $t$ the integrals in $Y_{qq'}(t)$ have a power-law decaying behaviour.
This can be deduced from the asymptotic behaviour of the integral in Eq.~\eqref{faRe} at large $t$
after substitution into Eq.~\eqref{dK} and the subsequent integration in $t$.
The most subtle analysis is needed for the case when $a_k$ is regular at $k=0$.
In this case Eq.~\eqref{intreg} potentially leads to a logarithmic growth for large $t$ in the double integral
in Eq.~\eqref{Yqqp}, but due to the relation \eqref{resreg} the leading term in $t$ is canceled. Finally, in the case of $a_k$ regular at $k=0$, the double integral in Eq.~\eqref{Yqqp} decays for large $t$ as $t^{-1/2}$, while in the generic case (when $a_k$ has a pole at $k=0$) it decays as $t^{-1}$.
Unfortunately, we were not able to provide a rigorous analysis of the large-$t$ behaviour of the FCS given by Eq.~\eqref{FCSdet}.
However, we can guess the form of its leading term by reducing the problem to known results on the Fredholm determinant
of the generalized sine kernel [...]. In the next section (?) we consider a numerical analysis of the FCS for different potentials supporting our approximations.
If bound states are present for the potential $V(x)$, there is a finite-rank perturbation $X(t)$ of the Fredholm determinant. For simplicity, we consider a potential $V(x)$ without bound states.
We believe that for large time $t$ the main contribution to the FCS \eqref{FCSdet} comes from the part $S_{qq'}$ of the kernel \eqref{Kqqp}.
We rewrite the term with $S_{qq'}$ as a generalized sine kernel:
\begin{multline}
S_{qq'}(t) = 2e^{it(E_q - E_{q'})/2}(F_q^{(1)} \bar F_{q'}^{(0)}-F_q^{(0)} \bar F_{q'}^{(1)})
\frac{\sin t(E_q - E_{q'})/2}{E_q - E_{q'}}\approx
2 (F_q^{(1)} \bar F_{q}^{(0)}-F_q^{(0)} \bar F_{q}^{(1)})
\frac{\sin t(E_q - E_{q'})/2}{E_q - E_{q'}}\\
=-\frac{iq}{2|\Phi_q(0)|^2} T(E) \frac{\sin t(E_q - E_{q'})/2}{E_q - E_{q'}},
\end{multline}
where the approximation sign means a large-$t$ approximation of $ S_{qq'}(t)$ valid under substitution into the Fredholm determinant;
at the last step we used Eq.~\eqref{BFOm}, the formula for the current of the Jost solution $\psi_q(x)$ at $x=0$, and Eq.~\eqref{tran} for the transmission coefficient
$T(E)$. We also expect that $Y(t)$ gives a subleading correction, while $Y(0)$ corrects the leading sine-kernel term by a time-independent prefactor.
Finally, we expect that the FCS given by Eq.~\eqref{FCSdet}, in the absence of bound states, has the same leading large-time asymptotic behaviour
(up to a prefactor $C(\lambda)$ independent of time) as the Fredholm determinant of a generalized sine kernel:
\begin{equation}\label{Fsindet}
\mathcal{F}(\lambda,t) \approx C(\lambda) \det \left(1 + \frac{e^\lambda-1}{\pi}\rho(E)T(E)\frac{\sin \frac{t(E-E')}{2}}{E-E'} \right),
\end{equation}
where $\rho(E)$ is the distribution of the initial state, and we have changed the integration variable from the fermion momentum $q$ to the energy $E=q^2$.
Comparing the kernel of the Fredholm determinant in Eq.~\eqref{Fsindet} with the standard form of the Fredholm determinant of a generalized sine kernel,
\begin{equation}\label{detS}
\mathcal{S}(\lambda,t) = \det \left(1 + \frac{e^{2\pi i \nu(E)}-1}{\pi} \frac{\sin \frac{t(E-E')}{2}}{E-E'} \right)
\end{equation}
we identify
\begin{equation}\label{nuE}
\nu(E) = \frac{1}{2\pi i } \log \left(1+ (e^\lambda-1) \rho(E) T(E) \right).
\end{equation}
The large-$t$ asymptotic behaviour of Eq.~\eqref{detS} was found in [...], and for $\nu(E)$ given by Eq.~\eqref{nuE} it leads to
\begin{equation}
\mathcal{F}(\lambda,t) \approx \frac{ C(\lambda) \tilde C(\lambda) }{t^{\nu(E_F)^2}} \exp\left(
\frac{t}{2\pi }\int\limits_0^{E_F} \log (1 + (e^\lambda-1) \rho(E) T(E)) dE
\right).
\end{equation}
Note that $\nu(0)=0$.
The prefactor $\tilde C(\lambda)$ is given by
\begin{multline}
\tilde C(\lambda)=\frac{G(1+\nu(E_F))G(1-\nu(E_F))}{E_F^{\nu(E_F)^2}}
\exp \left(\nu(E_F)\int_0^{E_F} \frac{\nu(E_F)-\nu(E)}{E_F-E}dE\right)\\
\times\exp \frac12\int_0^{E_F}\int_0^{E_F} dEdE'\frac{\nu'(E)\nu(E')-\nu(E)\nu'(E')}{E-E'},
\end{multline}
where $G(x)$ is the Barnes $G$-function.
This is similar to the Levitov--Lesovik formula ...
\subsection{FCS for perfect lead attachment}
In this subsection we consider a special case of the quench setup when $V_0(x) = V(x)$ for $x<0$.
We call this situation the \textit{perfect lead attachment}. In this case, due to the integral presentation \eqref{phiint}, the corresponding Jost functions coincide for negative $x$: $\varphi_q(x) =\Phi_q(x)$ for $x \le 0$. From the presentation \eqref{Xiqk} we observe the factorization
\begin{equation}\label{XiqkPLA}
\Xi_{q,k} = \Lambda'_q(0) \varphi_k(0),
\end{equation}
which implies a similar factorization ${f}^{(\alpha)}_q(t)= \Lambda'_q(0) g^{(\alpha)}_q(t)$ for ${f}^{(\alpha)}_q(t)$ given by Eq.~\eqref{fa2m},
where
\begin{equation}\label{fa22}
g^{(\alpha)}_q(t) = \int\limits_C \frac{dk}{2\pi} \omega_k^{(\alpha)}
\frac{e^{itk^2}}{k^2-q^2} ,\qquad
\omega_k^{(\alpha)} \equiv \frac {\varphi_{k}(0)\partial_x^\alpha\bar\psi_k(0)}{a_k}.
\end{equation}
Comparing Eq.~\eqref{XiqkPLA} at $k=-q$ with Eq.~\eqref{Xiqmq}, and using $\varphi_{-q}(0)=\bar\varphi_q(0)$, we conclude that
$\Lambda'_q(0)= -q/|\varphi_q(0)|^2 $.
Therefore, Eq.~\eqref{dK} now reads
\begin{equation}\label{dKqqp}
\frac{dK(q,q')}{dt} = \frac{2i(e^\lambda-1)qq'}{\pi |\varphi_q(0)| |\varphi_{q'}(0)|}
\left( g^{(1)}_q(t) \bar{g}^{(0)}_{q'}(t) - g^{(0)}_q(t)\bar{g}^{(1)}_{q'}(t)\right).
\end{equation}
Integrating in $t$, we can present the kernel $K(q,q')$ in the integrable form
\begin{equation}\label{Kqqpl}
K(q,q') = \frac{2(e^\lambda-1)qq'}{\pi|\varphi_q(0)| |\varphi_{q'}(0)|} \frac{g^{(1)}_q(t) \bar{g}^{(0)}_{q'}(t) - g^{(0)}_q(t)\bar{g}^{(1)}_{q'}(t)+
\bar{D}_q(t)-D_{q'}(t)}{E_q - E_{q'}},
\end{equation}
where
\begin{equation}
D_{q}(t) =i \int\limits_0^t d\tau \int\limits_C \frac{dk}{2\pi} e^{i\tau k^2}
\left[ \omega_k^{(1)}\bar{g}^{(0)}_{q}(\tau)-\omega_k^{(0)}\bar{g}^{(1)}_{q}(\tau) \right] .
\end{equation}
To check the correctness of Eq.~\eqref{Kqqpl}, one compares its derivative in $t$ with Eq.~\eqref{dKqqp} using
\begin{equation}
\frac{d}{dt}g^{(\alpha)}_q(t) =iq^2 g^{(\alpha)}_q(t)+ i \int\limits_C \frac{dk}{2\pi} \omega_k^{(\alpha)}e^{i t k^2}.
\end{equation}
Also, we have to check that $K(q,q')=0$ at $t=0$. This holds due to the equality $g^{(0)}_q(0)=0$, which follows from the analyticity of $\omega_k^{(0)}$ in the upper half-plane of $k$.
The integrable form of the kernel $K(q,q')$ allows one, in particular, to replace the evaluation of the Fredholm determinant by the solution of a Riemann--Hilbert problem \cite{Deift_1997,Bogoliubov1997}.
This approach is especially useful for the asymptotic analysis at large time $t\to + \infty$.
In this case, however, if we follow the standard procedure outlined in \cite{Bogoliubov1997}, the corresponding jump matrix will have size $4\times4$.
The kernel simplifies in the $t\to \infty $ limit. The asymptotic behavior of $g^{(\alpha)}_q(t)$ can be found similarly to Eq.~\eqref{faRe}, where one can neglect the last integral. To find the large-time asymptotics of ${D}_{q}(t)$
we present it identically as
\begin{equation}
{D}_{q}(t) = \int\limits_{C} \frac{dk}{2\pi} \int\limits_{C^*} \frac{dp}{2\pi}
\frac{e^{it(k^2-p^2)}-1}{k^2-p^2} \frac{\bar{\omega}_p^{(0)}\omega_k^{(1)}-\bar{\omega}_p^{(1)}\omega_k^{(0)}}{p^2-q^2}\approx
-\int\limits_{C} \frac{dk}{2\pi} \int\limits_{C^*} \frac{dp}{2\pi}
\frac{1}{k^2-p^2+i0} \frac{\bar{\omega}_p^{(0)}\omega_k^{(1)}-\bar{\omega}_p^{(1)}\omega_k^{(0)}}{p^2-q^2},
\end{equation}
where $C^*$ is the contour conjugated to $C$.
\section{The current}
The full current $J(t)$ of the particles flowing through $x=0$ to the right part of the system can be evaluated using its connection with the FCS given by Eq.~\eqref{Ftd}:
\begin{equation}\label{J}
J(t) =\frac{d}{dt} \frac{d\mathcal{F}(\lambda,t)}{d\lambda}\Big|_{\lambda=0}
= \mathrm{Tr}\,\left(\rho \frac{d}{dt} \frac{d\hat K}{d\lambda}\Big|_{\lambda=0}\right) =
-\int_0^\infty dq \rho(q)
\frac{4|\Phi_q(0)|^2 }{\pi}
{\rm Im}\, f^{(1)}_q(t) \bar{f}^{(0)}_{q}(t) ,
\end{equation}
where at the last step we used the trace of Eq.~\eqref{dK}.
As was discussed after Eq.~\eqref{faRe}, the integral in Eq.~\eqref{faRe} may be dropped in the calculation of the current at large $t$, since it does not give a leading contribution:
\begin{equation}\label{faap}
f^{(\alpha)}_q(t) \approx F^{(\alpha)}_q e^{itq^2} + \sum_{n=1}^{N^\mathrm{b}}
B_{n,q}^{(\alpha)} e^{-it\varkappa_n^2}.
\end{equation}
Substituting this expression into Eq.~\eqref{J} we obtain three types of contributions to the current
\begin{equation}
J(t)\approx J_\mathrm{LB} + J^\mathrm{b} + \delta J,
\end{equation}
where $J_\mathrm{LB}$ comes from the first summand of \eqref{faap},
$J^\mathrm{b}$ comes from the summands corresponding to the bound states, and
$\delta J$ follows from the mixing of the first summand with those corresponding to the bound states.
To calculate $J_\mathrm{LB}$ we use ${\rm Im}\, \psi_q'(0)\bar{\psi}_q(0) = -q$
and Eq.~\eqref{tran}
\begin{equation}
J_\mathrm{LB}=\int \limits_{0}^\infty \frac{dq }{\pi} \frac{q\rho(q)}{|a_q|^2} =
\int \frac{dE}{2\pi} \rho(E) T(E) .
\end{equation}
This is the well-known Landauer--B\"uttiker formula for the current.
The contribution of bound states to the current is
\begin{equation}
J^\mathrm{b}=\sum_{m<n}
A_{mn} \sin t(\varkappa_m^2-\varkappa_n^2),
\end{equation}
where
\begin{equation}\label{Amn}
A_{mn} =\frac{ 4 \left(\bar\psi_{i\varkappa_n}'(0)\bar\psi_{i\varkappa_m}(0)-\bar\psi_{i\varkappa_m}'(0)\bar\psi_{i\varkappa_n}(0)\right) }{a'_{i\varkappa_m}a'_{i\varkappa_n}}
\int_0^\infty \frac{dq}{\pi} \rho(q)
|\Phi_q(0)|^2
\frac{ \Xi_{q,i\varkappa_m}\Xi_{q,i\varkappa_n}}
{(\varkappa_m^2+q^2)(\varkappa_n^2+q^2)} .
\end{equation}
For an even potential $V(x)$, the bound states are either even functions with $\bar\psi_{i\varkappa_n}'(0)=0$ or odd functions with
$\bar\psi_{i\varkappa_n}(0)=0$. Therefore, in this case, a nontrivial contribution to the current may arise only from pairs of odd-even states. Furthermore, in the case of perfect lead attachment, $V(x)=V_0(x)$, we have $\Xi_{q,i\varkappa_n}=0$ for odd bound states
$\bar\psi_{i\varkappa_n}(x)$
and therefore there is no contribution at all to the current from bound states in the case of perfect lead attachment with an even potential.
The integral in $q$ for $\delta J$ can be estimated by the
contribution at $q=0$ via the method of stationary phase, and it can be shown that $\delta J$
decays for large $t$ at least as $t^{-1/2}$ and therefore does not give a leading contribution to the current.
Finally, the leading contribution to the current for large $t$ consists of
the constant Landauer--B\"uttiker current and an oscillating part:
\begin{equation}\label{Jtot}
J(t)\approx \int \frac{dE}{2\pi} \rho(E) T(E) +
\sum_{m<n}
A_{mn} \sin t(\varkappa_m^2-\varkappa_n^2).
\end{equation}
\section{Examples}
In this section we apply the general results to three potentials: $V(x)=g\delta(x)$ and $V(x)=-\lambda(\lambda-1)\cosh^{-2} x$ with $V_0(x)=0$,
and $V(x)=V_0(x)=g_1\delta(x-d_1)+g_2\delta(x-d_2)$.
\subsection{ $V(x)=g\delta(x)$, $V_0(x)=0$}
For the potential $V(x)=g\delta(x)$ the Jost solutions can be found using Eqs.~\eqref{psiint} and \eqref{phiint}
\begin{equation}
\psi_k(x) = e^{-ikx} - \frac{g}{k} \theta(-x) \sin (kx),
\end{equation}
\begin{equation}
\varphi_k(x) = e^{-ikx} + \frac{g}{k} \theta(x) \sin (kx),
\end{equation}
where $\theta(x)$ is the Heaviside step function.
The scattering data then reads
\begin{equation}\label{Tdelta}
a_k = 1- \frac{g}{2ik},\qquad b_k = \frac{g}{2ik},\qquad
T(E) = \frac{1}{|a_k|^2} = \frac{k^2}{k^2+g^2/4} = \frac{E}{E+g^2/4}.
\end{equation}
If $g<0$, there is also a bound state corresponding to the zero of $a_k$ at
$k=i\varkappa=-ig/2$:
\begin{equation}
\varphi_{i\varkappa}(x) = e^{-\varkappa |x|}, \qquad
\varkappa=|g|/2.
\end{equation}
The states corresponding to the initial potential $V_0(x)$ are
\begin{equation}
\Phi_q(x)=e^{-iqx},\qquad \Lambda_q(x)=-\sin qx.
\end{equation}
This leads to $\Xi_{q,k}=\Lambda_q'(0)\varphi_k(0)=-q$.
The functions $f^{(\alpha)}_q(t)$ can be obtained without the Green's function; they equal
\begin{equation}
f_q^{(1)}(t)=\frac{1}{2}qe^{itq^2},
\end{equation}
\begin{equation}
f^{(0)}_q(t)=-\frac{1}{2}\frac{qe^{itq^2}}{iq+g/2}-\theta(-g)\frac{q\varkappa e^{-it\varkappa^2}}{\varkappa^2+q^2}+qE(q),
\end{equation}
where
\begin{equation}
E(q) = \int\limits_0^\infty \frac{dp}{\pi} \frac{p^2 e^{itp^2}}{(p^2+\varkappa^2)((p+i0)^2-q^2)}=\frac{\varkappa \bar{f}_{\varkappa}(t)}{2(q^2+\varkappa^2)}
- \frac{iq f_q(t)}{2(q^2+\varkappa^2)}.
\end{equation}
and
\begin{equation}
f_q(t) = e^{itq^2}\left[1- {\rm Erf} \left(qe^{i\pi/4}\sqrt{t}\right)\right].
\end{equation}
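For numerical evaluation, $f_q(t)$ is directly computable, since the error function of a complex argument is available in standard libraries. The following Python sketch (our illustration, with an arbitrarily chosen value of $q$; it assumes SciPy) also exhibits the $t^{-1/2}$ decay of $|f_q(t)|$ that follows from the large-argument asymptotics of the error function along the ray $\arg z=\pi/4$:
\begin{verbatim}
# Illustrative check (not part of the derivation): evaluate
# f_q(t) = exp(i t q^2) [1 - Erf(q e^{i pi/4} sqrt(t))].
import numpy as np
from scipy.special import erf  # accepts complex arguments

def f_q(q, t):
    z = q * np.exp(1j * np.pi / 4) * np.sqrt(t)
    return np.exp(1j * t * q**2) * (1.0 - erf(z))

# Since |exp(-z^2)| = 1 on this ray, 1 - Erf(z) ~ exp(-z^2)/(z sqrt(pi)),
# so |f_q(t)| ~ 1/(q sqrt(pi t)) for large t.
for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, abs(f_q(0.5, t)))
\end{verbatim}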
The kernel of the FCS, up to the factor $\rho(q)(e^\lambda-1)/\pi$, is given by the formula
\begin{equation}
2ie^{-it(q^2-q'^2)/2}\int_{0}^{t}d\tau\,\left\{f^{(1)}_{q}(\tau)\bar f^{(0)}_{q'}(\tau)-f^{(0)}_{q}(\tau)\bar f^{(1)}_{q'}(\tau)\right\}=X_0(q,q')+X_1(q,q'),
\end{equation}
where
\begin{equation}
X_0(q,q') =qq' \frac{2q}{\varkappa^2+q^2} \frac{\sin\left[t(q^2-q'^2)/2\right]}{q^2-q'^2},
\end{equation}
\begin{multline}
X_1(q,q') = qq' \frac{\sin\left[t(q^2-q'^2)/2\right]}{q^2-q'^2} \left(\frac{q'}{\varkappa^2+q'^2}-\frac{q}{\varkappa^2+q^2}\right)
-2q q'\mathrm{Im}\left(e^{-it(q^2+q'^2)/2}\frac{e(q)-e(q')}{q^2-q'^2}\right) \\
+\frac{q q'}{(\varkappa^2+q^2)(\varkappa^2+q'^2)}\left\{\varkappa\mathrm{Re}\left(e^{it(q^2+q'^2)/2}f_{\varkappa}(t)\right)-\frac{g}{2}\cos\left[t(q^2-q'^2)/2\right]-2\theta(-g)\varkappa\cos \left[t(q^2+q'^2+2\varkappa^2)/2\right]\right\},
\end{multline}
and
\begin{equation}
e(q)=\frac{q f_q(t)}{2(q^2+\varkappa^2)}.
\end{equation}
Using this, we obtain the formula for the FCS
\begin{equation}
\mathcal{F} (\lambda,t) = \det \left(1 + \frac{e^\lambda-1}{\pi}\rho(q)X_0(q,q') +\frac{e^\lambda-1}{\pi}\rho(q)X_1(q,q')\right).
\end{equation}
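The determinant here is a Fredholm determinant over functions of $q$ and can be approximated by the standard Nystr\"om discretization. The following Python sketch is our illustration only: the kernel $X=X_0+X_1$ and the occupation $\rho$ are assumed to be supplied as vectorized functions, and the cutoff $q_{\max}$ is an assumption whose adequacy must be checked.
\begin{verbatim}
import numpy as np

def fredholm_det(kernel, rho, lam, qmax=10.0, n=200):
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, qmax]
    x, w = np.polynomial.legendre.leggauss(n)
    q = 0.5 * qmax * (x + 1.0)
    w = 0.5 * qmax * w
    K = kernel(q[:, None], q[None, :])        # X(q, q') on the grid
    pref = (np.exp(lam) - 1.0) / np.pi * rho(q)
    # det(1 + A) with A(q, q') = pref(q) K(q, q'), symmetrized weights
    sw = np.sqrt(w)
    M = np.eye(n) + sw[:, None] * (pref[:, None] * K) * sw[None, :]
    sign, logdet = np.linalg.slogdet(M)
    return sign * np.exp(logdet)
\end{verbatim}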
\subsection{Two delta functions}
We consider the case where at the initial time $V_0=0$ on $[-R;0]$, after which a potential with two delta functions on $[-R;R]$ is switched on:
\begin{equation}\label{V2delta}
V(x) = g_1 \delta(x-d_1)+g_2\delta(x-d_2),
\end{equation}
where we assume that $d_2>0>d_1$. The Jost solutions for this potential can be found using Eq.~\eqref{psiint} and Eq.~\eqref{phiint}
\begin{equation}
\psi_k(x) = e^{-ik x} - \theta(d_1-x)\frac{\sin(k(x-d_1))}{k}g_1 \psi_k(d_1) - \theta(d_2-x)\frac{\sin(k(x-d_2))}{k}g_2 \psi_k(d_2),
\end{equation}
\begin{equation}
\varphi_k(x) = e^{-ik x} +\theta(x-d_1)\frac{\sin(k(x-d_1))}{k}g_1 \varphi_k(d_1) +\theta(x-d_2)\frac{\sin(k(x-d_2))}{k}g_2 \varphi_k(d_2),
\end{equation}
where
\begin{equation}
\psi_k(d_1) = e^{-ikd_1 } \left(1 + \frac{g_2}{2ik} \right) - \frac{g_2}{2ik}e^{ik(d_1-2d_2)}, \qquad
\psi_k(d_2)= e^{-ikd_2},
\end{equation}
\begin{equation}
\varphi_k(d_1) = e^{-ikd_1}, \qquad
\varphi_k(d_2) = e^{-ikd_2 } \left(1 -\frac{g_1}{2ik} \right) +\frac{g_1}{2ik}e^{ik(d_2-2d_1)}.
\end{equation}
The scattering data follow from Eq.~\eqref{transfer}
\begin{equation}\label{Tdelta2a}
a_k = \frac{g_1 g_2 e^{-2 i k (d_1-d_2)}+(2 k+i g_1) (2 k+i g_2)}{4 k^2},
\end{equation}
\begin{equation}\label{Tdelta2b}
b_k = \frac{g_2 e^{-2 i d_2 k} (g_1-2 i k)-g_1 e^{-2 i d_1 k} (g_2+2 i k)}{4 k^2}.
\end{equation}
Another way to find the scattering data for the potential $V(x)$ with two delta functions is to use a composition of the transfer matrices corresponding to each
of the delta functions. This approach works in the more general case of a disjoint potential $V(x)=V_1(x)+V_2(x)$ with
$V_1(x)=0$ for $x>x_1$ and $V_2(x)=0$ for $x<x_2$ for some $x_1<x_2$. In this case we have
\begin{equation}
\mathcal{T}=\mathcal{T}_1\mathcal{T}_2,
\end{equation}
where $\mathcal{T}_j$ is the transfer matrix for $V_j$ with corresponding Jost solutions $\psi_j$ and $\varphi_j$. It can be derived in the following way
\begin{equation}
\begin{pmatrix}
\varphi_1\\
\bar{\varphi}_1
\end{pmatrix}= \mathcal{T}_1\begin{pmatrix}
\psi_1\\
\bar{\psi}_1
\end{pmatrix}=\mathcal{T}_1 \begin{pmatrix}
\varphi_2\\
\bar{\varphi}_2
\end{pmatrix}=\mathcal{T}_1\mathcal{T}_2\begin{pmatrix}
\psi_2\\
\bar{\psi}_2
\end{pmatrix}.
\end{equation}
We also have to take into account that the transfer matrix $\tilde {\mathcal{T}}$ for the shifted potential $\tilde V(x)=V(x-d)$ is obtained from $\mathcal{T}$
by conjugation by a diagonal matrix
\begin{equation}
\tilde{\mathcal{T}}=\mathcal{T}(d)=\begin{pmatrix}
a_k & b_k e^{-2ikd}\\
\bar{b}_k e^{2ikd} & \bar{a}_k
\end{pmatrix}.
\end{equation}
Therefore, in particular, for the potential \eqref{V2delta} with two delta functions we have the transfer matrix
\begin{equation}
\mathcal{T}=\mathcal{T}_{g_1}(d_1)\mathcal{T}_{g_2}(d_2), \qquad
\mathcal{T}_g(0)=\begin{pmatrix}
1-\frac{g}{2ik} & \frac{g}{2ik}\\
-\frac{g}{2ik} & 1+\frac{g}{2ik}
\end{pmatrix},
\end{equation}
where we used Eq.~\eqref{Tdelta}. This reproduces the scattering data \eqref{Tdelta2a} and \eqref{Tdelta2b}.
The potential $V(x)$ can have one or two bound states depending on the values of the parameters. For simplicity, let us consider the symmetric case of the potential, i.e.
$g_1=g_2=g$, $d_2=-d_1=d/2$,
\begin{equation}\label{Vsym}
V(x)=g\delta(x+d/2)+g\delta(x-d/2).
\end{equation}
The bound-state momenta follow from the relation $a_{i\varkappa}=0$, or explicitly
\begin{equation}\label{bound2d}
a_{i\varkappa}= \frac{(u-1)^2 -e^{-u D}}{u^2}=0,
\end{equation}
where we introduced the notation
\begin{equation}
k=i\varkappa, \qquad
u=2\varkappa/|g|, \qquad D=|g|d.
\end{equation}
The equation \eqref{bound2d} has two solutions for $D>2$ and one solution for $0\le D \le 2$. Note that $a_k$ has a simple pole at $k=0$ if $D\ne 2$. The case $D=2$ corresponds to the
situation when a bound state arises from the continuous spectrum and in this case $a_{k}$ is regular at $k=0$.
For a nonsymmetric potential $V(x)$, the condition $D\gtrless 2$ is generalized to
\begin{equation}
d_{2}-d_1 \gtrless \frac{1}{|g_1|}+\frac{1}{|g_2|}.
\end{equation}
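Numerically, the bound-state momenta follow from a one-dimensional root search. A minimal Python sketch (our illustration; the parity assignments in the comments refer to our reading of the symmetric case) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def bound_state_u(D):
    # roots u > 0 of (u - 1)^2 = exp(-u D), u = 2 kappa/|g|, D = |g| d
    f = lambda u: (u - 1.0)**2 - np.exp(-u * D)
    roots = [brentq(f, 1.0, 2.0)]       # deeper (even) state: always present
    if D > 2.0:                         # shallower (odd) state: only for D > 2
        roots.append(brentq(f, 1e-9, 1.0))
    return sorted(roots)

print(bound_state_u(1.5))   # one bound state
print(bound_state_u(4.0))   # two bound states
\end{verbatim}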
In what follows we will need
\begin{equation}\label{Xi2delta}
\Xi_{qk}=\Lambda'_q(0)\varphi_k(0)+\int_{-\infty}^{0}dx \Lambda_q(x)V(x)\varphi_k(x)=-q+g_1 e^{-ikd_1} \left(\frac{q}{k}\sin kd_1-\sin qd_1\right),
\end{equation}
where we used
\begin{equation}
\Lambda_q(x)=\mathrm{Im}\, \Phi_q(x)=-\sin qx.
\end{equation}
To compute the current \eqref{J} we need to find $f^{(\alpha)}_q(t)$ given by Eq.~\eqref{faRe}. The most difficult part of the computation is the estimation of the integrals
\begin{equation}
I^{(\alpha)}_q(t)=\int\limits_{0}^{\infty} \frac{dk}{\pi}
\Omega^{(\alpha)}_{q,k}
\frac{e^{itk^2}}{(k+i0)^2-q^2},
\end{equation}
where
\begin{equation}
\Omega^{(\alpha)}_{q,k}=
\mathrm{Re}\,
\frac{\Xi_{q,k}\partial_x^\alpha\bar\psi_k(0)}{a_k}.
\end{equation}
For the symmetric potential \eqref{Vsym} we have
\begin{equation}
\Omega^{(0)}_{q,k}=\frac{2k^2(-q+g\cos kd/2\sin qd/2)}{g^2+2k^2+g^2\cos kd-2gk \sin kd},
\end{equation}
\begin{equation}
\Omega^{(1)}_{q,k}=-\frac{2gk^3\sin kd/2\sin qd/2}{g^2+2k^2-g^2\cos kd+2gk \sin kd}.
\end{equation}
To find the asymptotic behaviour of the integrals $I^{(\alpha)}_q(t)$ we need the expansions of the integrands at $k=0$:
\begin{equation}
\Omega^{(0)}_{q,k}=\frac{k^2}{g^2}(-q+g\sin qd/2)+O(k^4), \quad \Omega^{(1)}_{q,k}=-\frac{2k^2gd\sin qd/2}{(2+gd)^2} + O(k^4).
\end{equation}
The formula for $\Omega^{(1)}_{q,k}$ is valid for $D=-gd\ne 2$. The asymptotic behaviour of $\Omega^{(1)}_{q,k}$ for $D=2$ and small $k$ is
\begin{equation}
\Omega^{(1)}_{q,k}=\frac{4}{d^2}\sin \frac{qd}{2}+\frac{k^2}{18}\sin \frac{qd}{2}+O(k^4).
\end{equation}
Therefore, the integrals have the following decaying behaviour for large $t$:
\begin{equation}
I^{(\alpha)}_q(t)\sim t^{-\frac{3}{2}} \quad \mathrm{for} \quad D\ne 2,\quad \mathrm{and} \quad I^{(\alpha)}_q(t)\sim t^{-\frac{3}{2}+\alpha} \quad \mathrm{for} \quad D=2.
\end{equation}
They will be neglected since they do not contribute to the leading term of the asymptotic current
given by Eq.~\eqref{Jtot}. If the potential has two bound states, then there is an oscillatory part of the current, with the amplitude of the oscillations
given by Eq.~\eqref{Amn}:
\begin{equation}\label{A12d2}
A_{12} =\frac{ 4 \left(\bar\psi_{i\varkappa_2}'(0)\bar\psi_{i\varkappa_1}(0)-\bar\psi_{i\varkappa_1}'(0)\bar\psi_{i\varkappa_2}(0)\right) }{a'_{i\varkappa_1}a'_{i\varkappa_2}}
\int_0^\infty \frac{dq}{\pi} \rho(q)
|\Phi_q(0)|^2
\frac{ \Xi_{q,i\varkappa_1}\Xi_{q,i\varkappa_2}}
{(\varkappa_1^2+q^2)(\varkappa_2^2+q^2)} .
\end{equation}
For the symmetric potential \eqref{Vsym}, a simplified formula for Eq.~\eqref{A12d2} can be computed using Eq.~\eqref{Xi2delta} and
\begin{equation}
a'_{i\varkappa_j}=\left.\frac{da}{dk}\right|_{k=i\varkappa_j}=-\frac{2i}{|g|}\frac{(u_j-1)(D(u_j-1)+2)}{u_j^2},
\end{equation}
\begin{equation}
\bar\psi_{i\varkappa_1}(0)=2-2/u_1, \qquad \bar\psi_{i\varkappa_2}(0)=0, \qquad
\bar\psi_{i\varkappa_1}'(0)=0 , \qquad
\bar\psi_{i\varkappa_2}'(0)=(1-u_2)|g|.
\end{equation}
Therefore, for such a potential we have
\begin{equation}
A_{12}= \frac{2u_1 u_2^2|g|^3}{(D(u_1-1)+2)(D(u_2-1)+2)}\int_0^\infty \frac{dq}{\pi} \rho(q)
\frac{ \Xi_{q,i\varkappa_1}\Xi_{q,i\varkappa_2}}
{(\varkappa_1^2+q^2)(\varkappa_2^2+q^2)}.
\end{equation}
Finally, the leading contribution to the current for large $t$ consists of
the constant Landauer--B\"uttiker current and an oscillating current (if there are two bound states)
\begin{equation}\label{Jtot2delta}
J(t)= \int \frac{dE}{2\pi} \rho(E) T(E) +
A_{12} \sin t(E_2-E_1)+O(t^{-1/2}),
\end{equation}
where $T(E)=|a_k|^{-2}$ is the transmission coefficient, the energies of the bound states are $E_j=-\varkappa_j^2$ with $\varkappa_j$ defined by the roots of Eq.~\eqref{bound2d},
and the amplitude $A_{12}$ is given by Eq.~\eqref{A12d2}.
\subsection{An example of reflectionless potential}
In this subsection we consider an example of perfect lead attachment, i.e. $V_0(x)=V(x)$, $x<0$, for the reflectionless potential
\begin{equation}\label{Vcosh}
V(x)=-\frac{2}{\cosh^2 x}.
\end{equation}
The corresponding Jost solutions are
\begin{equation}
\psi_k(x)= e^{-ikx}\left(1+\frac{2i}{k-i}\frac{1}{e^{2x}+1}\right),
\end{equation}
\begin{equation}
\varphi_k(x)=\bar\psi_k(-x)=e^{-ikx}\left(1-\frac{2i}{k+i}\frac{1}{e^{-2x}+1}\right)=\frac{k-i}{k+i} \psi_k(x)
\end{equation}
with
\begin{equation}
a_k=\frac{k-i}{k+i} , \qquad b_k=0.
\end{equation}
This potential has one bound state corresponding to the zero of $a_k$ at $k=i$:
\begin{equation}
\chi_1^{\rm b}(x) = \varphi_{k=i}(x)=\frac{1}{2\cosh x}.
\end{equation}
The initial one-particle states are given by \eqref{lambda1}:
\begin{equation}
\Lambda_q(x)=-\frac{q \sin qx +\tanh x\cos qx}{q}.
\end{equation}
Therefore $\Xi_{q,k}$ defined in \eqref{Xiqk} becomes
\begin{equation}
\Xi_{q,k}= \Lambda_q'(0)\varphi_k(0) =-\frac{1+q^2}{q} \cdot \frac{k}{k+i}.
\end{equation}
We have
\begin{equation}
\frac{\Xi_{q,k}\bar{\psi}'_k(0)}{a_k}=-i\frac{1+q^2}{q} k.
\end{equation}
Therefore the integral in \eqref{faRe} for $\alpha=1$ does not give a contribution. Since for the bound state
$(\chi_1^{\rm b})'(0)=0$, there is no contribution of bound states
to $f^{(1)}_q(t)$. Finally we obtain
\begin{equation}
f^{(1)}_q(t)=-\frac{1+q^2}{2q}e^{itq^2}.
\end{equation}
Similarly we have
\begin{equation}
\frac{\Xi_{q,k}\bar{\psi}_k(0)}{a_k}=-\frac{1+q^2}{q} \frac{k^2}{1+k^2}, \qquad \frac{i\Xi_{q,k}\bar{\psi}_k(0)}{a'_k}=-\frac{k^2}{2q}.
\end{equation}
Therefore
\begin{equation}
f^{(0)}_q(t) = \left.\frac{i\Xi_{q,k}\bar{\psi}_k(0)e^{itk^2}}{a'_k}\right|_{k=i}
-i\frac{\psi_q(0)e^{itq^2}}{2\varphi_q(0) a_{-q}} + \int\limits_0^\infty \frac{dk}{\pi} {\rm Re} \left[
\frac{\Xi_{q,k}\bar{\psi}_k(0)}{a_k}
\right] \frac{e^{itk^2}}{(k+i0)^2-q^2}
\end{equation}
becomes
\begin{multline}\label{GqCosh}
f^{(0)}_q(t) = \frac{e^{-it }} {2q}
+\frac{e^{itq^2}}{2i}
-\frac{1+q^2}{q}
\int\limits_0^\infty \frac{dk}{\pi} \frac{k^2}{1+k^2} \frac{e^{itk^2}}{(k+i0)^2-q^2} \\
\approx \frac{e^{-it }} {2q}
+\frac{e^{itq^2}}{2i}+\frac{1+q^2}{q^3}\frac{e^{3\pi i/4}}{4\sqrt{\pi}} t^{-\frac32},\qquad t\to \infty,
\end{multline}
where the asymptotic behaviour is given for fixed $q>0$.
The current is given by formula \eqref{J}
\begin{equation}
J (t)=
-\int_0^\infty dq \rho(q)
\frac{4|\varphi_q(0)|^2 }{\pi}
{\rm Im}\, f^{(1)}_q(t) \bar{f}^{(0)}_{q}(t) .
\end{equation}
Using expressions for $f^{(1)}_q(t)$ and $f^{(0)}_q(t)$ we obtain
\begin{equation}
J(t) = \int_0^\infty \frac{dq}{\pi} \rho(q)\left(q+\sin(1+q^2)t+2(1+q^2)\mathrm{Im}\int\limits_0^\infty \frac{dk}{\pi} \frac{k^2}{1+k^2} \frac{e^{it(k^2-q^2)}}{(k+i0)^2-q^2}\right).
\end{equation}
The integral in $q$ of the second term decays as $1/\sqrt{t}$, which can be shown by the method of stationary phase.
The contribution of the third term is even smaller for large $t$. So, changing variables from momentum to energy, we obtain the Landauer--B\"uttiker current
for the reflectionless potential \eqref{Vcosh}
\begin{equation}
J=\int_0^\infty \frac{dE}{2\pi} \rho(E)+O(t^{-\frac{1}{2}}).
\end{equation}
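As a simple numerical illustration (ours, not part of the derivation), the remaining integral can be evaluated for any prescribed initial occupation; here we assume, purely for the example, a Fermi-type $\rho(E)$ with chemical potential $\mu$ and temperature $T$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def lb_current(mu, temp):
    # J = int_0^inf dE/(2 pi) rho(E) with an (assumed) Fermi occupation
    rho = lambda E: 1.0 / (1.0 + np.exp((E - mu) / temp))
    val, _ = quad(lambda E: rho(E) / (2.0 * np.pi), 0.0, np.inf)
    return val

print(lb_current(mu=1.0, temp=0.1))  # approaches mu/(2 pi) as temp -> 0
\end{verbatim}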
\subsubsection{P\"oschl--Teller potential}
Here we follow the notation of [Flugge Problem 39].
Namely, we consider the Schr\"odinger equation for the P\"oschl--Teller potential
\begin{equation}
u''+ \left(k^2 + \frac{\lambda(\lambda-1)}{\cosh(x)^2} \right)u=0.
\end{equation}
The generic solution reads
\begin{equation}
u = A \cosh^\lambda(x) F(a_+,a_-,1/2,-\sinh^2(x)) + B \cosh^\lambda(x) \sinh(x) F(a_++1/2,a_-+1/2,3/2,-\sinh^2(x))
\end{equation}
with
\begin{equation}
a_\pm = \frac{\lambda \pm ik}{2} .
\end{equation}
Taking into account Eq.~15.8.2 from [https://dlmf.nist.gov/15.8], we can write it for large $x$ as
\begin{multline}
u(x) \approx A \cosh^\lambda(x)
\left(
\frac{\Gamma(1/2)\Gamma(a_--a_+)[\sinh^2(x)]^{-a_+}}{\Gamma(a_-)\Gamma(1/2-a_+)} +
\frac{\Gamma(1/2)\Gamma(a_+-a_-)[\sinh^2(x)]^{-a_-}}{\Gamma(a_+)\Gamma(1/2-a_-)}
\right) \\
+B \sinh(x) \cosh^\lambda(x)\left(
\frac{\Gamma(3/2)\Gamma(a_--a_+)[\sinh^2(x)]^{-a_+-1/2}}{\Gamma(a_-+1/2)\Gamma(1-a_+)} +
\frac{\Gamma(3/2)\Gamma(a_+-a_-)[\sinh^2(x)]^{-a_--1/2}}{\Gamma(a_++1/2)\Gamma(1-a_-)}
\right).
\end{multline}
This gives at $x\to + \infty$
\begin{multline}
u(x) \approx A\sqrt{\pi}
\left(
\frac{\Gamma(a_--a_+)e^{-ikx+ik\ln 2}}{\Gamma(a_-)\Gamma(1/2-a_+)} +
\frac{\Gamma(a_+-a_-)e^{ikx-ik\ln 2}}{\Gamma(a_+)\Gamma(1/2-a_-)}
\right) \\
+\frac{ B \sqrt{\pi}}{2} \left(
\frac{\Gamma(a_--a_+)e^{-ikx+ik\ln 2}}{\Gamma(a_-+1/2)\Gamma(1-a_+)} +
\frac{\Gamma(a_+-a_-)e^{ikx-ik\ln 2}}{\Gamma(a_++1/2)\Gamma(1-a_-)}
\right)
\end{multline}
and $x\to - \infty$
\begin{multline}
u(x) \approx A\sqrt{\pi}
\left(
\frac{\Gamma(a_--a_+)e^{ikx+ik\ln 2}}{\Gamma(a_-)\Gamma(1/2-a_+)} +
\frac{\Gamma(a_+-a_-)e^{-ikx-ik\ln 2}}{\Gamma(a_+)\Gamma(1/2-a_-)}
\right) \\ -\frac{ B \sqrt{\pi}}{2} \left(
\frac{\Gamma(a_--a_+)e^{ikx+ik\ln 2}}{\Gamma(a_-+1/2)\Gamma(1-a_+)} +
\frac{\Gamma(a_+-a_-)e^{-ikx-ik\ln 2}}{\Gamma(a_++1/2)\Gamma(1-a_-)}
\right).
\end{multline}
So for the Jost solution $\varphi_k(x)$ we have to choose the following constants
\begin{eqnarray}
A^\varphi = \frac{\cosh(2\pi k)-\cos(2\pi \lambda)}{(2\pi)^2\sqrt{\pi}2^{ik}}\Gamma(1-ik)\Gamma(2a_+)\Gamma(1-2a_-) \Gamma(a_-)\Gamma(1/2-a_+)\equiv \alpha(k),\\ B^\varphi =
2\frac{\cosh(2\pi k)-\cos(2\pi \lambda)}{(2\pi)^2\sqrt{\pi}2^{ik}}\Gamma(1-ik)\Gamma(2a_+)\Gamma(1-2a_-) \Gamma(1-a_+)\Gamma(1/2+a_-)\equiv \beta(k).
\end{eqnarray}
And similarly for $\psi_k(x)$
\begin{equation}
A^\psi = \alpha(-k),\qquad B^\psi = -\beta(-k).
\end{equation}
The scattering data $a_k$ and $b_k$ can be expressed as
\begin{equation}
a_k = \frac{2\alpha(k)\beta(k)}{\alpha(-k)\beta(k)-\alpha(k)\beta(-k)} = \frac{\Gamma (1-i k) \Gamma (-i k)}{\Gamma (1-i k-\lambda ) \Gamma (\lambda -i k)},\qquad
b_k = - \frac{\alpha(-k)\beta(k)+\alpha(k)\beta(-k)}{\alpha(-k)\beta(k)-\alpha(k)\beta(-k)} = -i\frac{\sin(\pi \lambda)}{\sinh(\pi k)}.
\end{equation}
Therefore
\begin{equation}
\frac{\varphi_k(0)\partial_x \psi_{-k}(0)}{a_k} = -\frac{\alpha(k)\beta(k)}{a_k} = ik,
\end{equation}
\begin{equation}
\frac{\varphi_k(0)\psi_{-k}(0)}{a_k} = \frac{\alpha(k)^2}{a_k} = -\frac{i k \Gamma \left(\frac{1}{2} (1-i k-\lambda )\right) \Gamma \left(\frac{1}{2} (\lambda -i k)\right)}{2 \Gamma \left(1-\frac{i k}{2}-\frac{\lambda }{2}\right) \Gamma \left(\frac{1}{2} (1+\lambda -i k)\right)}.
\end{equation}
Note that for an arbitrary even potential, $V(-x)=V(x)$, the Jost solutions are related by $\psi_{-k}(x)=\varphi_k(-x)$.
Taking into account that the Wronskian $\varphi_k(x) \partial_x \psi_{-k}(x) - \psi_{-k}(x) \partial_x \varphi_k(x)$ does not depend on $x$ and calculating it at $x\to -\infty $ and $x=0$
we obtain the relation
\begin{equation}
\frac{\varphi_k(0)\partial_x \psi_{-k}(0)}{a_k} = ik.
\end{equation}
\begin{acknowledgments}
The authors acknowledge support by the National Research Foundation of Ukraine grant 2020.02/0296.
Y.Z. and N.I. were partially supported by NAS of Ukraine (project No. 0122U000888).
O. G. also acknowledges support from the Polish National Agency for Academic
Exchange (NAWA) through the Grant No. PPN/ULM/2020/1/00247.
\end{acknowledgments}
Navigation comes easy to humans. We are able to maneuver through novel parts of our environments, self-locate by integrating over convoluted trajectories, and even come up with shortcuts that traverse areas of the environment we have never visited before. Two principles seem to drive these abilities: latent learning and parsimonious dynamics. Latent learning describes the ability to represent paths through our world not as we experience them literally, but in an abstract manner: That is, we reason about trajectories not in reference necessarily to what we expect to see along a path, but rather in an abstract latent space, containing information about the places' spatial coordinates \citep{tolman1948cognitive, constantinescu2016organizing}. These coordinates are themselves never experienced, but are useful representations constructed to reflect the structure of the environment we inhabit. Parsimonious dynamics describe the fact that the rules governing how state transitions work should be simple. We assume that moving around in novel parts of our environment works the same way as in parts we are familiar with. These two principles work together in tandem: it is in the latent space that the dynamics show parsimonious characteristics.
We extend these ideas to the more general framework of learning latent dynamics models for reinforcement learning (RL). Recent advances in model-based RL have showcased the potential improvements in the performance and sample complexity that can be gained by learning accurate latent dynamics models \citep{deisenroth2011pilco, hafner2019dream, schrittwieser2020mastering}. These models summarize the transitions that the agent experiences by interacting with its environment in a low-dimensional latent code that is learnt alongside their dynamics. By learning such models, agents may for instance perform control by planning ahead in this low-dimensional state space \citep{hafner2019learning}, or, if the latent states contain useful information for policy learning, simply learn a policy over latent states rather than the original states \citep{ha2018recurrent, lee2020stochastic}. We employ the principle of parsimony to learn a latent dynamics model that is able to generalize about novel transitions, and whose latent states contain information about the topology of the environment. These latent state representations prove to be useful for policy learning, planning and future state prediction. Moreover, we show that one can learn such latent representations of the environment simply from encouraging the dynamics to be parsimonious, without supervision about the underlying latent structure.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\textwidth]{figures/general/overview_cropped2.png}
\end{center}
\caption{Illustration of the learning problem. Environments can be embedded in an underlying low-dimensional space, and transitions can be described by a small set of linear transformations, like rotations and translations, information that is unavailable to the agent. The observation space $\mathcal{S}$ is not informative of the underlying topological structure of the environments, and must be learnt. Our model discovers a latent space $\tilde{\mathcal{Z}}$ where state-transitions can be described by a small number of learnt transformations.}
\end{figure}
We draw inspiration from group theory, the study of transformations that preserve symmetries \citep{kondor2008group}, to learn such latent spaces. Particularly, we adopt the framework of \cite{quessard} and \cite{caselles2019symmetry} where the interventions an agent can perform on its environment are treated as transformations belonging to a group, and transitioning between states through selecting actions is equivalent to transforming the source state with the action's corresponding group transformation. We extend this approach by summarizing a data set of experienced transitions invoking only a small set of different types of learned transformations. We hypothesize that a model that infers a small set of locally linear transformations to explain global transition dynamics should be able to generalize effectively about the transition dynamics of novel parts of the agent's environment, and that the latent state representations that result from embedding states onto such discovered manifolds are beneficial for policy learning. In the end, we show that our approach outperforms alternative dynamics and representation learning models in planning and policy learning tasks, as well as in an open-loop pixel prediction tasks based on the Deepmind Lab environment \citep{beattie2016deepmind}.
\section{Model}
\subsection{Preliminaries}
We assume that the environment is a Markov Decision Process (MDP) defined as the tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{T}, \gamma\rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the set of actions, $\mathcal{T}$ is the transition function describing the probability of successor states given the current state-action tuple $\mathbf{s}_{t+1}\sim \mathcal{T}(\mathbf{s}_t, \mathbf{a}_t)$, $\mathcal{R}$ is the reward function and $\gamma$ is the discount factor. In every state, the agent selects an action according to the policy $\mathbf{a}_t \sim \pi(\mathbf{s}_t)$. The agent's goal is to learn a policy $\pi(\mathbf{a}_t\mid \mathbf{s}_t)$ or plan using knowledge of environment dynamics to maximize future discounted rewards $\mathop{\mathbb{E}}_{\mathcal{T}}\left[\sum_{t=0}^{\infty}\gamma^t\mathcal{R}(\mathbf{s}_t)\right]$.
The original state space of the MDP may be high-dimensional and difficult to perform RL on. We hypothesize that constructing a low-dimensional latent space that exhibits parsimonious dynamics is beneficial for RL in at least two ways: i) Learning a policy on the latent space $\pi(\mathbf{a}\mid \mathbf{z}_t)$ should be easier since the latent states are organized such that they match the underlying, hidden topology of the environment. ii) Planning should be possible with less experience: the gains we seek to make here lie in exploiting the knowledge of the transformations that describe how the state variable changes with our actions. For instance, if the agent learns that all state-transitions can be described by a small set of transformations of the source state depending on the action, we can correctly generalize about what the next latent state will be simply if we can predict what the appropriate transformation of our current latent state is, given the action the agent selected.
\subsection{Model components}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\textwidth]{figures/general/model4.png}
\end{center}
\caption{Left: Illustration of model components. Right: Embeddings of environment states produced by our model, compared to those produced by a $\beta$-VAE. The VAE embeddings are learnt without parsimony and with a reconstruction objective, making the latent space reflect the topology of the observation space, rather than that induced by the MDP's transition function.}
\end{figure}
Our model learns an encoding function $f_\phi$ that maps states $\mathbf{s}_t$ to latent states $\mathbf{z}_t$. For the subsequent experiments we assume a deterministic function, but it is straightforward to formulate our model as a stochastic latent dynamics model too (see Appendix \ref{A:transition}). In fact, in section \ref{sec:DML} we perform pixel prediction with a stochastic variant of our model. Given an action $\mathbf{a}_t$ and the current latent state $\mathbf{z}_t$, we seek to predict the next latent state $\tilde{\mathbf{z}}_{t+1}$. We represent $\tilde{\mathbf{z}}_{t+1}$ as the product of the current latent state with a linear transformation matrix $\mathbf{z}_{t+1}\approx\mathbf{T}_t\mathbf{z}_t$ which we predict from the latent state-action tuple $(\mathbf{z}_t, \mathbf{a}_t)$.
\begin{equation}
\begin{aligned}
&\mathbf{z}_t = f_\phi(\mathbf{s}_t) \\
&\mathbf{h}_t = g_\psi(\mathbf{z}_t, \mathbf{a}_t) \\
&\mathbf{T}_t = j_\omega(\mathbf{h}_t, \mathbf{a}_t) \\
&\tilde{\mathbf{z}}_{t+1} = \mathbf{T}_t\mathbf{z}_t \\
\end{aligned}
\end{equation}
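A minimal PyTorch sketch of this pipeline is given below; all layer sizes and module names are our own illustrative choices rather than the exact architecture used in the experiments, and, for brevity, the decoded transformation is restricted to a translation (rotations are sketched in the subsection on parameterizing transformations):
\begin{verbatim}
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    # Illustrative sketch: encoder f_phi, posterior network g_psi,
    # and transformation decoder j_omega.
    def __init__(self, s_dim, a_dim, z_dim=2, h_dim=8):
        super().__init__()
        self.f_phi = nn.Sequential(nn.Linear(s_dim, 64), nn.ReLU(),
                                   nn.Linear(64, z_dim))
        self.g_psi = nn.Sequential(nn.Linear(z_dim + a_dim, 64), nn.ReLU(),
                                   nn.Linear(64, h_dim))
        self.j_omega = nn.Sequential(nn.Linear(h_dim + a_dim, 64), nn.ReLU(),
                                     nn.Linear(64, z_dim))

    def forward(self, s, a):
        z = self.f_phi(s)                                  # z_t = f_phi(s_t)
        p = torch.sigmoid(self.g_psi(torch.cat([z, a], -1)))
        h = (p >= 0.5).float()                             # rounded code h_t
        v = self.j_omega(torch.cat([h, a], -1))            # translation part of T_t
        return z, p, h, z + v                              # predicted z_{t+1}
\end{verbatim}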
We use a probabilistic approach to learning $\mathbf{T}_t$: We seek to infer the posterior distribution of a discrete latent code $\mathbf{h}_t \sim q_{\psi}^t(\mathbf{z}_t, \mathbf{a}_t)$, given a prior $p_{\psi}^t(\mathbf{h}\mid \mathbf{a}_t)$ from which we can decode the appropriate transformation $\mathbf{T}_t$ that describes the transition.
\begin{equation}
\begin{aligned}
&\text{Posterior: } \ &&q_{\psi}^t(\mathbf{h}\mid \mathbf{z}_t, \mathbf{a}_t) \\
&\text{Prior: } \ &&p_{\psi'}^t(\mathbf{h}\mid \mathbf{a}_t) \\
\end{aligned}
\end{equation}
We want our model to be able to recapitulate observed transitions as accurately as possible, while maximizing the predictability of the transformations describing individual transitions in latent space from the chosen actions alone. This is what we refer to as the principle of parsimony. We construct $q_{\psi}^t(\mathbf{h}\mid \mathbf{z}_t, \mathbf{a}_t)$ as a multivariate Bernoulli distribution of dimensionality $n$ with probability vector $\mathbf{p}_t$, which is predicted by a neural network function $g_{\psi}(\mathbf{z}_t, \mathbf{a}_t)$. The latent code $\mathbf{h}_t$ is a binary vector which we produce by rounding $\mathbf{p}_t$.
\begin{equation}
h_i = \begin{cases}
1,& \text{if } p_i \geq 0.5\\
0, & \text{otherwise}
\end{cases}
\end{equation}
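In code, the straight-through trick discussed below amounts to a single line: the forward pass uses the hard binary vector, while the gradient treats the rounding as the identity (a standard sketch, not specific to our implementation details):
\begin{verbatim}
import torch

def straight_through_round(p):
    h_hard = (p >= 0.5).float()
    # value equals h_hard; gradient flows to p as if rounding were identity
    return p + (h_hard - p).detach()
\end{verbatim}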
The latent transition code $\mathbf{h}_t$ is decoded with a learnt decoding function $j_\omega$ into the parameters of the linear transformation matrix $\mathbf{T}_t$ with which we predict the next latent state. We then train the transition encoder and decoder networks $g_{\psi}$ and $j_\omega$ using a variational objective \citep{kingma2013auto, hafner2019learning}. To backpropagate through the discrete transition representation, we make use of the straight-through estimator \citep{bengio2013estimating}. Due to the discretization, this transition representation already acts as a bottleneck on the number of transformation matrices we can associate with each action \citep{van2017neural}. However, a chief aim of our model is to represent the transitions of the environment with as \emph{few} transformations as possible. We incorporate this desideratum in the way we construct the variational objective for posterior inference of $q_{\psi}^t(\mathbf{h}\mid \mathbf{z}_t, \mathbf{a}_t)$. We leverage a second learnt neural network function $g_{\psi'}(\mathbf{a}_t)$ to learn a prior distribution $p_{\psi'}^t(\mathbf{h}\mid \mathbf{a}_t)$ in which $\mathbf{h}_t$ does not depend on the current latent state $\mathbf{z}_t$.
Importantly, doing so makes our prior over transformation matrices, given an action, state-invariant. By enforcing closeness to the prior, we encourage our model to learn a latent state space such that latent state transitions may be predicted accurately even without information about the current latent state $\mathbf{z}_t$. Finally, we can construct a loss function for learning the posterior distribution as follows:
\begin{equation}\label{eq:loss_dyna}
\mathcal{L}_{transition} = \underbrace{-\log p(\mathbf{z}_{t+1}\mid \mathbf{h}_t, \mathbf{a}_t)}_\text{next state prediction} + \underbrace{\beta D_{KL}\left[ q_{\psi}^t(\mathbf{h}\mid \mathbf{z}_t, \mathbf{a}_t) \lVert p_{\psi'}^t(\mathbf{h}\mid \mathbf{a}_t)\right]}_\text{parsimony}
\end{equation}
The first term reflects the accuracy with which our model predicts the next latent state (as encoded by our model) and is a function of the distance between $\tilde{\mathbf{z}}_{t+1}$ and $\mathbf{z}_{t+1}$ (see Appendix \ref{A:transition}). The second term reflects how close our posterior over $\mathbf{h}_t$ is to our state-invariant prior, scaled by the hyperparameter $\beta$.
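For a factorized Bernoulli posterior and prior, the KL term has a closed form. A sketch of the full transition loss follows (the unit-variance Gaussian likelihood, i.e. a squared-error prediction term, is an assumption made for the illustration):
\begin{verbatim}
import torch

def bernoulli_kl(p_post, p_prior, eps=1e-6):
    p = p_post.clamp(eps, 1 - eps)
    q = p_prior.clamp(eps, 1 - eps)
    return (p * torch.log(p / q)
            + (1 - p) * torch.log((1 - p) / (1 - q))).sum(-1)

def transition_loss(z_next_pred, z_next, p_post, p_prior, beta):
    nll = ((z_next_pred - z_next) ** 2).sum(-1)  # -log p up to constants
    return (nll + beta * bernoulli_kl(p_post, p_prior)).mean()
\end{verbatim}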
\subsection{Parameterizing transformations}
We consider three types of transformations -- rotations, translations and their composition. As such, we assume that all transitions can be represented as an affine transformation of the current latent state. We leverage a decoder network $j_\omega$ which takes the current latent transition code and action tuple $(\mathbf{h}_t, \mathbf{a}_t)$ and predicts the parameters of either a rotation matrix $\mathbf{R}$ or a translation matrix. For translations we predict a vector of displacement values $\mathbf{v}$ and predict the next latent state $\tilde{\mathbf{z}}_{t+1} = \mathbf{z}_t + \mathbf{v}_t$, requiring $n$ parameters for a latent state space of dimensionality $n$. Parameterizing rotations is more involved. \cite{quessard} parametrize rotations with a product of $\frac{n(n - 1)}{2}$ 2-dimensional rotations. In our approach we predict the entries of a skew-symmetric matrix. The space of skew-symmetric matrices forms the Lie algebra of the special orthogonal group and its elements can thus be viewed as infinitesimal rotations \citep{sola2018micro}. By taking the matrix exponential of an $n\times n$ skew-symmetric matrix, we obtain an $n\times n$-dimensional rotation matrix $\mathbf{R}$. Since the upper triangle of a skew-symmetric matrix is the negative of the lower triangle, we can parameterize the $n$-dimensional rotation using the same number of parameters as \cite{quessard}.
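A compact PyTorch sketch of this parameterization (our illustration) fills the strict upper triangle of an $n\times n$ matrix with the $n(n-1)/2$ predicted parameters, antisymmetrizes, and exponentiates:
\begin{verbatim}
import torch

def rotation_from_params(theta, n):
    # theta: tensor of shape (n*(n-1)//2,) -> (n, n) rotation matrix
    A = torch.zeros(n, n, dtype=theta.dtype)
    iu = torch.triu_indices(n, n, offset=1)
    A[iu[0], iu[1]] = theta
    A = A - A.T                    # skew-symmetric: A^T = -A
    return torch.matrix_exp(A)     # exp maps so(n) into SO(n)

R = rotation_from_params(torch.randn(3), n=3)
print(torch.allclose(R @ R.T, torch.eye(3), atol=1e-5))  # orthogonality
\end{verbatim}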
\subsection{Learning the encoding function}
Encouraging the latent dynamics to be parsimonious through the KL term in Equation~\ref{eq:loss_dyna} is not sufficient; we also need to make sure that the condition of parsimony is not fulfilled vacuously, for instance if the encoder maps all states $\mathbf{s}_t$ to a single latent state. A popular approach for avoiding state collapse is to equip the model with a state decoder that tries to predict the state $\mathbf{s}_t$ from the latent state $\mathbf{z}_t$ \citep{hafner2019learning, watter2015embed}. However, encouraging the model to learn latent states that are easily decodable could conflict with our goal of learning a latent state space governed by parsimonious dynamics, as generative factors of the data distribution could influence the topology of our latent space. Instead, we opt for a contrastive objective to distinguish between states \citep{oord2018representation, laskin2020curl}, while giving our transition model the freedom to embed states such that transitions between them can be encoded parsimoniously. Our approach is inspired by noise contrastive estimation \citep{gutmann2010noise, oord2018representation}, which seeks to keep a latent state $\mathbf{z}_t\mid\mathbf{s}_t$ predictable from the observed state, while keeping it distinct from the latent states of other observations. For a mini-batch $\mathcal{B}$ we construct target labels for each state $\mathbf{s}_t\in \mathcal{B}$ and corresponding similarity ratings for our encodings of them:
\begin{equation}
\begin{gathered}
l(\mathbf{s}, \mathbf{s}')=e^{-\tau_\mathbf{s}\lVert \mathbf{s} - \mathbf{s}'\rVert_2} \\
k(\mathbf{z}, \mathbf{z}')=e^{-\tau_\mathbf{z}\lVert \mathbf{z} - \mathbf{z}'\rVert_2}
\end{gathered}
\end{equation}
where $l(\mathbf{s}, \mathbf{s}')$ are targets and $k(\mathbf{z}, \mathbf{z}')$ are latent state similarities. Here $\tau_\mathbf{s}$ and $\tau_\mathbf{z}$ are scaling parameters quantifying how quickly state similarity decays with distance. To keep latent states diverse we minimize the cross entropy between $k(\mathbf{z}, \mathbf{z}')$ and $l(\mathbf{s}, \mathbf{s}')$:
\begin{equation}
\mathcal{L}_{contrastive} = - \dfrac{1}{N}\sum_{\mathbf{s}, \mathbf{s}'\in\mathcal{B}\times\mathcal{B}}k(\mathbf{z}, \mathbf{z}')\log l(\mathbf{s}, \mathbf{s}') + (1 - k(\mathbf{z}, \mathbf{z}'))\log(1-l(\mathbf{s}, \mathbf{s}'))
\end{equation}
By scaling $\tau_\mathbf{s}$ sufficiently high, we encourage our model to distinguish between states that are not identical. This facilitates the learning of parsimonious dynamics: as we only require that distinct states are encoded far enough apart, the transition model is afforded freedom to embed states so that the condition of parsimony is satisfied. Our final loss function is then the sum of the transition loss and the contrastive loss.
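In code, the contrastive objective reduces to a few lines over a mini-batch (a sketch under the definitions above; the clamp guards the logarithm on identical-state pairs, where $l=1$):
\begin{verbatim}
import torch

def contrastive_loss(s, z, tau_s=10.0, tau_z=1.0, eps=1e-6):
    l = torch.exp(-tau_s * torch.cdist(s, s)).clamp(eps, 1 - eps)  # targets
    k = torch.exp(-tau_z * torch.cdist(z, z))   # latent similarities
    return -(k * torch.log(l) + (1 - k) * torch.log(1 - l)).mean()
\end{verbatim}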
\section{Related work}
\textbf{World models}\\
\cite{quessard} represent latent state transitions as the product of the current latent state with elements of the special orthogonal group, i.e. a learnt rotation matrix. However, they assume that the rotations describing state transitions are state-invariant. That is, actions can only affect the state in a single way. We make state-invariance a soft constraint by keeping transition representations close to a state-invariant prior, and represent latent state transitions using elements from the affine group. \cite{watter2015embed} learn to embed high-dimensional inputs in a low-dimensional space in which the dynamics are locally linear, allowing them to plan with stochastic optimal control. Unlike our approach, they use a reconstruction objective to mitigate state collapse, and do not regularize the state-action representations the agent learns. \cite{hafner2020mastering}, \cite{hafner2019dream}, \cite{kaiser2019model} and \cite{ha2018recurrent} learn world models with the purpose of learning policies, either by training the agent within the world model entirely, or by extracting useful latent features of the environment.
\textbf{Planning}\\
Dynamics models are also pervasive in planning tasks. \cite{deisenroth2011pilco} use Gaussian process regression to learn environment dynamics for planning in a sample efficient manner. \cite{schrittwieser2020mastering} learn a latent state dynamics model without a reconstruction objective to play chess, shogi and Go using Monte Carlo Tree Search. \cite{hafner2019learning} learn a recurrent state space model, representing latent states both with a deterministic and stochastic component, and perform planning in pixel environments using the Cross Entropy method. Our approach extends on previous work by building latent state spaces that facilitate planning with incomplete knowledge of the environment. This affordance is due to the latent state space being organized such that transitions can be described with a sparse latent code.
\textbf{Disentangled representations}\\
Learning disentangled representations is a popular approach for building latent variable models \citep{higgins2016beta, burgess2018understanding}. \cite{higgins2018towards} propose a group theoretic definition of disentangled representations and \cite{caselles2019symmetry} argue that learning symmetry based disentangled representations requires interactions with the environment. Our model can be viewed as learning a latent state space whose dynamics are described by a small number of transformations belonging to the affine group.
\textbf{Navigation}\\
We argue that the ability to navigate could be supported by learning parsimonious dynamics. Using an RL framework, \cite{mirowski2016navigation} learn to navigate to goal locations in 3D mazes with pixel inputs, for which they rely on auxiliary depth prediction and location prediction tasks. \cite{banino2018vector} use velocity and head direction information to learn map-like representations of environments, allowing them to generalize about novel shortcuts to goals. However, biological agents do not usually get information about how close they are to boundaries, their direction of travel, or what their current spatial location is. Rather, these variables must be inferred from interactions with stimuli of much higher dimensionality. We propose a method for learning the latent spatial layout of environments and generalize about their transition dynamics from the principle of parsimony alone, without supervision or additional information about the aforementioned latent variables.
\section{Experiments}
We test our model's ability to learn latent spaces and dynamics that are useful for policy learning and planning. We designed three environments with different topological properties. In all environments the agent is tasked with navigating to a fixed goal location from a fixed starting location and, once it has arrived at the goal location, staying there for the remainder of the episode. The action space consists of the five actions $\mathcal{A} = \lbrace LEFT, RIGHT, UP, DOWN, STAY\rbrace$. The actions are represented as one-hot encoded vectors so as not to reveal any information about the transition function of the environment. The agent moves around on the grid by selecting a cardinal direction, which moves the agent one unit in the respective, latent direction. The latent coordinate features of the states are unobservable to the agent. With the $STAY$ action the agent remains in its current state.
Environment states are represented as random vectors drawn from a multivariate Gaussian $\mathbf{s}\sim\mathcal{N}(\boldsymbol{\mu}, \mathbf{I})$ with a diagonal covariance matrix. These are the vectors that the agent `observes' when occupying a state, and \emph{not}, for instance, a top-down view of the environment. The vectors are drawn when environments are initialized, and then remain fixed for the duration of an experiment. Generating the state vectors from an isotropic Gaussian makes the observation space independent of the underlying hidden variable describing the agent's position. This is a key property of our tasks: We maintain that the ability to learn the group properties of an environment in a way that is disassociated from learning the generative factors of the observations that the states emit is important. Generally, the manifold that the state-observations lie on may be entirely different from the manifold defined by an environment's transition function. We hypothesize that structuring the latent state space to reflect the topology of the environment is beneficial for solving several RL tasks.
\subsection{Gridworlds}
We designed two $11\times11$ state discrete gridworlds (see Figure \ref{fig:mfrl}). On the boundary states of the gridworld there were walls. One of the gridworlds was partitioned into four rooms by walls. Information about whether the agent was facing a wall in any of the four cardinal directions was encoded as a binary vector, which we concatenated with the initial random state vector to produce what the agent sees when occupying a state.
\subsection{Torus}
We designed a discrete torus world similar to the gridworld by connecting the gridworld's boundary states to the corresponding boundary states on the opposite end (see Figure \ref{fig:mfrl}). The torus contained $13\times 13$ states and no boundaries.
\subsection{Model free learning}
We designed a model free learning task for each environment. In each task, the agent needs to learn to move from a starting state $\mathbf{s}_{start}$ to an unknown goal state $\mathbf{s}_{goal}$ and stay there for the remainder of an episode, which lasted for 250 timesteps. Each state yields a reward of $-1$ when exited, except for the goal state, which yields a reward of $+1$. The agent learns a policy which takes it to the goal location using the Soft Actor Critic (SAC) algorithm \citep{haarnoja2018soft} adapted for discrete action spaces \citep{christodoulou2019soft} (see Appendix \ref{A:sac} for details). Crucially, the agent learns a policy over its latent state representations $\pi(\mathbf{a}\mid\mathbf{z})$ as opposed to the observations the environment emits. We train agents for 200 episodes in the gridworld, 500 episodes in the four rooms environment, and for 250 episodes in the torus world. Each episode lasts for 250 steps. The agents are trained as described in Algorithm \ref{algo:mfrl}. We compare our model to two alternative representation learning approaches: i) The $\beta$-Variational Autoencoder \citep{higgins2016beta, lee2020stochastic}, which learns disentangled probabilistic embeddings by reconstructing the states as well as performing next state prediction. Next state prediction is performed by multiplying the current latent state with a predicted affine transformation matrix, however, without regularizing the state-action representation to be parsimonious. ii) A baseline feedforward neural network for which no representation learning objectives influenced latent state representations except the actor and critic losses. Our model achieves the best total score summed over episodes in all environments, averaged over 10 seeds. We fine-tuned the regularization coefficient $\beta$ in Equation~\ref{eq:loss_dyna} and the $\beta$ of the VAE model. Comparisons revealed that regularizing the latent state-action representation $\mathbf{h}_t$ proved beneficial in all environments except the Four Rooms environment, in which a $\beta$ value of 0 proved slightly better than the next best regularized implementation of our model.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\textwidth]{figures/mfrl/mfrl_comparison.png}
\end{center}
\caption{Top: The environments have a starting state (yellow) and a goal state (red). Bottom: The corresponding learning curves of a SAC agent learning policies over three types of latent state spaces, averaged over 10 seeds: \textit{Ours} has a latent state space constrained to be such that dynamics are parsimonious. \textit{Baseline} has no representation learning objective. \textit{VAE} learns a latent state space using the $\beta$-VAE model, with next state prediction as an auxiliary task.}
\label{fig:mfrl}
\end{figure}
\subsection{Planning}
For each environment we generated a set of planning problems where the agent starts in a random state, and needs to plan a sequence of actions to reach a goal state and stay there for the remainder of an episode, which lasted for 50 timesteps. The agent has no knowledge of environment dynamics initially, except for what the goal state is, and needs to learn a viable dynamics model as it engages with the task. The agent attempts to solve the planning problems by encoding the goal state into its learnt latent state space $\mathbf{z}_\text{goal}$ and by simulating trajectories that take it to the goal. The agent estimates the return of a trajectory as the sum of latent state occupancies weighted by the exponential of their negative distance to the latent goal state $G = \sum_{t=0}^H e^{-\lVert\mathbf{z}_t - \mathbf{z}_{goal}\rVert_2}$. After an episode, the agent fits its dynamics model to the observations it gathered through executing its plan. Each planning task consists of 30 such planning problems, varying in difficulty, as some goal locations are further away from the agent's starting location. Following \cite{hafner2019learning}, we use the Cross Entropy Method (CEM) \citep{rubinstein1999cross} as our planning algorithm (see \ref{algo:CEM}). We verified that it was able to solve most tasks when using the true environment dynamics with a moderate planning budget.
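For concreteness, a minimal CEM planner over discrete action sequences might look as follows (our sketch: \texttt{model.step}, the population size, and the elite fraction are illustrative assumptions, and the action distribution is a factorized categorical per timestep):
\begin{verbatim}
import torch
import torch.nn.functional as F

def cem_plan(model, z0, z_goal, n_actions, horizon=12,
             iters=10, pop=500, n_elite=50):
    logits = torch.zeros(horizon, n_actions)
    for _ in range(iters):
        probs = torch.softmax(logits, dim=-1)
        acts = torch.distributions.Categorical(probs).sample((pop,))
        z = z0.unsqueeze(0).expand(pop, -1)
        ret = torch.zeros(pop)
        for t in range(horizon):
            a = F.one_hot(acts[:, t], n_actions).float()
            z = model.step(z, a)            # assumed latent one-step model
            ret = ret + torch.exp(-torch.linalg.norm(z - z_goal, dim=-1))
        elite = acts[ret.topk(n_elite).indices]
        freqs = F.one_hot(elite, n_actions).float().mean(0)
        logits = torch.log(freqs + 1e-6)    # refit to elite frequencies
    return logits.argmax(dim=-1)            # greedy action sequence
\end{verbatim}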
We compare our model to alternative latent dynamics models: A deterministic RNN and a stochastic latent state model \citep{hafner2019learning} trained with a $\beta$-VAE objective \citep{higgins2016beta}. All models represent state transitions as the product of the current latent state with a learnt affine transformation matrix, but lack the parsimony constraint at the core of our model. The models were trained as described by Algorithm \ref{algo:planning}. As with the policy learning task, we found that our model achieved the best score pooled over the 30 planning tasks and 10 seeds. Moreover, regularizing $\mathbf{h}_t$ proved beneficial in all environments, providing further evidence that parsimonious dynamics are beneficial for planning.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\textwidth]{figures/planning/planning_comparison.png}
\end{center}
\caption{Top row: The score achieved by the planning agent using different dynamics models, averaged over the 30 tasks and 10 seeds: Our approach beats the deterministic recurrent world model (RNN) and the stochastic world model (SMM) trained with a disentanglement objective. Error bars show standard deviation computed across seeds. Red lines show score of planning agent using the true environment dynamics. Bottom row: The score achieved for each task, dots indicate individual samples across 10 seeds, and curves are smoothed using a Gaussian filter with standard error computed across seed, with lengthscale $\sigma = 2$.}
\end{figure}
\section{Learning parsimonious dynamics from pixels}\label{sec:DML}
We sought to evaluate our model's ability to perform long-term future state prediction in an environment with pixel inputs. For this task, we relied on the Deepmind Lab environment, a challenging partially observable environment with image observations \citep{beattie2016deepmind}. To make our model suitable for pixel prediction, to mitigate the partial observability, and to make it comparable to other models in the literature, we used the stochastic variant of our model (see Appendix \ref{A:transition}) and an image reconstruction loss rather than a contrastive objective to avoid latent state collapse. Furthermore, we endowed it with a convolutional neural network image encoder, a transpose convolutional neural network decoder, and a recurrent neural network whose outputs were concatenated with the inferred latent state for pixel prediction (see Appendix \ref{A:dml}).
As a comparison model we chose the Recurrent State Space Model (RSSM) from \cite{hafner2019learning} (see Appendix \ref{A:rssm}). When applicable, we also used the hyperparameters they provide for our model. We trained both models to reconstruct sequences of images, conditioned on previous image and action observations, collected from an agent executing a random policy for 250 episodes in the \texttt{seekavoid\_arena\_01} environment. No velocity or location information was provided to the agent. We then made the models perform open-loop prediction on 30 test sequences of 149 environment steps that were not in the training set. We evaluated open-loop reconstruction errors and the KL divergence between predicted future latent states and closed-loop inferred latent states from observations $KL_D\left[q(\tilde{\mathbf{z}}_{t+1}\mid \textbf{z}_{\leq t}, \textbf{u}_{\leq t}, \textbf{a}_{\leq t})\rVert p(\mathbf{z}_{t+1}\mid \textbf{s}_{t+1}, \textbf{u}_{t})\right]$, where $\mathbf{u}_t$ is a deterministic latent variable provided by the recurrent neural network. Our model generalized better to the test sequences, achieving both lower KL divergences between predicted and encoded states, and lower average reconstruction error in the open-loop prediction task. This provides evidence for the utility of learning parsimonious dynamics in more challenging pixel environments as well.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\textwidth]{figures/general/dml2.png}
\end{center}
\caption{Open loop prediction of future states in the Deepmind Lab environment. Our model is better at predicting transitions in latent space, and reconstructing images from imagined latent states.}
\end{figure}
\section{Discussion}
In the current paper, we introduced a model that learns a parsimonious set of transformation matrices to describe the dynamics of MDPs in a latent state space. Learning a world model whose states are organized according to this principle proved beneficial in a policy learning task, a planning task and a video prediction task. With the objective of carving the environment at its joints rather than the observations its states emit, the learnt latent states contained information that was valuable for policy learning. Moreover, planning in the learnt latent space became feasible having observed fewer state transitions: This is because the agent could systematically generalize about the dynamics of parts of the environment that were not yet explored exhaustively. We have shown that simply endowing the dynamics model's objective with a term encouraging parsimony was sufficient to produce latent spaces that display useful characteristics.
We investigated the utility of parsimonious dynamics in simpler environments whose transitions could indeed be characterized by a small set of linear transformations. A limitation of our approach is that the environments we investigated were rather simplistic compared to the rich environments humans and other state-of-the-art models have been shown to be able to navigate through. To remedy this we provided promising initial evidence that parsimonious dynamics can facilitate future state prediction in a Deepmind Lab environment. In future work we intend to scale up our approach to also perform policy search in similarly rich environments. A further limitation is that the degree of sparseness with which the model tries to recapitulate transitions is controlled by the KL scaling term $\beta$. Though we avoid assuming that dynamics are completely state-invariant, we still have to tune $\beta$. In future work, we seek to address this issue by adaptively regulating the complexity of our learnt dynamics to reflect the complexity of the environment.
The principle of parsimony was initially motivated from the viewpoint of cognitive science. \cite{tolman1948cognitive} showed that rats preferred a novel shortcut over a repeatedly reinforced longer route to a goal location, hypothesizing that animals learn and use latent representations of their environment that must contain assumptions about how they are structured. Seminal work in neuroscience demonstrated the existence of neurons selectively tuned to specific spatial positions (place cells), and others that represent global geometrical information about the environment (grid cells), representations that were also found in artificial agents trained to navigate in Euclidean spaces \citep{banino2018vector, cueva2018emergence}. Recent cognitive neuroscience studies revealed that humans rely on cognitive maps to navigate complex environments \citep{epstein2017cognitive}, abstract spaces \citep{constantinescu2016organizing, garvert2017map}, generalize about rewards \citep{garvert2021hippocampal}, and draw inferences about transition dynamics in novel environments \citep{mark2020transferring}. Future work could also investigate the role of parsimony in the mental maps that humans and other animals build of the worlds they inhabit.
\newpage
In two previous papers~\cite{cmr,cmr1}, we considered the long term nonlinear
evolution of a Keplerian binary system that is perturbed by a normally incident
periodic gravitational wave, and in a recent work~\cite{cmr2} we considered the
additional effect of radiation damping,
which is of interest in connection with
the observed behavior of the Hulse-Taylor
binary pulsar PSR~1913+16~\cite{hulse,taylor}.
These studies have been concerned with the issue
of {\em gravitational ionization}, i.e. the possibility that an external
periodic gravitational wave could ionize a Keplerian binary system over a long
period of time. The impetus for this subject has come from the close conceptual
analogy between gravitational ionization and a fundamental physical process,
namely, the electromagnetic ionization of a hydrogen atom. That is, in these
studies one hopes to learn about the disposition of gravitational radiation to
transfer energy and angular momentum in its interaction with matter. In our
recent investigation~\cite{cmr2}, we encountered an interesting dynamical
phenomenon connected with the passage of the binary orbit through resonance. As
the binary system loses energy and angular momentum by emitting gravitational
waves, its semimajor axis and eccentricity decrease monotonically on the
average; however, this process of inward spiraling could stop if the system is
captured into a resonance. The resonance condition fixes the semimajor axis;
therefore, if the semimajor axis decreases to the resonant value and the orbit
is trapped, it then maintains this value {\it on the average} while the external
perturbation deposits energy into the system to compensate for the radiation
damping {\it on the average}.
It turns out that along with this energy deposition, the external
tidal perturbation can also deposit angular momentum into the binary orbit so that
its eccentricity decreases considerably during the passage through resonance.
This was the situation for the particular instance of resonance trapping
reported in~\cite{cmr2}.
In general, the orbital angular momentum can increase or decrease while the orbit
is trapped in resonance.
Figure~\ref{fig1} depicts passage through a
$(1:1)$ resonance in which the eccentricity decreases.
A similar phenomenon but with an increase in eccentricity
has been reported in the recent numerical study
of the three-body problem by Melita and Woolfson~\cite{melita}.
The same situation can occur over a long time-scale in a $(4:1)$ resonance in our
model as depicted in Figure~\ref{fig3}.
The dynamical phenomena associated with an orbit trapped in a resonance
occur over many Keplerian periods of the osculating ellipse;
therefore, it is natural to average the dynamical equations over the
orbital period at resonance. This partial averaging removes the ``fast'' motion
and allows us to see more clearly the ``slow'' motion during trapping.
It is possible
to provide a description of the slow motion as well as
a theoretical justification for the transfer of angular momentum by
means of the method of second order partial averaging near a resonance. The
purpose of the present paper is to study this phenomenon theoretically using the
method of averaging for the resonances that occur in a Keplerian binary that is
perturbed by the emission and absorption of gravitational radiation.
Let us consider the simplest model involving a perturbed Keplerian (i.e.
nonrelativistic) binary that contains the effects of radiation reaction damping
and external tidal perturbation in the lowest (i.e. quadrupole) order. The
tidal perturbation could be due to external masses and gravitational radiation;
in fact, we choose in what follows a normally incident periodic gravitational
wave for the sake of simplicity \cite{cmr2}. The Keplerian binary orbit under
the combined perturbations due to the emission and absorption of gravitational
radiation in the quadrupole approximation is given in our model
by the equation of relative
motion
\begin{equation}\label{RelativeMotion}
\frac{d^2x^i}{dt^2} + \frac{kx^i}{r^3} + {\cal R}^i =
- \epsilon {\cal K}_{ij}\; x^j,
\end{equation}
where ${\bf x}(t)$ is the relative two-body orbit, $r = |{\bf x}|$, and $k =
G_0(m_1+m_2)$. The binary consists of two point masses $m_1$ and $m_2$,
$m_1+m_2 = M$, that move according to ${\bf x}_1(t) = (m_2/M)\; {\bf x}(t)$ and
${\bf x}_2(t) = -(m_1/M)\; {\bf x}(t)$
, so that the center of mass of the system is at the origin of the
coordinates. In fact, the center of mass of the binary can be taken to be at
rest in the approximation under consideration here. The explicit expressions
for ${\cal R}$ and ${\cal K}$ are
\begin{equation}\label{RadiationReaction}
{\cal R} = \frac{4G_0^2m_1m_2}{5c^5r^3}
\left[\left(12 v^2-30\dot{r}^2-\frac{4k}{r}\right){\bf v}
-\frac{\dot{r}}{r}\left(36v^2-50\dot{r}^2+\frac{4k}{3r}\right){\bf x}\right],
\end{equation}
and
\begin{equation}\label{TidalForce}
{\cal K} = \Omega^2 \left[ \begin{array}{ccc}
\alpha \cos{\Omega t} & \beta \cos{(\Omega t + \rho)} & 0 \\
\beta \cos{(\Omega t + \rho)} & -\alpha \cos{\Omega t} & 0 \\
0 & 0 & 0
\end{array} \right],
\end{equation}
where ${\bf v}=\dot{\bf x}(t)$, an overdot denotes differentiation with respect
to time, $\alpha$ and $\beta$ are of the order of unity and are the amplitudes
of the two independent states of linear polarization of the normally incident wave, $\rho$
is a constant phase, and $\Omega$ is the frequency of the external wave.
It is interesting to transform the dynamical
system~(\ref{RelativeMotion})--(\ref{TidalForce}) to dimensionless form.
To accomplish this, let ${\bf x}\rightarrow R_0 {\bf x}$ and $t \rightarrow T_0 t$
where $R_0$ and $T_0$ are
scale parameters. Under this transformation, $k \rightarrow k T_0^2/R_0^3$ and
${\cal K}$ remains unchanged if we let $\Omega \rightarrow \Omega/T_0$. Let us
further restrict $R_0$ and $T_0$ by the relation $k T_0^2 = R_0^3$, so that we
can set $k=1$ in the dynamical equations; for instance, this condition is
satisfied if $R_0$ is the initial semimajor axis of the unperturbed Keplerian
orbit and $2\pi T_0$ is its period. Furthermore,
the dynamical system~(\ref{RelativeMotion})--(\ref{TidalForce}) is planar;
therefore, it is
convenient to express these dimensionless equations in polar coordinates
$(r, \theta)$ in the orbital plane. The result is
\begin{eqnarray}\label{MathEQM}
\frac{dr}{dt} & = & p_r, \nonumber \\
\frac{d\theta}{dt} & = & \frac{p_{\theta}}{r^2}, \nonumber \\
\frac{dp_r}{dt} & = & -\frac{1}{r^2} + \frac{p_{\theta}^2}{r^3} +
\frac{4\delta p_r}{r^3}\left(p_r^2+6\frac{p_{\theta}^2}{r^2}+
\frac{4}{3r}\right) \nonumber \\
& & \hspace*{.25in} -\epsilon r
\Omega^2 [\alpha\cos{2\theta}\cos{\Omega t}
+\beta\sin{2\theta}\cos{(\Omega t+\rho)}], \nonumber \\
\frac{dp_{\theta}}{dt} & = & \frac{2\delta p_{\theta}}{r^3}
\left( 9p_r^2 - 6 \frac{p_{\theta}^2}{r^2} + \frac{2}{r}\right) \nonumber \\
&& \hspace*{.25in} +\epsilon r^2 \Omega^2 [\alpha\sin{2\theta}\cos{\Omega t}
-\beta\cos{2\theta}\cos{(\Omega t+\rho)}],
\end{eqnarray}
where $\delta$, $0<\delta\ll 1$,
is the dimensionless strength of radiation reaction and is given by
\begin{equation}\label{DeltaDefn}
\delta = \frac{4G_0^2m_1m_2}{5c^5T_0R_0},
\end{equation}
while $\epsilon$, $0<\epsilon\ll 1$, is the dimensionless strength of the
external periodic perturbation.
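The magnitude of $\delta$ for a given binary follows at once from
equation~(\ref{DeltaDefn}) together with the normalization $kT_0^2=R_0^3$.
As a purely illustrative aside, the following Python sketch evaluates these
quantities; the component masses and the initial semimajor axis chosen here
are assumptions made only for the sake of the example.
\begin{verbatim}
# Illustrative evaluation of the dimensionless damping strength delta.
# The masses and initial semimajor axis below are assumed for
# illustration only; they are not parameters fixed by the model.
G0 = 6.674e-11          # Newtonian gravitational constant (SI)
c = 2.998e8             # speed of light (SI)
Msun = 1.989e30         # solar mass (kg)

m1, m2 = Msun, Msun     # assumed component masses
R0 = 1.496e11           # assumed initial semimajor axis (1 au, in m)

k = G0 * (m1 + m2)
T0 = (R0**3 / k)**0.5   # from k*T0**2 = R0**3; 2*pi*T0 is the period
delta = 4 * G0**2 * m1 * m2 / (5 * c**5 * T0 * R0)
print(T0, delta)        # delta is of order 1e-20 for these values
\end{verbatim}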
In this paper, we let $\delta = \epsilon \Delta$, where $\Delta$,
$0 < \Delta < \infty$, is a parameter that is fixed in the system;
in this way, we avoid dealing with a two-parameter $(\epsilon,\delta)$
perturbation problem. In particular, we consider only perturbations that
correspond to fixed directions from the origin of this parameter space.
The full two-parameter problem would require the consideration of perturbations
corresponding to all curves in the parameter space.
In the absence of radiative perturbations $(\epsilon =0$ and $\delta=0)$, the
dynamical system (\ref{MathEQM}) describes a Keplerian ellipse. It is therefore
useful to
express the dynamical equations (\ref{MathEQM}) in terms of Delaunay variables
that are closely related to the elements of the osculating ellipse. This is the
ellipse that the relative orbit would describe at time $t$, if the perturbations
were turned off at $t$.
The osculating ellipse always has the same focus,
which is taken to be the
origin of the (polar) coordinates in the space of relative coordinates.
Let the state of relative motion be described by $({\bf
x}, {\bf v})$, or equivalently $(r, \theta, p_r, p_{\theta})$, at time $t$; then,
the energy of the motion fixes the semimajor axis $a$ of the osculating ellipse
and its eccentricity is subsequently fixed by the orbital angular momentum
$p_{\theta}$. Only two angles are left to determine the osculating ellipse
completely: the orientation of the ellipse in the orbital plane given by $g$ and
the position on the ellipse measured from the periastron given by the true
anomaly $\hat{v}$. The latter is obtained from
$p_rp_\theta=e\sin\hat{v}$ and the equation of the
ellipse
\[ \frac{p_{\theta}^2}{r} = 1+e \cos{\hat{v}}, \]
and the former is then given by $g = \theta - \hat{v}$. The relevant Delaunay
``action-angle'' variables $(L, G, \ell, g)$ are thus defined by \cite{Kov,Stern}
\begin{eqnarray}
L := a^{1/2}, & \hspace*{.25in} & G := p_{\theta} = L
(1-e^2)^{1/2}, \nonumber \\
\ell := \hat{u} - e \sin{\hat{u}}, & & g := \theta - \hat{v},
\end{eqnarray}
where $\hat{u}$ is the eccentric anomaly of the osculating ellipse, $r =
a(1-e\cos{\hat{u}})$, and $\ell$ is the mean anomaly.
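These definitions translate directly into an algorithm for passing from the
polar state $(r, \theta, p_r, p_{\theta})$ to the Delaunay variables. A
minimal Python sketch, written in our dimensionless units with $k=1$ and
assuming a bound osculating orbit with $0<e<1$ (so that the standard
half-angle relation between the true and eccentric anomalies applies), is
\begin{verbatim}
import numpy as np

def to_delaunay(r, theta, p_r, p_theta):
    """Polar state -> Delaunay variables (L, G, ell, g).

    Dimensionless units with k = 1; assumes a bound osculating
    orbit with 0 < e < 1.
    """
    E = 0.5 * (p_r**2 + (p_theta / r)**2) - 1.0 / r  # Kepler energy
    a = -1.0 / (2.0 * E)                             # semimajor axis
    L = np.sqrt(a)
    G = p_theta
    e = np.sqrt(1.0 - (G / L)**2)
    # true anomaly from p_r*p_theta = e*sin(v), p_theta**2/r = 1 + e*cos(v)
    v = np.arctan2(p_r * p_theta, p_theta**2 / r - 1.0)
    g = theta - v
    # eccentric anomaly via the half-angle relation, then Kepler's equation
    u = 2.0 * np.arctan(np.sqrt((1.0 - e) / (1.0 + e)) * np.tan(v / 2.0))
    ell = u - e * np.sin(u)
    return L, G, ell, g
\end{verbatim}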
The dynamical system~(\ref{MathEQM}) in terms of Delaunay variables is given
briefly in Appendix~\ref{appendixa} and used in the following section.
The Delaunay equations of motion are useful for the investigation of periodic
orbits using the Poincar\'{e} surface of section technique~\cite{poincare}. It
has been shown in \cite{cmr2} that nearly resonant periodic orbits exist in system
(\ref{MathEQM}) for sufficiently small $\epsilon$ and $\Delta$. These
correspond to
$(m:1)$ resonances, where $(m:n)$ refers to the resonance condition $m\omega =
n\Omega$. Here $m$ and $n$ are two relatively prime integers and $\omega=1/L^3$
is the Keplerian frequency of the orbit. A linear perturbation treatment
\cite{mashoon1} first revealed resonant absorption at $(m:1)$ resonances. There
could, in principle, be other periodic orbits whose existence is not revealed by
our method \cite{ccc,cmr}.
In our numerical investigation of the simple nonlinear model described above,
we found~\cite{cmr2} instances of resonance trapping during which the
behavior of the osculating orbit could not be inferred in a simple manner
on the basis of equation~(\ref{MathEQM}).
However, the dynamics of the {\em averaged} equations
in resonance is simpler to analyze and it turns out that our numerical
results~\cite{cmr2} can be adequately explained using the second order partially
averaged dynamics. The phenomenon of resonance trapping appears to be
of basic significance for the origin of the solar system; therefore,
it is worthwhile to develop a general theoretical framework for the
study of the evolutionary dynamics while trapped in resonance.
The {\em dynamics} of a system when it is locked in
resonance is interesting in any circumstance involving
more than one degree of freedom; for instance, suppose that
the resonance condition
fixes an action variable---say, the energy. For a one dimensional
motion, this would imply that the state of the system at resonance is completely determined.
However, if other action variables are present, they will not necessarily remain
fixed while the system is trapped in resonance.
Instead, the state of the
system will in general vary and its dynamics at resonance is best
investigated using the method of averaging. This is a generalization
of the simple procedure that is commonly
employed in Hamiltonian dynamics: The Hamiltonian is averaged over certain
``fast'' variables and the resulting averaged Hamiltonian is
used to derive new dynamical equations that presumably describe the ``slow''
motion in a certain averaged sense. The general method is described in
Appendix~\ref{appendixb}, and it is applied to the dissipative
dynamical system under consideration here in the rest of this paper.
Resonance is a general and significant physical phenomenon and the description of the
state of a physical system while trapped in resonance is of intrinsic importance. The inherent
dynamics at resonance is trivial for a one dimensional oscillator, but is rich in
physical consequences for higher dimensional systems.
While the general methods
described here could be applied to a wide variety of physical problems, we confine our
attention to a single model.
Our results may, however,
be of qualitative interest in dealing with the three-body
problems that arise in the discussions of the origin of the structure
in the solar system.
The present paper relies on the results of our recent work \cite{cmr2}. We have
repeated here only what is needed for the discussion of the dynamics at
resonance; for a more precise and complete presentation of the
background material, our papers \cite{cmr,cmr1,cmr2} should be consulted.
Finally, a basic limitation of our model
should be noted. The only damping mechanism that we take into account is
gravitational radiation reaction; this is
consistent with our assumption that the binary consists of Newtonian point
masses moving in vacuum except for the presence of background gravitational
radiation. In this model, a theorem of Neishtadt can be used to show that
resonance trapping is a rather rare phenomenon~\cite{cmr2}. However, taking due
account of the finite size and structure of astronomical bodies and the existence
of an ambient medium, we would have to include in our model---among other things---the
additional damping effects of tidal friction as well as the various drag effects of
the ambient medium and electromagnetic radiation
(cf., for instance,~\cite{melita,henrard,beauge,gomes,lai}).
These additional frictional effects could well combine to violate the
condition $N$ of Neishtadt's Theorem~\cite{cmr2,arnold2}.
Thus, resonance trapping
may not be so rare in astrophysics after all~\cite{bj}. Inclusion of these additional
effects is beyond the scope of our work.
\section{Partial Averaging Near a Resonance}
We will consider the dynamics of the model system~(\ref{MathEQM}) that is
derived in~\cite{cmr2}.
It has the following form when expressed in the Delaunay elements for the
Kepler problem under consideration (cf. Appendix~\ref{appendixa}):
\begin{eqnarray}\label{AAAveraging}
\dot{L} & = & -\epsilon
\frac{\partial {\cal H}_{\mbox{\scriptsize ext}}}{\partial \ell} +
\epsilon\Delta f_L, \nonumber \\
\dot{G} & = & -\epsilon
\frac{\partial {\cal H}_{\mbox{\scriptsize ext}}}{\partial g} +
\epsilon\Delta f_G, \nonumber \\
\dot{\ell}& = &\quad \frac{1}{L^3}
+ \epsilon \frac{\partial{\cal H}_{\mbox{\scriptsize ext}}}{\partial L}
+\epsilon\Delta f_{\ell},\nonumber \\
\dot{g} & = & \quad\epsilon
\frac{\partial {\cal H}_{\mbox{\scriptsize ext}}}{\partial G} +
\epsilon\Delta f_g,
\end{eqnarray}
where $\epsilon{\cal H}_{\mbox{\scriptsize ext}}$ is the Hamiltonian
corresponding to the external perturbation and
\begin{equation}\label{Hext}
{\cal H}_{\mbox{\scriptsize ext}}=\frac{1}{2}\Omega^2\left[
\alpha {\cal C}(L,G,\ell,g)\cos{\Omega t}+\beta {\cal S}(L,G,\ell,g)\cos(\Omega t+\rho)
\right].
\end{equation}
Here
\begin{eqnarray}\label{CSfs}
{\cal C}(L,G,\ell,g) & = & \frac{5}{2}a^2e^2\cos{2g}
+a^2\sum_{\nu =1}^\infty (A_\nu\cos{2g}\cos {\nu\ell}
-B_\nu\sin{2g}\sin{\nu \ell}), \nonumber \\
{\cal S}(L,G,\ell,g) & = &\frac{5}{2}a^2e^2\sin{2g}
+a^2\sum_{\nu =1}^\infty (A_\nu\sin{2g}\cos {\nu\ell}
+B_\nu\cos{2g}\sin{\nu \ell}), \nonumber \\
A_\nu & = &\frac{4}{\nu^2e^2}
\big[2\nu e(1-e^2)J_\nu'(\nu e)-(2-e^2)J_\nu(\nu e)\big], \nonumber \\
B_\nu & = & -\frac{8}{\nu^2e^2}(1-e^2)^{1/2}\,
\big[e J_\nu'(\nu e)-\nu (1-e^2)J_\nu(\nu e)\big],
\end{eqnarray}
$J_\nu$ is the Bessel function of the first kind of order $\nu$,
and $e=(L^2-G^2)^{1/2}/L$.
The radiation reaction ``forces" $f_D$, $D\in\{L,G,\ell,g\}$, are certain
complicated functions
of the Delaunay variables given in Appendix~\ref{appendixa}.
In fact, we will only require the
averages of the $f_D$ given by
\[\bar f_{D}:=\frac{1}{2\pi}\int_0^{2\pi} f_{D}(L, G, \ell, g) \,
d\ell.\]
These have been computed in~\cite{cmr2}, and are given by
\begin{eqnarray}\label{fqAve3}
\bar f_{L}&=&-\frac{1}{G^7}\big( 8+\frac{73}{3} e^2+ \frac{37}{12} e^4 \big),
\nonumber \\
\bar f_{G}&=&-\frac{1}{L^3G^4}(8+7e^2),\nonumber \\
\bar f_{\ell}&=&0, \qquad \bar f_{g}=0.
\end{eqnarray}
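For later numerical use, these averages transcribe directly into code; a
minimal Python rendering, with $e^2=1-G^2/L^2$ substituted explicitly, is
\begin{verbatim}
def fbar_L(L, G):
    # averaged radiation-reaction force conjugate to L
    e2 = 1.0 - (G / L)**2
    return -(8.0 + (73.0 / 3.0) * e2 + (37.0 / 12.0) * e2**2) / G**7

def fbar_G(L, G):
    # averaged radiation-reaction force conjugate to G
    e2 = 1.0 - (G / L)**2
    return -(8.0 + 7.0 * e2) / (L**3 * G**4)
\end{verbatim}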
In order to study the dynamics of the system~(\ref{AAAveraging}) at resonance,
we will apply the method of averaging. We note here that averaging
over the fast angle $\ell$ and the time $t$ gives the correct approximate
dynamics for most initial conditions via Neishtadt's Theorem as explained
in our previous paper~\cite{cmr2}. Here we are interested in the orbits
that are captured into resonance. To study them, we consider partial
averaging at each resonance.
Let us fix the value of $L$ at the $(m:n)$ resonance, say
$L=L_*$ with $m/L_*^3=n\Omega$, and consider the deviation of $L$ from
the resonance manifold.
To measure this deviation, we introduce a new variable ${\cal D}$ given by
\begin{equation}\label{coordtrans}
L = L_* + \epsilon^{1/2}\; {\cal D}
\end{equation}
and a new angular variable $\varphi$ by
\[\ell=\frac{1}{{L_*}^3}\: t+\varphi=\frac{n\Omega}{m}t+\varphi.\]
The scale factor $\epsilon^{1/2}$ ensures that, after changing to
the new variables in~(\ref{AAAveraging}), the resulting equations
for $\dot {\cal D}$ and $\dot \varphi$ have the same order in
the new small parameter $\epsilon^{1/2}$ and therefore the system is in
the correct form for averaging.
These new coordinates are standard choices in the mathematical literature
(for more details see, for example,~\cite{arnold2} or~\cite{wig}).
It is important to emphasize here that the small parameter in the
actual dynamics is $\epsilon$; however, the small parameter turns out to
be $\epsilon^{1/2}$ in this case for the averaged dynamics.
To effect the coordinate transformation, we use the expansion
\begin{equation}\label{tayL}
\frac{1}{{L}^3}=\frac{1}{{L_*}^3}\Big[1-\epsilon^{1/2}\frac{3{\cal D}}{L_*}
+\epsilon\frac{6{\cal D}^2}{L_*^2} +O(\epsilon^{3/2})\Big]
\end{equation}
and find
\begin{eqnarray}\label{AA1Averaging}
\dot{{\cal D}} & = & -\epsilon^{1/2} F_{11}-\epsilon {\cal D} F_{12}+O(\epsilon^{3/2}),
\nonumber \\
\dot{G} & = & -\epsilon F_{22}+O(\epsilon^{3/2}), \nonumber \\
\dot{\varphi} & = & -\epsilon^{1/2}\frac{3}{L_*^4}{\cal D}
+ \epsilon \big( \frac{6}{L_*^5}{\cal D}^2+F_{32}\big)+O(\epsilon^{3/2}),\nonumber \\
\dot{g} & = & \quad\epsilon F_{42}+O(\epsilon^{3/2}),
\end{eqnarray}
where the
$F_{ij}(G, n\Omega t/m+\varphi,g,t)$
are defined such that the first index refers to the equation in which
it appears and the second index refers to the perturbation order in powers
of $\epsilon^{1/2}$ that is employed.
These quantities are given by
\begin{eqnarray}\label{Fij}
F_{11}&:=&\frac{\partial{\cal H}_{\mbox{\scriptsize ext}}}{\partial
\ell}(L_*,G,\frac{n\Omega}{m}t+\varphi,g,t)
-\Delta f_L(L_*,G,\frac{n\Omega}{m}t+\varphi,g), \nonumber \\
F_{12}&:=&\frac{\partial^2{\cal H}_{\mbox{\scriptsize ext}}}{\partial L\partial \ell}
(L_*,G,\frac{n\Omega}{m}t+\varphi,g,t)
-\Delta \frac{\partial f_L}{\partial
L}(L_*,G,\frac{n\Omega}{m}t+\varphi,g),\nonumber \\
F_{22}&:=&\frac{\partial{\cal H}_{\mbox{\scriptsize ext}}}{\partial
g}(L_*,G,\frac{n\Omega}{m}t+\varphi,g,t)
-\Delta f_G(L_*,G,\frac{n\Omega}{m}t+\varphi,g),\nonumber \\
F_{32}&:=&\frac{\partial{\cal H}_{\mbox{\scriptsize ext}}}
{\partial L}(L_*,G,\frac{n\Omega}{m}t+\varphi,g,t)
+\Delta f_\ell(L_*,G,\frac{n\Omega}{m}t+\varphi,g), \nonumber\\
F_{42}&:=&\frac{\partial{\cal H}_{\mbox{\scriptsize ext}}}{\partial
G}(L_*,G,\frac{n\Omega}{m}t+\varphi,g,t)
+\Delta f_g(L_*,G,\frac{n\Omega}{m}t+\varphi,g).
\end{eqnarray}
The system~(\ref{AA1Averaging}) is $2\pi m/\Omega$ periodic in
the temporal variable---since ${\cal H}_{\mbox{\scriptsize ext}}$ is
$2\pi /\Omega$ periodic in time---and is in time-periodic standard form.
Anticipating our intention to average to second order, we will apply an
averaging transformation (for a detailed exposition see~\cite{wig}).
It is the characteristic property of this transformation that it automatically
renders system~(\ref{AA1Averaging}) in a form such that to lowest order
the new system is exactly the first order averaged system and the second order
averaged system can be simply obtained by averaging the new
system (cf. Appendix~\ref{appendixb}).
To obtain the desired transformation, we define
\[
\bar{F}_{ij}:= \frac{\Omega}{2\pi m}
\int_0^{2\pi m/\Omega} F_{ij}(G,\frac{n\Omega}{m}s+\varphi,g,s)\,ds,
\]
and the deviation from the mean for $F_{11}$
\begin{equation}\label{avertran}
\lambda(G,\varphi,g,t):=F_{11}(G,\frac{n\Omega}{m}t+\varphi,g,t)-\bar{F}_{11}.
\end{equation}
Furthermore, we define $\Lambda(G,\varphi,g,t)$ to be the antiderivative of
$\lambda(G,\varphi,g,t)$ with respect to $t$ with the additional property
that the average of $\Lambda$ should vanish, i.e.
\[
\int_0^{2\pi m/\Omega} \Lambda(G,\varphi,g,s)\,ds=0.
\]
Moreover, we note that
both $\lambda$ and $\partial\Lambda/\partial\varphi$ have
zero averages. Our averaging transformation is given by
\[
{\cal D}=\widehat {{\cal D}}
-\epsilon^{1/2}\Lambda(\widehat{G},\widehat{\varphi},\widehat{g},t),\quad
G=\widehat{G},\quad \varphi=\widehat{\varphi},\quad g=\widehat{g}.
\]
The averaging transformation is constructed so that its average becomes
the identity transformation.
Let us observe that if $G$, $\varphi$, and $g$ depend on $t$ as solutions of the
system~(\ref{AA1Averaging}), then
\[
\dot\Lambda=\lambda-\epsilon^{1/2}\Big(\frac{3{\cal D}}{L_*^4}\Big)
\frac{\partial\Lambda}{\partial\varphi} +O(\epsilon).
\]
After applying the averaging transformation, we find that the
system~(\ref{AA1Averaging}) takes the form
\begin{eqnarray}\label{AA2Averaging}
\dot{\widehat{{\cal D}}} & = & -\epsilon^{1/2} \bar{F}_{11}
-\epsilon \widehat{{\cal D}} \Big( F_{12}
+\frac{3}{L_*^4}\frac{\partial\Lambda}{\partial\varphi}\Big)
+O(\epsilon^{3/2}), \nonumber \\
\dot{\widehat{G}} & = & -\epsilon F_{22}+O(\epsilon^{3/2}), \nonumber \\
\dot{\widehat{\varphi}} & = & -\epsilon^{1/2}\frac{3}{L_*^4}\widehat{{\cal D}}
+ \epsilon \Big(\frac{6}{L_*^5}{\widehat{{\cal D}}}^2+F_{32}
+\frac{3}{L_*^4}\Lambda\Big)+O(\epsilon^{3/2}), \nonumber \\
\dot{\widehat{g}} & = &\quad \epsilon F_{42}+O(\epsilon^{3/2}).
\end{eqnarray}
Finally, we drop the $O(\epsilon^{3/2})$ terms in~(\ref{AA2Averaging}) and
average the remaining truncated system
to obtain the {\em second order partially averaged system}
\begin{eqnarray}\label{2ndoave}
\dot{\widetilde{{\cal D}}}
& = & -\epsilon^{1/2} \bar{F}_{11}
-\epsilon\widetilde{{\cal D}}\bar{F}_{12},\nonumber \\
\dot{\widetilde{G}} & = & -\epsilon \bar{F}_{22}, \nonumber \\
\dot{\widetilde{\varphi}} & = & -\epsilon^{1/2}\frac{3}{L_*^4}\widetilde{{\cal D}}
+ \epsilon \big(\frac{6}{L_*^5}{\widetilde{{\cal D}}}^2+\bar{F}_{32}\big),\nonumber \\
\dot{\widetilde{g}} & = & \quad \epsilon \bar{F}_{42}.
\end{eqnarray}
This system is the averaged form of system~(\ref{AA1Averaging})
after dropping its $O(\epsilon^{3/2})$ terms;
however, this coincidence is fortuitous in this case.
In general, one has to employ an averaging
transformation in order to obtain the second order averaged system.
To explain the evolutionary dynamics at resonance, we will
replace the actual dynamical
equations by the second order partially averaged system. As explained in
Appendix~\ref{appendixb}, this is a reasonable approximation over a limited time-scale.
We remark that although the second order partially averaged system will be used
to explain some of the features of our model system near its resonances,
the actual dynamics predicted by our model is certainly much more complex than the
averaged equations reveal. In particular, we expect that near the resonances---perhaps
in other regions of the phase space as well---there are chaotic invariant sets and,
therefore, transient chaotic motions~\cite{cmr2}.
The averaged system is nonlinear and could in general exhibit chaos; however,
we have not encountered chaos in the second order averaged equations obtained
from the model under consideration here.
It remains to compute $\bar{F}_{ij}$, where $F_{ij}$ are defined
in~(\ref{Fij}). For this, we recall equation~(\ref{Hext})
and set $\ell=n\Omega t/m+\widetilde{\varphi}$,
expand the trigonometric terms in the Fourier series using trigonometric
identities, and then average over the variable $t$. The required
averages involving the external perturbation can be computed from the average of
${\cal H}_{\mbox{\scriptsize ext}}$. In fact, after some computation, we find that
\begin{eqnarray*}
\bar{{\cal H}}_{\mbox{\scriptsize ext}}&:=&\frac{\Omega}{2\pi m}\int_0^{2\pi m/\Omega}
{\cal H}_{\mbox{\scriptsize ext}}(L_*,\widetilde{G},
\frac{n\Omega}{m}t+\widetilde{\varphi},\widetilde{g},t)\,dt\\
&=&
T_c(L_*,\widetilde{G},\widetilde{g})\cos{m\widetilde{\varphi}}
+T_s(L_*,\widetilde{G},\widetilde{g})\sin{m\widetilde{\varphi}},
\end{eqnarray*}
where, for $n=1$,
\begin{eqnarray}\label{tcts}
T_c & := & \frac{L^4\Omega^2}{4}\big[
\alpha A_m(e)\cos{2g}+\beta A_m(e)\sin{2g}\cos{\rho}
-\beta B_m(e)\cos{2g}\sin{\rho}\big],\nonumber \\
T_s & := & \frac{L^4\Omega^2}{4}\big[-\alpha B_m(e)\sin{2g}+\beta
A_m(e)\sin{2g}\sin{\rho}
+\beta B_m(e)\cos{2g}\cos{\rho}\big],
\end{eqnarray}
and for $n\ne 1$, $\bar{{\cal H}}_{\mbox{\scriptsize ext}}=0$ so
that in this case we can define $T_c=T_s=0$.
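For numerical work, $T_c$ and $T_s$ can be evaluated with standard
Bessel-function routines. The following Python sketch does so via the
coefficients $A_m(e)$ and $B_m(e)$ of equation~(\ref{CSfs}); it assumes
$0<e<1$, where the $1/e^2$ prefactors cause no difficulty.
\begin{verbatim}
import numpy as np
from scipy.special import jv, jvp   # Bessel J_m and its derivative

def AB(m, e):
    # Fourier coefficients A_m(e), B_m(e); valid for 0 < e < 1
    J, Jp = jv(m, m * e), jvp(m, m * e)
    A = (4.0 / (m**2 * e**2)) * (2*m*e*(1 - e**2)*Jp - (2 - e**2)*J)
    B = -(8.0 / (m**2 * e**2)) * np.sqrt(1 - e**2) * (e*Jp - m*(1 - e**2)*J)
    return A, B

def TcTs(m, Lstar, G, g, alpha, beta, rho, Omega):
    # T_c, T_s for the (m:1) resonance; Lstar plays the role of L
    e = np.sqrt(1.0 - (G / Lstar)**2)
    A, B = AB(m, e)
    pref = Lstar**4 * Omega**2 / 4.0
    Tc = pref * (alpha*A*np.cos(2*g) + beta*A*np.sin(2*g)*np.cos(rho)
                 - beta*B*np.cos(2*g)*np.sin(rho))
    Ts = pref * (-alpha*B*np.sin(2*g) + beta*A*np.sin(2*g)*np.sin(rho)
                 + beta*B*np.cos(2*g)*np.cos(rho))
    return Tc, Ts
\end{verbatim}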
The averages of $f_D$, $D\in\{L,G,\ell,g\}$, are given in~(\ref{fqAve3}).
The terms involving radiation damping
that appear in the partially averaged system are obtained from
these expressions by Taylor expansion about the resonant orbit using
equation~(\ref{coordtrans}). For example, we will use
\[
\Gamma(G):=\left.\bar{f}_L\right|_{L=L_*}=
-\left.\frac{1}{G^7}\left(8 + \frac{73}{3} e^2 + \frac{37}{12} e^4 \right)
\right|_{e^2=1-G^2/L_*^2}
\]
and the average of $\partial f_L/\partial L$ at resonance given by
\[
\left.\frac{\partial}{\partial L} \bar{f}_L \right|_{L=L_*}=
\left.-\frac{1}{3 L_*^3 G^5}(146+37 e^2)\right|_{e^2=1-G^2/L_*^2}.
\]
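Such Taylor-expanded damping terms are easy to get wrong; the quoted
derivative can be checked symbolically. A short sympy sketch that prints
zero when the two expressions agree is
\begin{verbatim}
import sympy as sp

L, G = sp.symbols('L G', positive=True)
Lstar = sp.Symbol('L_star', positive=True)
e2 = 1 - G**2 / L**2
fL = -(8 + sp.Rational(73, 3)*e2 + sp.Rational(37, 12)*e2**2) / G**7

dfL = sp.diff(fL, L).subs(L, Lstar)   # derivative at fixed G, at L = L_*
target = -(146 + 37*(1 - G**2/Lstar**2)) / (3 * Lstar**3 * G**5)
print(sp.simplify(dfL - target))      # prints 0
\end{verbatim}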
The second order partially averaged system~(\ref{2ndoave}) is
thus given explicitly by
\begin{eqnarray}\label{ex2ndoave}
\dot {\cal D} &=& -\epsilon^{1/2}\Big[-mT_c\sin{m\varphi}+mT_s\cos{m\varphi}
+\frac{\Delta}{G^7}\Big(8 + \frac{73}{3} e^2 + \frac{37}{12} e^4 \Big)\Big]
\nonumber \\
&&\quad-\epsilon {\cal D} \Big[-m\frac{\partial T_c}{\partial L}\sin{m\varphi}
+m\frac{\partial T_s}{\partial L}\cos{m\varphi}
+\frac{\Delta}{3L_*^3 G^5}\Big(146 + 37 e^2 \Big)\Big], \nonumber \\
\dot G &=& -\epsilon\Big[\frac{\partial T_c}{\partial g}\cos{m\varphi}
+\frac{\partial T_s}{\partial g}\sin{m\varphi}
+\frac{\Delta}{L_*^3 G^4}(8 + 7 e^2 )\Big], \nonumber \\
\dot \varphi &=& -\epsilon^{1/2}\frac{3}{L_*^4} {\cal D}+\epsilon\Big(
\frac{6}{L_*^5}{\cal D}^2+\frac{\partial T_c}{\partial L}\cos{m\varphi}
+\frac{\partial T_s}{\partial L}\sin{m\varphi}\Big), \nonumber \\
\dot g &=& \:\epsilon\Big(\frac{\partial T_c}{\partial G}\cos{m\varphi}
+\frac{\partial T_s}{\partial G}\sin{m\varphi}\Big),
\end{eqnarray}
where we have dropped the tildes.
It is clear that $L$ in the
expressions involving $T_c$, $T_s$, and $e$ must be replaced by $L_*$, the value of
$L$ at resonance.
Having derived the equations for the averaged dynamics~(\ref{ex2ndoave}),
we now turn our attention to the consequences of these equations and the comparison
of predictions based on them with the actual dynamics given by~(\ref{MathEQM}).
\section{First Order Averaged Dynamics}\label{drt}
The first order partially averaged system, obtained from~(\ref{ex2ndoave})
by dropping the
$O(\epsilon)$ terms, is given by
\begin{eqnarray}\label{foav}
\dot{\cal D}&=&-\epsilon^{1/2}\Big[-mT_c\sin{m\varphi}+mT_s\cos{m\varphi}
-\Delta \Gamma(G)\Big],\nonumber\\
\dot G&=&0,\nonumber\\
\dot \varphi&=&-\epsilon^{1/2}\Big(\frac{3}{L_*^4}\Big) {\cal D}, \nonumber\\
\dot g&=&0.
\end{eqnarray}
In this approximation, the variables $G$ and $g$ are constants fixed
by the initial conditions while the remaining system in ${\cal D}$ and $\varphi$
is equivalent to a pendulum-type equation with torque; namely,
\begin{equation}\label{pendulum}
\ddot\varphi+\frac{3\epsilon}{{L_*}^4}(mT_c\sin{m\varphi}-mT_s\cos{m\varphi})
=-\frac{3\epsilon}{{L_*}^4}\: \Delta\: \Gamma(G).
\end{equation}
We also have a second order differential equation---a harmonic
oscillator with slowly varying frequency---for the deviation
${\cal D}$ given by
\begin{equation}\label{fodeveq}
\ddot {\cal D}+\epsilon w^2 {\cal D}=0,
\end{equation}
where
\begin{equation}\label{w}
w:=\Big[\frac{3m^2}{L_*^4}(T_c\cos{m\varphi}+T_s\sin{m\varphi})\Big]^{1/2}.
\end{equation}
These results can be formally justified if they are recast in terms of a
new temporal variable given by $\epsilon^{1/2}t$; however, we use $t$ in
order to facilitate comparison with the actual dynamics.
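The pendulum-type equation~(\ref{pendulum}) is also convenient for direct
numerical experimentation. The sketch below integrates it with illustrative,
assumed constants in place of $m$, $T_c$, $T_s$, and $\Delta\Gamma$ (in the
model these follow from $L_*$, $G$, and $g$); a bounded $\varphi(t)$ signals
librational motion.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps, Lstar, m = 1.0e-4, 1.0, 1
Tc, Ts, DGamma = 0.1, 0.0, -0.05   # assumed illustrative constants

def rhs(t, y):
    # pendulum-type equation with constant torque -c*DGamma
    phi, phidot = y
    c = 3.0 * eps / Lstar**4
    return [phidot,
            -c*DGamma - c*(m*Tc*np.sin(m*phi) - m*Ts*np.cos(m*phi))]

sol = solve_ivp(rhs, (0.0, 2.0e4), [0.1, 0.0], rtol=1e-9, atol=1e-12)
print(sol.y[0].min(), sol.y[0].max())   # bounded phi(t): libration
\end{verbatim}
For these constants the elliptic rest point lies at $\varphi_0=\pi/6$, and
the initial point above librates about it.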
To show that~(\ref{fodeveq}) is an oscillator, we must show that $w$ is a
real number.
To this end, we suppose that during capture into the resonance the orbit
is near an elliptic region of the first order partially averaged system~(\ref{foav}).
This system is Hamiltonian with an effective energy of the form
\[
-\epsilon^{1/2}\Big[\frac{1}{2}\Big(\frac{3}{L_*^4}\Big) {\cal D}^2+U(\varphi)\Big],
\]
where $U$ represents the effective potential energy given by
\[
U:=-(T_c\cos{m\varphi}+T_s \sin{m\varphi})+(\Delta\Gamma)\varphi.
\]
If $({\cal D}, \varphi)=(0,\varphi_0)$ is an elliptic rest point of the
first order partially averaged system, then
$U'(\varphi_0)=0$ and $U''(\varphi_0)>0$, where a prime denotes
differentiation with respect to $\varphi$. It follows that
\[U''(\varphi_0)= m^2(T_c\cos{m\varphi_0}+T_s\sin{m\varphi_0}),\]
so that $w_0$, $w_0:=[3 U''(\varphi_0)]^{1/2}/L_*^2$, must be real.
To show that the frequency $\xi:=\epsilon^{1/2}w$ is slowly varying,
we differentiate this frequency with respect to time to
obtain
\[
\dot\xi=-\epsilon\frac{9m^3}{2L_*^8w}(T_s\cos{m\varphi}-T_c\sin{m\varphi}){\cal D}+
O(\epsilon^{3/2}).
\]
It follows from the first order averaged equations that
$\varphi-\varphi_0$ is expected to be of order unity; thus,
$\xi$ is nearly constant over a time-scale of order
$\epsilon^{-1/2}$ since $\dot\xi$ is proportional to $\epsilon$.
In particular, ${\cal D}$ varies on the time-scale $\epsilon^{1/2}t$.
Inspection of equations~(\ref{pendulum}) and~(\ref{fodeveq})
reveals that while ${\cal D}$
is predominantly a simple harmonic oscillator
with frequency $\xi_0=\epsilon^{1/2}w_0$,
the motion of $\varphi$ in time could be
rather complicated involving essentially all harmonics
of $\xi_0$. Therefore, $\varphi$ cannot in general
be considered a slow variable in time.
Librational motions (periodic motions in the phase plane) of the pendulum-type
averaged equation~(\ref{pendulum}) correspond to orbits of the original system
that are captured into the resonance. On the other hand, if there are no
librational motions, then all orbits pass through the resonance.
Thus, we observe a necessary condition for
capture: the pendulum system must have rest points in its phase plane.
The rest points of the first order averaged system are given by
${\cal D}=0$ and $\varphi=\varphi_0$, with
\begin{equation}\label{resteq}
R\sin(m\varphi_0+\eta)=-\Delta \Gamma,
\end{equation}
where $mT_c=R\cos{\eta}$ and $mT_s=-R\sin{\eta}$.
As an immediate consequence, we have the following proposition:
{\em For the $(m:n)$ resonance, if $n\ne 1$, there is no capture into resonance}.
On the other hand, for $n=1$,
if there are librational motions, then
we must have $\Delta|\Gamma/R| \le 1$.
These observations suggest that in the presence of sufficiently strong
radiation damping, i.e. for $\Delta$ sufficiently large, there are no
librational
motions and, as a result, each orbit will pass through all the resonant
manifolds that it encounters while $L$ steadily decreases under radiation
damping. In particular, since $L=a^{1/2}$, the semimajor
axis of the corresponding osculating ellipse will collapse to zero.
Moreover, capture is only possible for $(m:n)$ resonances with $n=1$ when
$\Delta|\Gamma/R| \le 1$. In this case, if $\Delta|\Gamma/R| < 1$, then
the rest points in the phase plane of the pendulum-type equation~(\ref{pendulum})
at the resonance are all nondegenerate. This is precisely Neishtadt's
condition $B$.
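This necessary condition is straightforward to test numerically at a
candidate point of an $(m:1)$ resonance; the sketch below reuses the routine
{\tt TcTs} from the sketch following equation~(\ref{tcts}).
\begin{verbatim}
import numpy as np

def capture_possible(m, Lstar, G, g, alpha, beta, rho, Omega, Delta):
    # necessary condition Delta*|Gamma/R| <= 1 for an (m:1) resonance
    e2 = 1.0 - (G / Lstar)**2
    Gamma = -(8 + (73.0/3.0)*e2 + (37.0/12.0)*e2**2) / G**7
    Tc, Ts = TcTs(m, Lstar, G, g, alpha, beta, rho, Omega)
    R = m * np.hypot(Tc, Ts)
    return Delta * abs(Gamma / R) <= 1.0
\end{verbatim}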
The quantities $\Gamma$ and $R$
depend (nonlinearly) on the variables $(L, G, g)$ as well as
the parameters of the system; therefore, the precise range of their
ratio $\Gamma/R$ is difficult to
specify in general. However, the value of this ratio
can be determined numerically. In fact, to find orbits that are captured
into resonance as displayed in Figures~\ref{fig1}--\ref{fig3}, we use the first order averaged
system. A rest point of the first order system corresponds
to a ``fixed'' resonant orbit with ${\cal D}=0$ and constant
$G$, $g$, and $\varphi=\varphi_0$. The main characteristics of these orbits
do not change with respect to the slow variable $\epsilon^{1/2}t$;
that is, the resonant orbits are essentially fixed only over a time-scale of
order $\epsilon^{-1/2}$.
After choosing $G$ and $g$,
we solve for $\varphi_0$ at a rest point,
and then test to see if the resulting resonant orbit corresponds
to a libration point of the first order averaged system (e.g. $w_0^2$
must be positive). If it does, we
convert the point with coordinates
$(L_*, G,\varphi_0,g)$ to polar coordinates and then numerically integrate
our model~(\ref{MathEQM}) backward in time from this starting value.
When the backward integration in time is carried
out over a sufficiently long time interval, the orbit is expected to
leave the vicinity of the resonance.
After this occurs, we integrate forward
in time to obtain an orbit of~(\ref{MathEQM}) that is captured
into the resonance.
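In outline, this procedure can be coded as follows. The right-hand side
transcribes system~(\ref{MathEQM}); the starting state {\tt y0} is merely an
assumed placeholder for the polar form of the resonant point
$(L_*, G, \varphi_0, g)$, which in practice is obtained by inverting the
Delaunay transformation.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps, Delta = 1.0e-4, 1.0e-3
alpha, beta, rho, Omega = 1.0, 0.0, 0.0, 1.0
delta = eps * Delta

def rhs_polar(t, y):
    # transcription of the dimensionless equations of motion
    r, th, pr, pth = y
    dc, ds = np.cos(Omega*t), np.cos(Omega*t + rho)
    dpr = (-1/r**2 + pth**2/r**3
           + (4*delta*pr/r**3)*(pr**2 + 6*pth**2/r**2 + 4/(3*r))
           - eps*r*Omega**2*(alpha*np.cos(2*th)*dc
                             + beta*np.sin(2*th)*ds))
    dpth = ((2*delta*pth/r**3)*(9*pr**2 - 6*pth**2/r**2 + 2/r)
            + eps*r**2*Omega**2*(alpha*np.sin(2*th)*dc
                                 - beta*np.cos(2*th)*ds))
    return [pr, pth/r**2, dpr, dpth]

y0 = [1.0, 0.0, 0.0, 0.8]    # placeholder for the resonant rest point
back = solve_ivp(rhs_polar, (0.0, -5.0e3), y0, rtol=1e-10, atol=1e-12)
fwd = solve_ivp(rhs_polar, (back.t[-1], 4.0e4), back.y[:, -1],
                rtol=1e-10, atol=1e-12)
\end{verbatim}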
The first order partially averaged system~(\ref{foav}) is
Hamiltonian; therefore,
we do not expect that its dynamics would give a complete picture of the average
dynamics near a resonance for our {\em dissipative} system. In particular, the
librational motions of the pendulum-like system are not structurally stable.
Indeed, the phase space for the full four dimensional system~(\ref{foav})
is foliated by invariant two dimensional subspaces parametrized by the
variables $G$ and $g$.
If a librational motion exists in a leaf of the foliation, then
the corresponding rest points, the elliptic rest point and the
hyperbolic rest point (or points) at the boundary of the librational region for the
associated pendulum-like system, are degenerate in the four
dimensional phase space of the first order averaged system;
their linearizations have
two zero eigenvalues corresponding to the directions normal to the leaf.
In the second order partially averaged system, viewed as a small perturbation
of~(\ref{foav}),
we expect that only a finite number of the degenerate rest points survive. These
correspond to the continuable periodic solutions described in~\cite{cmr2}.
We expect the corresponding perturbed rest points
to be hyperbolic. As a result, they persist even
when higher order effects are added. The dimensions and positions of the
stable and unstable manifolds of these rest points determine the average
motion near the resonance.
In particular, if one of these rest points is a hyperbolic sink,
then there is an open set of initial conditions that correspond to orbits
{\em permanently} captured into resonance.
Our numerical experiments have provided no evidence for this behavior.
However, another possibility---that is consistent with our
numerical experiments---is that a rest point of the perturbed system
in phase space is stable
along two directions but unstable in the remaining two directions.
Thus a trajectory with initial point near the corresponding stable manifold will
be captured into resonance and undergo librational motions near the resonant
manifold until it spirals outward along the unstable manifold.
\section{Dynamical Evolution During Resonance}\label{dedr}
The dynamics of system~(\ref{ex2ndoave}) is expected to
give a close approximation of the
near resonance behavior of our original model~(\ref{MathEQM}) over
a time-scale of order $\epsilon^{-1/2}$. In particular,
a basic open problem is the following: Determine the positions of the rest points
and the corresponding stable and unstable manifolds of~(\ref{ex2ndoave}).
A solution of this problem would provide the information needed to analyze
the most important features of the behavior of the orbits that pass through the
resonance as well as those that are captured by the resonance. Unfortunately,
rigorous analysis of the dynamics in
the four dimensional phase space of system~(\ref{ex2ndoave})
seems at present to be very difficult. Thus, in lieu of a complete and rigorous
analysis of the dynamics, we will show how to obtain useful information from
approximations of the second order partially averaged system.
A typical example of the behavior of the orbit as it passes through a
$(1:1)$ resonance is depicted in Fig.~\ref{fig1}. The semimajor axis undergoes
librations of increasing amplitude around the resonant value. Furthermore,
the eccentricity of the orbit generally decreases while the orbit is trapped.
We propose to analyze the oscillations of ${\cal D}$ by using the
$O(\epsilon)$ approximation of the second order differential equation for ${\cal D}$
derived from the second order partially averaged equations~(\ref{ex2ndoave}).
In fact, a simple computation
yields the following differential equation
\begin{equation}\label{devos}
\ddot{\cal D}+\epsilon \gamma \dot{\cal D}+\epsilon w^2 {\cal D}=0,
\end{equation}
where
\begin{equation}\label{gam25}
\gamma:=m\Big(
\frac{\partial T_s}{\partial L}\cos{m\varphi}
-\frac{\partial T_c}{\partial L}\sin{m\varphi}\Big)
+\frac{\Delta}{3L^3G^5}(146+37e^2)
\end{equation}
is evaluated at $L=L_*$ in equation~(\ref{devos})
and $w$ is given by equation~(\ref{w}).
Similarly, the second order partially averaged system~(\ref{ex2ndoave})
can be used to derive an equation for the temporal evolution of $\varphi$
to order $\epsilon$; however, the resulting equation turns out to
be identical with equation~(\ref{pendulum}).
It is clear from inspection of the differential equation~(\ref{devos}) that
$L=L_*+\epsilon^{1/2}{\cal D}$ oscillates about its resonant
value $L_*$ with a libration frequency given approximately by
$\xi_0=\epsilon^{1/2} w_0$.
The magnitude of this frequency is in agreement with our numerical results to
within a few percent for the case of resonance trapping depicted in Fig.~\ref{fig1}.
Moreover,
the amplitude of the oscillations will increase (decrease) if $\gamma<0$
($\gamma>0$).
The sign of $\gamma$ varies with the choice of parameters, variables, and
the order of the resonance. Our numerical experiments---which have
by no means been exhaustive---indicate that the $(1:1)$ resonance for the
linearly polarized incident wave with $\alpha=1$ and $\beta=0$
is special in that $\gamma$ is negative at the
elliptic rest points of the first order averaged system.
However, for a resonance with order $m>1$, $\gamma$ is
not of fixed sign. Nevertheless, our numerical experiments indicate that
the amplitude of librations generally increases over a long time-scale
(cf. Fig.~\ref{fig2}).
Figures~\ref{fig2} and~\ref{fig3} illustrate the apparent richness of the
evolutionary dynamics near resonances with $m>1$.
It should be emphasized that the damped (or anti-damped)
oscillator~(\ref{devos}) approximates the
actual dynamics only over a time-scale of order $\epsilon^{-1/2}$
(cf. Appendix~\ref{appendixb}).
To obtain an approximate equation for the envelope of the oscillations,
we consider a time-dependent change of variables for the oscillator~(\ref{devos})
defined by the relation
\begin{equation}\label{Deq}
{\cal D}=Ve^{-(\epsilon/2)\int_0^t\gamma(s)\,ds}.
\end{equation}
After a routine calculation, we obtain the differential equation
\begin{equation}\label{harmos}
\ddot V+\epsilon w^2 V=O(\epsilon^{3/2}).
\end{equation}
Thus, to order $\epsilon$, $V(t)$ in equation~(\ref{Deq}) is
the solution of a harmonic oscillator equation with a
slowly varying frequency $\xi=\epsilon^{1/2}w$. In fact,
equation~(\ref{harmos}) is identical to the oscillator~(\ref{fodeveq})
with frequency $\xi$ obtained from the first order
averaged system.
The solution ${\cal D}(t)$ of the second order equation~(\ref{devos})
in this approximation
is obtained by modulating the amplitude of $V$.
We note that if $\gamma$ and $w$ are constants, then formula~(\ref{Deq})
reduces to
\[
{\cal D}(t)={\cal A}e^{-\epsilon\gamma t/2} \cos(\epsilon^{1/2}wt+\tau),
\]
where ${\cal A}$ and $\tau$ are constants depending on the initial conditions;
this is the standard result for the damped
(or anti-damped) harmonic oscillator with constant coefficients.
Equation~(\ref{Deq}) certainly agrees qualitatively with the numerical
experiments reported in Figures~\ref{fig1}--\ref{fig3}.
To check the accuracy quantitatively, we numerically integrated
system~(\ref{MathEQM}) and simultaneously evaluated
the integral of $\gamma$ in order
to obtain an approximation for the envelope
of ${\cal D} (t)$.
Over an interval of time of length $691$, which is of order $\epsilon^{-1/2}$,
the value of ${\cal D}$ at its maximum predicted from
equation~(\ref{Deq}) differs from the value obtained by numerical integration
of the differential equations by only a few percent (actually $2.5\%$).
The error in the comparison of the two values for the envelope of ${\cal D}$
grows slowly over time; in fact, it remains less than $20\%$ over a time interval
of length $9000$.
The evolutionary dynamics near resonance
is characterized by librational motions described
above as well as variations in angular momentum.
Our numerical experiments suggest considerable variation in the
angular momentum while the orbit is trapped in resonance.
The remainder of this section is devoted to a theoretical explanation
of this phenomenon.
During the time that an orbit is captured into resonance,
we expect that on average it will reside near an elliptic rest point of the
first order partially averaged system.
Thus, we expect the dynamical variables to satisfy approximately
equation~(\ref{resteq}) while the orbit is trapped.
We will show how to determine the dynamical behavior of the angular
momentum $G$ under this assumption.
In particular, for the $(1:1)$ resonance, we claim that $\dot G>0$; that is,
the orbital angular momentum increases.
It is easy to show that
\begin{eqnarray*}
\frac{\partial T_c}{\partial g}-2 T_s&=&
-\frac{1}{2L_*^2}[\alpha \sin 2g-\beta\cos(2g + \rho)]S_m(e), \\
\frac{\partial T_s}{\partial g}+2 T_c&=&
\frac{1}{2L_*^2}[\alpha \cos 2g+\beta\sin(2g + \rho)]S_m(e),
\end{eqnarray*}
where
\[ S_m(e):=m^2[A_m(e)-B_m(e)].\]
After substituting these identities as well as the condition~(\ref{resteq})
into the expression for
$\dot G$ in~(\ref{ex2ndoave}),
we find that
\begin{equation}\label{gdot}
\dot G= -\epsilon [P_m(e)+L_*^{-2} S_m(e) Q_m],
\end{equation}
where, after some manipulation using the relation $e=(1-G^2/L^2_*)^{1/2}$,
\begin{eqnarray*}
P_m(e)&:=&\frac{\Delta}{G^7}\Big[(1-e^2)^{3/2}(8+7 e^2)
-\frac{2}{m}\big( 8+\frac{73}{3} e^2+ \frac{37}{12} e^4 \big)\Big],\\
Q_m&:=&\frac{1}{2} [\alpha\sin(m\varphi-2g)+\beta\cos(m\varphi-2g-\rho)].
\end{eqnarray*}
The function $S_m(e)\to 0$ as $e\to 0$; therefore,
it is clear that $\dot G>0$ near $e=0$ as long as $P_m(e)<0$.
We have $P_1(e)<-8\Delta/G^7$ for $0<e<1$. For $m=2$,
we have $P_2(e)<0$, but
$P_2(0)=0$, while for $m>2$, we have $P_m(e)>0$ near $e=0$.
Our conclusion is that near the $(1:1)$ resonance, we can expect
$\dot G>0$ during capture provided the osculating
ellipse is not too eccentric. The other cases are more delicately balanced.
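These sign statements are easily confirmed numerically; evaluating the
bracketed part of $P_m(e)$ on a grid (the positive prefactor $\Delta/G^7$
does not affect the signs) gives
\begin{verbatim}
import numpy as np

def P_bracket(m, e):
    # bracketed part of P_m(e); the prefactor Delta/G**7 > 0 is dropped
    return ((1 - e**2)**1.5 * (8 + 7*e**2)
            - (2.0/m)*(8 + (73.0/3.0)*e**2 + (37.0/12.0)*e**4))

e = np.linspace(1.0e-3, 0.999, 500)
print((P_bracket(1, e) < -8).all())        # True: P_1(e) < -8*Delta/G^7
print((P_bracket(2, e) < 0).all(), P_bracket(2, 0.0))  # True, 0.0
print(P_bracket(3, 1.0e-3) > 0, P_bracket(4, 1.0e-3) > 0)  # near e = 0
print(e[P_bracket(3, e) < 0].min())        # P_3 turns negative near e = 0.35
\end{verbatim}
The last line shows that for $m>2$ the positivity near $e=0$ is already lost
at moderate eccentricities, in accordance with the delicate balance noted
above.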
However, for a specific choice of parameters and for a specific
resonance, one can use equation~(\ref{gdot}) to predict
the behavior of $G$ near the resonance.
Consider, for instance, the system~(\ref{MathEQM}) with the following parameter values:
$\epsilon=10^{-4}$, $\Delta=\delta/\epsilon=10^{-3}$,
$\alpha=1$, $\beta=0$, $\rho=0$, and $\Omega= 1$.
The orbit, as depicted in Figure~\ref{fig1}, with
initial conditions given by
\[
(p_r,p_\theta,r,\theta)=(0.2817, 0.6389, 1.6628, 2.9272)
\]
appears to be trapped in $(1:1)$ resonance with a sojourn time of approximately
$40000$.
Moreover,
during this period of time
the angular momentum appears to increase from approximately
$0.66$ to $0.95$.
To demonstrate that our theoretical scheme using the second order partially
averaged system agrees with our numerical results, let us determine the
time-scale over which the angular momentum changes from $0.78$ to $0.95$.
According to the numerical integration of system~(\ref{MathEQM}) this occurs
over about $32700$ time units, whereas our approximate
analysis of the second order partially
averaged system predicts a time-scale of
\[
-\frac{1}{\epsilon}\int_{0.78}^{0.95}
\big[P_1\big((1-G^2/L^2_*)^{1/2}\big)\big]^{-1}\,dG\approx 34200,
\]
a time that is within less than $5\%$ of the value given by our numerical experiment.
In this integral, we have used the fact that in equation~(\ref{gdot}),
the term $L_*^{-2}S_1(e)Q_1$ is small compared with $P_1(e)$.
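This estimate is reproduced by direct quadrature. The following sketch
evaluates the integral for the parameter values quoted above, with $L_*=1$
for the $(1:1)$ resonance at $\Omega=1$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

eps, Delta, Lstar = 1.0e-4, 1.0e-3, 1.0   # (1:1) resonance, Omega = 1

def P1(G):
    e2 = 1.0 - (G / Lstar)**2
    return (Delta / G**7) * ((1 - e2)**1.5 * (8 + 7*e2)
                             - 2*(8 + (73.0/3.0)*e2 + (37.0/12.0)*e2**2))

T, _ = quad(lambda G: -1.0 / (eps * P1(G)), 0.78, 0.95)
print(T)   # approximately 3.4e4, in agreement with the text
\end{verbatim}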
It should be clear from the results of this section that---by employing
the second order partially averaged system---we have been able
to provide theoretical explanations for the behaviors of semimajor axis
$a$ and eccentricity $e$ of the orbit that is trapped in resonance.
The basic dynamical equations for our model are only valid to order $\epsilon$,
since physical effects of higher order have been neglected in equation~(\ref{MathEQM});
therefore,
it would be inappropriate to go beyond the second order averaged system in
this case. The averaging method is general, however, and can be carried
through to any order.
\section{Conclusion}\label{c}
Some of the observed structures in the solar system are the direct results of
evolutionary dynamics while trapped in resonance
(see \cite{melita}, \cite{beauge}, and \cite{gomes}).
We have argued in this paper that such dynamics can in general be explained using
the method of averaging. In particular, the main aspects of the
evolution of the dynamical system during the time that it is locked in resonance
can be understood on the basis of the second order partially averaged dynamics.
We have illustrated these ideas using a particular model involving a Keplerian
binary system that emits and absorbs gravitational radiation.
\raisebox{3 pt}[27 pt][21 pt]{
\put(-19,0){\line(-3,5){12}}
\put(-25,10){\vector(-2,3){0}}
\put(-19,0){\line(-3,-5){12}}
\put(-25,-10){\vector(-2,-3){0}}
\put(15,0){\line(3,-5){12}}
\put(21,-10){\vector(-2,3){0}}
\put(15,0){\line(3,5){12}}
\put(21,10){\vector(2,3){0}}
\put(-19,0){\line(1,0){34}}
\put(-2,0){\vector(-1,0){0}}
\put(-19,0){\circle*{4}}
\put(15,0){\circle*{4}}
\put(-25,20){$#1$}
\put(-25,-28){$#2$}
\put(11,-28){$#3$}
\put(11,20){$#4$}
\put(-7,5){$#5$}
}}}
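% \taraver...: vertical counterpart of \tarahor (internal edge drawn
% vertically); it appears on the right-hand side of the fusing move.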
\newcommand{\taraver}[5]
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-2,-17){\line(-5,-3){24}}
\put(-2,-17){\line(5,-3){24}}
\put(-2,17){\line(5,3){24}}
\put(-2,17){\line(0,-1){34}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
\put(-32,17){$#1$}
\put(-32,-25){$#2$}
\put(18,-25){$#3$}
\put(18,17){$#4$}
\put(2,-3){$#5$}
}}}
\newcommand{\taraverooiou}[5]
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(-3,2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(8,-23){\vector(-3,2){0}}
\put(-2,17){\line(5,3){24}}
\put(8,23){\vector(3,2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
\put(-32,17){$#1$}
\put(-32,-25){$#2$}
\put(18,-25){$#3$}
\put(18,17){$#4$}
\put(2,-3){$#5$}
}}}
\newcommand{\taraverioiou}[5]
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(3,-2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(8,-23){\vector(-3,2){0}}
\put(-2,17){\line(5,3){24}}
\put(8,23){\vector(3,2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
\put(-32,17){$#1$}
\put(-32,-25){$#2$}
\put(18,-25){$#3$}
\put(18,17){$#4$}
\put(2,-3){$#5$}
}}}
\newcommand{\taraverioiod}[5]
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(3,-2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(8,-23){\vector(-3,2){0}}
\put(-2,17){\line(5,3){24}}
\put(8,23){\vector(3,2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,-1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
\put(-32,17){$#1$}
\put(-32,-25){$#2$}
\put(18,-25){$#3$}
\put(18,17){$#4$}
\put(2,-3){$#5$}
}}}
\newcommand{\taraveroooid}[5]
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(-3,2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(8,-23){\vector(3,-2){0}}
\put(-2,17){\line(5,3){24}}
\put(8,23){\vector(-3,-2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,-1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
\put(-32,17){$#1$}
\put(-32,-25){$#2$}
\put(18,-25){$#3$}
\put(18,17){$#4$}
\put(2,-3){$#5$}
}}}
\newcommand{\taraveriooid}[5]
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(3,-2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(8,-23){\vector(3,-2){0}}
\put(-2,17){\line(5,3){24}}
\put(8,23){\vector(-3,-2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,-1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
\put(-32,17){$#1$}
\put(-32,-25){$#2$}
\put(18,-25){$#3$}
\put(18,17){$#4$}
\put(2,-3){$#5$}
}}}
\newcommand{\taraverioood}[5]
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(3,-2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(8,-23){\vector(3,-2){0}}
\put(-2,17){\line(5,3){24}}
\put(8,23){\vector(3,2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,-1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
\put(-32,17){$#1$}
\put(-32,-25){$#2$}
\put(18,-25){$#3$}
\put(18,17){$#4$}
\put(2,-3){$#5$}
}}}
\newcommand{\taraveriobod}[5]
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(3,-2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,17){\line(5,3){24}}
\put(8,23){\vector(3,2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,-1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
\put(-32,17){$#1$}
\put(-32,-25){$#2$}
\put(18,-25){$#3$}
\put(18,17){$#4$}
\put(2,-3){$#5$}
}}}
\newcommand{\taraveriooodptdr}[5]
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(3,-2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(22,-31.6){\circle*{4}}
\put(8,-23){\vector(3,-2){0}}
\put(-2,17){\line(5,3){24}}
\put(8,23){\vector(3,2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,-1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
\put(-32,17){$#1$}
\put(-32,-25){$#2$}
\put(18,-25){$#3$}
\put(18,17){$#4$}
\put(2,-3){$#5$}
}}}
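% \celoup, \celodown: trivalent vertex with two legs and an edge ending in a
% loop drawn above resp. below the vertex; \celodownor adds arrows.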
\newcommand{\celoup}[4]
{\unitlength=0.75pt
\makebox[42 pt][l]{
\raisebox{-39 pt}[45 pt][39 pt]{
\put(24,76){\oval(40,40)[]}
\put(24,28){\line(0,1){28}}
\put(24,28){\line(-5,-3){24}}
\put(24,28){\line(5,-3){24}}
\put(24,28){\circle*{4}}
\put(24,56){\circle*{4}}
\put(5,5){$#1$}
\put(34,5){$#2$}
\put(29,38){$#3$}
\put(32,96){$#4$}
}}}
\newcommand{\celodown}[4]
{\unitlength=0.75pt
\makebox[42 pt][l]{
\raisebox{2 pt}[45 pt][39 pt]{
\put(24,-20){\oval(40,40)[]}
\put(24,28){\line(0,-1){28}}
\put(24,28){\line(5,3){24}}
\put(24,28){\line(-5,3){24}}
\put(24,28){\circle*{4}}
\put(24,0){\circle*{4}}
\put(6,42){$#1$}
\put(34,42){$#2$}
\put(29,11){$#3$}
\put(30,-49){$#4$}
}}}
\newcommand{\celodownor}[4]
{\unitlength=0.75pt
\makebox[42 pt][l]{
\raisebox{2 pt}[45 pt][39 pt]{
\put(24,-20){\oval(40,40)[]}
\put(26,-40){\vector(1,0){0}}
\put(24,28){\line(0,-1){28}}
\put(24,12){\vector(0,-1){0}}
\put(24,28){\line(5,3){24}}
\put(34,34){\vector(-3,-2){0}}
\put(24,28){\line(-5,3){24}}
\put(14,34){\vector(3,-2){0}}
\put(24,28){\circle*{4}}
\put(24,0){\circle*{4}}
\put(6,42){$#1$}
\put(34,42){$#2$}
\put(29,11){$#3$}
\put(30,-49){$#4$}
}}}
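% \tennis...: an external leg attached to a loop (used for the switch
% morphism); the u/d variants orient the loop counterclockwise resp. clockwise.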
\newcommand{\tennis}[2]
{\unitlength=0.75pt
\makebox[67 pt][l]{
\raisebox{3 pt}[22.5 pt][16.5 pt]{
\put(0,0){\line(1,0){28}}
\put(48,0){\oval(40,40)[]}
\put(28,0){\circle*{4}}
\put(3,4){$#1$}
\put(70,4){$#2$}
}}}
\newcommand{\tennisu}[2]
{\unitlength=0.75pt
\makebox[67 pt][l]{
\raisebox{3 pt}[22.5 pt][16.5 pt]{
\put(0,0){\line(1,0){28}}
\put(16,0){\vector(1,0){0}}
\put(48,0){\oval(40,40)[]}
\put(68,0){\vector(0,1){0}}
\put(28,0){\circle*{4}}
\put(3,4){$#1$}
\put(70,4){$#2$}
}}}
\newcommand{\tennisd}[2]
{\unitlength=0.75pt
\makebox[67 pt][l]{
\raisebox{3 pt}[22.5 pt][16.5 pt]{
\put(0,0){\line(1,0){28}}
\put(16,0){\vector(1,0){0}}
\put(48,0){\oval(40,40)[]}
\put(68,0){\vector(0,-1){0}}
\put(28,0){\circle*{4}}
\put(3,4){$#1$}
\put(70,4){$#2$}
}}}
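% \sun...: two trivalent vertices joined through a loop, with an external leg
% on each side; #3 and #4 label the upper and lower arcs.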
\newcommand{\sun}[4]
{\unitlength=0.75pt
\makebox[78 pt][l]{
\raisebox{3 pt}[30 pt][24 pt]{
\put(0,0){\line(1,0){28}}
\put(48,0){\oval(40,40)[]}
\put(28,0){\circle*{4}}
\put(68,0){\circle*{4}}
\put(68,0){\line(1,0){28}}
\put(3,4){$#1$}
\put(84,4){$#2$}
\put(55,21){$#3$}
\put(55,-30){$#4$}
}}}
\newcommand{\sunor}[4]
{\unitlength=0.75pt
\makebox[78 pt][l]{
\raisebox{3 pt}[30 pt][24 pt]{
\put(0,0){\line(1,0){28}}
\put(16,0){\vector(1,0){0}}
\put(48,0){\oval(40,40)[]}
\put(46,20){\vector(-1,0){0}}
\put(50,-20){\vector(1,0){0}}
\put(28,0){\circle*{4}}
\put(68,0){\circle*{4}}
\put(68,0){\line(1,0){28}}
\put(80,0){\vector(-1,0){0}}
\put(3,4){$#1$}
\put(84,4){$#2$}
\put(55,21){$#3$}
\put(55,-30){$#4$}
}}}
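% \cylindre: annulus picture; \spherepants...: sphere with three holes
% (pants), the "lines" variants carrying marking curves.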
\newcommand{\cylindre}{
\unitlength=1mm
\makebox[31mm][l]{\raisebox{-15mm}[18mm][18mm]{
\put(15,15){\circle{10}}
\put(15,15){\oval(30,30)[]}
}}}
\newcommand{\spherepantstriline}[3]{
\unitlength=0.5mm
\makebox[30mm][l]{\raisebox{-13mm}[13.5mm][13mm]{
\put(30,45){\circle{10}}
\put(15,15){\circle{10}}
\put(45,15){\circle{10}}
\put(15,15){\makebox(0,0)[cc]{$#1$}}
\put(45,15){\makebox(0,0)[cc]{$#2$}}
\put(30,45){\makebox(0,0)[cc]{$#3$}}
\put(30,40){\line(0,-1){14}}
\put(30,26){\line(-4,-3){11}}
\put(30,26){\line(4,-3){11}}
}}}
\newcommand{\spherepantsrightlines}[3]{
\unitlength=0.5mm
\makebox[38mm][l]{\raisebox{-13mm}[13.5mm][13mm]{
\put(30,45){\circle{10}}
\put(15,15){\circle{10}}
\put(45,15){\circle{10}}
\put(15,15){\makebox(0,0)[cc]{$#1$}}
\put(45,15){\makebox(0,0)[cc]{$#2$}}
\put(30,45){\makebox(0,0)[cc]{$#3$}}
\put(42,19){\line(2,3){6.67}}
\put(48.67,29){\line(-5,3){18.67}}
\bezier{472}(49,29)(74,-17)(19,18)
}}}
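% \fourier, \invfourier: tube pictures of the morphisms $S$ and $S^{-1}$ on
% ${\bold f}$, with one tube labeled $\mu$.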
\newcommand{\fourier}{
\unitlength=0.7mm
\linethickness{0.4pt}
\begin{picture}(50,43)
\put(1,22){\makebox(0,0)[cc]{$S\ =$}}
\put(26,23){\oval(20,18)[b]}
\put(40,19.50){\oval(20,19)[t]}
\put(50,20){\line(0,-1){18}}
\put(30,2){\line(0,1){8}}
\put(36,33){\line(0,1){9}}
\put(16,23){\line(0,1){19}}
\put(26,43){\makebox(0,0)[cc]{${\bold f}$}}
\put(40,1){\makebox(0,0)[cc]{${\bold f}$}}
\put(40,29){\line(0,1){8}}
\put(45,35){\makebox(0,0)[cc]{$\mu$}}
\end{picture}
}
\newcommand{\invfourier}{
\unitlength=0.7mm
\linethickness{0.4pt}
\begin{picture}(51,44)
\put(41,25){\oval(20,18)[b]}
\put(25.50,20){\oval(21,20)[t]}
\put(31,34){\line(0,1){10}}
\put(51,25){\line(0,1){19}}
\put(36,12){\line(0,-1){10}}
\put(15,20){\line(0,-1){18}}
\put(25,30){\line(0,1){8}}
\put(41,44){\makebox(0,0)[cc]{${\bold f}$}}
\put(26,1){\makebox(0,0)[cc]{${\bold f}$}}
\put(20,36){\makebox(0,0)[cc]{$\mu$}}
\put(1,22){\makebox(0,0)[cc]{$S^{-1} \ =$}}
\end{picture}
}
\newcommand{\CC}{{\cal C}}
\newcommand{\CS}{{\cal S}}
\newcommand{\CZ}{{\cal Z}}
\newcommand{\CN}{{\cal N}}
\newcommand{\CU}{{\cal U}}
\newcommand{\CG}{{\cal G}}
\newcommand{\CR}{{\cal R}}
\newcommand{\CE}{{\cal E}}
\newcommand{\RG}{{\cal R\cal G}}
\newcommand{\ON}{{\cal O\cal N}}
\renewcommand{\u}{{\bold u}}
\newcommand{\ncite}[1]{[#1]}
\newcommand{\nquad}{\!\!\!\!\!\!}
\newcommand{\haj}[1]{{\mathaccent20 #1}}
\newcommand{\und}[1]{{\underline {#1}}}
\newcommand{\<}{\langle }
\renewcommand{\>}{\rangle }
\documentstyle[amscd,amssymb,12pt,righttag,ctagsplt,bezier]{amsart}
\vbadness=10000
\hbadness=10000
\hsize=15cm
\vsize=23.04cm
\topskip=12pt
\parindent=0.5cm
\parskip=0pt
\widowpenalty=10000
\clubpenalty=10000
\hfuzz=1.5pt
\abovedisplayskip=6pt plus 1pt
\abovedisplayshortskip=6pt plus 1pt
\belowdisplayskip=6pt plus 1pt
\belowdisplayshortskip=6pt plus 1pt
\frenchspacing
\newtheorem{thm}{Theorem}[section]
\newtheorem{cor}[thm]{Corollary}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{ax}{Axiom}
\renewcommand{\theax}{}
\theoremstyle{definition}
\newtheorem{defn}{Definition}[section]
\newtheorem{example}[defn]{Example}
\newtheorem{acknowledgement}{Acknowledgements}
\renewcommand{\theacknowledgement}{}
\theoremstyle{remark}
\newtheorem{rem}{Remark}[section]
\newtheorem{notation}{Notation}
\renewcommand{\thenotation}{}
\newtheorem{conjecture}{Conjecture}
\renewcommand{\theconjecture}{}
\numberwithin{equation}{section}
\newcommand{\thmref}[1]{Theorem~\ref{#1}}
\newcommand{\secref}[1]{Section~\ref{#1}}
\newcommand{\lemref}[1]{Lemma~\ref{#1}}
\newcommand{\propref}[1]{Proposition~\ref{#1}}
\renewcommand{\le}{\leqslant}
\renewcommand{\ge}{\geqslant}
\begin{document}
\title[\fontsize{9}{11pt}\selectfont Modular properties of
ribbon abelian categories]
{\fontsize{17}{25pt}\bf\selectfont Modular properties of ribbon
abelian categories}
\author[\fontsize{9}{11pt}\selectfont V.~Lyubashenko]
{\fontsize{14}{16pt}\selectfont Volodimir Lyubashenko}
\thanks
{The research was supported in part by the SERC research grant GR/G 42976.}
\thanks{The detailed version of this paper has been submitted
for publication elsewhere.}
\date {\ \\ \ }
\maketitle
\noindent {\fontsize{11}{13pt}\fontseries{bx}\selectfont Abstract.}
{\fontsize{11}{13pt}\selectfont
A category $N$ of labeled (oriented) trivalent graphs (nets) or ribbon
graphs is extended by new generators called fusing, braiding, twist and
switch, with relations which can be called Moore--Seiberg relations. A
functor to $N$ is constructed from the category $Sur\!f$ of oriented surfaces
with labeled boundary and their homeomorphisms. Given a (possibly
non-semisimple) $k$-linear abelian ribbon braided category $\CC$ satisfying
some finiteness conditions, we construct a functor from a central extension
of $N$ with the set of labels $\operatorname{Ob}\CC$ to $k$-vector spaces. Composing these
functors we get a modular functor from a central extension of $Sur\!f$ to
$k$-vector spaces.}
\vskip 12pt
\noindent{\fontsize{11}{13pt}\selectfont 1991 Mathematics Subject
Classification: 18B30, 18D10, 57N05}
\vskip 12pt plus 1pt
\fontsize{12}{14.4pt}\selectfont
\noindent Moore and Seiberg's study \cite{MooSei} of conformal field theory
was continued and developed by Walker \cite{Wal} from the topological point
of view. The first systematic study in this direction of the example of
$\widehat{\frak s\frak l}(2)$ was made by Kohno \cite{Ko:inv,Ko:3man}. A
different topological approach was proposed by Reshetikhin and Turaev~
\cite{ResTur:3} (see also \cite{Tur:q3}). The aim of this article is
to present a categorical point of view on the subject. We freely use
notation and results from the previous papers \cite{Lyu:tan,Lyu:mod}.
We consider a category of surfaces $\CS$ labeled by a set $\CC$, which
Grothendieck calls the Teichm\"uller tower. Its subcategory $Sur\!f$
consists of labeled surfaces and isotopy classes of their homeomorphisms.
Its central extension is denoted $ESur\!f$. We also give a definition of
a modular functor.
We show that any ribbon abelian category $\CC$ satisfying the axioms of
modularity \cite{Lyu:mod} yields a modular functor
$Z_{\CC}:ESur\!f\to k$-vect. Thus, such a category deserves to be called
{\sl modular}. Precisely, modularity means the following: $\CC$ is a
noetherian abelian $k$-linear ribbon tensor category with finite-dimensional
$k$-vector spaces $\operatorname{Hom}_{\CC}(A,B)$. In a cocompletion of
$\CC$ there exists a Hopf algebra $F=\int^{X\in \CC} X\otimes X\haj{{\ }}$, an
automorphism $T:F\to F$ and a Hopf pairing $\omega:F\otimes F\to I$ (see
\cite{Lyu:mod}) with kernel $\operatorname{Ker}\omega\subset F$. We require that
(M1) ${\bold f} \stackrel{\text{def}}{=} F/\operatorname{Ker}\omega$ is an object of $\CC$
(``${\bold f}$ is finite dimensional'');
(M2) $T(\operatorname{Ker}\omega)\subset\operatorname{Ker}\omega$.
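For instance, if $\CC$ is semisimple with a finite set of simple objects
$\{X_i\}$, the coend reduces to the finite direct sum
\[ F \simeq \bigoplus_i X_i\otimes X_i\haj{{\ }} , \]
which already lies in $\CC$; thus (M1) holds automatically, and when moreover
$\operatorname{Ker}\omega=0$, we have ${\bold f}=F$.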
Vice versa, with some assumptions of representability, any modular functor
$Z:ESur\!f \to k$-vect induces on $\CC$ the structure of a ribbon
category, and the functor $Z$ is isomorphic to $Z_{\CC}$. So, to some
extent, our conditions are not only sufficient but also necessary.
In the case when $\CC$ is a semisimple abelian category with a finite number
of simple objects and $\operatorname{Ker}\omega=0$, the obtained results generalize those
of Moore and Seiberg~\cite{MooSei}. In practice, in this case the obtained
functor is the same as the one constructed by Walker~\cite{Wal} (in dimension
two) and probably coincides with the one of Reshetikhin and
Turaev~\cite{ResTur:3}. A new feature is the possibility of working with
non-semisimple categories.
\begin{acknowledgement} I wish to thank V.G.~Drinfeld, A.~Joyal,
T.~Kohno, S.~Majid, Y.~Soibelman and R.~Street for many helpful discussions.
I am grateful to L.~Breen for his kind hospitality and the opportunity to
work at Universit\'e Paris-Nord, where some results of this paper were
obtained. Further results were obtained when the author visited
C.~De Concini at the Scuola Normale Superiore, and I express my deep
gratitude to him.
\end{acknowledgement}
\section{Introduction}
In \secref{surfaces} a category of labeled surfaces $\CS$ and isotopy classes
of their glueing maps is considered. It encompasses the usual category of
surfaces $Sur\!f$ with isotopy classes of homeomorphisms as morphisms. This
is a topological counterpart of the Teichm\"uller tower considered by
Grothendieck \cite{Gro:esq}. He conjectured that the
Teichm\"uller tower can be described by a few essential relations in small
genera. These relations were written down by Moore and Seiberg \cite{MooSei}.
They are the essential relations of the category of ribbon graphs $RG$
defined in \secref{Ribgra}. Ribbon graphs are surfaces with distinguished
intervals at the boundary. The category has an infinite number of generators
combined into a finite number of classes: topological morphisms
(homeomorphisms and glueings of ribbon graphs), braiding, twists and
switches. It also has an infinite number of relations, which are relatively
trivial and can be considered as commutation relations or as identifications
of generators.
There is a functor $dupl:RG \to \CS$. We construct in \secref{compared to} a
functor $Sur\!f\to RG$ using the presentations of the mapping class group
$M_{g,0}$ by Wajnryb~\cite{Waj} and of the braid group of a closed surface by
Scott~\cite{Scott}. The composition $Sur\!f\to RG\to\CS$ is the natural
inclusion. The conjecture is that the functor $dupl$ is an equivalence
between $\CS$ and $RG$.
We describe another category whose objects are 1-complexes (graphs) called
nets and whose morphisms, besides maps of graphs, include morphisms of
insertion or deletion of a vertex, fusing, braiding, twists and switches.
This category comes in two versions---unoriented $N$ and oriented $ON$
(Sections \ref{trivalent}--\ref{oriented}). Both are equivalent to the
category of ribbon graphs $RG$.
In \secref{Extensions} we study central extensions of the last category (in
any form) and define a universal extension $EN$.
Given an abelian ribbon tensor category $\CC$ with some finiteness properties
we define a functor on a category $EN$ labeled by objects of $\CC$ in
\secref{to a functor}. Composing the functors we get a representation of the
central extension of the category $Sur\!f$. This is a direct generalization
of Moore and Seiberg's results \cite{MooSei}.
Examples of such categories $\CC$ can be found in \cite{LyuMaj}. Besides
the familiar semisimple abelian categories with a finite number of simple
objects, there are other, non-semisimple examples, such as the category of
representations of a quantum group at a root of unity.
All the relevant categories and functors fit into the following diagram
\[
\begin{array}{ccccccccc}
\text{Extended} & \makebox[0mm][l]{\put(0,0){\vector(1,0){125}}} &&&&&
\text{Extended} & \to & k\text{-vect} \\
\text{Surfaces} &&&&&& \text{Nets} \\
\Big\downarrow&&&&&& \raisebox{0mm}[1mm][1mm]{\put(0,5){\vector(0,-1){50}}}\\
\text{Surfaces with} &&&& \text{Trivalent} \\
\text{homeomorphisms} &&&& \text{Nets} \\
\Big\downarrow & \searrow && \swarrow & \Big\downarrow \\
\text{Surfaces with} & \leftarrow & \text{Ribbon}
& \simeq & \text{Nets} & \simeq & \text{Oriented} \\
\text{glueings} && \text{Graphs} &&&& \text{Nets}
\end{array}
\]
Proofs of the obtained results will be published
elsewhere~\cite{Lyu:rib=mod}.
\section{Surfaces}\label{surfaces}
\subsection{Labeled surfaces}
Let $\CC=\{A,B,C,...\}$ be a set of labels with an involution
$\cdot\haj{{\ }}: \CC\to\CC$. By a {\sl labeled surface} we shall understand
the following: a compact oriented surface $\Sigma$ with boundary, with a
labelling of the boundary circles
$L: \pi_0(\partial\Sigma)\to \CC$, $i\mapsto A_i$, and
with a chosen point $x_i$ on the $i^{\text{th}}$ boundary circle, i.e. a
section $x:\pi_0(\partial\Sigma)\to\partial\Sigma$ of the projection
$\partial\Sigma\to \pi_0(\partial\Sigma)$ is fixed. So we write
$\partial\Sigma=\coprod_i \und{C}_i$, where a circle is described as
$\und{C}_i=(C_i,A_i,x_i)$.
\begin{defn}
Let $\Sigma,\tilde\Sigma$ be two labeled surfaces. A continuous surjective
mapping $f:\Sigma\to\tilde\Sigma$ is called a {\sl glueing} if its
restriction to the interior is an orientation-preserving homeomorphism
$\operatorname{Int}\Sigma\to f(\operatorname{Int}\Sigma)$ and $\tilde\Sigma$ is the quotient space of
$\Sigma$ with respect to the following equivalence relation $\sim$ in
$\partial\Sigma$. We choose in $\pi_0(\partial\Sigma)$ some mutually
disjoint pairs $(C_j',A_j,x_j')$ and $(C_j'',A_j\haj{{\ }},x_j'')$ and
homeomorphisms $\phi_j:C_j'\to C_j''$ such that $\phi_j(x_j')=x_j''$. If
$\phi_j(x')=x''$, we write $x'\sim x''$. For any boundary circle
$(C_i,A_i,x_i)$ in $\tilde\Sigma$, its preimage must carry the same labels:
$(f^{-1}(C_i),A_i,f^{-1}(x_i))$.
\end{defn}
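For example, glueing together the two boundary circles of an annulus
carrying dual labels $A$ and $A\haj{{\ }}$ (with the distinguished points
matched) produces a torus with empty boundary.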
Note that a particular case of a glueing is a homeomorphism preserving the
orientation, labelling and distinguished points. Since a composition of
glueings is again a glueing, they form a category.
\begin{defn}
Consider a category $\CS= \CS_{\CC}$ whose objects are labeled surfaces and
whose morphisms are isotopy classes of glueings (an isotopy here is a
continuous family of glueings). The category $\CS$ is a symmetric monoidal
category with disjoint union as the monoidal product.
\end{defn}
\begin{prop}
The equality $gf_1=gf_2$ of morphisms in the category $\CS$ implies $f_1=f_2$
(we shall say that the category $\CS$ has the {\sl left cancellation}
property).
\end{prop}
\begin{pf} Glueings are surjective maps.
\end{pf}
\subsection{Ribbon graphs} \label{Ribgra}
\begin{defn}
A ribbon graph is an oriented surface $X$, each component of which has
a non-empty boundary, equipped with a subset $B\subset\partial X$
homeomorphic to a finite disjoint sum of closed intervals, with a labelling
$L:\pi_0(B)\to \CC$, and with a chosen point in the interior of each
component of $B$, i.e. with a section $x:\pi_0(B)\to \operatorname{Int} B$ of the
projection $B\to \pi_0(B)$.
\end{defn}
\begin{defn}
Let $\CR\CG$ be a category whose objects are ribbon graphs and whose
morphisms are isotopy classes of glueings of ribbon graphs. A {\em glueing}
$f:(X_1,B_1,L_1)\to (X_2,B_2,L_2)$ is a continuous surjective
orientation-preserving mapping $f:X_1\to X_2$ such that the preimage of each
point consists of one or two points and the induced relation $\sim$ in $X_1$
($x,y\in X_1$ are equivalent if $f(x)=f(y)$) reduces to a pairwise
identification of some components of $B_1$ with other components having dual
labels. Distinguished points must be pairwise identified or mapped to
distinguished points, the preimage $f^{-1}(B_2)$ must be a union of
components of $B_1$, and the homeomorphism $f:f^{-1}(B_2)\to B_2$ must
preserve the labeling.
The category $\RG$ is a symmetric monoidal category with respect to disjoint
union.
\end{defn}
The category $\CR\CG$ has the left cancellation property.
There is a functor $dupl:\CR\CG\to \CS$ called duplication. Given a ribbon
graph $X$, construct a surface $\Sigma=X\cup\bar X$, where $\bar X$ is a
second copy of $X$ with reversed orientation and $\partial X-\operatorname{Int} B$ is
identified with $\partial\bar X-\operatorname{Int} \bar B$ via the ``identity map''. The
boundary of $\Sigma$ is the suspension of $B$, and it obtains its labeling
from the labeling of $B$. The chosen points in $B$ become chosen points in
$\partial\Sigma$. To each glueing $f:X\to Y$ of ribbon graphs corresponds
the glueing $f\cup\bar f:X\cup\bar X\to Y\cup\bar Y$ of surfaces.
The duplication functor is injective on morphisms and essentially surjective
on objects. Duplication of the following ribbon graphs (double lines mark the
subset $B\subset\partial X$)
\begin{equation}\label{D0123}
D_0= \put(20,3){\circle{32}} \qquad \qquad, \ D_1=\diskone A ,
\ D_2=\disktwo AE ,\ D_3=\disktre CAE
\end{equation}
\begin{equation}\label{A0A1}
A_0=\cylindre , \qquad A_1= \annone A
\end{equation}
gives the sphere, disk, annulus, pants, torus and torus with one hole,
respectively.
The category $\CS$ has more morphisms than $\CR\CG$. For instance, the
following are automorphisms of the annulus, of the pants (the sphere
$S^2=\bar{\Bbb C}$ with 3 holes) and of the torus with one hole
\[ R:
\unitlength=1mm
\makebox[31mm][l]{\raisebox{-15mm}[21mm][16mm]{
\put(15,15){\circle{10}}
\put(15,15){\oval(30,30)[]}
\put(15,35){\makebox(0,0)[ct]{$A$}}
\put(15,17){\makebox(0,0)[cc]{$B$}}
\put(15,20){\line(0,1){10}}
}}
\hbox to 30pt {\rightarrowfill}\
\unitlength=1mm
\makebox[31mm][l]{\raisebox{-15mm}[21mm][16mm]{
\put(15,15){\circle{10}}
\put(15,15){\oval(30,30)[]}
\put(15,35){\makebox(0,0)[ct]{$A$}}
\put(15,17){\makebox(0,0)[cc]{$B$}}
\bezier{80}(15,30)(24,24)(24,15)
\put(15,15){\oval(18,20)[b]}
\bezier{64}(6,15)(6,22)(15,20)
}}
\]
\begin{equation}\label{omega}
\omega=\, _C\omega_{AE} :\spherepantstriline AEC \hbox to 30pt {\rightarrowfill}
\spherepantsrightlines EAC
\end{equation}
\begin{equation}\label{Shomeo}
S:
\unitlength=0.75mm
\makebox[31mm][l]{ \raisebox{-13.5mm}{
\put(0,0){\framebox(38,38)[cc]{}}
\put(19,19){\circle{10}}
\put(0,30){\line(1,0){38}}
\put(30,38){\line(0,-1){38}}
\put(19,24){\line(0,1){6}}
}}
\hbox to 30pt {\rightarrowfill}
\unitlength=0.75mm
\makebox[31mm][l]{ \raisebox{-13.5mm}{
\put(0,0){\framebox(38,38)[cc]{}}
\put(19,19){\circle{10}}
\put(30,38){\line(0,-1){38}}
\put(38,8){\line(-1,0){38}}
\put(30,24){\oval(22,12)[lt]}
}}
\end{equation}
but not of the ribbon graphs they came from. Here the additional lines on the
annulus, pants and torus start from the chosen points on the boundary. The
additional lines on the left-hand side go to the additional lines on the
right-hand side under the homeomorphism $R,\omega$ or $S$, which completely
determines its isotopy class. Indeed, when the surface (annulus, sphere $S^2$
with 3 holes, or torus with 1 hole represented by a square with identified
edges) is cut along the additional lines it becomes a disk, and one knows
that a homeomorphism of the boundary of a disk can be extended to a
homeomorphism of the disk, unique up to isotopy.
$R$ is called the inverse Dehn twist, or twist for short, $\omega$ is called
braiding, and $S$ is called switch. Our first goal will be to give a
presentation of the category $\CS$ over the category $\CR\CG$, i.e. to add
new generators of type $R,\omega,S$ to $\CR\CG$ and to find relations which
give a category equivalent to $\CS$.
When we present a symmetric monoidal category with left cancellations
${\cal D}$ by generators and relations we mean the following: take a free
category ${\cal F}$ on the given generators; consider the minimal
multiplicative equivalence relation $\sim$ in $\operatorname{Mor}{\cal F}$ such that in
${\cal F}/\sim$ the given relations are satisfied; enlarge the multiplicative
equivalence relation $\sim$ to $\sim_1$ so that ${\cal F}/\sim_1$ is a
symmetric monoidal category for the a priori given tensor product; finally,
enlarge the multiplicative equivalence relation $\sim_1$ to $\sim_2$ so that
in ${\cal D}={\cal F}/\sim_2$ any equation $gf_1=gf_2$ with
$g,f_1,f_2\in \operatorname{Mor}{\cal D}$ implies $f_1=f_2$.
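Schematically, the presentation is a chain of three successive quotients of
the free category:
\[ {\cal F} \twoheadrightarrow {\cal F}/{\sim} \twoheadrightarrow
{\cal F}/{\sim_1} \twoheadrightarrow {\cal F}/{\sim_2} = {\cal D} . \]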
\begin{defn}\label{defrib}
Let $RG$ be a symmetric monoidal category with left cancellations having
ribbon graphs as objects and with morphisms generated over $\CR\CG$ by
new morphisms described as follows.
Let $\gamma:[0,1]\hookrightarrow X$ be a curve in a ribbon graph $(X,B,L)$
such that $\gamma(]0,1[)\subset \operatorname{Int} X$ and $\gamma (0),\gamma (1)\in
\partial X-B$. Such a curve will be called a cut. We associate with
$\gamma ([0,1])$ a new morphism $Tw_{\gamma}:X\to X$ called twist.
Let $h:D_3\hookrightarrow X$ be a continuous orientation-preserving map such
that $h(\partial D_3-B_3)\subset \partial X-B$, where the hexagon ribbon
graph $(D_3,B_3,L_3)$ is from \eqref{D0123}, with distinguished interval $C$.
We associate with $h$ a new morphism $Br_h:X\to X'$ called braiding. Here
$X'$ is obtained from $X$ by a surgery. Denote by $A,E$ the subintervals of
$B_3$ labeled by $A,E$. Cut $X$ along $h(A)$ and $h(E)$ and reglue the edges
cross-wise, preserving the orientation.
Let $j:A_1\hookrightarrow X$ be a continuous orientation-preserving map such
that $j(\partial A_1-B_1)\subset\partial X-B$, where the annulus ribbon graph
$(A_1,B_1,L_1)$ is from \eqref{A0A1}. We associate with it a new morphism
$S_j:X\to X$ called switch.
These generators are subject to the following relations. If the curve
$\gamma :[0,1]\hookrightarrow X$ is shrinkable in the class of curves with
$\gamma(0),\gamma(1)\in\partial X-B$, we set
\[Tw_\gamma=\operatorname{id}_X.\]
If the map $h:D_3\hookrightarrow X$ can be deformed inside the class of maps
satisfying $h(\partial D_3-B_3)\subset\partial X-B$ so that $h(A)$ or
$h(E)$ shrinks to one point, we set
\[ Br_h=\phi:X\to X' ;\]
if $h(C)$ shrinks to one point, we set
\[Br_h=Tw_\gamma^{-1}\cdot\phi:X\to X' ,\]
where $\phi:X\to X'\in \CR\CG$ is the homeomorphism between $X$ and $X'$,
unique up to isotopy, which is the identity outside the region $h(D_3)\cup F$
\[
\unitlength=1mm
\begin{picture}(50,30)
\put(0,0){\line(1,0){50}}
\put(50,30){\line(-1,0){50}}
\put(10,30){\line(0,-1){30}}
\put(41,0){\line(0,1){30}}
\put(16,30){\line(0,-1){30}}
\put(3,20){\makebox(0,0)[cc]{$X$}}
\put(19,6){\makebox(0,0)[cc]{$\gamma$}}
\put(28.50,30){\oval(13,24)[b]}
\put(29,24){\makebox(0,0)[cc]{$F$}}
\put(26,13){\makebox(0,0)[cc]{$h(D_3)$}}
\end{picture}
\]
and $\gamma:[0,1]\hookrightarrow h(D_3)$ is a curve isotopic to one of the
non-shrinkable curves $h(A),h(E),h(C)$.
For each glueing $g:X\to Y$ and a map
$\gamma:[0,1]\hookrightarrow X, h:D_3\hookrightarrow X$ or $j:A_1
\hookrightarrow X$ the following diagrams commute:
\begin{equation}\label{gTw=Twg}
\begin{CD}
X @>Tw_\gamma>> X \\
@VgVV @VVgV \\
Y @>Tw_{\gamma g}>> Y
\end{CD}
\end{equation}
\begin{equation}\labl{gBr=Brg}
\begin{CD}
X @>Br_h>> X' \\
@VgVV @VVg'V \\
Y @>Br_{hg}>> Y'
\end{CD}
\end{equation}
\begin{equation}\label{gS=Sg}
\begin{CD}
X @>S_j>> X \\
@VgVV @VVgV \\
Y @>S_{jg}>> Y
\end{CD}
\end{equation}
where the glueing $g'$ is defined as the composition of the surgery in
$h(A),h(E)$, the glueing $g$ and the surgery in $g\circ h(A)$, $g\circ h(E)$.
In these diagrams $g,g'$ denote the isotopy classes of the glueings $g,g'$.
The particular case when $g$ is isotopic to the identity shows that
$Tw_\gamma, Br_h,S_j$ depend only on the isotopy classes of $\gamma,h,j$.
In the case $X=D_3$ there are the following relations
\begin{equation}\label{Br=TwTwTw}
Br^2_{BC} = Tw_A\ Tw_B^{-1} \ Tw_C^{-1} : \disktre ABC \hbox to 20pt {\rightarrowfill} \disktre ABC
\end{equation}
\begin{multline}\labl{TwA=BrBr}
Tw_A^{-1} = \left( \disktre CAB @>Br_{AB}>> \disktre CBA @>>> \right. \\
\left. @>Br_{AC}>> \disktre ABC @>rot>> \disktre CAB \right)
\end{multline}
\begin{multline}\labl{TwB=BrBr}
Tw_B^{-1}= \left( \disktre CAB @>Br_{AB}>> \disktre CBA @>>> \right. \\
\left. @>Br_{CB}>> \disktre BCA @>rot^{-1}>> \disktre CAB \right)
\end{multline}
Here $Br_{AB}$ stands for $Br_{\operatorname{id}}$; the braidings $Br_{AC},Br_{CB}$ are
defined similarly, and $rot$ denotes a morphism from $\CR\CG$, the rotation
by $2\pi/3$.
There are two relations for $X=D_4$, a disk with 4 intervals marked on the
boundary:
\begin{equation}\labl{Br=BrBr}
\begin{array}{rcl}
\diskfoursix DABCZY & @>_DBr_{YC}^{\pm1}>> & \diskfoursix DCABYU \\
_ZBr_{BC}^{\pm1} \searrow && \nearrow\, _UBr_{AC}^{\pm1} \\
& \diskfoursix DABCZU &
\end{array}
\end{equation}
Here the three inclusions $D_3\hookrightarrow D_4$ are determined by the 3
indices appended to the braidings. The image of $D_3$ covers half of $D_4$.
Relations for $X=A_1$ are
\begin{equation}\labl{ST3=S2}
(S T)^3 =S^2 : \annonetwo M\gamma \hbox to 20pt {\rightarrowfill} \annonetwo M\gamma ,
\end{equation}
where $T=Tw_\gamma$, and \eqref{S2=Br-1Tw-1} (see Figure~\ref{A_1}).
\begin{figure}[htb]
\begin{equation}\label{S2=Br-1Tw-1}
\begin{CD}
\annonetwo X\gamma @>S^2>> \annone X \\
@V_XBr_{\gamma\gamma}^{-1}VV @| \\
\annonetwo X\gamma @>Tw_\gamma^{-1}>> \annonetwo X\gamma
\end{CD}
\end{equation}
\caption{A relation for annulus ribbon graph $A_1$\label{A_1}}
\end{figure}
Finally, there is a relation for annulus ribbon graph $A_2$ with two
intervals marked on the boundary (see diagram~\ref{A2mainS},
Figure~\ref{A_2}).
\begin{figure}[htbp]
\begin{equation}\label{A2mainS}
\begin{CD}
\anntwoline YX @>Br_{XY}>> \anntwo XY \\
@VS^{-1}VV @VVrot_\pi V \\
\anntwogamdel YX @.
\unitlength=1mm
\makebox[31mm][l]{\raisebox{-15mm}[21mm][15mm]{
\put(15,20){\circle{10}}
\put(15,20){\oval(30,30)[]}
\put(6,6){\line(1,0){7}}
\put(17,6){\line(1,0){7}}
\put(9,0){\makebox(0,0)[cb]{$Y$}}
\put(21,0){\makebox(0,0)[cb]{$X$}}
}}
\\
@VTw^{-1}_\gamma\ Tw_\delta VV @VV\phi V \\
\anntwoline YX @>S>> \anntwo YX
\end{CD}
\end{equation}
\caption{A relation for annulus ribbon graph $A_2$\label{A_2}}
\end{figure}
Here $rot_\pi\in \CR\CG$ is the rotation by $\pi$, and $\phi\in\CR\CG$ is a
homeomorphism which is the identity in a neighbourhood of the hole and slides
the intervals $X,Y$ along the boundary.
\end{defn}
\begin{rem}\labl{locprince}
The requirement that $RG$ be a symmetric monoidal category with left
cancellations produces more relations than relations
\eqref{gTw=Twg}--\eqref{A2mainS} alone. For instance, any two generators
$f_1,f_2$ with essentially non-intersecting supports
$(\gamma([0,1]),h(D_3),j(A_1))$ commute (this can be called a ``locality
principle''). Essentially non-intersecting means that the images of some
$\gamma',h',j'$ isotopic to $\gamma,h,j$ do not intersect. Indeed, cut the
ribbon graph $X$ into pieces so that one of the pieces contains the support
of $f_1$ and another contains the support of $f_2$. Denote the resulting
ribbon graph $\tilde X$. The liftings
$\tilde{f_1},\tilde{f_2}: \tilde X\to \tilde X$ commute because they are
tensor products (disjoint unions) of a generator with an identity map. The
axioms \eqref{gTw=Twg}--\eqref{gS=Sg} imply that
$gf_1f_2 = gf_2f_1 : \tilde X\to X$. The left cancellation property implies
that $f_1,f_2:X\to X$ commute.
Also \eqref{Br=TwTwTw}--\eqref{A2mainS} imply relations of the type
\eqref{Br=TwTwTw}--\eqref{A2mainS} in $X$
associated with embeddings of $D_3$, $D_4$, $A_1$, $A_2$ into $X$.
\end{rem}
\begin{prop}
The duplication functor $dupl:\CR\CG\to \CS$ extends to a functor
$dupl:RG\to \CS$. It sends $Tw_\gamma:X\to X$ to the inverse Dehn twist $R$
in $X\cup\bar X$ performed in a neighbourhood of the cycle
$\gamma\cup\bar\gamma$. The braiding $Br_h:X\to X'$ goes to a braiding
homeomorphism $\omega:X\cup\bar X\to X'\cup\bar X'$, supported in
$h(D_3)\cup \bar{h(D_3)}$, of the form \eqref{omega}
\[ \omega:
\unitlength=0.5mm
\makebox[30mm][l]{\raisebox{-13mm}[13.5mm][13mm]{
\put(30,45){\circle{10}}
\put(15,15){\circle{10}}
\put(45,15){\circle{10}}
\put(15,15){\makebox(0,0)[cc]{$A$}}
\put(45,15){\makebox(0,0)[cc]{$B$}}
\put(30,45){\makebox(0,0)[cc]{$C$}}
\put(30,40){\line(0,-1){14}}
\put(30,26){\line(-4,-3){11}}
\put(30,26){\line(4,-3){11}}
\put(45,20){\line(-1,2){11}}
\put(26,42){\line(-1,-2){11}}
\put(20,15){\line(1,0){20}}
}}
\hbox to 30pt {\rightarrowfill}
\unitlength=0.5mm
\makebox[38mm][l]{\raisebox{-13mm}[13.5mm][13mm]{
\put(30,45){\circle{10}}
\put(15,15){\circle{10}}
\put(45,15){\circle{10}}
\put(15,15){\makebox(0,0)[cc]{$B$}}
\put(45,15){\makebox(0,0)[cc]{$A$}}
\put(30,45){\makebox(0,0)[cc]{$C$}}
\put(42,19){\line(2,3){6.67}}
\put(48.67,29){\line(-5,3){18.67}}
\bezier{472}(49,29)(74,-17)(19,18)
\put(45,20){\line(-1,2){11}}
\put(26,42){\line(-1,-2){11}}
\put(20,15){\line(1,0){20}}
}}
\]
(exterior of the first figure is $\bar{h(D_3)}$).
The switch $S_j:X\to X$ goes to a switching homeomorphism
$S:X\cup\bar X\to X\cup\bar X$, supported in $j(A_1)\cup\bar {j(A_1)}$, of
the form \eqref{Shomeo}
\[ S:
\unitlength=0.75mm
\makebox[31mm][l]{ \raisebox{-13.5mm}{
\put(0,0){\framebox(38,38)[cc]{}}
\put(19,19){\circle{10}}
\put(0,30){\line(1,0){38}}
\put(30,38){\line(0,-1){38}}
\put(19,24){\line(0,1){6}}
\put(19,19){\makebox(0,0)[cc]{$A$}}
\put(14,19){\line(-1,0){14}}
\put(24,19){\line(1,0){14}}
}}
\hbox to 30pt {\rightarrowfill}
\unitlength=0.75mm
\makebox[31mm][l]{ \raisebox{-13.5mm}{
\put(0,0){\framebox(38,38)[cc]{}}
\put(19,19){\circle{10}}
\put(30,38){\line(0,-1){38}}
\put(38,8){\line(-1,0){38}}
\put(30,24){\oval(22,12)[lt]}
\put(19,19){\makebox(0,0)[cc]{$A$}}
\put(24,19){\line(1,0){14}}
\put(14,19){\line(-1,0){14}}
}}
\]
(the upper half of the square with identified edges is $j(A_1)$, the lower
half is $\bar{j(A_1)}$).
\end{prop}
\section{Ribbon graphs compared to surfaces} \label{compared to}
We say that a ribbon graph $(X,B)$ has $g$ loops with $n$ entries if
$\dim H_1(X,{\Bbb R})=g, \operatorname{Card}\pi_0(B)=n$. Duplicated, it gives a surface
$X\cup\bar X$ of genus $g$ with $n$ holes.
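For example, for the ribbon graphs of \eqref{D0123} and \eqref{A0A1} one
finds $(g,n)=(0,0),(0,1),(0,2),(0,3)$ for $D_0,D_1,D_2,D_3$ and
$(g,n)=(1,0),(1,1)$ for $A_0,A_1$, in agreement with the list of duplicated
surfaces given after \eqref{A0A1}.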
\begin{thm} \label{StoRGthm}
There exists a functor $\Phi: Sur\!f\to RG$ such that the composition
$dupl\circ \Phi:Sur\!f\to\CS$ is isomorphic to the inclusion functor
$Sur\!f \hookrightarrow\CS$.
\end{thm}
\begin{pf}
The theorem reduces to constructing a splitting of the homomorphism
\[ \operatorname{Aut}_{RG}(Y_{g,n}) \to \operatorname{Aut}_{\CS}(dupl(Y_{g,n})) \simeq
\operatorname{Aut}_{Sur\!f}(dupl(Y_{g,n})) .\]
Here $Y_{g,n}$ means a graph with $g$ loops (of genus $g$) and with $n$
components of $B$ (entries). In general, $Y_{g,n}$ is labeled by $n_1$
copies of $L_1\in\CC$, $n_2$ copies of $L_2\in\CC$, etc. Each morphism
induces a permutation of the set
$\pi_0(B)\simeq\pi_0(\partial(dupl(Y_{g,n})))$, so there are group
homomorphisms
\[ \operatorname{Aut}_{RG}(Y_{g,n})\to \operatorname{Aut}_\CS(dupl(Y_{g,n}))\to \frak S_n.\]
In the particular case when all labels coincide, the automorphism groups
$G_{RG},G_\CS$ are the biggest, and in the general case the groups are just
the preimages under the projections $G_{RG}\to \frak S_n$,
$G_\CS\to\frak S_n$ of the subgroup of permutations leaving invariant the
$n_k$-element subset of $\pi_0(B)$ with label $L_k$, $k\ge 1$,
$n=n_1+n_2+\dots$. Thus it suffices to consider the
case when all labels $L\in\CC$ coincide. We have to construct a splitting of
the homomorphism
\[ \operatorname{Aut}_{RG}(Y_{g,n})\to\operatorname{Aut}_{\CS}(dupl(Y_{g,n}))=M_{g,n}, \]
where the last group is the mapping class group of a surface
of genus $g$ with $n$ holes.
Using braiding isomorphisms any ribbon graph $Y_{g,n}$ can be reduced to a
standard form
\[
\unitlength=0.7mm
\begin{picture}(143,30)
\put(0,15){\makebox(0,0)[cc]{$X_{g,n}\,=$}}
\put(131.50,15){\oval(243,30)[l]}
\put(30,15){\circle{10}}
\put(55,15){\circle{10}}
\put(105,15){\circle{10}}
\put(132,30){\line(0,-1){30}}
\put(131,28){\line(0,-1){4}}
\put(131,21){\line(0,-1){4}}
\put(131,2){\line(0,1){4}}
\put(136,26){\makebox(0,0)[cc]{1}}
\put(136,19){\makebox(0,0)[cc]{2}}
\put(136,4){\makebox(0,0)[cc]{$n$}}
\end{picture}
\]
so we have to consider only the standard case $Y_{g,n}=X_{g,n}$. Using results of
Dehn \cite{Dehn} and Lickorish \cite{Lic:3,Lic:gen}, Birman \cite{Bir:mcg}
gave a set of generators of $M_{g,n}$. These consist of braidings
permuting the holes, some Dehn twists coming from $Tw_\gamma\in RG$, and
some Dehn twists along cycles in $dupl(X_{g,n})$ which can be obtained from
cycles of the form $\gamma\cup\bar\gamma$ by the action of the homeomorphisms $S$.
This implies that the map
\[ \operatorname{Aut}_{RG}(X_{g,n})\to M_{g,n} \]
is an epimorphism. More details will be given below.
When $g\le 1$ or $n\le 1$ not only the generators of $M_{g,n}$ but also the
relations are known (Magnus \cite{Mag} for $g=0$, Birman~\cite{Bir:mcg} for
$g=1$ and Wajnryb \cite{Waj} for $n=0,1$). For generic values of $g,n$ the
defining relations have not been written down, but it is known
(Birman~\cite{Bir:mcg}, Scott \cite{Scott}) that there is an exact sequence
\[ 1 \to \bar B_n(\Sigma_g) \to M_{g,n} \to M_{g,0} \to 1 ,\]
where $\bar B_n(\Sigma_g)$ is an extension of the braid group of a closed
surface of genus $g$ by ${\Bbb Z}^n$. Its defining relations were found by Scott
\cite{Scott}. This information is sufficient to show, case by case, that the
generators of $M_{g,n}$ admit liftings to $\operatorname{Aut}_{RG}(X_{g,n})$ and that the
relations in $M_{g,n}$ lift to relations in $\operatorname{Aut}_{RG}(X_{g,n})$. This
implies the theorem.
\end{pf}
The topological version of the Grothendieck conjecture
about the Teichm\"uller tower is
\begin{conjecture}
The duplication functor $dupl:RG\to \CS$ is an equivalence of categories.
\end{conjecture}
\section{A category of trivalent nets}\label{trivalent}
\subsection{Trivalent nets}
\begin{defn}
Let a trivalent net $\Gamma$ be a 1-complex with a set of vertices
$V\sqcup B$ and a set of edges $E$. Elements of $V$, called 3-vertices, occur
thrice as endpoints of edges, and elements of $B$, called ends, occur once as
an endpoint of an edge. Each edge has at least one endpoint in $V$, and
$\Gamma$ is equipped with a labeling of the ends $B\to \CC$. For each vertex
$v\in V$ belonging to $3$ edges $a,b,c$ denote by $a',b',c'$ the 3 different
isotopy classes of embeddings $\gamma: [0,1] \hookrightarrow a,b,c$ such that
$\gamma(0)=v$. A cyclic order in the set $\{a',b',c'\}$
is chosen (which is one of the two cyclic permutations $(a',b',c')$ or
$(a',c',b')$).
\end{defn}
An edge connecting a 3-vertex and an end is called an external leg; the
others are internal edges. Trivalent nets will be drawn in the plane in such
a way that the cyclic orientation of the edges around a 3-vertex is coherent
with the orientation of the plane of the drawing.
Let $\Gamma_1,\Gamma_2$ be trivalent nets.
\begin{defn}
A glueing $g:\Gamma_1\to\Gamma_2$ is a surjective continuous mapping of
1-complexes (sending vertices to vertices and edges into edges), bijective on
3-vertices and preserving the cyclic order of the edges incident to each
3-vertex, and such that the preimage of an end is an end with the same
label from $\CC$, and the preimage of an internal edge is an internal edge or
a pair of external legs with dual labels.
\end{defn}
Denote by ${\cal T\cal N}_0$ the category of trivalent nets with glueings as
morphisms. It is a symmetric monoidal category with disjoint union as the
monoidal product.
\begin{defn}\label{deftrivalnet}
Let ${\cal T\cal N}$ be a symmetric monoidal category with left cancellations
having trivalent nets as objects
and with morphisms generated over $\operatorname{Mor}{\cal T\cal N}_0$ by fusing moves: for
each internal edge $a\in\Gamma\in\operatorname{Ob} {\cal T\cal N}_0$ with different
endpoints, the fusing move $fus_a:\Gamma\to\Gamma'$ is a new morphism, where
$\Gamma'$ differs from $\Gamma$ in a neighbourhood of $a$ as shown in the
figure
\[ fus_a: \tarahor{}{}{}{}a \hbox to 30pt {\rightarrowfill} \taraver{}{}{}{}{} .\]
New morphisms are subject to the following relations (commutative diagrams).
For any glueing $g$ and any internal edge $a\in \Gamma_1$
\begin{equation}\label{gfus=fusg}
\begin{CD}
\Gamma_1 @>fus_a>> \Gamma_1' \\
@VgVV @VVg'V \\
\Gamma_2 @>fus_{g(a)}>> \Gamma_2'
\end{CD}
\end{equation}
The following relations are written literally, without any complementary part
\begin{equation}\labl{fusfus}
\left( \tarahor{}{}{}{}a @>fus_a>> \taraver{}{}{}{}b @>fus_b>>
\tarahor{}{}{}{}{} \right) =\operatorname{id}
\end{equation}
and \eqref{pentagon}, Figure~\ref{5horses}.
\begin{figure}[htb]
\begin{equation}\label{pentagon}
\unitlength=0.75mm
\begin{picture}(159,131)
\put(80,131){\line(2,-3){8}}
\put(88,119){\line(5,-2){12}}
\put(88,119){\circle*{2}}
\put(88,119){\line(-2,-1){14}}
\put(74,112){\line(1,-2){5.67}}
\put(79.67,101){\line(5,-3){11.33}}
\put(74,112){\circle*{2}}
\put(80,100){\circle*{2}}
\put(80,100){\line(-2,-1){11}}
\put(74,112){\line(-5,1){14}}
\put(20,91){\line(3,-5){7.67}}
\put(27.67,78){\line(5,-2){12.33}}
\put(28,78){\circle*{2}}
\put(28,78){\line(-1,-3){4.33}}
\put(23.67,65){\line(-4,1){12.67}}
\put(11,68){\line(-2,1){11}}
\put(11,68){\circle*{2}}
\put(23,65){\circle*{2}}
\put(23,65){\line(2,-3){8}}
\put(11,68){\line(-1,-3){5}}
\put(140,91){\line(-3,-5){7.67}}
\put(132.33,78){\line(3,-2){13.67}}
\put(146,69){\line(-4,-5){7.33}}
\put(138.67,60){\line(5,-3){12.33}}
\put(133,78){\circle*{2}}
\put(133,78){\line(-5,-2){13}}
\put(146,69){\circle*{2}}
\put(146,69){\line(3,1){13}}
\put(139,60){\circle*{2}}
\put(139,60){\line(-3,-2){10}}
\put(115,36){\line(-3,-5){7.33}}
\put(107.67,24){\line(-3,-1){12.67}}
\put(108,24){\circle*{2}}
\put(108,24){\line(1,-3){4}}
\put(112,12){\line(-5,-6){9}}
\put(112,12){\circle*{2}}
\put(112,12){\line(5,1){14}}
\put(126,14.67){\line(5,3){9}}
\put(126,15){\circle*{2}}
\put(126,15){\line(1,-5){2.67}}
\put(45,36){\line(0,-1){13}}
\put(45,23){\line(-6,-5){10}}
\put(45,23){\circle*{2}}
\put(45,23){\line(6,-5){9}}
\put(54,15.67){\line(1,-5){3}}
\put(54,15){\circle*{2}}
\put(54,15){\line(2,1){11}}
\put(35,15){\circle*{2}}
\put(35,15){\line(-2,1){10}}
\put(35,15){\line(-1,-5){2.67}}
\put(70,14){\vector(1,0){20}}
\put(27,50){\vector(2,-3){11.33}}
\put(40,87){\vector(2,1){19}}
\put(104,97){\vector(2,-1){19}}
\put(126,33){\vector(3,4){12.67}}
\end{picture}
\end{equation}
\caption{The pentagon relation\label{5horses}}
\end{figure}
\end{defn}
There is a functor $fat:{\cal T\cal N}\to\CR\CG$ called fattening. Given a
trivalent net $\Gamma$ with a set of 3-vertices $V$, we take $\operatorname{Card} V$
copies of the ribbon graph $D_3$, glue pairwise the intervals corresponding
to internal edges of $\Gamma$, and put on $fat(\Gamma)$ the same labels as on
$\Gamma$. It is important to glue respecting the orientation of the $D_3$'s,
and to make the bijection between $\pi_0(B_3)$ and the set of edges incident
to a 3-vertex respect the cyclic ordering ($\pi_0(B_3)$ has a canonical
cyclic ordering coming from the orientation of $D_3$). The fattening of a
glueing is a glueing, and the fattening of the fusing move $fus_a$ is the
isotopy class of a homeomorphism $fat(\Gamma)\to fat(\Gamma')$ which is the
identity outside of the two $D_3$'s glued by $a$.
\[
\unitlength=1mm
\makebox[28mm][l]{
\raisebox{-14mm}[15mm][13mm]{
\put(1,10){\line(1,-1){7}}
\put(8,3){\line(1,0){10}}
\put(18,3){\line(1,1){7}}
\put(25,10){\line(0,1){10}}
\put(25,20){\line(-1,1){7}}
\put(18,27){\line(-1,0){10}}
\put(8,27){\line(-1,-1){7}}
\put(1,20){\line(0,-1){10}}
\put(1,11){\line(1,-1){8}}
\put(9,27){\line(-1,-1){8}}
\put(17,3){\line(1,1){8}}
\put(17,27){\line(1,-1){8}}
\put(13,3){\line(0,1){24}}
}}
\hbox to 30pt {\rightarrowfill}
\unitlength=1mm
\makebox[28mm][l]{
\raisebox{-14mm}[15mm][13mm]{
\put(1,10){\line(1,-1){7}}
\put(8,3){\line(1,0){10}}
\put(18,3){\line(1,1){7}}
\put(25,10){\line(0,1){10}}
\put(25,20){\line(-1,1){7}}
\put(18,27){\line(-1,0){10}}
\put(8,27){\line(-1,-1){7}}
\put(1,20){\line(0,-1){10}}
\put(1,11){\line(1,-1){8}}
\put(9,27){\line(-1,-1){8}}
\put(17,3){\line(1,1){8}}
\put(17,27){\line(1,-1){8}}
\put(1,15){\line(1,0){24}}
}} .\]
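As a quick count: $fat(\Gamma)$ is a regular neighbourhood of $\Gamma$ and
deformation retracts onto it, so for a connected net $\Gamma$ with
$\operatorname{Card} V$ 3-vertices and $e$ internal edges the ribbon graph $fat(\Gamma)$
has $g=1-\operatorname{Card} V+e$ loops in the terminology of \secref{compared to}, and
its entries correspond to the ends of $\Gamma$.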
The ribbon graphs $fat(\Gamma)$ are canonically marked. A {\em marking} $M$
of a ribbon graph $(X,B)$ is by definition an isotopy class of
$\bigcup_k\gamma_k([0,1])$ for non-intersecting cuts $\gamma_k:[0,1]
\hookrightarrow X$ (such that $\gamma_k(]0,1[)\subset\operatorname{Int} X$,
$\gamma_k(0),\ \gamma_k(1)\in\partial X-B$) and such that
$X-\bigcup_k\gamma_k([0,1])$ is a union of hexagons $D_3$.
\subsection{Coherence theorem for markings of ribbon graphs}
\begin{thm}\labl{markingfusing}
Any marking of a ribbon graph can be obtained from any other marking by a
sequence of fusing moves. Any such sequence can be transformed into any
other using the pentagon, quadrilateral and $fus^2=\operatorname{id}$ identities.
\end{thm}
\begin{cor}
For all trivalent nets $\Gamma_1,\Gamma_2$
\[ {\cal T\cal N}} \newcommand{\RG}{{\cal R\cal G}(\Gamma_1,\Gamma_2)\to \CR\CG(fat(\Gamma_1),fat(\Gamma_2)) \]
is bijective.
\end{cor}
\begin{pf}
Any morphism $f$ in ${\cal T\cal N}$ or $\CR\CG$ is uniquely decomposed as
$gh$, where $g$ is a canonical glueing determined by the set of pairs of ends
glued by $f$, and $h$ is an isomorphism. Hence, we can consider only
isomorphisms. Let $\phi:fat(\Gamma_1)\to fat(\Gamma_2)$ be a homeomorphism.
Use it to transfer the canonical marking of $fat(\Gamma_2)$ to a marking $M$
of $fat(\Gamma_1)$. By \thmref{markingfusing} it can be obtained from the
canonical marking of $fat(\Gamma_1)$ by a sequence of fusing moves. This
sequence determines a morphism $j:\Gamma_1\to\Gamma\in{\cal T\cal N}$, where
$\Gamma$ is the net determined by the marking $M$. It is identified with
$\Gamma_2$ by an isomorphism $i:\Gamma\to\Gamma_2\in{\cal T\cal N}_0$.
Finally, for $h=ji$ we have $\phi=fat(h)$.
For the second statement, it suffices to consider the case of isomorphic
$\Gamma_1$ and $\Gamma_2$. Furthermore, we only have to prove that any
$f:\Gamma\to\Gamma\in{\cal T\cal N}$ such that $fat(f)$ is isotopic to
$\operatorname{id}_{fat(\Gamma)}$ equals $\operatorname{id}_\Gamma$. Represent $f$ as a product
$fus_{a_1}\ fus_{a_2}\ \dots\ fus_{a_n}$. To it corresponds a sequence of
fusing moves in $fat(\Gamma)$. This sequence produces a new marking of
$fat(\Gamma)$, which equals the image of the canonical marking under the
homeomorphism $fat(f)^{-1}$. Hence, it is isotopic to the canonical marking.
This gives a loop in the complex ${\cal M}$ of markings, which by
\thmref{markingfusing} can be shrunk by the pentagon, quadrilateral and
$fus^2=\operatorname{id}$ moves. In parallel we reduce the expression for $f$ to the
identity, using the pentagon, quadrilateral and $fus^2=\operatorname{id}$ identities in
${\cal T\cal N}$.
\end{pf}
This corollary implies
\begin{thm}
The functor $fat: {\cal T\cal N} \to \RG$ is an equivalence of
${\cal T\cal N}$ with a full subcategory $\RG_+$ of ribbon graphs, each
component of which satisfies the inequalities
\begin{equation}\label{g>n>}
n\ge3\quad \text{ or }\quad g\ge1,\, n\ge1\quad \text{ or } \quad g\ge2
\end{equation}
($g,n$ stand for genus and number of marked intervals).
\end{thm}
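The excluded components are exactly those with
$(g,n)\in\{(0,0),(0,1),(0,2),(1,0)\}$, i.e. the ribbon graphs
$D_0,D_1,D_2,A_0$ of \eqref{D0123}, \eqref{A0A1}.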
\begin{rem}
Formally adjoining the objects $D_0, D_1, D_2, A_0$ and some morphisms to
$\RG_+$ we can reconstruct $\RG$. We shall do a similar thing later.
\end{rem}
\subsection{Trivalent nets with twists, braiding and switching}
Now we add to ${\cal T\cal N}$ new morphisms just as we added them to $\RG$.
The obtained category $TN$ will be a full subcategory of $RG$.
\begin{defn}\label{defTN}
Let $TN$ be a symmetric monoidal category with left cancellations whose
objects are trivalent nets and whose morphisms are generated over
${\cal T\cal N}$ by the following morphisms.
For an edge $A$ of a trivalent net $\Gamma$ there is a twist
automorphism $Tw_A:\Gamma \to \Gamma$.
For a 3-vertex $a\in \Gamma$ and two incident edges $B,C$ there is
a braiding morphism $_aBr_{BC}: \Gamma \to \Gamma'$, denoted also $_ABr_{BC}$
or $Br_{BC}$ if there is no ambiguity, where $\Gamma'$ is obtained from
$\Gamma$ by a surgery cutting edges $B$ and $C$ and reglueing the ends
crosswise. $\Gamma'$ is isomorphic to $\Gamma$ with the changed cyclic
orientation at $a$. Locally it looks like
\[ _aBr_{BC}: \triup ABC \hbox to 30pt {\rightarrowfill} \triup ACB \]
For each loop-edge $A$ of $\Gamma$ there is a switch automorphism
$S_A:\Gamma \to \Gamma$, e.g.
\[ S_A: \tennis {}A @>>> \tennis {}A .\]
The morphisms are subject to the following relations. For any glueing $g$
\begin{equation}\label{gTw=TN}
\begin{CD}
\Gamma_1 @>Tw_A>> \Gamma_1 \\
@VgVV @VVgV \\
\Gamma_2 @>Tw_{g(A)}>> \Gamma_2
\end{CD}
\end{equation}
\begin{equation}
\begin{CD}
\Gamma_1 @>_aBr_{BC}>> \Gamma_1' \\
@VgVV @VVg'V \\
\Gamma_2 @>_{g(a)}Br_{g(B),g(C)}>> \Gamma_2'
\end{CD}
\end{equation}
\begin{equation}\label{gS=TN}
\begin{CD}
\Gamma_1 @>S_A>> \Gamma_1 \\
@VgVV @VVgV \\
\Gamma_2 @>S_{g(A)}>> \Gamma_2
\end{CD}
\end{equation}
commute. Also
\begin{equation}\label{Br2=3TwTN}
Br^2_{BC} = Tw_A\ Tw_B^{-1} \ Tw_C^{-1} : \triup ABC \hbox to 20pt {\rightarrowfill} \triup ABC
\end{equation}
\begin{equation}\labl{TwA=BrBrTN}
Tw_A^{-1} = \left( \triup CAB \buildrel Br_{AB}\over\hbox to 30pt {\rightarrowfill} \triup CBA
\buildrel Br_{AC}\over\hbox to 30pt {\rightarrowfill} \triup ABC @>rot>> \triup CAB \right)
\end{equation}
\begin{equation}\labl{TwB=BrBrTN}
Tw_B^{-1}= \left( \triup CAB \buildrel Br_{AB}\over\hbox to 30pt {\rightarrowfill} \triup CBA
\buildrel Br_{CB}\over\hbox to 30pt {\rightarrowfill} \triup BCA @>rot^{-1}>> \triup CAB \right)
\end{equation}
(here $rot$ is the rotation isomorphism),
\begin{equation}\labl{hexagon}
\begin{array}{ccccc}
\tarahor XABC{} & \stackrel{Br_{BC}^{\pm1}}{\hbox to 45pt {\rightarrowfill}} & \tarahor XACB{} &
\stackrel{fus}{\hbox to 45pt {\rightarrowfill}} & \taraver XACB{} \\
fus \bigg\downarrow \qquad &&&& \qquad \bigg\downarrow Br_{AC}^{\pm1} \\
\taraver XABCY & \stackrel{Br_{YC}^{\pm1}}{\hbox to 45pt {\rightarrowfill}} & \tarahor XCABY &
\stackrel{fus}{\hbox to 45pt {\rightarrowfill}} & \taraver XCAB{}
\end{array}
\end{equation}
\begin{equation}\labl{ST3=S2TN}
(S_M Tw_M)^3 =S^2_M : \tennis {}M \hbox to 20pt {\rightarrowfill} \tennis {}M ,
\end{equation}
\begin{equation}\labl{S2=Br-1Tw-1TN}
\begin{array}{ccc}
\tennis XM & \buildrel S_M^2\over\hbox to 45pt {\rightarrowfill} & \tennis XM \\
_XBr_{MM}^{-1}\bigg\downarrow\qquad & & \quad \bigg\Vert \\
\tennis XM & \buildrel Tw_M^{-1}\over\hbox to 45pt {\rightarrowfill} & \tennis XM
\end{array}
\end{equation}
and \eqref{mainSdiagTN} (see Figure~\ref{A-2net}) are satisfied.
\begin{figure}[htbp]
\begin{equation}\label{mainSdiagTN}
\begin{array}{ccc}
\celodown YXAN & \stackrel{Br_{XY}}{\hbox to 100pt {\rightarrowfill}} & \celodown XYAN \\
S_N^{-1}\bigg\downarrow\quad && \quad\bigg\downarrow rot \\
\celodown YXAM && \celoup YXAN \\
fus_A\bigg\downarrow\qquad && \qquad\bigg\downarrow fus_A \\
\sun YXLM && \sun YXNP \\
\nquad\nquad Tw^{-1}_L\ Tw_M\bigg\downarrow\qquad\quad &&\qquad\bigg\downarrow fus_N\\
\sun YXLM & \stackrel{fus_L}{\hbox to 30pt {\rightarrowfill}} \celodown YXBM \stackrel{S_M}{\hbox to 30pt {\rightarrowfill}} &
\celodown YXBP
\end{array}
\end{equation}
\caption{A relation for $A_2$ net\label{A-2net}}
\end{figure}
\end{defn}
It is clear that the functor $fat: {\cal T\cal N} \to \RG$ extends to a functor
$fat: TN \to RG$.
\begin{thm}\labl{fatTNRG}
The functor $fat:TN\to RG$ is an equivalence of $TN$ with a full subcategory
$RG_+$ of ribbon graphs satisfying condition \eqref{g>n>}.
\end{thm}
\begin{pf} A quasi-inverse functor $\Phi: \RG_+ \to {\cal T\cal N}$ is
constructed as follows. For any ribbon graph $X$ choose a marking $M_X$ and
set $\Phi(X)$ to be
the trivalent net $N(M_X)$ associated with $M_X$. Let $g:X\to Y$ be a
glueing, then there is a marking $g^*(M_X)$ of $Y$ consisting of $g(M_X)$
and images of those boundary intervals of $X$ which are glued by $g$. To $g$
corresponds the morphism
\[ \Phi(X) = N(M_X) @>g'>> N(g^*(M_X)) @>f>> N(M_Y) = \Phi(Y) ,\]
where $g'$ is a canonical glueing, and $f$ is a composition of fusings,
transforming $g^*(M_X)$ into $M_Y$.
The functor $\Phi$ extends to a functor $\Phi: RG_+ \to TN$. Indeed, take a
map $\gamma :[0,1] \hookrightarrow X$, or $h: (D_3,B_3) \hookrightarrow X$,
or $j:(A_1,B_1) \hookrightarrow X$ which determines a morphism $Tw_\gamma$,
$Br_h$, or $S_j$ as in Definition \ref{defrib}. There exists a marking $M$ of
$X$ containing $\gamma([0,1])$, or $h(B_3)$, or $j(B_1)$ and a composition
$f$ of fusing moves relating $M_X$ and $M$. Set
$\Phi(Tw_\gamma) = f\ Tw_A\ f^{-1}$, or $\Phi(S_j) = f\ S_A\ f^{-1}$ for $A$
from $M$ determined by $\gamma$ or $h$ and similarly for $Br_h$. The morphism
$f\in {\cal T\cal N}$ is unique and a different choice of marking $M$, say $M'$, will
give $f'= fg$, where $g$ is a composition of fusing moves $fus_a$ with
$a\ne \gamma([0,1])$, $a\not\subset h(B_3)$, $a\ne j(B_1)$. Hence,
$\Phi(Tw_\gamma)$, $\Phi(Br_h)$, $\Phi(S_j)$ are correctly defined morphisms
of $TN$.
The relations \eqref{gTw=TN}--\eqref{gS=TN} follow from the relations
\eqref{gTw=Twg}--\eqref{gS=Sg}, and $\Phi$ applied to
\eqref{Br2=3TwTN}--\eqref{mainSdiagTN} is just
\eqref{Br=TwTwTw}--\eqref{A2mainS}. Thus the functor $\Phi:RG_+\to TN$ is
constructed, and one can easily see that it is quasi-inverse to
$fat:TN \to RG_+$.
\end{pf}
\sEction{A category of nets}
\subsEction{Nets}
Now we consider nets of valency at most 3.
\begin{defn}
Let a net be a 1-complex $\Gamma$ with the set of vertices
$V_3\sqcup V_2 \sqcup V_1\sqcup B$ and the set of edges $E$. The elements of
$V_i$, called $i$-vertices ($i=1,2,3$), occur $i$ times as endpoints of
edges; the elements of $B$, called ends, occur once as an endpoint of an
edge. We assume that each edge has at least one endpoint in
$V_3\sqcup V_2 \sqcup V_1$. A net $\Gamma$ is equipped with a labeling of
ends $B\to \CC$, and for each 3-vertex a cyclic order on the set of incident
edges is chosen.
\end{defn}
Denote by $\CN_0$ the symmetric monoidal category of nets with glueings as
morphisms. Glueings are defined similarly to the trivalent case, with the
requirement that a glueing is bijective on $i$-vertices, $i=1,2,3$.
\begin{defn}
Let $\CN$ be a symmetric monoidal category with left cancellations which has
nets as objects and whose morphisms are generated over $\CN_0$ by the
following morphisms:
the insertion $ins_a:\Gamma\to \Gamma'$ is an isomorphism of $\Gamma$ with
$\Gamma'$ taken with a 1-vertex $a\in\Gamma'$ and the incident edge deleted,
or with a 2-vertex $a\in\Gamma'$ deleted and the two incident edges combined
into one;
the deletion $del_a:\Gamma'\to \Gamma$ is the inverse morphism.
These morphisms are subject to the following relations. Whenever the diagram
\begin{equation}\label{gins=N}
\begin{CD}
\Gamma_1 @>ins_a>> \Gamma_1' \\
@VgVV @VVg'V \\
\Gamma_2 @>ins_{g'(a)}>> \Gamma_2'
\end{CD}
\end{equation}
commutes as a diagram of mappings of 1-complexes, it commutes also in $\CN$.
Also the diagrams
\begin{equation}
\begin{CD}
\edgepoint a @>ins_b>> \lezhakpt ab \\
@Vins_cVV @VVins_cV \\
\lezhakpt ac @>ins_b>>
{\unitlength=0.75pt
\makebox[39 pt][l]{
\raisebox{3 pt}[19 pt][13 pt]{
\put(22,0){\line(-1,0){23}}
\put(22,0){\line(3,-5){12}}
\put(34,-20){\circle*{4}}
\put(22,0){\line(3,5){12}}
\put(34,20){\circle*{4}}
\put(22,0){\circle*{4}}
\put(2,4){$a$}
\put(35,-17){$b$}
\put(35,10){$c$}
}}}
\end{CD}
\end{equation}
\begin{equation}\labl{ins12ver}
\begin{CD}
\pointedge a @>ins_b>> \ptlezhak ba \\
@Vins_cVV \nquad\nquad\nqquad \nearrow \phi \qquad\quad \\
\ptlezhak ac
\end{CD}
\end{equation}
where $\phi$ is the isomorphism of nets,
\begin{equation}\label{ins2pointN}
\begin{CD}
\lezhak{}a @>ins_b>> \lintwopt ba \\
@Vins_cVV \nquad\nquad\nqquad \nearrow \psi \qquad\quad\qquad \\
\lintwopt ac
\end{CD}
\end{equation}
where $\psi$ is the isomorphism of nets sending $a \mapsto b$, $c \mapsto a$,
are all commutative.
Finally, the isomorphism
\begin{equation}\label{rot=1}
rot: \twopointedge \hbox to 20pt {\rightarrowfill} \twopointedge
\end{equation}
interchanging the vertices is set equal to the identity morphism.
\end{defn}
Let $\CN_+$ be the full subcategory of those nets for which the genus $g$
and the number of ends $n$ of each component satisfy \eqref{g>n>}. There is
an obvious inclusion functor $inc: {\cal T\cal N}_0 \to \CN_+$. Let us
construct a functor $tri: \CN_+ \to {\cal T\cal N}_0$. Take a net $\Gamma$.
Denote by $red_1(\Gamma)$ the net $\Gamma$ with all 1-vertices and incident
edges deleted. Repeat the reduction until $\Gamma'=(red_1)^k(\Gamma)$
contains no 1-vertices. Replace each connected string of 2-vertices in
$\Gamma'$, together with the incident edges, by a single edge. The obtained
net $tri(\Gamma)$ is trivalent. Any glueing
$g:\Gamma_1 \to \Gamma_2$ is bijective on 1-vertices, hence, induces a
glueing $red_1(\Gamma_1) \to red_1(\Gamma_2)$, therefore, a glueing
$\Gamma_1' \to \Gamma_2'$ which leads to a glueing
$tri(g): tri(\Gamma_1) \to tri(\Gamma_2)$. Any morphism $ins_a$, $del_a$ is
sent to identity by the functor $tri$.
\begin{thm}\labl{inctriCal}
The functors $inc: {\cal T\cal N}_0 \to \CN_+$ and
$tri: \CN_+ \to {\cal T\cal N}_0$ are
\end{thm}
\subsEction{Nets with fusing, braiding, twists and switches}
Now we add more morphisms to the category of nets.
\begin{defn}
Let $N$ be a symmetric monoidal category with left cancellations having
nets as objects and having morphisms generated over $\CN$ by
$fus_a:\Gamma \to\Gamma'$, $Tw_a: \Gamma \to \Gamma$,
$_aBr_{BC}:\Gamma \to\Gamma'$, $S_a:\Gamma \to\Gamma$ as in Definitions
\ref{deftrivalnet}, \ref{defTN}. We subject them to the relations of
Definitions \ref{deftrivalnet}, \ref{defTN} and the following. There are
equations
\begin{equation}\labl{TwA=TwB}
Tw_A = Tw_B : \lezhak AB \hbox to 20pt {\rightarrowfill} \lezhak AB ,
\end{equation}
\begin{equation}
Tw_a = 1 : \pointedge a \hbox to 20pt {\rightarrowfill} \pointedge a .
\end{equation}
The following diagrams commute
\begin{equation}
\unitlength=0.75pt
\begin{array}{ccccc}
{\makebox[33 pt]{
\raisebox{3 pt}[22.5 pt][16.5 pt]{
\put(-2,0){\line(5,-3){20}}
\put(-2,0){\line(5,3){20}}
\put(-2,0){\line(-5,3){20}}
\put(-2,0){\circle*{4}}
}}}
& @>ins>> &
{\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-19,0){\line(-3,5){12}}
\put(15,0){\line(3,-5){12}}
\put(15,0){\line(3,5){12}}
\put(-19,0){\line(1,0){34}}
\put(-19,0){\circle*{4}}
\put(15,0){\circle*{4}}
}}}
& @>ins>> &
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-19,0){\line(-3,5){12}}
\put(-19,0){\line(-3,-5){12}}
\put(-31,-20){\circle*{4}}
\put(15,0){\line(3,-5){12}}
\put(15,0){\line(3,5){12}}
\put(-19,0){\line(1,0){34}}
\put(-19,0){\circle*{4}}
\put(15,0){\circle*{4}}
}}}
\\
& ins\searrow &&& \quad \Big\downarrow fus \\
&&
{\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-2,-17){\line(5,-3){24}}
\put(-2,17){\line(5,3){24}}
\put(-2,17){\line(0,-1){34}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
}}}
& @>ins>> &
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-2,-17){\line(-5,-3){24}}
\put(-26,-31.6){\circle*{4}}
\put(-2,-17){\line(5,-3){24}}
\put(-2,17){\line(5,3){24}}
\put(-2,17){\line(0,-1){34}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
}}}
\end{array}
\end{equation}
\begin{equation}\labl{braidwith1N}
\begin{array}{rcl}
{\unitlength=0.75pt
\makebox[33 pt]{
\raisebox{3 pt}[22.5 pt][16.5 pt]{
\put(-2,0){\line(0,1){23}}
\put(-2,0){\line(-5,-3){20}}
\put(-22,-12){\circle*{4}}
\put(-2,0){\line(5,-3){20}}
\put(-2,0){\circle*{4}}
\put(-17,16){$C$}
\put(-21,-23){$A$}
\put(7,-23){$B$}
}}}
& @>Br_{AB}^{\pm1}>> & \triuprpt CBA \\
del_A\searrow\!\!\!\!\!\! && \!\!\!\!\!\! \swarrow del_A \\
& \stoyak CB &
\end{array}
\end{equation}
\end{defn}
Denote by $N_+$ the full subcategory of nets whose components satisfy the
inequalities \eqref{g>n>}.
\begin{thm}\labl{triNTN}
The functor $tri$ uniquely extends to a functor $tri:N_+ \to TN$
quasi-inverse to the inclusion functor $inc: TN\to N_+$.
\end{thm}
\subsEction{Nets compared with ribbon graphs}
Extend the functor $N_+ @>tri>> TN @>fat>> RG$ to a functor $fat:N\to RG$
by setting
\begin{align*}
fat(\twopointedge) &= \put(16,2){\circle{26}} \labl{2pt=disk} \\
fat(\pointedge A ) &= \diskone A , \\
fat(\lezhak AB ) &= \disktwo AB , \\
fat(\okrug) &= \cylindre . \labl{okrugcyl}
\end{align*}
The fattening of an insertion or deletion morphism is set to the unique
``identity'' homeomorphism of the ribbon graphs. Fattenings of other
generators are already defined.
\begin{thm} \label{fatNtoRG}
The functor $fat:N\to RG$ is an equivalence of categories.
\end{thm}
\sEction{A category of oriented nets}\label{oriented}
\subsEction{Oriented nets}
By an {\em oriented net} we mean a net $\Gamma$ equipped with such an
orientation of edges that each 2-vertex or 3-vertex is a source for some
edge and a target for some edge. Given an oriented net $\Gamma$ with
labeling $L:B\to\CC$, we define its desorientation $deso(\Gamma)$ as the
net $\Gamma$ with the labeling $L':B\to\CC$ constructed as follows:
$L'(b)=L(b)$ if the end $b$ is a target, and $L'(b)=L(b)\haj{{\ }}$ if the
end $b$ is a source of the incident external leg. A glueing of oriented
nets is an orientation-preserving map which is a glueing of the underlying
desoriented nets. In particular, only those external legs can be glued
which carry the same label from $\CC$ and of which one has the incident end
as a target, and the other as a source.
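For instance (an illustrative check of the definition): take the net with
one edge joining a 1-vertex to an end $b$ labeled $L(b)=X$. If the edge is
oriented towards $b$, then $b$ is a target and the desorientation keeps the
label, $L'(b)=X$; if the edge is oriented away from $b$, then $b$ is a
source and $L'(b)=X\haj{{\ }}$.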
\begin{defn}
Let $\ON$ be a symmetric monoidal category with left cancellations having
oriented nets as objects and morphisms generated over glueings by the
following new morphisms:
reversal $rev_a:\Gamma\to\Gamma'$, where $a$ is an internal edge of $\Gamma$
and $\Gamma'$ differs from $\Gamma$ by a reversed orientation of $a$;
dualising morphism $du_a:\Gamma\to\Gamma'$, where $a$ is an external leg of
$\Gamma$, $\Gamma'$ differs from $\Gamma$ by a reversed orientation of $a$,
and the labels $A,A'\in\CC$ attached to $a$ are dual to each other;
$ins_a:\Gamma\to\Gamma',\ del_a:\Gamma\to\Gamma'$, insertion and deletion
of a 2-vertex.
These morphisms are subject to relations
\[ins\cdot del=1, \quad del\cdot ins=1, \quad rev_a^2=1, \quad du_a^2=1. \]
For each glueing $g:\Gamma_1\to\Gamma_2$ the diagram \eqref{gins=N} and
\begin{equation}\labl{grev=ON}
\begin{CD}
\Gamma_1 @>rev_a>> \Gamma_1' \\
@VgVV @VVg'V \\
\Gamma_2 @>rev_{g(a)}>> \Gamma_2'
\end{CD}
\end{equation}
\begin{equation}\labl{gdu=ON}
\begin{CD}
\Gamma_1 @>du_a>> \Gamma_1' \\
@VgVV @VVg'V \\
\Gamma_2 @>du_{g(a)}>> \Gamma_2'
\end{CD}
\end{equation}
(if the external leg $a$ is not glued by $g$) commute. The diagrams
\begin{equation}
\begin{CD}
\lezhakright{}a @>ins_b>> \lintwoptright ba \\
@Vins_cVV \nquad\nquad\nqquad \nearrow \psi \qquad\quad\qquad \\
\lintwoptright ac
\end{CD}
\end{equation}
\begin{equation}\label{gluerev}
\begin{CD}
\trirightr XAB \trileftr XCD @>du_X\ du_X>>
\trirightl {X\haj{{\ }}}AB \trileftl {X\haj{{\ }}}CD \\
@Vg'VV @VVg''V \\
\tarahorr ABCD{} @>rev>> \tarahorl ABCD{}
\end{CD}
\end{equation}
\begin{equation}\label{1ptdudurev}
\begin{CD}
\pointedger X \ \trileftr XAB @>du_X\ du_X>>
\pointedgel {X\haj{{\ }}} \ \trileftl {X\haj{{\ }}}AB \\
@Vg'VV @VVg''V \\
{\unitlength=0.75pt
\makebox[39 pt][l]{
\raisebox{3 pt}[19 pt][13 pt]{
\put(22,0){\line(-1,0){23}}
\put(12,0){\vector(1,0){0}}
\put(-1,0){\circle*{4}}
\put(22,0){\line(3,-5){12}}
\put(22,0){\line(3,5){12}}
\put(22,0){\circle*{4}}
\put(35,-17){$A$}
\put(35,10){$B$}
}}}
@>rev>>
{\unitlength=0.75pt
\makebox[39 pt][l]{
\raisebox{3 pt}[19 pt][13 pt]{
\put(22,0){\line(-1,0){23}}
\put(10,0){\vector(-1,0){0}}
\put(-1,0){\circle*{4}}
\put(22,0){\line(3,-5){12}}
\put(22,0){\line(3,5){12}}
\put(22,0){\circle*{4}}
\put(35,-17){$A$}
\put(35,10){$B$}
}}}
\end{CD}
\end{equation}
\begin{equation}\label{2ptdudurev}
\begin{CD}
\pointedger X \edgepointright X @>du_X\ du_X>>
\pointedgel {X\haj{{\ }}}
{\unitlength=0.75pt
\makebox[22 pt][l]{
\raisebox{3 pt}[16 pt][2 pt]{
\put(2,0){\line(1,0){20}}
\put(10,0){\vector(-1,0){0}}
\put(22,0){\circle*{4}}
\put(10,4){$X\haj{{\ }}$}
}}}
\\
@Vg'VV @VVg''V \\
\twopointedger @>rev>> \twopointedgel
\end{CD}
\end{equation}
\begin{equation}\label{dddddd}
\begin{array}{ccccc}
&& \triupioo CAB && \\
& du_A \nearrow && \searrow du_B & \\
\triupioi C{A\haj{{\ }}}B &&&& \triupiio CA{B\haj{{\ }}} \\
du_C\Big\uparrow\quad &&&& \quad\Big\downarrow du_C \\
\triupooi {C\haj{{\ }}}{A\haj{{\ }}}B &&&& \triupoio {C\haj{{\ }}}A{B\haj{{\ }}} \\
& du_B \nwarrow && \swarrow du_A & \\
&& \triupoii {C\haj{{\ }}}{A\haj{{\ }}}{B\haj{{\ }}}
\end{array}
\end{equation}
also commute.
Finally, the morphism
\[ rev: a \twopointedger b\ \hbox to 30pt {\rightarrowfill}\ a \twopointedgel b \]
equals the isomorphism of graphs which sends $a$ to $b$ and $b$ to $a$.
\end{defn}
\begin{notation}
Both morphisms $rev_a$ and $du_a$ will sometimes be denoted $Du_a$.
\end{notation}
Denote by $\CN'\subset \CN$ the category of nets $\CN_0$ extended by the
morphisms $ins_a, del_a$ of insertion and deletion of a 2-vertex, with the
relations \eqref{gins=N}, \eqref{ins2pointN}, \eqref{rot=1}.
There is a desorienting functor $deso:\ON\to \CN'$ which sends
$rev_a,du_a:\Gamma\to\Gamma'$ to $\operatorname{id}_{deso(\Gamma)}$.
\begin{thm}\labl{desoONN'}
The functor $deso:\ON\to\CN'$ is an equivalence of categories.
\end{thm}
\subsEction{Oriented nets with fusing, twists, braiding and switches}
Now we add new morphisms to $\ON$. The obtained category $ON$ will be
equivalent to $N$.
\begin{defn}
Let $ON$ be a symmetric monoidal category with left cancellations whose
objects are oriented nets and whose morphisms are generated over $\ON$ by
the following isomorphisms:
$ ins_a:\Gamma\to\Gamma',\ del_a:\Gamma'\to\Gamma $
where $\Gamma'$ differs from $\Gamma$ by a 1-vertex $a$ and an incident edge;
$ fus_a:\Gamma\to\Gamma', $
where a neighbourhood of $a$ looks like
\[ fus_a: \tarahoriooor {}{}{}{}a \hbox to 30pt {\rightarrowfill} \taraverioood {}{}{}{}{} \]
(only part of $\Gamma,\Gamma'$ is drawn here; the complements of it in
$\Gamma$ and $\Gamma'$ are the same). It is important that the cyclic
ordering at all 3-vertices is coherent here with the orientation of the
plane of the drawing;
$Tw_a:\Gamma\to\Gamma$ for an edge $a$ of $\Gamma$;
$_aBr_{BC}:\Gamma\to\Gamma'$ for a 3-vertex $a\in\Gamma$ and two incident
edges $B,C$ with sources in $a$. The net $\Gamma'$ is obtained from $\Gamma$
as in the unoriented case. Locally the morphism looks like
\[ _aBr_{BC}: \triupioo ABC \hbox to 30pt {\rightarrowfill} \triupioo ACB ; \]
$S_a:\Gamma\to\Gamma$ for a loop-edge $a\in\Gamma$ looking like
\[ S_a: \tennisu {}a \hbox to 30pt {\rightarrowfill} \tennisu {}a .\]
They are subject to the following relations. For any glueing
$g:\Gamma_1\to\Gamma_2$, whenever the upper row in the diagrams
\eqref{gins=N}, \eqref{gfus=fusg}, \eqref{gTw=TN}--\eqref{gS=TN} is defined,
the whole diagram is defined and commutes. The following diagrams commute
\begin{equation}
\begin{CD}
\lezhakright{}{} @>ins>> \torchokur{}{} \\
@| @VVrevV \\
\lezhakright{}{} @>ins>> \torchokdr{}{}
\end{CD}
\end{equation}
\begin{equation}
\begin{array}{ccccc}
\lezhakright ab & @>Ins>> \torchokur ab @>Du_b>>
&
{\unitlength=0.75pt
\makebox[36 pt][l]{
\raisebox{-4 pt}[15 pt][7.5 pt]{
\put(0,0){\line(1,0){40}}
\put(12,0){\vector(1,0){0}}
\put(28,0){\vector(-1,0){0}}
\put(20,0){\line(0,1){20}}
\put(20,12){\vector(0,1){0}}
\put(20,20){\circle*{4}}
\put(20,0){\circle*{4}}
\put(3,4){$a$}
\put(28,4){$b$}
}}}
& @>Du_a>> & \torchokul ab
\\
Del \bigg\downarrow\quad &&&& \quad\bigg\downarrow Del \\
\begin{picture}(25,4)
\put(0,3){\line(1,0){20}}
\put(12,3){\vector(1,0){0}}
\end{picture}
& @>{d\text{ or }rev}>> &
\begin{picture}(25,4)
\put(0,3){\line(1,0){20}}
\put(8,3){\vector(-1,0){0}}
\end{picture}
& @>Ins>> &
{\unitlength=0.75pt
\makebox[36 pt][l]{
\raisebox{3 pt}[16 pt][2 pt]{
\put(0,0){\line(1,0){40}}
\put(8,0){\vector(-1,0){0}}
\put(28,0){\vector(-1,0){0}}
\put(20,0){\circle*{4}}
\put(3,4){$a$}
\put(28,4){$b$}
}}}
\end{array}
\end{equation}
\begin{equation}
\begin{CD}
\edgepointright a @>ins_b>> \lezhakptright ab \\
@Vins_cVV @VVins_cV \\
\lezhakptright ac @>ins_b>>
{\unitlength=0.75pt
\makebox[39 pt][l]{
\raisebox{3 pt}[19 pt][13 pt]{
\put(22,0){\line(-1,0){23}}
\put(13,0){\vector(1,0){0}}
\put(22,0){\line(3,-5){12}}
\put(34,-20){\circle*{4}}
\put(28,-10){\vector(2,-3){0}}
\put(22,0){\line(3,5){12}}
\put(34,20){\circle*{4}}
\put(28,10){\vector(2,3){0}}
\put(22,0){\circle*{4}}
\put(2,4){$a$}
\put(35,-17){$b$}
\put(35,10){$c$}
}}}
\end{CD}
\end{equation}
\begin{equation}
\begin{CD}
\pointedger a @>ins_b>> \ptlezhakright ba \\
@Vins_cVV \nquad\nquad\nqquad \nearrow \phi\qquad\quad \\
\ptlezhakright ac
\end{CD}
\end{equation}
(where $\phi$ is the isomorphism of oriented nets)
\begin{equation}\labl{TwforI}
Tw_a = 1 : \pointedger a \hbox to 20pt {\rightarrowfill} \pointedger a .
\end{equation}
and similar diagrams with opposite orientation of edges also commute.
The following diagrams commute
\begin{equation}
\unitlength=0.75pt
\begin{array}{ccccc}
{\makebox[33 pt]{
\raisebox{3 pt}[22.5 pt][16.5 pt]{
\put(-2,0){\line(5,-3){20}}
\put(8,-6){\vector(3,-2){0}}
\put(-2,0){\line(5,3){20}}
\put(8,6){\vector(3,2){0}}
\put(-2,0){\line(-5,3){20}}
\put(-12,6){\vector(-3,2){0}}
\put(-2,0){\circle*{4}}
}}}
& @>ins>> &
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-19,0){\line(-3,5){12}}
\put(-25,10){\vector(2,-3){0}}
\put(15,0){\line(3,-5){12}}
\put(21,-10){\vector(2,-3){0}}
\put(15,0){\line(3,5){12}}
\put(21,10){\vector(2,3){0}}
\put(-19,0){\line(1,0){34}}
\put(-2,0){\vector(1,0){0}}
\put(-19,0){\circle*{4}}
\put(15,0){\circle*{4}}
}}}
& @>ins>> &
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-19,0){\line(-3,5){12}}
\put(-25,10){\vector(2,-3){0}}
\put(-19,0){\line(-3,-5){12}}
\put(-31,-20){\circle*{4}}
\put(-25,-10){\vector(-2,-3){0}}
\put(15,0){\line(3,-5){12}}
\put(21,-10){\vector(2,-3){0}}
\put(15,0){\line(3,5){12}}
\put(21,10){\vector(2,3){0}}
\put(-19,0){\line(1,0){34}}
\put(-2,0){\vector(1,0){0}}
\put(-19,0){\circle*{4}}
\put(15,0){\circle*{4}}
}}}
\\
& ins\searrow &&& \quad \Big\downarrow fus \\
&&
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(8,-23){\vector(3,-2){0}}
\put(-2,17){\line(5,3){24}}
\put(8,23){\vector(3,2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,-1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
}}}
& @>ins>> &
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(3,-2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-26,-31.6){\circle*{4}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(8,-23){\vector(3,-2){0}}
\put(-2,17){\line(5,3){24}}
\put(8,23){\vector(3,2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,-1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
}}}
\end{array}
\end{equation}
\begin{equation}
\unitlength=0.75pt
\begin{array}{ccccc}
{\makebox[33 pt]{
\raisebox{3 pt}[22.5 pt][16.5 pt]{
\put(-2,0){\line(-5,3){20}}
\put(-12,6){\vector(3,-2){0}}
\put(-2,0){\line(-5,-3){20}}
\put(-12,-6){\vector(-3,-2){0}}
\put(-2,0){\line(5,-3){20}}
\put(8,-6){\vector(3,-2){0}}
\put(-2,0){\circle*{4}}
}}}
& @>ins>> &
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-19,0){\line(-3,5){12}}
\put(-25,10){\vector(2,-3){0}}
\put(-19,0){\line(-3,-5){12}}
\put(-25,-10){\vector(-2,-3){0}}
\put(15,0){\line(3,-5){12}}
\put(21,-10){\vector(2,-3){0}}
\put(-19,0){\line(1,0){34}}
\put(-2,0){\vector(1,0){0}}
\put(-19,0){\circle*{4}}
\put(15,0){\circle*{4}}
}}}
& @>ins>> &
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-19,0){\line(-3,5){12}}
\put(-25,10){\vector(2,-3){0}}
\put(-19,0){\line(-3,-5){12}}
\put(-25,-10){\vector(-2,-3){0}}
\put(15,0){\line(3,-5){12}}
\put(21,-10){\vector(2,-3){0}}
\put(15,0){\line(3,5){12}}
\put(27,20){\circle*{4}}
\put(21,10){\vector(2,3){0}}
\put(-19,0){\line(1,0){34}}
\put(-2,0){\vector(1,0){0}}
\put(-19,0){\circle*{4}}
\put(15,0){\circle*{4}}
}}}
\\
& ins\searrow &&& \quad \Big\downarrow fus \\
&&
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(3,-2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(8,-23){\vector(3,-2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,-1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
}}}
& @>ins>> &
{\unitlength=0.75pt
\makebox[45 pt]{
\raisebox{3 pt}[27 pt][21 pt]{
\put(-2,17){\line(-5,3){24}}
\put(-12,23){\vector(3,-2){0}}
\put(-2,-17){\line(-5,-3){24}}
\put(-12,-23){\vector(-3,-2){0}}
\put(-2,-17){\line(5,-3){24}}
\put(8,-23){\vector(3,-2){0}}
\put(-2,17){\line(5,3){24}}
\put(22,31.6){\circle*{4}}
\put(8,23){\vector(3,2){0}}
\put(-2,17){\line(0,-1){34}}
\put(-2,0){\vector(0,-1){0}}
\put(-2,17){\circle*{4}}
\put(-2,-17){\circle*{4}}
}}}
\end{array}
\end{equation}
\begin{equation}\labl{ins4fus3}
\unitlength=0.75pt
\begin{array}{ccccc}
{\makebox[33 pt]{
\raisebox{3 pt}[22.5 pt][16.5 pt]{
\put(-2,0){\line(-5,-3){20}}
\put(-12,-6){\vector(-3,-2){0}}
\put(-2,0){\line(5,3){20}}
\put(8,6){\vector(3,2){0}}
\put(-2,0){\line(-5,3){20}}
\put(-12,6){\vector(3,-2){0}}
\put(-2,0){\circle*{4}}
}}}
& @>ins>> & \tarahoriobor{}{}{}{}{} & @>ins>> &
\tarahoriooorptdr {}{}{}{}{} \\
& ins\searrow &&& \quad \Big\downarrow fus \\
&& \taraveriobod {}{}{}{}{} & @>ins>> & \taraveriooodptdr {}{}{}{}{}
\end{array}
\end{equation}
\begin{equation}\label{fus1?}
\begin{array}{ccccccc}
\tarahoriooor ABCD{} & \buildrel d_C\over\hbox to 20pt {\rightarrowfill} & \tarahorioior AB{C\haj{{\ }}}D{}
& \buildrel rev\over\hbox to 20pt {\rightarrowfill} & \tarahorioiol AB{C\haj{{\ }}}D{}
& \buildrel d_A\over\hbox to 20pt {\rightarrowfill} & \tarahorooiol {A\haj{{\ }}}B{C\haj{{\ }}}D{} \\
Fus\bigg\downarrow\quad &&&&&& \quad\bigg\downarrow Fus \\
\taraverioood ABCD{} & \buildrel d_C\over\hbox to 20pt {\rightarrowfill} & \taraverioiod AB{C\haj{{\ }}}D{}
& \buildrel rev\over\hbox to 20pt {\rightarrowfill} & \taraverioiou AB{C\haj{{\ }}}D{}
& \buildrel d_A\over\hbox to 20pt {\rightarrowfill} & \taraverooiou {A\haj{{\ }}}B{C\haj{{\ }}}D{}
\end{array}
\end{equation}
\begin{equation}\label{fus2?}
\begin{array}{ccccc}
\!\!\!\!\!\! \tarahoroooil ABCD{} & \buildrel d_A\over\hbox to 20pt {\rightarrowfill}
\tarahoriooil {A\haj{{\ }}}BCD{} & \buildrel rev\over\hbox to 45pt {\rightarrowfill} &
\tarahoriooir {A\haj{{\ }}}BCD{} \buildrel d_D\over\hbox to 20pt {\rightarrowfill}
& \tarahoriooor {A\haj{{\ }}}BC{D\haj{{\ }}}{} \\
\!\!\!\!\!\! Fus \bigg\uparrow\quad &&&& \quad\bigg\downarrow Fus \\
\!\!\!\!\!\! \taraveroooid ABCD{} & \buildrel d_A\over\hbox to 60pt {\leftarrowfill} &
\taraveriooid {A\haj{{\ }}}BCD{}
& \buildrel d_D\over\hbox to 60pt {\leftarrowfill} & \taraverioood {A\haj{{\ }}}BC{D\haj{{\ }}}{}
\end{array}
\end{equation}
\begin{equation}\labl{TwA=TwB_or}
Tw_A = Tw_B : \lezhakright AB \hbox to 20pt {\rightarrowfill} \lezhakright AB ,
\end{equation}
\begin{figure}[htb]
\begin{equation}\label{pentagon_or}
\unitlength=0.75mm
\begin{picture}(160,131)
\put(80,131){\line(2,-3){8}}
\put(88,119){\line(5,-2){12}}
\put(88,119){\circle*{2}}
\put(88,119){\line(-2,-1){14}}
\put(74,112){\line(1,-2){5.67}}
\put(79.67,101){\line(5,-3){11.33}}
\put(74,112){\circle*{2}}
\put(80,100){\circle*{2}}
\put(80,100){\line(-2,-1){11}}
\put(74,112){\line(-5,1){14}}
\put(20,91){\line(3,-5){7.67}}
\put(27.67,78){\line(5,-2){12.33}}
\put(28,78){\circle*{2}}
\put(28,78){\line(-1,-3){4.33}}
\put(23.67,65){\line(-4,1){12.67}}
\put(11,68){\line(-2,1){11}}
\put(11,68){\circle*{2}}
\put(23,65){\circle*{2}}
\put(23,65){\line(2,-3){8}}
\put(11,68){\line(-1,-3){5}}
\put(140,91){\line(-3,-5){7.67}}
\put(132.33,78){\line(3,-2){13.67}}
\put(146,69){\line(-4,-5){7.33}}
\put(138.67,60){\line(5,-3){12.33}}
\put(133,78){\circle*{2}}
\put(133,78){\line(-5,-2){13}}
\put(146,69){\circle*{2}}
\put(146,69){\line(3,1){13}}
\put(139,60){\circle*{2}}
\put(139,60){\line(-3,-2){10}}
\put(115,36){\line(-3,-5){7.33}}
\put(107.67,24){\line(-3,-1){12.67}}
\put(108,24){\circle*{2}}
\put(108,24){\line(1,-3){4}}
\put(112,12){\line(-5,-6){9}}
\put(112,12){\circle*{2}}
\put(112,12){\line(5,1){14}}
\put(126,14.67){\line(5,3){9}}
\put(126,15){\circle*{2}}
\put(126,15){\line(1,-5){2.67}}
\put(45,36){\line(0,-1){13}}
\put(45,23){\line(-6,-5){10}}
\put(45,23){\circle*{2}}
\put(45,23){\line(6,-5){9}}
\put(54,15.67){\line(1,-5){3}}
\put(54,15){\circle*{2}}
\put(54,15){\line(2,1){11}}
\put(35,15){\circle*{2}}
\put(35,15){\line(-2,1){10}}
\put(35,15){\line(-1,-5){2.67}}
\put(70,14){\vector(1,0){20}}
\put(27,50){\vector(2,-3){11.33}}
\put(40,87){\vector(2,1){19}}
\put(104,97){\vector(2,-1){19}}
\put(126,33){\vector(3,4){12.67}}
\put(84,125){\vector(-2,3){0}}
\put(98,115){\vector(3,-1){0}}
\put(82,116){\vector(2,1){0}}
\put(69,113){\vector(4,-1){0}}
\put(78,104){\vector(1,-2){0}}
\put(88,96){\vector(3,-2){0}}
\put(74,97){\vector(-2,-1){0}}
\put(23,86){\vector(-2,3){0}}
\put(35,75){\vector(3,-1){0}}
\put(26,72){\vector(1,3){0}}
\put(29,56){\vector(2,-3){0}}
\put(20,66){\vector(4,-1){0}}
\put(8,59){\vector(-1,-3){0}}
\put(7,70){\vector(2,-1){0}}
\put(128,76){\vector(3,1){0}}
\put(137,86){\vector(2,3){0}}
\put(140,73){\vector(3,-2){0}}
\put(155,72){\vector(3,1){0}}
\put(142,64){\vector(-3,-4){0}}
\put(147,55){\vector(3,-2){0}}
\put(133,56){\vector(-3,-2){0}}
\put(33,5){\vector(-1,-4){0}}
\put(31,17){\vector(2,-1){0}}
\put(39,18){\vector(4,3){0}}
\put(45,31){\vector(0,1){0}}
\put(51,18){\vector(4,-3){0}}
\put(62,19){\vector(2,1){0}}
\put(56,6){\vector(1,-4){0}}
\put(102,22){\vector(3,1){0}}
\put(112,31){\vector(2,3){0}}
\put(110,18){\vector(1,-3){0}}
\put(107,6){\vector(-3,-4){0}}
\put(122,14){\vector(4,1){0}}
\put(133,19){\vector(2,1){0}}
\put(128,5){\vector(1,-4){0}}
\end{picture}
\end{equation}
\caption{The oriented pentagon equation\labl{5giraffe}}
\end{figure}
\begin{equation}\labl{TwDu=DuTw}
Tw_a Du_a = Du_a Tw_a
\end{equation}
\begin{equation}\labl{braidwith1}
\begin{array}{rcl}
\triupioolpt CAB & @>Br_{AB}^{\pm1}>> & \triupioorpt CBA \\
del_A\searrow\!\!\!\!\!\! && \!\!\!\!\!\! \swarrow del_A \\
& \stoyakd CB &
\end{array}
\end{equation}
\begin{equation}\label{b2ttt}
Br^2_{BC} = Tw_A\ Tw_B^{-1} \ Tw_C^{-1} : \triupioo ABC \hbox to 20pt {\rightarrowfill} \triupioo ABC
\end{equation}
\begin{equation}\label{t=bb1}
\begin{array}{ccccccc}
\triupioo CAB & \buildrel Br_{AB}\over\hbox to 30pt {\rightarrowfill} & \triupioo CBA &
\buildrel du_B\over\hbox to 30pt {\rightarrowfill} & \triupioi C{B\haj{{\ }}}A & \buildrel du_C\over\hbox to 30pt {\rightarrowfill}
& \triupooi{C\haj{{\ }}}{B\haj{{\ }}}A\\
Tw_A\bigg\uparrow \qquad &&&&&& \quad \bigg\downarrow Br_{A,C\haj{{\ }}}\!\!\!\!\!\! \\
\triupioo CAB &\buildrel rot\over\hbox to 30pt {\leftarrowfill} &\triupoio ABC
&\buildrel du_B\over\hbox to 30pt {\leftarrowfill} & \triupoii A{B\haj{{\ }}}C
& \buildrel du_C\over\hbox to 30pt {\leftarrowfill} & \triupooi A{B\haj{{\ }}}{C\haj{{\ }}}
\end{array}
\end{equation}
\begin{equation}\label{t=bb2}
\begin{array}{ccccccc}
\triupioo CAB & \buildrel Br_{AB}\over\hbox to 30pt {\rightarrowfill} & \triupioo CBA
& \buildrel du_A\over\hbox to 30pt {\rightarrowfill} & \triupiio CB{A\haj{{\ }}} & \buildrel du_C\over\hbox to 30pt {\rightarrowfill}
& \triupoio{C\haj{{\ }}}B{A\haj{{\ }}} \\
Tw_B \bigg\uparrow \qquad &&&&&& \quad \bigg\downarrow Br_{C\haj{{\ }},B} \!\!\!\!\!\! \\
\triupioo CAB&\stackrel{rot^{-1}}\hbox to 30pt {\leftarrowfill} &\triupooi BCA
&\buildrel du_A\over\hbox to 30pt {\leftarrowfill} & \triupoii BC{A\haj{{\ }}}
&\buildrel du_C\over\hbox to 30pt {\leftarrowfill} & \triupoio B{C\haj{{\ }}}{A\haj{{\ }}}
\end{array}
\end{equation}
\begin{equation}\label{hexagon_or}
\begin{array}{ccccc}
\tarahoriooor XABC{} & \stackrel{Br_{BC}^{\pm1}}{\hbox to 45pt {\rightarrowfill}} &
\tarahoriooor XACB{} & \stackrel{fus}{\hbox to 45pt {\rightarrowfill}} & \taraverioood XACB{} \\
fus \bigg\downarrow \qquad &&&& \qquad \bigg\downarrow Br_{AC}^{\pm1} \\
\taraverioood XABCY & \stackrel{Br_{YC}^{\pm1}}{\hbox to 45pt {\rightarrowfill}} &
\tarahoriooor XCABY & \stackrel{fus}{\hbox to 45pt {\rightarrowfill}} & \taraverioood XCAB{}
\end{array}
\end{equation}
\begin{equation}\label{ST3=S2_or}
(S_M Tw_M)^3 =S^2_M : \tennisu {}M \hbox to 20pt {\rightarrowfill} \tennisu {}M ,
\end{equation}
\begin{equation}\label{S2=Br-1Tw-1_or}
\begin{array}{ccc}
\tennisu XM & \buildrel S_M^2\over\hbox to 45pt {\rightarrowfill} & \tennisu XM \\
_XBr_{MM}^{-1}\bigg\downarrow\qquad\quad & & \qquad \bigg\uparrow rev \\
\tennisd XM & \buildrel Tw_M^{-1}\over\hbox to 45pt {\rightarrowfill} & \tennisd XM
\end{array}
\end{equation}
and \eqref{mainSdiag_or} (see Figure~\ref{orientA_2}).
\begin{figure}[htbp]
\begin{equation}\label{mainSdiag_or}
\begin{array}{ccc}
\celodownor YXAN & \stackrel{Br_{XY}}{\hbox to 100pt {\rightarrowfill}} & \celodownor XYAN \\
S_N^{-1}\bigg\downarrow\quad && \quad\bigg\downarrow rot \\
\celodownor YXAM &&
{\unitlength=0.75pt
\makebox[42 pt][l]{
\raisebox{-39 pt}[45 pt][39 pt]{
\put(24,76){\oval(40,40)[]}
\put(22,96){\vector(-1,0){0}}
\put(24,28){\line(0,1){28}}
\put(24,44){\vector(0,1){0}}
\put(24,28){\line(-5,-3){24}}
\put(14,22){\vector(3,2){0}}
\put(24,28){\line(5,-3){24}}
\put(34,22){\vector(-3,2){0}}
\put(24,28){\circle*{4}}
\put(24,56){\circle*{4}}
\put(5,5){$Y$}
\put(34,5){$X$}
\put(29,38){$A$}
\put(32,96){$N$}
}}}
\\
fus_A\bigg\downarrow\qquad && \qquad\bigg\downarrow fus_A \\
\sunor YXLM && \sunor YXNP \\
\nquad\nquad Tw^{-1}_L\ Tw_M\bigg\downarrow\qquad\quad &&\qquad\bigg\downarrow fus_N\\
\sunor YXLM & \stackrel{fus_L}{\hbox to 30pt {\rightarrowfill}} \celodownor YXBM
\stackrel{S_M}{\hbox to 30pt {\rightarrowfill}} & \celodownor YXBP
\end{array}
\end{equation}
\caption{A relation for $A_2$ oriented net\label{orientA_2}}
\end{figure}
\end{defn}
\begin{thm} \label{desoONtoN}
The functor $deso:ON\to N$ is an equivalence.
\end{thm}
\sEction{Extensions of the category of nets}\label{Extensions}
Let us look for central extensions of the category $RG$, or equivalently of
$N$ or $ON$. Introduce a new type of morphism---the central charge.
\begin{defn}
Let $C_\Gamma:\Gamma\to\Gamma$ for each connected oriented net $\Gamma$ be a
morphism commuting with other generators
$du$, $rev$, $ins$, $del$, $fus$, $Tw$, $Br$, $S$. We assume that
\begin{equation}\label{empty}
C_\O =\operatorname{id}_\O :\O \to\O .
\end{equation}
If $\Gamma_1$, $\Gamma_2$, $\Gamma$ are connected and
$g:\Gamma_1 \sqcup \Gamma_2 \to\Gamma$ is a glueing we assume a commutative
diagram
\begin{equation}\label{centralC}
\begin{CD}
\Gamma_1\sqcup\Gamma_2 @>C^k_{\Gamma_1}\sqcup C^l_{\Gamma_2}>>
\Gamma_1\sqcup\Gamma_2 \\
@VgVV @VVgV \\
\Gamma @>C_\Gamma^{k+l}>> \Gamma
\end{CD}
\end{equation}
\end{defn}
The diagram \eqref{centralC} together with \eqref{empty} implies a similar
diagram with $\Gamma_1\sqcup \dots \sqcup\Gamma_n$.
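Explicitly (a straightforward induction on the number of components): for a
glueing $g:\Gamma_1\sqcup \dots \sqcup\Gamma_n \to \Gamma$ of connected nets
into a connected net the diagram
\begin{equation*}
\begin{CD}
\Gamma_1\sqcup\dots\sqcup\Gamma_n
@>C^{k_1}_{\Gamma_1}\sqcup\dots\sqcup C^{k_n}_{\Gamma_n}>>
\Gamma_1\sqcup\dots\sqcup\Gamma_n \\
@VgVV @VVgV \\
\Gamma @>C_\Gamma^{k_1+\dots+k_n}>> \Gamma
\end{CD}
\end{equation*}
commutes.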
We look for extensions of $ON$ by the generators $C_\Gamma$, with the
relations between $du$, $rev$, $fus$, $Tw$, $Br$, $S$ changed by insertion
of some powers of $C_\Gamma$. We assume that the relations in which $ins$,
$del$ enter are not changed. Then the relations involving $du$, $rev$,
$fus$, $Tw$, $Br$ do not change either. Indeed, assume that $C^k$ is
inserted into the diagrams \eqref{pentagon_or}--\eqref{hexagon_or} and the
like, and glue these diagrams with as many $\pointedge{}$ as there are
external legs. The generators $fus$, $Tw$, $Br$ then turn into compositions
of $ins$ and $del$, and commutativity of such a diagram would imply $k=0$.
Also the diagram \eqref{mainSdiag_or} after such an operation becomes
$S=SC^k$, hence it does not change either.
Therefore, only the relations \eqref{ST3=S2_or} and \eqref{S2=Br-1Tw-1_or}
could change, to
\begin{equation}\label{(ST)3cen}
(S_M\,Tw_M)^3 = S_M^2 C^k ,
\end{equation}
\begin{equation}\label{SM2}
S_M^2 = Br^{-1}_{MM}\,Tw^{-1}_M\,rev\,C^m .
\end{equation}
Renormalizing $S$ to $SC^{-k}$ we can remove the central charge from
\eqref{(ST)3cen}. For the sake of symmetry we shall instead restore
\eqref{SM2} to its original form \eqref{S2=Br-1Tw-1_or}, changing
\eqref{(ST)3cen}, though this might require half-integer powers of $C$.
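To see that the renormalization works, note (a direct check, using
centrality of $C$): setting $S'_M=S_M\,C^{-k}$ we get
\begin{equation*}
(S'_M\,Tw_M)^3 = (S_M\,Tw_M)^3\,C^{-3k} = S_M^2\,C^{k}\,C^{-3k}
= S_M^2\,C^{-2k} = (S'_M)^2 ,
\end{equation*}
so the renormalized morphism satisfies the unmodified relation
\eqref{ST3=S2_or}.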
\begin{defn}
Let $EN$ be a symmetric monoidal category with left cancellations whose
objects are oriented nets and whose morphisms are generated over $\ON$ by
the isomorphisms $ins$, $del$, $fus$, $Tw$, $Br$, $S$, $C$ with all
relations of
the category $ON$ except \eqref{ST3=S2_or} which is substituted by
\begin{equation}\label{ST3=CS2}
(S_M\,Tw_M)^3 = C\,S^2_M : \tennisu {}M \hbox to 20pt {\rightarrowfill} \tennisu {}M
\end{equation}
and relations \eqref{empty}, \eqref{centralC}.
\end{defn}
In topology, to such an extension corresponds the category of framed
surfaces \cite{Wit:Jones}. Here we define a central extension
$ESur\!f\to Sur\!f$ as a pull-back of the central extension $EN\to ON$ along
the functor $Sur\!f\to RG \to N \to ON$ obtained in Theorems~\ref{StoRGthm},
\ref{fatNtoRG}, \ref{desoONtoN}. Thus the latter functor
extends to a functor $ESur\!f\to EN$.
\sEction{Ribbon categories}\label{intro}
A {\sl ribbon} (also {\sl tortile} \cite{Shu}) category is a braided
monoidal category $\CC$ \cite{JoyStr:tor} with the tensor
product $\otimes$, the associativity $a:X\otimes(Y\otimes Z)\to (X\otimes Y)\otimes Z$,
the braiding (commutativity) $c:X\otimes Y\to Y\otimes X$ and a unity object $I$,
such that $\CC$ is rigid (for any object $X\in\CC$ there are dual objects
$\haj{{\ }} X$ and $X\haj{{\ }}$ with evaluations $\operatorname{ev}:\haj{{\ }} X\otimes X\to I$,
$\operatorname{ev}:X\otimes X\haj{{\ }}\to I$ and coevaluations $\operatorname{coev}:I\to X\otimes\haj{{\ }} X$,
$\operatorname{coev}:I\to X\haj{{\ }}\otimes X$) and possesses a ribbon twist $\nu$. A ribbon
twist \cite{JoyStr:tor,Res:rib,Shu} $\nu=\nu_X:X\to X$ is a self-adjoint
($\nu_{X\haj{{\ }}}=\nu_X^t$) functorial automorphism such that
$c^2=(\nu_X^{-1}\otimes\nu_Y^{-1})\circ\nu_{X\otimes Y}$.
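(A standard example, included only for illustration: any symmetric rigid
monoidal category, e.g.\ finite dimensional $k$-vector spaces with the flip
as braiding, is ribbon with $\nu=\operatorname{id}$; there $c^2=\operatorname{id}$ and the above
identity holds trivially.)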
In a ribbon category there are functorial isomorphisms \cite{Lyu:tan}
\[
\unitlength=0.8mm
\linethickness{0.4pt}
\begin{picture}(146.33,35)
\put(23,18){\oval(10,10)[r]}
\put(23,20){\oval(6,6)[lt]}
\put(11,35){\makebox(0,0)[cc]{$X$}}
\put(11,1){\makebox(0,0)[cc]{$X\haj{{\ }}\pti$}}
\put(1,18){\makebox(0,0)[cc]{$u_1^2\ =$}}
\put(61,18){\oval(10,10)[r]}
\put(61,20){\oval(6,6)[lt]}
\put(49,35){\makebox(0,0)[cc]{$X$}}
\put(49,1){\makebox(0,0)[cc]{$X\haj{{\ }}\pti$}}
\put(39,18){\makebox(0,0)[cc]{, $u_{-1}^2\ =$}}
\put(92,20){\oval(6,6)[rt]}
\put(106,35){\makebox(0,0)[cc]{$X$}}
\put(106,1){\makebox(0,0)[cc]{$\haj{{\ }}\pti X$}}
\put(76,18){\makebox(0,0)[cc]{, $u_1^{-2}=$}}
\put(132,20){\oval(6,6)[rt]}
\put(146,35){\makebox(0,0)[cc]{$X$}}
\put(146,1){\makebox(0,0)[cc]{$\haj{{\ }}\pti X$}}
\put(116,18){\makebox(0,0)[cc]{, $u_{-1}^{-2}=$}}
\put(20,20){\line(-2,-3){9.33}}
\put(11,30){\line(2,-3){6.67}}
\put(58,16){\line(-2,3){9.33}}
\put(49,6){\line(3,5){6}}
\put(106,6){\line(-4,5){8}}
\put(135,20){\line(4,-5){11.33}}
\put(146,30){\line(-4,-5){8}}
\put(23,16.50){\oval(6,7)[lb]}
\put(61.50,16){\oval(7,6)[lb]}
\put(92,18){\oval(12,10)[l]}
\put(132,18){\oval(12,10)[l]}
\put(91.50,16){\oval(7,6)[rb]}
\put(95,16){\line(4,5){11.20}}
\put(131.50,16){\oval(7,6)[rb]}
\end{picture}
\]
\[u_0^2=u_1^2\circ\nu^{-1}=u_{-1}^2\circ\nu:X\to X\haj{{\ }}\pti,\qquad
u_0^{-2}=u_1^{-2}\circ\nu^{-1}=u_{-1}^{-2}\circ\nu:X\to\haj{{\ }}\pti X.\]
Replacing the category $\CC$ by an equivalent one, we can (and we will) assume
that ${}\haj{{\ }} X =X\haj{{\ }}$, $X\haj{{\ }}\pti = {}\haj{{\ }}\pti X = X$ and
$u_0^2 = u_0^{-2} = \operatorname{id}_X$ (see \cite{Lyu:tan}).
If in addition $\CC$ is additive, it is $k$-linear with $k=\operatorname{End} I$. We assume
in the following that $k$ is a field in which each element has a square
root. Often $\CC$ will be a noetherian abelian category with finite
dimensional $k$-vector spaces $\operatorname{Hom}_{\CC}(A,B)$. In such a case there
exists a coend
$F=\int X\otimes X\haj{{\ }}$ as an object of a cocompletion $\hat{\CC}$
\cite{Lyu:tan} of $\CC$. It is a Hopf algebra
(see \cite{Lyu:mod,LyuMaj,Maj:bra}).
There is a Hopf pairing $\omega:F\otimes F\to I$ \cite{Lyu:mod},
\[
\unitlength=1mm
\linethickness{0.4pt}
\begin{picture}(61,19)
\put(40,5){\line(0,1){4}}
\put(40,9){\line(4,5){7.33}}
\put(61,9){\line(0,1){9}}
\put(40,13){\line(-4,5){4}}
\put(28,19){\makebox(0,0)[cc]{$F$}}
\put(54,19){\makebox(0,0)[cc]{$F$}}
\put(5,10){\makebox(0,0)[cc]{$\omega\ =$}}
\put(32,9){\oval(24,18)[b]}
\put(45,10){\oval(32,20)[rb]}
\put(20,8){\line(0,1){10}}
\end{picture}
\]
such that
\[\operatorname{Ker}\omega = \operatorname{Ker}^{\text{left}}\omega =
\operatorname{Ker}^{\text{right}}\omega\in\hat{\CC}.\]
The quotient ${\bold f}=F/\operatorname{Ker}\omega\in\hat{\CC}$ is also a Hopf algebra and the
first modular axiom is \cite{Lyu:mod}
(M1) ${\bold f}$ is an object of $\CC$ (and not only of a cocompletion $\hat{\CC}$)
\noindent (more precisely, this means that there exists an exact sequence
$0\to\operatorname{Ker}\omega\to F\to {\bold f} \to 0$ in $\hat{\CC}$, where ${\bold f}$ is an
object from $\CC\subset\hat{\CC}$).
Being the coend $\int X\otimes X\haj{{\ }}$, the object $F\in \hat{\CC}$ has an
automorphism $\und{\nu\tens1} \overset{\text{def}}= \int\nu\tens1$ (notations
are from \cite{Lyu:mod}). The second modular axiom says \cite{Lyu:mod}
(M2) $\und{\nu\tens1}(\operatorname{Ker}\omega)\subset\operatorname{Ker}\omega$
\noindent (more precisely, there exist morphisms
$T':\operatorname{Ker}\omega\to\operatorname{Ker}\omega\in\hat{\CC}$,
$T:{\bold f}\to{\bold f}\in\CC$ such that the diagram
\[
\begin{array}{ccccrcrcc}
0 & \to & \operatorname{Ker}\omega & \to & F & \to & {\bold f} & \to & 0\\
&& T'\bigg\downarrow && \und{\nu\tens1} \bigg\downarrow &&
T\bigg\downarrow &&\\
0 & \to & \operatorname{Ker}\omega & \to & F & \to & {\bold f} & \to & 0
\end{array}
\]
commutes).
\begin{defn}
A noetherian abelian ribbon category $\CC$ with finite dimensional $k$-vector
spaces of morphisms $\operatorname{Hom}_{\CC}(A,B)$ is called {\sl modular}, if axioms
(M1), (M2) are satisfied.
\end{defn}
It was shown in \cite{Lyu:mod} that in the case of a modular category there
exists a morphism $\mu:I\to{\bold f}$, which is the integral of a dual Hopf algebra
$\haj{{\ }}{\bold f}\simeq{\bold f}$, and
\[
\unitlength=0.8mm
\linethickness{0.4pt}
\begin{picture}(143,39)
\put(1,20){\makebox(0,0)[cc]{$\nu^{-1}$}}
\put(9,16){\framebox(4,8)[cc]{}}
\put(27,24){\oval(32,16)[lt]}
\put(30,5){\line(0,-1){4}}
\put(30,11){\line(0,1){28}}
\put(23,32){\line(0,1){5}}
\put(18,36){\makebox(0,0)[cc]{$\mu$}}
\put(35,37){\makebox(0,0)[cc]{$X$}}
\put(42,20){\makebox(0,0)[cc]{$=$}}
\put(49,20){\makebox(0,0)[cc]{$\lambda^{-1}$}}
\put(56,16){\framebox(4,8)[cc]{}}
\put(58,24){\line(0,1){15}}
\put(58,16){\line(0,-1){15}}
\put(63,37){\makebox(0,0)[cc]{$X$}}
\put(65,20){\makebox(0,0)[cc]{$\nu$ ,}}
\put(79,20){\makebox(0,0)[cc]{$\nu$}}
\put(87,16){\framebox(4,8)[cc]{}}
\put(105,24){\oval(32,16)[lt]}
\put(108,5){\line(0,-1){4}}
\put(108,11){\line(0,1){28}}
\put(101,32){\line(0,1){5}}
\put(96,36){\makebox(0,0)[cc]{$\mu$}}
\put(113,37){\makebox(0,0)[cc]{$X$}}
\put(120,20){\makebox(0,0)[cc]{$=$}}
\put(127,20){\makebox(0,0)[cc]{$\lambda$}}
\put(134,16){\framebox(4,8)[cc]{}}
\put(136,24){\line(0,1){15}}
\put(136,16){\line(0,-1){15}}
\put(141,37){\makebox(0,0)[cc]{$X$}}
\put(143,20){\makebox(0,0)[cc]{$\nu^{-1}$}}
\put(23,16){\oval(24,18)[b]}
\put(101,16){\oval(24,18)[b]}
\put(113,26){\line(0,-1){11}}
\put(35,26){\line(0,-1){11}}
\end{picture}
\]
for some invertible constant $\lambda\in k^{\times}$. The pair
$(\mu,\lambda)$ is unique up to a sign. Morphisms $S,S^{-1}:{\bold f}\to{\bold f}$
\[ \fourier \qquad\quad,\qquad\quad \invfourier \]
are inverse to each other. Morphisms $S$ and $T$ (defined via (M2)) yield a
projective representation of a mapping class group of a torus with one hole:
\[ (ST)^3=\lambda S^2, \qquad S^2=\gamma^{-1},\]
\[ T\gamma=\gamma T, \qquad \gamma^2=\nu. \]
Here $\gamma:{\bold f}\to {\bold f}$ is the antipode of the Hopf algebra ${\bold f}$,
\[
\unitlength=0.70mm
\linethickness{0.4pt}
\begin{picture}(38,40)
\put(11,2){\line(0,1){11}}
\put(11,13){\line(6,5){17}}
\put(28,27){\line(0,1){12}}
\put(32.50,14.50){\oval(11,9)[r]}
\put(32,19){\line(-1,-4){4.33}}
\put(28,13){\line(-6,5){7}}
\put(18,21){\line(-6,5){7}}
\put(11,26.67){\line(0,1){12.33}}
\put(20,40){\makebox(0,0)[cc]{${\bold f}$}}
\put(20,1){\makebox(0,0)[cc]{${\bold f}$}}
\put(1,20){\makebox(0,0)[cc]{$\gamma\ =$}}
\end{picture}
\]
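For orientation, a standard illustration (a classical fact about the
modular group, not specific to this paper): in $SL_2({\Bbb Z})$ the matrices
\begin{equation*}
S=\begin{pmatrix} 0&-1\\ 1&0 \end{pmatrix}, \qquad
T=\begin{pmatrix} 1&1\\ 0&1 \end{pmatrix}
\end{equation*}
satisfy $(ST)^3=S^2=-1$, which matches the relations above with $\lambda=1$
and $\gamma=-1$, $\gamma^2=1$.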
\sEction{From a ribbon category to a modular functor}
\label{to a functor}
Assume that we are given a small ribbon category $(\CC,\otimes,a,c,\nu,I)$ for
which the duality map $\cdot\haj{{\ }}:\operatorname{Ob}\CC\to \operatorname{Ob}\CC$ is involutive and
$u_0^2:A\to A\haj{{\ }}\pti=A$ equals $1_A$ for all objects $A\in\CC$. Further,
we assume that $\CC$ is a noetherian abelian category and $\operatorname{End} I=k$ is a
field, thus, $\CC$ is $k$-linear. We assume also that $k$-vector spaces
$\operatorname{Hom}(A,B)$ are finite dimensional for any $A,B\in \CC$. Finally, we assume
that the algebra ${\bold f}$, the quotient of $F$, belongs to $\CC$ and that
$\operatorname{Ker}(F\to{\bold f})=\operatorname{Ker}\omega$ is $\underline{\nu\tens1}$-invariant.
\subsEction{Preliminaries about exact functors}
\begin{lem}\label{lemcoend}
Let $F:\CC\to k\text{\fontshape{n}\selectfont-Vect}$, $G:\CC^{op}\to k\text{\fontshape{n}\selectfont-Vect}$ be functors. Then
\[\int^X F(X)\otimes \operatorname{Hom}(X,B)\to F(B),\qquad v\otimes f\mapsto F(f).v\]
\[\int^X \operatorname{Hom}(B,X)\otimes G(X)\to G(B),\qquad f\otimes v\mapsto G(f).v\]
are isomorphisms of vector spaces.
\end{lem}
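For instance, taking $F=\operatorname{Hom}(A,-)$ in the first formula gives the familiar
co-Yoneda isomorphism
\begin{equation*}
\int^X \operatorname{Hom}(A,X)\otimes \operatorname{Hom}(X,B)\simeq \operatorname{Hom}(A,B), \qquad
u\otimes f\mapsto f\circ u .
\end{equation*}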
Let $F:(\CC^{k+n})^{op}\times\CC^{n+l}\to k\text{-vect}$ be a $k$-linear left exact
(preserving kernels) functor. By the ``parameter theorem for coends'' of
Mac Lane \cite{Mac:cat} a coend of the bifunctor
\[ F(A,-;-;B):(\CC^n)^{op}\times\CC^n\to k \text{-vect}\]
with fixed $A\in\CC^k$, $B\in\CC^l$ is identified as a functor in $A,B$ with
a coend of the bifunctor
\[F':(\CC^n)^{op}\times\CC^n\to
\text{Functors}((\CC^{op})^k\times\CC^l \to k \text{-vect}).\]
Here functors are $k$-linear functors. But, generally, the coend of the
bifunctor
\[F'':(\CC^n)^{op}\times\CC^n\to
\text{Left Exact Functors}((\CC^{op})^k\times\CC^l \to k \text{-vect}).\]
is a different thing. In any case we have a morphism
\[ \int^X F'(X,X)\to \int^X F''(X,X) .\]
To stress the difference, we introduce different notations for the usual
coend
\[ \int^X F(A,X;X,B) \equiv \Bigl(\int^X F'(X,X) \Bigr)(A,B) \]
and for the second type of coend, the left exact functor
\[\oint^X F(A,X;X,B) \buildrel {\text{def}}\over =
\bigl( \int^X F''(X,X)\bigr)(A,B).\]
The Fubini theorem for left exact functors (repeated coend theorem) is
proved exactly in the same way as for arbitrary functors (see Mac Lane's
book \cite{Mac:cat}). Nevertheless, we present a proof here
for the sake of completeness.
\begin{thm}\label{LEcoend}
Let $\CC_1,\CC_2,\CC_3,{\cal A}$ be abelian $k$-linear
categories and let $F:\CC_2^{op}\times\CC_1^{op}\times\CC_1\times\CC_2
\times\CC_3\to {\cal A}$ be a left exact $k$-linear functor. Assume that there
exists coend with parameters
\[j_X(U;V,A):F(U,X;X,V,A)\to G(U;V,A)=\int^{X\in\CC_1} F(U,X;X,V,A).\]
Assume that $G$, viewed as a functor $G:\CC_2^{op}\times\CC_2
\to \text{ Functors } (\CC_3\to{\cal A})$, has a left exact coend
$H\in \text{ LEF }(\CC_3\to{\cal A})$ (LEF stands for Left Exact Functors)
\[k_U(A):G(U;U,A)\to H(A)=\int^U G(U;U,A) \]
(that means that $k$ is dinatural, $H$ is left exact and the pair $(k,H)$ is
universal among all dinatural transformations of $G$ into left exact
functors). Then
\[i_{X,U}(A):F(U,X;X,U,A) \buildrel j_X(U;U,A)\over\hbox to 45pt {\rightarrowfill}
G(U;U,A)\buildrel k_U(A)\over\hbox to 30pt {\rightarrowfill} H(A) \]
is a coend $\oint^{X,U} F(U,X;X,U,A)$ of $F$, viewed as a bifunctor
\[(\CC_1\times\CC_2)^{op}\times(\CC_1\times\CC_2)\to
\text{ LEF } (\CC_3\to {\cal A}).\]
\end{thm}
We write the statement of the theorem as an isomorphism
\[\oint^{X,U} F(U,X;X,U,A)\simeq \oint^U \! \int^X F(U,X;X,U,A). \]
In the particular case $\CC_2=0$ we have
\begin{cor}
Let $G(A)=\int^{X\in\CC_1} F(X;X,A)$ for a left exact $k$-linear functor
$F:\CC_1^{op}\times\CC_1\times\CC_3\to {\cal A}$. Then
\[\oint^X F(X;X,A)=\oint^0 G(A) \]
if there exists a left exact functor $\oint^0 G:\CC_3\to{\cal A}$ with
isomorphisms $Nat(G,E)\cong Nat(\oint^0 G,E)$ for any left exact
$E:\CC_3\to{\cal A}$.
\end{cor}
\subsEction{Modular functor}\label{Modular}
Our goal is to construct a functor $Z:EN\to k\text{-vect}$, which will
satisfy the following conditions:
(i) $Z$ is a symmetric monoidal functor.
(ii) To $D_2$ corresponds
\[Z(D_2;B;C)=\operatorname{Hom}(B,C),\]
to a glueing of two such corresponds the composition
\[\operatorname{Hom}(A,B)\otimes\operatorname{Hom}(B,C)\to\operatorname{Hom}(A,C).\]
(iii) To $D_3$ correspond
\[Z\left(\tridowniio CBA \right) =\operatorname{Hom}(A\otimes B,C),\]
\[Z\left(\triupioo ABC \right)=\operatorname{Hom}(A,B\otimes C).\]
To a glueing of $D_3$ with $D_2$ corresponds an action of $\operatorname{Hom}(-,-)$ on
$\operatorname{Hom}(-\otimes -,-)$ or $\operatorname{Hom}(-,-\otimes -)$ via composition.
(iv) To $D_1$ correspond
\[Z(D_1;X;\ )=\operatorname{Hom}(X,I),\]
\[Z(D_1;\ ;X)=\operatorname{Hom}(I,X),\]
where $I$ is the unity object of $\CC$. To a glueing of $D_1$ with $D_2$
correspond the compositions
\[\operatorname{Hom}(X,Y)\otimes\operatorname{Hom}(Y,I)\to\operatorname{Hom}(X,I),\]
\[\operatorname{Hom}(I,X)\otimes\operatorname{Hom}(X,Y)\to\operatorname{Hom}(I,Y).\]
(v) To isomorphisms
\[du_B: \tridowniio CBA \hbox to 30pt {\rightarrowfill} \tridownioo C{B\haj{{\ }}}A ,\]
\[du_A: \tridowniio CBA \hbox to 30pt {\rightarrowfill} \tridownoio CB{A\haj{{\ }}} \]
correspond adjunctions
\[d_l:\operatorname{Hom}(A\otimes B,C)\to \operatorname{Hom}(A,C\otimes B\haj{{\ }}),\]
\[d_r:\operatorname{Hom}(A\otimes B,C)\to \operatorname{Hom}(B,A\haj{{\ }}\otimes C).\]
(vi) For any oriented net $\Gamma$, glueings with $D_2$ make $Z(\Gamma)$
into a functor
\[Z(\Gamma):(\CC^{op})^k\times\CC^l\to k{\rm -vect}.\]
We assume that $Z(\Gamma)$ is left exact (additive and preserving kernels).
(vii) If $f:\Gamma\to\tilde\Gamma$ is a glueing and $g(\Gamma)=g(\tilde
\Gamma)$, then the morphism of functors
\[\int^{X\in\CC^k} Z(\Gamma,\dots,X;X,\dots)\to Z(\tilde\Gamma;\dots;
\dots)\]
is an isomorphism.
(viii) To the morphism
\[ \tarahoriooor XABC{} \stackrel{fus}{\hbox to 45pt {\rightarrowfill}} \taraverioood XABCV \]
corresponds
\begin{multline*}
\operatorname{Hom}(X,A\otimes(B\otimes C))\simeq \int^{U\in\CC} \operatorname{Hom}(X,A\otimes U)\otimes
\operatorname{Hom}(U,B\otimes C) \simeq \\
\simeq Z\left(\tarahoriooor XABC{} \right) \buildrel Z(fus)\over\hbox to 45pt {\rightarrowfill}
Z\left( \taraverioood XABCV \right) \simeq \\
\simeq \int^{V\in\CC} \operatorname{Hom}(X,V\otimes C)\otimes\operatorname{Hom}(V,A\otimes B)\simeq
\operatorname{Hom}(X,(A\otimes B)\otimes C),
\end{multline*}
which coincides with $\operatorname{Hom}(X,a_{A,B,C})$.
(ix) To the morphism
\[ \triupioo XAB \stackrel{Br_{AB}}{\hbox to 30pt {\rightarrowfill}} \triupioo XBA \]
corresponds
\[\operatorname{Hom}(X,c_{A,B}):\operatorname{Hom}(X,A\otimes B)\to\operatorname{Hom}(X,B\otimes A).\]
(x) To the morphism
\[\lezhakright XY \buildrel Tw_Y\over\hbox to 30pt {\rightarrowfill} \lezhakright XY \]
corresponds
\[\operatorname{Hom}(X,\nu_Y):\operatorname{Hom}(X,Y)\to\operatorname{Hom}(X,Y).\]
(xi) To a tadpole $A_1$, obtained via glueing from $D_3$, corresponds
\begin{multline*}
Z\left(
{\unitlength=0.75pt
\makebox[39 pt][l]{
\raisebox{3 pt}[19 pt][13 pt]{
\put(22,0){\line(-1,0){23}}
\put(13,0){\vector(1,0){0}}
\put(22,0){\line(3,-5){12}}
\put(28,-10){\vector(2,-3){0}}
\put(22,0){\line(3,5){12}}
\put(28,10){\vector(-2,-3){0}}
\put(22,0){\circle*{4}}
\put(2,4){$X$}
\put(35,-17){$M$}
\put(35,10){$M$}
}}}
\ \right)=\operatorname{Hom}(X\otimes M,M)\simeq \operatorname{Hom}(X,M\otimes M\haj{{\ }}) \to \\
\to \operatorname{Hom}(X,{\bold f})=Z\left(\tennisu XM \right).
\end{multline*}
$Z$ can be viewed as a monoidal functor
\[\{ \text{extended nets} \}\to\hat{\CC}_{*,*}.\]
Now we shall construct such a functor step by step, proving the following
\begin{thm}
There exists a unique, up to equivalence, functor $Z:EN\to k\text{-vect}$
satisfying assumptions (i)--(xi) above. It also has the following property:
(xii) Let $f:\Gamma\to\tilde\Gamma$ be a glueing and let the boundary of
each connected component of $\tilde\Gamma$ be non-empty. The functor
\[Z(\Gamma;\dots,X;X,\dots):
(\CC^{op})^{k+n}\times\CC^{n+l} \to k{\rm -vect}\]
can be represented as a bifunctor
\[Z'(\Gamma)(X,X):(\CC^n)^{op}\times(\CC^n)\to \hat{\CC}_{k,l},\]
where $\hat{\CC}_{k,l}$ is the category of left exact functors
$(\CC^{op})^k\times\CC^l \to k{\rm -Vect}$.
The coend of this bifunctor is mapped to $Z(\tilde\Gamma)\in\hat{\CC}_{k,l}$
\[\int^{X\in\CC^k} Z'(\Gamma)(X,X)\to Z(\tilde\Gamma).\]
We claim that this is an epimorphism in $\hat{\CC}_{k,l}$.
\end{thm}
The proof occupies the remaining part of this chapter.
\subsEction{A functor on the category of oriented nets}\labl{functorCZ}
Let $\ON_>$ be a symmetric monoidal category of glueings of oriented nets,
having at least one end at each connected component. Let us construct a
functor $\CZ:\ON_> \to k\text{-vect}$.
Fix the value of the functor $\CZ$ on the elementary objects $D_1,D_2,D_3$
as in (ii)--(iv). Then it is fixed on disjoint unions $\bigsqcup_i X_i$
of such objects: $\CZ(\bigsqcup X_i)=\otimes_i \CZ(X_i)$. (In fact, it is fixed
only up to a permutation of tensor multiplicands. To fix it completely, we
could define nets with a chosen total ordering of the set
$V_3\sqcup V_2\sqcup V_1$ and add to $\ON$ new morphisms which change only
that ordering. The obtained category is equivalent to the old one. We shall
not mention such tricks in the following.)
The following condition determines the value of $\CZ$ on $\ON_>$:
(xiii) Let $f:\Gamma\to\tilde\Gamma$ be a glueing, and let the boundary of
each connected component of $\tilde\Gamma$ be non-empty; then there exists a
coend $\oint \CZ(\Gamma)$ and the morphism
\[\oint^{X\in\CC^k} \CZ(\Gamma;\dots,X;X,\dots)\to
\CZ(\tilde\Gamma;\dots;\dots)\]
is an isomorphism of left exact functors.
Let $\Gamma \in\ON_>$ be a connected net of genus $g$ with $k$ incoming and
$l$ outgoing legs, $k+l>0$. We construct a space $\CZ(\Gamma)$ in the
following way. Let $\Gamma_1$ be the net obtained from $\Gamma$ by cutting
all internal edges. There is a canonical glueing $f:\Gamma_1\to\Gamma$.
We show the existence of the coend
\[\oint^{X\in\CC^n} \CZ(\Gamma_1;A,X;X,B) \in \hat{\CC}_{k,l} \]
and define $\CZ(\Gamma)$ to be that functor.
Cut $\Gamma$ at edges $w_1,\dots,w_g$ chosen so that the obtained net
$\Gamma_2$ is a tree. Introduce ``internal variables''
$W_1,\dots,W_g\in\CC$ corresponding to the $w_i$, $1\le i \le g$, and let
$Y_1,\dots,Y_{n-g}\in\CC$ be the other ``internal variables'', corresponding
to the other edges. Cutting along the latter breaks $\Gamma_2$ into pieces,
forming $\Gamma_1$. We have the whole collection of variables
$\{X_1,\dots,X_n\}=\{W_1,\dots,W_g,Y_1,\dots,Y_{n-g}\}$. We apply
\thmref{LEcoend}. The coend\linebreak[4]
$\int^{Y\in\CC^{n-g}}\CZ(\Gamma_1;A,W,Y;Y,V,B)$ exists and gives the
functor $\CZ(\Gamma_2;A,W;V,B)$. It is isomorphic to a functor
\[ \operatorname{Hom}(A_1\otimes\dots\otimes A_k,B_1\otimes\dots\otimes B_l\otimes
(V_1\otimes W_1\haj{{\ }})\otimes\dots\otimes (V_g\otimes W_g\haj{{\ }}))\]
with some parenthesization of the tensor product. This is proved by
induction on the number of vertices of the tree. The inductive step uses
Lemma~\ref{lemcoend} similarly to the calculation
\begin{multline*}
\int^X\operatorname{Hom}(A,X\otimes B)\otimes\operatorname{Hom}((C\otimes X)\otimes D,E)
\buildrel d_B\over\simeq \\
\simeq \int^X\operatorname{Hom}(A\otimes B\haj{{\ }},X)\otimes\operatorname{Hom}((C\otimes X)\otimes D,E)
\simeq\operatorname{Hom}((C\otimes(A\otimes B\haj{{\ }}))\otimes D,E).
\end{multline*}
By Theorem~\ref{LEcoend}
\[\oint^{X\in\CC^n} \CZ(\Gamma_1;A,X;X,B)\simeq
\oint^{W\in\CC^g} \CZ(\Gamma_2;A,W;W,B) \simeq \]
\[\simeq\oint^{W\in\CC^g} \operatorname{Hom}(A_1\otimes\dots\otimes A_k,B_1\otimes\dots\otimes
B_l\otimes(W_1\otimes W_1\haj{{\ }})\otimes\dots\otimes (W_g\otimes W_g\haj{{\ }})) \]
if the latter exists. We show that, indeed, it exists and equals
\[\operatorname{Hom}(A_1\otimes\dots\otimes A_k,B_1\otimes\dots\otimes B_l\otimes
(\int^{W_1\in\CC} W_1\otimes W_1\haj{{\ }}) \otimes\dots\otimes
(\int^{W_g\in\CC} W_g\otimes W_g\haj{{\ }})) \]
for $k+l>0$.
Using the duality isomorphisms, we may assume that $k=1$. Suppose we are
given functorial morphisms
\begin{multline*}
i_{W_1\dots W_g}(A,B):\operatorname{Hom}(A,B_1\otimes\dots\otimes B_l\otimes C\otimes
(W_1\otimes W_1\haj{{\ }})\otimes\dots\otimes (W_g\otimes W_g\haj{{\ }}))\to \\
\to G(A;B_1,\dots,B_l)
\end{multline*}
which define a dinatural transformation. Here $C$ is an object of
$\hat\CC$ and $G:\CC^{op}\times\CC^l\to k$-Vect is a left exact functor.
We prove by induction on $g$ that they all factor through a unique
morphism
\[\operatorname{Hom}(A,B_1\otimes\dots\otimes B_l\otimes C\otimes(\int^{W_1} W_1\otimes W_1\haj{{\ }})
\otimes\dots\otimes (\int^{W_g} W_g\otimes W_g\haj{{\ }}))\to G(A;B). \]
If $g\ge 1$, denote by $D$ the product $B_1\otimes\dots\otimes B_l\otimes C
\otimes (W_1\otimes W_1\haj{{\ }})\otimes\dots\otimes (W_{g-1}\otimes W_{g-1}\haj{{\ }})$ and
denote by $G_1(A)$ the left exact functor $G(A;B_1,\dots,B_l)$ with fixed
$B_1,\dots,B_l$. Tautologically, there exists $T\in\hat{\CC}$ such that
$G_1(A)=\operatorname{Hom}(A,T)$. Dinaturality implies the commutativity of the diagram
\[
\begin{CD}
\operatorname{Hom}(A,D\otimes(V_g\otimes W_g\haj{{\ }})) @>\operatorname{Hom}(A,D\otimes f\otimes W_g\haj{{\ }})>>
\operatorname{Hom}(A,D\otimes(W_g\otimes W_g\haj{{\ }})) \\
@V{\operatorname{Hom}(A,D\otimes(V_g\otimes f^t))}VV @VVi_{\dots,W_g}V \\
\operatorname{Hom}(A,D\otimes(V_g\otimes V_g\haj{{\ }})) @>i_{\dots,V_g}>> \operatorname{Hom}(A,T)
\end{CD}
\]
Hence, the following diagram in $\hat{\CC}$ commutes for any
$f:V_g\to W_g$:
\[
\begin{CD}
D\otimes(V_g\otimes W_g\haj{{\ }}) @>D\otimes f\otimes W_g\haj{{\ }}>> D\otimes(W_g\otimes W_g\haj{{\ }})\\
@V{D\otimes (V_g\otimes f^t)}VV @VVV \\
D\otimes (V_g\otimes V_g\haj{{\ }}) @>>> T
\end{CD}
\]
Consequently, there is a morphism in $\hat{\CC}$
\[D\otimes\int^W W\otimes W\haj{{\ }}\cong \int^W D\otimes (W\otimes W\haj{{\ }})\to T.\]
The first isomorphism here follows from the fact that $\otimes$ preserves
colimits in $\hat{\CC}$. Thus, for any collection of objects
$B_1,\dots,B_l,W_1,\dots,W_{g-1}$ we obtain morphisms
\begin{multline*}
j_{B_1,\dots,B_l,W_1,\dots,W_{g-1}}(A):\operatorname{Hom}(A,B_1\otimes\dots\otimes B_l
\otimes C\otimes (W_1\otimes W_1\haj{{\ }})\otimes\dots\otimes \\
\otimes (W_{g-1}\otimes W_{g-1}\haj{{\ }})\otimes (\int^{W_g} W_g\otimes W_g\haj{{\ }}))
\to G(A;B_1,\dots,B_l)
\end{multline*}
functorial in $A$. The morphisms $i_W(A;B)$ factor through $j_{B,W}(A)$,
and the latter are characterized by this property.
One can prove that $j_W(A;B)\equiv j_{B,W}(A)$ is functorial in $A$
and $B_i$, and that $j_W$ is dinatural in $W$.
The constant tensor factor $F=\int^{W_p} W_p\otimes W_p\haj{{\ }}$ can be adjoined
to $C$. We finish the computation of $\oint \CZ(\Gamma_1)$ by induction.
Thus, we have defined a functor $\CZ:\ON_> \to k$-Vect:
\[\CZ(\Gamma)\buildrel {\text{def}}\over = \oint \CZ(\Gamma_1) \simeq
\operatorname{Hom}(A_1\otimes\dots\otimes A_k,B_1\otimes\dots\otimes B_l \otimes F
\otimes\dots\otimes F).\]
This functor satisfies (i)--(iv), (vi), (vii).
Now we prove the property (xiii) for $\CZ$. We assume that $\tilde\Gamma$ is
connected with non-empty boundary. Any glueing $f:\Gamma\to\tilde\Gamma$
can be factorized into $\Gamma\buildrel f_1\over\to\Gamma'
\buildrel f_2\over\to\tilde\Gamma$, where the genus of $\Gamma'$ equals the genus of
$\Gamma$ and $\Gamma'$ is connected. Let the variables $X_i$ correspond to
the circles glued by $f_1$, and let the $Y_j$ correspond to those glued by $f_2$.
The morphism $\CZ(f)$ under consideration factorizes as
\[\oint \CZ(\Gamma)\simeq\oint^Y \! \int^X \CZ(\Gamma)
\simeq\oint^Y \CZ(\Gamma')\to \CZ(\tilde\Gamma) \]
by Theorem~\ref{LEcoend} and (vii). This is an isomorphism, because
\[\oint^Y\operatorname{Hom}(A_1\otimes\dots\otimes A_k,B_1\otimes\dots\otimes B_l
\otimes(Y_1\otimes Y_1\haj{{\ }})\otimes\dots\otimes (Y_p\otimes Y_p\haj{{\ }})
\otimes F\otimes\dots\otimes F)\to \]
\[\to\operatorname{Hom}(A_1\otimes\dots\otimes A_k,B_1\otimes\dots\otimes B_l\otimes F
\otimes\dots\otimes F\otimes\dots\otimes F)\]
is an isomorphism.
\subsection{Relations for insertions and reversals}
We represent morphisms $X=ins,del:\Gamma\to\Gamma'$ from \secref{oriented}
by the following procedure. Let $\Gamma=\Gamma_1\cup\sigma$,
$\Gamma'=\Gamma_1\cup\sigma'$, where $X=ins$, $del:\sigma\to\sigma'$ is a
standard morphism, and $X\vert_{\Gamma_1}=\operatorname{id}$. Then we define $\CZ(X)$ as
\[\CZ(\Gamma)\simeq \oint \CZ(\Gamma_1)\otimes \CZ(\sigma)
\buildrel \oint \operatorname{id}\otimes \CZ(X)\over\hbox to 45pt {\rightarrowfill}
\oint \CZ(\Gamma_1)\otimes \CZ(\sigma')\simeq \CZ(\Gamma'),\]
an isomorphism of coends, induced by an isomorphism of the underlying bifunctors.
Now we construct a functor $\CZ$ on $\ON_>$ extended by $ins$ and $del$
satisfying the axioms. To the deletion or insertion of a 2-vertex correspond the
isomorphisms of Lemma~\ref{lemcoend}. To the deletion or insertion of a 1-vertex
correspond isomorphisms which are glueings of the identity with
\[\CZ\left(\torchokdr XY \hbox to 20pt {\rightarrowfill} \lezhakright XY \right)=\operatorname{Hom}(X\otimes I,Y)
\buildrel \operatorname{Hom}(r_X^{-1},Y)\over\hbox to 60pt {\rightarrowfill} \operatorname{Hom}(X,Y)\]
\[\CZ\left(
{\unitlength=0.75pt
\makebox[36 pt][l]{
\raisebox{-4 pt}[15 pt][7.5 pt]{
\put(0,0){\line(1,0){40}}
\put(8,0){\vector(-1,0){0}}
\put(28,0){\vector(-1,0){0}}
\put(20,0){\line(0,1){20}}
\put(20,8){\vector(0,-1){0}}
\put(20,20){\circle*{4}}
\put(20,0){\circle*{4}}
\put(3,4){$Y$}
\put(28,4){$X$}
}}}
\hbox to 20pt {\rightarrowfill} \lezhakleft YX \right)=\operatorname{Hom}(I\otimes X,Y)
\buildrel \operatorname{Hom}(l_X^{-1},Y)\over\hbox to 60pt {\rightarrowfill} \operatorname{Hom}(X,Y)\]
\[\CZ\left(\torchokur XY \hbox to 20pt {\rightarrowfill} \lezhakright XY \right)=\operatorname{Hom}(X,Y\otimes I)
\buildrel \operatorname{Hom}(X,r_Y)\over\hbox to 60pt {\rightarrowfill} \operatorname{Hom}(X,Y)\]
\[\CZ\left(\torchokul YX \hbox to 20pt {\rightarrowfill} \lezhakleft YX \right)=\operatorname{Hom}(X,I\otimes Y)
\buildrel \operatorname{Hom}(X,l_Y)\over\hbox to 60pt {\rightarrowfill} \operatorname{Hom}(X,Y)\]
The orientation reversing morphism $du$ is realized on external legs as a
glueing of identity with duality adjunctions
\[\CZ(du_A):\CZ\left(\triupioo XAB \right)=\operatorname{Hom}(X,A\otimes B)
\buildrel d_r^{-1}\over\hbox to 20pt {\rightarrowfill} \operatorname{Hom}(A\haj{{\ }}\otimes X,B)=
\CZ\left(\triupioi X{A\haj{{\ }}}B \right),\]
\[\CZ(du_B):\CZ\left(\triupioo XAB \right)=\operatorname{Hom}(X,A\otimes B)
\buildrel d_l^{-1}\over\hbox to 20pt {\rightarrowfill} \operatorname{Hom}(X\otimes B\haj{{\ }},A)=
\CZ\left(\triupiio XA{B\haj{{\ }}} \right),\]
\[\CZ(du_X):\CZ\left(\tridowniio AYX \right)=\operatorname{Hom}(X\otimes Y,A)
\buildrel d_r\over\hbox to 20pt {\rightarrowfill} \operatorname{Hom}(Y,X\haj{{\ }}\otimes A)=
\CZ\left(\tridownoio AY{X\haj{{\ }}} \right),\]
\[\CZ(du_Y):\CZ\left(\tridowniio AYX \right)=\operatorname{Hom}(X\otimes Y,A)
\buildrel d_l\over\hbox to 20pt {\rightarrowfill} \operatorname{Hom}(X,A\otimes Y\haj{{\ }})=
\CZ\left(\tridownioo A{Y\haj{{\ }}}X \right).\]
The orientation reversing morphism $rev$ for an internal arrow is obtained
from the diagram of isomorphisms:
\begin{gather*}
\int^X \CZ\left(\!\!\!\!\!\!\trirightr X{}{} \right)\otimes
\CZ\left(\trileftr X{}{} \!\!\!\!\!\! \right) \buildrel du_X\otimes du_X\over\hbox to 45pt {\rightarrowfill}
\int ^{X\in\CC} \CZ\left(\!\!\!\!\!\! \trirightl{X\haj{{\ }}}{}{} \right)\otimes
\CZ\left(\trileftl{X\haj{{\ }}}{}{} \!\!\!\!\!\! \right) \\
\begin{array}{ccc}
\wr\Big\vert && \wr\Big\vert \\
\CZ\left(\tarahorr {}{}{}{}{} \right) & \buildrel rev\over\hbox to 20pt {\rightarrowfill}
\CZ\left(\tarahorl {}{}{}{}{} \right) \overset f\simeq &
\int^{Y\in\CC} \CZ\left(\!\!\!\!\!\! \trirightl Y{}{} \right)\otimes
\CZ\left(\trileftl Y{}{} \!\!\!\!\!\! \right)
\end{array}
\end{gather*}
where the isomorphism $f$ is that from Lemma~\ref{lemcoend}. This shows that
the relation~\eqref{gluerev} is satisfied. Similarly for \eqref{1ptdudurev},
\eqref{2ptdudurev}.
The existence of $\CZ(rev): \CZ(\Gamma) \to \CZ(\Gamma')$ for an arbitrary net
$\Gamma$ is guaranteed by
\begin{prop}\label{proptipti}
Let $B:\CC^{op}\times\CC\to k\text{-vect}$ be a bifunctor. Then
$B_1(X,Y)=B(Y\haj{{\ }},X\haj{{\ }})$ is also a bifunctor. Let
\[B(X,X) \buildrel i_X\over\hbox to 30pt {\rightarrowfill} \int^X B(X,X), \qquad
B_1(Y,Y) \buildrel j_Y\over\hbox to 30pt {\rightarrowfill} \int^Y B_1(Y,Y)\]
be their coends. Then there exists a unique isomorphism $\alpha$ of coends
which makes the diagram
\[
\begin{CD}
B(X\haj{{\ }},X\haj{{\ }}) @>i_{X\haj{{\ }}}>> \int^X B(X,X) \\
@| @VV\alpha V \\
B_1(X,X) @>j_X>> \int^Y B_1(Y,Y)
\end{CD}
\]
commute for any $X\in\CC$.
\end{prop}
\begin{prop}
The identity~\eqref{dddddd} is satisfied, that is, the diagram
\begin{equation*}\labl{6dHom}
\begin{array}{ccccc}
&& \operatorname{Hom}(C,A\otimes B) && \\
& d_r \nearrow & & \searrow d_l^{-1} & \\
\operatorname{Hom}(A\haj{{\ }}\otimes C,B) &&&& \operatorname{Hom}(C\otimes B\haj{{\ }},A) \\
d_l^{-1}\big\uparrow &&&& \big\downarrow d_r \\
\operatorname{Hom}(A\haj{{\ }},B\otimes C\haj{{\ }}) &&&& \operatorname{Hom}(B\haj{{\ }},C\haj{{\ }}\otimes A) \\
& d_r \nwarrow && \swarrow d_l^{-1} & \\
&& \operatorname{Hom}(B\haj{{\ }}\otimes A\haj{{\ }},C\haj{{\ }})
\end{array}
\end{equation*}
is commutative.
\end{prop}
The obtained functor satisfies (i)--(vii) and (xiii).
\subsection{Nets without ends}
We also define $\CZ(\Gamma)$ for a connected net without ends via insertion
of a 1-vertex
\[\CZ(\Gamma)\buildrel \CZ(Ins)\over\hbox to 45pt {\rightarrowfill} \CZ(\Gamma_\bullet) \simeq
\CZ(\Gamma_I) .\]
The previous subsection shows that different choices give isomorphic answers.
The property (xiii) is still true.
\subsection{Relations for fusing}
Now we construct a functor $\CZ$ on the category $\ON$ extended by $ins$,
$del$, $fus$. We put
\begin{multline*}
\CZ(fus):\CZ\left(\tarahoriooor MABC{} \right)\simeq \int^{X\in \CC}
\operatorname{Hom}(M,A\otimes X)\otimes\operatorname{Hom}(X,B\otimes C) \simeq \\
\simeq\operatorname{Hom}(M,A\otimes(B\otimes C))
\buildrel \operatorname{Hom}(M,a_{A,B,C})\over\hbox to 60pt {\rightarrowfill} \operatorname{Hom}(M,(A\otimes B)\otimes C) \simeq \\
\simeq\int^{Y\in\CC} \operatorname{Hom}(M,Y\otimes C)\otimes\operatorname{Hom}(Y,A\otimes B)
\simeq \CZ\left(\taraverioood MABC{} \right).
\end{multline*}
The fusing pentagon~\eqref{pentagon_or} follows from the associativity
pentagon in $\CC$. We have to prove the commutativity of the
diagrams~\eqref{fus1?} and \eqref{fus2?}.
\begin{prop}
The relations~\eqref{fus1?}, \eqref{fus2?} are satisfied, that is, the
diagrams
\begin{equation*}\labl{realfus1?}
\begin{CD}
\operatorname{Hom}(A,B\otimes(C\otimes D))
\text{\makebox[0mm][l]{\put(13,0){$\stackrel{\operatorname{Hom}(A,a_{B,C,D})}\hbox to 45pt {\rightarrowfill}$}}}
@. \operatorname{Hom}(A,(B\otimes C)\otimes D) \\
@V\wr VV @VV\wr V \\
\int^X\operatorname{Hom}(A,B\otimes X)\otimes\operatorname{Hom}(X,C\otimes D) @.
\int^X\operatorname{Hom}(A,X\otimes D)\otimes\operatorname{Hom}(X,B\otimes C) \\
@V\int 1\otimes d_r^{-1}VV @VV\int 1\otimes d_l^{-1}V \\
\int^X\operatorname{Hom}(A,B\otimes X)\otimes \operatorname{Hom}(C\haj{{\ }}\otimes X,D) @.
\int^X\operatorname{Hom}(A,X\otimes D)\otimes\operatorname{Hom}(X\otimes C\haj{{\ }},B) \\
@V\int d_l^{-1}\otimes d_lVV @VV\int d_r^{-1}\otimes d_rV \\
\int^X\operatorname{Hom}(A\otimes X\haj{{\ }},B)\otimes\operatorname{Hom}(C\haj{{\ }},D\otimes X\haj{{\ }}) @.
\int^X\operatorname{Hom}(X\haj{{\ }}\otimes A,D)\otimes\operatorname{Hom}(C\haj{{\ }},X\haj{{\ }}\otimes B) \\
@V\wr VV @VV\wr V \\
\int^Y\operatorname{Hom}(A\otimes Y,B)\otimes\operatorname{Hom}(C\haj{{\ }},D\otimes Y) @.
\int^Y\operatorname{Hom}(Y\otimes A,D)\otimes\operatorname{Hom}(C\haj{{\ }},Y\otimes B) \\
@V\int d_r\tens1VV @VV\int d_l\tens1V \\
\int^Y\operatorname{Hom}(Y,A\haj{{\ }}\otimes B)\otimes\operatorname{Hom}(C\haj{{\ }},D\otimes Y) @.
\int^Y\operatorname{Hom}(Y,D\otimes A\haj{{\ }})\otimes\operatorname{Hom}(C\haj{{\ }},Y\otimes B) \\
@V\wr VV @VV\wr V \\
\operatorname{Hom}(C\haj{{\ }},D\otimes(A\haj{{\ }}\otimes B))
\text{\makebox[0mm][l]{\put(6,0)
{$\stackrel{\operatorname{Hom}(C\haj{{\ }},a_{D,A\haj{{\ }},B})}\hbox to 45pt {\rightarrowfill}$}}}
@. \operatorname{Hom}(C\haj{{\ }},(D\otimes A\haj{{\ }})\otimes B)
\end{CD}
\end{equation*}
\begin{equation*}\labl{realfus2?}
\begin{CD}
\operatorname{Hom}(D,A\otimes(B\otimes C))
\text{\makebox[0mm][l]{\put(11,0){$\stackrel{\operatorname{Hom}(D,a_{A,B,C})}\hbox to 45pt {\rightarrowfill}$}}}
@. \operatorname{Hom}(D,(A\otimes B)\otimes C) \\
@A\wr AA @VV\wr V \\
\int^X\operatorname{Hom}(D,A\otimes X)\otimes\operatorname{Hom}(X,B\otimes C) @.
\int^X\operatorname{Hom}(D,X\otimes C)\otimes\operatorname{Hom}(X,A\otimes B) \\
@A\int d_r\tens1AA @VV\int 1\otimes d_r^{-1}V \\
\int^X\operatorname{Hom}(A\haj{{\ }}\otimes D,X)\otimes \operatorname{Hom}(X,B\otimes C) @.
\int^X\operatorname{Hom}(D,X\otimes C)\otimes\operatorname{Hom}(A\haj{{\ }}\otimes X,B) \\
@A\int d_l^{-1}\tens1AA @VV\int d_r^{-1}\otimes d_lV \\
\int^X\operatorname{Hom}(A\haj{{\ }},X\otimes D\haj{{\ }})\otimes\operatorname{Hom}(X,B\otimes C) @.
\ \int^X\operatorname{Hom}(X\haj{{\ }}\otimes D,C)\otimes\operatorname{Hom}(A\haj{{\ }},B\otimes X\haj{{\ }}) \\
@A\wr AA @VV\wr V \\
\operatorname{Hom}(A\haj{{\ }},(B\otimes C)\otimes D\haj{{\ }}) @.
\int^Y\operatorname{Hom}(Y\otimes D,C)\otimes \operatorname{Hom}(A\haj{{\ }},B\otimes Y) \\
@A\operatorname{Hom}({A\haj{{\ }}},a_{B,C,D\haj{{\ }}})AA @VV\int d_l\tens1V \\
\operatorname{Hom}(A\haj{{\ }},B\otimes (C\otimes D\haj{{\ }}))
\text{\makebox[0mm][l]{\put(5,0){$\stackrel{\sim}\hbox to 30pt {\leftarrowfill}$}}}
@. \int^Y\operatorname{Hom}(Y,C\otimes D\haj{{\ }})\otimes\operatorname{Hom}(A\haj{{\ }},B\otimes Y)
\end{CD}
\end{equation*}
are commutative.
\end{prop}
\subsection{Relations for braiding and twists}
We extend the functor $\CZ$ to the category generated over $\ON$ by $ins$,
$del$, $fus$, $Tw$ and $Br$. We prove the relations involving $Tw$ and $Br$.
All conditions involving $Tw$ are obvious or follow from the equations
\[
\unitlength=0.70mm
\linethickness{0.4pt}
\begin{picture}(135,24)
\put(5,8){\framebox(4,8)[cc]{}}
\put(1,12){\makebox(0,0)[cc]{$\nu$}}
\put(7,8){\line(0,-1){8}}
\put(14,16){\oval(14,16)[t]}
\put(21,16){\line(0,-1){16}}
\put(30,12){\makebox(0,0)[cc]{=}}
\put(53,8){\line(0,-1){8}}
\put(51,8){\framebox(4,8)[cc]{}}
\put(59,12){\makebox(0,0)[cc]{$\nu$\ ,}}
\put(46,16){\oval(14,16)[t]}
\put(39,16){\line(0,-1){16}}
\put(80,8){\framebox(4,8)[cc]{}}
\put(76,12){\makebox(0,0)[cc]{$\nu$}}
\put(82,16){\line(0,1){8}}
\put(89,8){\oval(14,16)[b]}
\put(96,8){\line(0,1){16}}
\put(105,12){\makebox(0,0)[cc]{=}}
\put(126,8){\framebox(4,8)[cc]{}}
\put(135,12){\makebox(0,0)[cc]{$\nu$ .}}
\put(128,16){\line(0,1){8}}
\put(121,8){\oval(14,16)[b]}
\put(114,8){\line(0,1){16}}
\end{picture}
\]
The hexagon~\eqref{hexagon_or} for braiding
\[\operatorname{Hom}(C,A\otimes B) \buildrel \operatorname{Hom}(C,c_{AB})\over\hbox to 60pt {\rightarrowfill} \operatorname{Hom}(C,B\otimes A)\]
follows from that one for commutativity $c$. Property~\eqref{b2ttt} follows
from the equation
\[\operatorname{Hom}(C,c_{AB}^2)=\operatorname{Hom}(C,\nu_{A\otimes B}\circ\nu_A^{-1}\otimes\nu_B^{-1})=
\operatorname{Hom}(\nu_C,\nu_A^{-1}\otimes\nu_B^{-1}).\]
\begin{prop}
The relations~\eqref{t=bb1}, \eqref{t=bb2} are satisfied. That is, the
diagrams
\[
\begin{array}{rcl}
\operatorname{Hom}(C,A\otimes B) & \buildrel \operatorname{Hom}(C,c)\over\hbox to 45pt {\rightarrowfill} \operatorname{Hom}(C,B\otimes A)
\buildrel d_r^{-1}\over\hbox to 20pt {\rightarrowfill} & \operatorname{Hom}(B\haj{{\ }}\otimes C,A) \\
&& \qquad\quad \Big \downarrow d_l \\
\operatorname{Hom}(C,\nu\otimes B)
\raisebox{0pt}[0pt][0pt]{\put(3,-20){\vector(0,1){40} }} \qquad\quad &&
\operatorname{Hom}(B\haj{{\ }},A\otimes C\haj{{\ }}) \\
&& \qquad\quad \Big\downarrow \operatorname{Hom}(B\haj{{\ }},c) \\
\operatorname{Hom}(C,A\otimes B) & \buildrel d_l\over\hbox to 45pt {\leftarrowfill} \operatorname{Hom}(C\otimes B,A)
\buildrel d_r^{-1}\over\hbox to 20pt {\leftarrowfill} & \operatorname{Hom}(B\haj{{\ }},C\haj{{\ }}\otimes A)
\end{array}
\]
\[
\begin{array}{rcl}
\operatorname{Hom}(C,A\otimes B) & \buildrel \operatorname{Hom}(C,c)\over\hbox to 45pt {\rightarrowfill} \operatorname{Hom}(C,B\otimes A)
\buildrel d_l^{-1}\over\hbox to 20pt {\rightarrowfill} & \operatorname{Hom}(C\otimes A\haj{{\ }},B) \\
&& \qquad\quad \Big \downarrow d_r \\
\operatorname{Hom}(C,A\otimes\nu)
\raisebox{0pt}[0pt][0pt]{\put(3,-20){\vector(0,1){40} }} \qquad\quad &&
\operatorname{Hom}(A\haj{{\ }},C\haj{{\ }}\otimes B) \\
&& \qquad\quad \Big \downarrow \operatorname{Hom}(A\haj{{\ }},c) \\
\operatorname{Hom}(C,A\otimes B) & \buildrel d_r\over\hbox to 45pt {\leftarrowfill} \operatorname{Hom}(A\haj{{\ }}\otimes C,B)
\buildrel d_l^{-1}\over\hbox to 20pt {\leftarrowfill} & \operatorname{Hom}(A\haj{{\ }},B\otimes C\haj{{\ }})
\end{array}
\]
commute.
\end{prop}
\subsection{A quotient functor}
Let $g:\Gamma'\to\Gamma$ be a glueing and $gf_1= gf_2:\Gamma'\to\Gamma''$.
Then $\CZ(g)\CZ(f_1)= \CZ(g) \CZ(f_2)$. The morphism
$\CZ(g):\CZ(\Gamma';A,X;X,B) \to \CZ(\Gamma;A;B)$ can be regarded as a
morphism of left exact functors belonging to $\hat\CC_{k,l}$
\[ \CZ(g): \oint^{X\in\CC^n} \CZ(\Gamma';A,X;X,B) \hbox to 20pt {\rightarrowfill} \CZ(\Gamma;A;B) .\]
It is an epimorphism if $k+l>0$. Hence, by definition $\CZ(f_1) = \CZ(f_2)$.
In the case $k=l=0$, we add one more leg $A$ to our nets, obtaining new morphisms
$\bar g\bar f_1= \bar g\bar f_2$. By the above considerations
$\CZ(\bar f_1) = \CZ(\bar f_2): \CZ(\bar\Gamma;A;) \to \CZ(\bar\Gamma'';A;)$.
Setting $A=I$ and applying isomorphisms we change the end $A$ to a 1-vertex.
Deleting it we again deduce $\CZ(f_1) = \CZ(f_2)$.
When $g$ is any other generator, it is invertible; hence, in all cases we have
proved that $gf_1= gf_2$ implies $\CZ(f_1) = \CZ(f_2)$. This means that the
functor $\CZ$ is, in fact, defined on a category having the left cancellation
property.
Now we construct a functor $Z$ as an epimorphic image of $\CZ$, defined on the
category with left cancellations ($\ON$ extended by
$ins$, $del$, $fus$, $Tw$, $Br$) with values in $k\text{-vect}$.
\begin{prop}
There exists a functor $Z$, unique up to equivalence, on the category $\ON$
extended by $ins$, $del$, $fus$, $Tw$, $Br$, with values in $k\text{-vect}$, satisfying
conditions (i)--(xi) of Section~\ref{Modular}. It also has property
(xii). There is an epimorphism $\CZ\to Z$.
\end{prop}
\subsection{Construction of switches}
Now we extend the functor $Z$ to $EN$. We shall show that
there are exactly two such extensions, which differ by a sign of $Z(S)$.
Morphisms $S^{\pm 1}$ in $EN$ must be represented by isomorphisms
functorial in $X$
\[\operatorname{Hom}(X,{\bold f})\simeq Z\Biggl(\tennisu X{} \!\! \Biggr) \buildrel Z(S^{\pm 1})
\over\hbox to 45pt {\rightarrowfill} Z\Biggl(\tennisu X{} \!\! \Biggr) \simeq\operatorname{Hom}(X,{\bold f}). \]
Hence, they are induced by automorphisms $S^{\pm 1}:{\bold f}\to {\bold f}$ in $\CC$.
\begin{prop}
The axiom \eqref{mainSdiag_or} is satisfied, or equivalently the diagram
in Figure~\ref{realrela}, made of morphisms of left exact functors in $X,Y$
(quotients are taken in the category of left exact functors) with
$K=\operatorname{Ker}(\int^M M\otimes M\haj{{\ }} \to {\bold f})$
\begin{figure}[htbp]
\[
\begin{CD}
\operatorname{Hom}(Y\otimes X,{\bold f})
\text{\makebox[0mm][l]{\put(20,0){$\stackrel{\operatorname{Hom}(c_{XY},{\bold f})}\hbox to 60pt {\rightarrowfill}$}}}
@. \operatorname{Hom}(X\otimes Y,{\bold f}) \\
@V\operatorname{Hom}(Y\otimes X,S^{-1})VV @VV\wr V \\
\operatorname{Hom}(Y\otimes X,{\bold f}) @. \operatorname{Hom}(X\otimes Y,\int^N N\otimes N\haj{{\ }})/\operatorname{Hom}(X\otimes Y,K) \\
@V\wr VV @VV\wr V \\
\frac{\!\!\!\!\!\!{\displaystyle\operatorname{Hom}(Y\otimes X,\int^M M\otimes M\haj{{\ }})}\!\!\!\!\!\!}
{\displaystyle\operatorname{Hom}(Y\otimes X,K)} @.
\oint^N\operatorname{Hom}((X\otimes Y)\otimes N,N)/\operatorname{Hom}(X\otimes Y,K) \\
@V\wr VV @VV\operatorname{Hom}(a,N)V \\
\frac{\!\!\!\!\!\!\oint^M{\displaystyle\operatorname{Hom}((Y\otimes X)\otimes M,M)}\!\!\!\!\!\!}
{\displaystyle\operatorname{Hom}(Y\otimes X,K)} @.
\oint^N\operatorname{Hom}(X\otimes (Y\otimes N),N)/\operatorname{Hom}(X\otimes Y,K) \\
@V\operatorname{Hom}(a,M)VV @VV\wr V \\
\frac{\!\!\!\!\!\!\oint^M{\displaystyle\operatorname{Hom}(Y\otimes (X\otimes M),M)}\!\!\!\!\!\!}
{\displaystyle\operatorname{Hom}(Y\otimes X,K)} @.
\frac{\!\!\!\!\!\!\oint^{N,P}{\displaystyle\operatorname{Hom}(X\otimes P,N)
\otimes\operatorname{Hom}(Y\otimes N,P)}\!\!\!\!\!\!} {\displaystyle\operatorname{Hom}(X\otimes Y,K)} \\
@V\operatorname{Hom}(Y\otimes\nu^{-1}_{X\otimes M},\nu)VV @VV\wr V \\
\frac{\!\!\!\!\!\!\oint^M{\displaystyle\operatorname{Hom}(Y\otimes(X\otimes M),M)}\!\!\!\!\!\!}
{\displaystyle\operatorname{Hom}(Y\otimes X,K)} @.
\oint^P\operatorname{Hom}(Y\otimes(X\otimes P),P)/\operatorname{Hom}(X\otimes Y,K) \\
@V\operatorname{Hom}(a^{-1},M)VV @VV\operatorname{Hom}(a^{-1},P)V \\
\qquad \frac{\!\!\!\!\!\!\oint^M{\displaystyle\operatorname{Hom}((Y\otimes X)\otimes M,M)}\!\!\!\!\!\!}
{\displaystyle\operatorname{Hom}(Y\otimes X,K)}\qquad @.
\quad \oint^P\operatorname{Hom}((Y\otimes X)\otimes P,P)/\operatorname{Hom}(Y\otimes X,K)\quad \\
@V\wr VV @VV\wr V \\
\frac{\!\!\!\!\!\!{\displaystyle\operatorname{Hom}(Y\otimes X,\int^M M\otimes M\haj{{\ }})}\!\!\!\!\!\!}
{\displaystyle\operatorname{Hom}(Y\otimes X,K)} @.
\operatorname{Hom}(Y\otimes X,\int^P P\otimes P\haj{{\ }})/\operatorname{Hom}(Y\otimes X,K) \\
@V\wr VV @VV\wr V \\
\operatorname{Hom}(Y\otimes X,{\bold f})
\text{\makebox[0mm][l]{\put(20,0){$\stackrel{\operatorname{Hom}(Y\otimes X,S)}\hbox to 60pt {\rightarrowfill}$}}}
@. \operatorname{Hom}(Y\otimes X,{\bold f})
\end{CD}
\]
\caption{The realization of a relation for a torus with
two holes\label{realrela}}
\end{figure}
is commutative if and only if
\begin{equation}\label{*}
\invfourier
\end{equation}
for some morphism $\mu:I\to {\bold f}$.
\end{prop}
The relation~\eqref{S2=Br-1Tw-1_or} implies that
\[
\begin{array}{ccccc}
\int^X X\otimes X\haj{{\ }} & \buildrel \int c\over\hbox to 30pt {\rightarrowfill} &
\int^X X\haj{{\ }}\otimes X & \buildrel \int 1\otimes\nu\over\hbox to 30pt {\rightarrowfill} &
\int^X X\haj{{\ }}\otimes X \\
i_X\bigg\downarrow\quad && && \quad\bigg\downarrow i_{X\haj{{\ }}} \\
{\bold f} && \stackrel{S^{-2}}\hbox to 100pt {\rightarrowfill} && {\bold f}
\end{array}
\]
by \propref{proptipti}. That is,
\begin{equation}\label{S2=gamma}
S^{-2}=\gamma: {\bold f}\to{\bold f}
\end{equation}
(see \secref{intro}). So we find
\begin{equation}\label{!}
\quad
\unitlength=0.7mm
\begin{picture}(112,46)
\put(88,25){\oval(20,18)[b]}
\put(102,21.50){\oval(20,19)[t]}
\put(112,22){\line(0,-1){18}}
\put(92,4){\line(0,1){8}}
\put(98,35){\line(0,1){9}}
\put(78,25){\line(0,1){19}}
\put(88,45){\makebox(0,0)[cc]{${\bold f}$}}
\put(102,3){\makebox(0,0)[cc]{${\bold f}$}}
\put(102,31){\line(0,1){8}}
\put(107,37){\makebox(0,0)[cc]{$\mu$}}
\put(68,24){\makebox(0,0)[cc]{$=$}}
\put(44,27){\oval(20,18)[b]}
\put(28.50,22){\oval(21,20)[t]}
\put(39,14){\line(0,-1){10}}
\put(18,22){\line(0,-1){18}}
\put(28,32){\line(0,1){8}}
\put(44,46){\makebox(0,0)[cc]{${\bold f}$}}
\put(29,3){\makebox(0,0)[cc]{${\bold f}$}}
\put(23,38){\makebox(0,0)[cc]{$\mu$}}
\put(8,24){\makebox(0,0)[rc]{$S= \gamma^{-1} S^{-1} =$}}
\put(57,28){\line(-3,2){23}}
\put(57,32){\oval(10,8)[r]}
\put(55,43){\line(-5,-3){9}}
\put(43,36){\line(-3,-2){6}}
\put(57.50,32){\oval(7,8)[lt]}
\end{picture}
\end{equation}
Theorem 6.13 from \cite{Lyu:mod} states
that if morphisms (\ref{*}) and (\ref{!}) are inverse to each other, then
$\mu$ is an integral of the Hopf algebra ${\bold f}$. It is unique up to a constant.
The normalizing constant for
the integral $\mu$ is fixed up to a sign by \eqref{S2=gamma}. The relation
\[(ST)^3=\lambda S^2 \]
for $T=\int 1\otimes\nu:{\bold f}\to {\bold f}$ and some constant $\lambda$ is proven in
\cite{Lyu:mod}. Hence, setting the central charge $C$ equal to this constant
$\lambda$ on $Z(A_1)$, we get the relation~\eqref{ST3=CS2}. For an arbitrary
net $\Gamma$ of genus $g$ we set $Z(C_\Gamma)=\lambda^g$.
Thus, we have obtained a functor $EN \to k$-vect satisfying all the conditions of
\secref{Modular}. It is unique up to the choice of sign of the normalizing
constant.
\ifx\undefined\bysame
\newcommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\,}
\fi
\bibliographystyle{amsplain}
\section{Introduction}\label{sec:introduction}
Recently, quaternion-valued signal processing has been introduced to solve problems related to three or four-dimensional
signals, such as vector-sensor array signal processing~\cite{BihanN2004,liu14e,liu14k}, and wind profile
prediction~\cite{took09a,liu13j}. In many of these cases, the traditional complex-valued adaptive filtering operation needs to be extended to the quaternion domain to derive the corresponding adaptive algorithms. One key operation involved in the derivation of quaternion-valued adaptive algorithms is the gradient operator. Although there have been some derivations of this operator in the literature at different levels of detail, it is still not fully clear how this operator can be derived in the most general case and how it can be applied to various signal processing problems.
In this work, we will give a general derivation of the quaternion-valued gradient operator and then employ it in two different applications. The first is to combine it with the classic computational fluid dynamics (CFD) approach in wind profile prediction. Wind profile prediction is a classical signal prediction problem, which can be tackled using traditional linear and nonlinear (neural network) prediction techniques.
On the other hand, wind/atmospheric flow analysis is also a traditional problem in CFD,
which employs conservation laws, various physical models and numerical methods
to predict wind signals.
It can be more accurate than other approaches, but it is not without disadvantages.
For example, it is time-consuming, with a high computational complexity,
and it also suffers from uncertainties/errors in the initial/boundary conditions as well as in the models.
Therefore, we intend to combine the two approaches in a way that retains the
efficiency of the former and the accuracy of the latter.
As a preliminary study, we will apply a quaternion-valued linear predictor
to the data generated by the CFD method to show the feasibility of
the combined approach.
Another application of quaternions is the adaptive beamforming problem in vector sensor arrays. Adaptive beamforming has been studied extensively in the past for traditional sensor array systems~\cite{vantrees02a,liu10g}. With the introduction of vector sensor arrays, such as those consisting of crossed-dipoles and tripoles, adaptive beamforming has been extended to this area too~\cite{nehorai99a,liu14e}. A reference signal based adaptive beamformer will be set up employing the derived quaternion-valued least mean square (LMS) algorithm.
This paper is organized as follows. The general quaternion-valued gradient operator is derived and then applied to develop the quaternion-valued LMS (QLMS) as well as the augmented QLMS (AQLMS) algorithms in Section \ref{sec:QLMS and AQLMS}. Application of the algorithms to the data generated by CFD is provided in Section \ref{sec:cfd}, and their application in adaptive beamforming is studied in Section \ref{sec:vector_sensor}. Simulation results are presented in Section \ref{sec:simulations} and conclusions are drawn in Section \ref{sec:conclusions}.
\section{Derivation of a Quaternion-valued Gradient Operator and Adaptive Filtering}
\label{sec:QLMS and AQLMS}
\subsection{Differentiation with respect to a quaternion-valued vector}\label{sec:Differentiation to a quaternion-valued vector}
We first introduce the definition of differentiation with
respect to a quaternion $q$. Assume that $f(q)$ is a function of the quaternion variable $q$, expressed as
\begin{equation}
f(q)=f_{a} + if_{b} + jf_{c} + kf_{d}\;,
\end{equation}
where $f(q)$ is in general quaternion-valued.
$f(q)$, as well as its components $f_a(q)$, $f_b(q)$, $f_c(q)$, and $f_d(q)$,
can be viewed as functions of $q_a$, $q_b$, $q_c$ and $q_d$, which can be
expressed in terms of $q$ and its involutions \cite{ell2007a}:
\begin{eqnarray}
q^{i}&=&-iqi=q_{a} + q_{b}i - q_{c}j - q_{d}k\nonumber\\
q^{j}&=&-jqj=q_{a} - q_{b}i + q_{c}j - q_{d}k\nonumber\\
q^{k}&=&-kqk=q_{a} - q_{b}i - q_{c}j + q_{d}k.
\label{eq:involutions}
\end{eqnarray}
As a
consequence, we have
\begin{eqnarray}
&~&q_{a}=\frac{1}{4}(q+q^{i}+q^{j}+q^{k})\;,
q_{b}=\frac{1}{4i}(q+q^{i}-q^{j}-q^{k})\nonumber\\
&~&q_{c}=\frac{1}{4j}(q-q^{i}+q^{j}-q^{k})\;,
q_{d}=\frac{1}{4k}(q-q^{i}-q^{j}+q^{k})\nonumber\\
\end{eqnarray}
and
\begin{eqnarray}
q+iqi+jqj+kqk&=&-2q^{*}\nonumber\\
q-iqi-jqj-kqk&=&4q_{a}.
\end{eqnarray}
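Indeed, since by definition $q^{i}=-iqi$, $q^{j}=-jqj$ and $q^{k}=-kqk$, both identities follow by summing the component expressions in (\ref{eq:involutions}):
\begin{eqnarray}
q-q^{i}-q^{j}-q^{k}&=&-2q_{a}+2q_{b}i+2q_{c}j+2q_{d}k=-2q^{*}\nonumber\\
q+q^{i}+q^{j}+q^{k}&=&4q_{a}.\nonumber
\end{eqnarray}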
Given the above relations between the involutions and the real and imaginary
parts of $q$, $f(q)$ can be regarded as a function of $q$, $q^i$, $q^j$ and
$q^k$. Therefore, in what follows, we generally consider a function of $q$ and
its involutions,
i.e., $f(q,q^{i},q^{j},q^{k})$. Using the Taylor expansion of $f$, the
differential $df$ is given by
\begin{equation}\label{eq:df1}
df=\frac{\partial f}{\partial q}dq+\frac{\partial f}{\partial q^{i}}dq^{i}+\frac{\partial f}{\partial q^{j}}dq^{j}+\frac{\partial f}{\partial q^{k}}dq^{k}
\end{equation}
Note that $\partial f/\partial q$ and $dq$ are both quaternions; therefore they do
not commute in general. On the other hand,
$df=df_a+idf_b+jdf_c+kdf_d$.
Since $df_a$ is the differential of a real-valued function of the real variables $q_a$,
$q_b$, $q_c$ and $q_d$, we have
\begin{eqnarray}
&~&df_a(q_a,q_b,q_c,q_d)\nonumber\\
&~&=\frac{\partial f_a}{\partial q_a}dq_a+\frac{\partial f_a}{\partial q_{b}}dq_{b}+\frac{\partial f_a}{\partial q_{c}}dq_{c}+\frac{\partial f_a}{\partial q_{d}}dq_{d}\nonumber\\
&~&=\frac{\partial f_a}{\partial q_a}[\frac{1}{4}(dq+dq^{i}+dq^{j}+dq^{k})]\nonumber\\
&~&+\frac{\partial f_a}{\partial q_{b}}[\frac{1}{4i}(dq+dq^{i}-dq^{j}-dq^{k})]\nonumber\\&~&+\frac{\partial f_a}{\partial q_{c}}[\frac{1}{4j}(dq-dq^{i}+dq^{j}-dq^{k})]\nonumber\\
&~&+\frac{\partial f_a}{\partial q_{d}}[\frac{1}{4k}(dq-dq^{i}-dq^{j}+dq^{k})]\nonumber\\
&~&=\frac{1}{4}(\frac{\partial f_a}{\partial q_a}-i\frac{\partial f_a}{\partial q_{b}}-j\frac{\partial f_a}{\partial q_{c}}-k\frac{\partial f_a}{\partial q_{d}})dq\nonumber\\
&~&+\frac{1}{4}(\frac{\partial f_a}{\partial q_a}-i\frac{\partial f_a}{\partial q_{b}}+j\frac{\partial f_a}{\partial q_{c}}+k\frac{\partial f_a}{\partial q_{d}})dq^{i}\nonumber\\
&~&+\frac{1}{4}(\frac{\partial f_a}{\partial q_a}+i\frac{\partial f_a}{\partial q_{b}}-j\frac{\partial f_a}{\partial q_{c}}+k\frac{\partial f_a}{\partial q_{d}})dq^{j}\nonumber\\
&~&+\frac{1}{4}(\frac{\partial f_a}{\partial q_a}+i\frac{\partial f_a}{\partial q_{b}}+j\frac{\partial f_a}{\partial q_{c}}-k\frac{\partial f_a}{\partial q_{d}})dq^{k}
\end{eqnarray}
Similar expressions for $idf_b$, $jdf_c$ and $kdf_d$ can be derived in the same
way. The sum of the four expressions gives an expression for $df$. Comparing
the resulting expression with equation (\ref{eq:df1}), we observe that the
coefficients of $dq$ must be the same, hence:
\begin{eqnarray}
\frac{\partial f}{\partial q}&=&\frac{1}{4}(\frac{\partial f_a}{\partial q_a}
-i\frac{\partial f_a}{\partial q_{b}}-j\frac{\partial f_a}{\partial q_{c}}-k\frac{\partial f_a}{\partial q_{d}})\nonumber\\
&~&+\frac{i}{4}(\frac{\partial f_b}{\partial q_a}-i\frac{\partial f_b}{\partial q_{b}}-j\frac{\partial f_b}{\partial q_{c}}-k\frac{\partial f_b}{\partial q_{d}})\nonumber\\
&~&+\frac{j}{4}(\frac{\partial f_c}{\partial q_a}-i\frac{\partial f_c}{\partial q_{b}}-j\frac{\partial f_c}{\partial q_{c}}-k\frac{\partial f_c}{\partial q_{d}})\nonumber\\
&~&+\frac{k}{4}(\frac{\partial f_d}{\partial q_a}-i\frac{\partial f_d}{\partial q_{b}}-j\frac{\partial f_d}{\partial q_{c}}-k\frac{\partial f_d}{\partial q_{d}})\nonumber\\
&=&\frac{1}{4}(\frac{\partial f}{\partial q_a}-\frac{\partial f}{\partial q_{b}}i-\frac{\partial f}{\partial q_{c}}j-\frac{\partial f}{\partial q_{d}}k)
\end{eqnarray}
Therefore, $\dfrac{\partial f(q)}{\partial q}$ is given by
\begin{equation}
\dfrac{\partial f(q)}{\partial q}=\frac{1}{4}(\displaystyle\frac{\partial{f(q)}}{\partial q_a}-\displaystyle\frac{\partial{f(q)}}{\partial q_b} i-\displaystyle\frac{\partial{f(q)}}{\partial q_c} j- \displaystyle\frac{\partial{f(q)}}{\partial q_d} k)
\label{eq:general_definition}
\end{equation}
Expressions for $\partial f/\partial q^i$, $\partial f/\partial q^j$ and
$\partial f/\partial q^k$ can be derived similarly. Note that, in general
$\partial f(q)/ \partial q_b$ is a quaternion, therefore $\partial
f(q)/\partial q_b i \neq i \partial f(q) /\partial q_b$, i.e., the two factors
do not commute. The same argument applies to the last two terms in equation
(\ref{eq:general_definition}).
$f(q)$ can also be viewed as a function of $q^*$ and its
involutions. Following the same arguments, we can also find the
derivative of $f(q)$ with respect to $q^{*}$, which is given by
\begin{equation}
\dfrac{\partial f(q)}{\partial q^{*}}=\frac{1}{4}(\displaystyle\frac{\partial{f(q)}}{\partial q_a}+\displaystyle\frac{\partial{f(q)}}{\partial q_b} i+\displaystyle\frac{\partial{f(q)}}{\partial q_c} j+ \displaystyle\frac{\partial{f(q)}}{\partial q_d} k)
\label{eq:conj_general_definition}
\end{equation}
where $q^{*}=q_a-q_{b}i-q_{c}j-q_{d}k$.
With these results, we can then calculate the derivatives of some simple
quaternion functions. For example, we easily obtain
\begin{equation}
\frac{\partial q}{\partial q}=1,~\frac{\partial q}{\partial q^{*}}=-\frac{1}{2}\;.
\end{equation}
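As a quick check, take $f(q)=q$, so that $\partial f/\partial q_a=1$, $\partial f/\partial q_b=i$, $\partial f/\partial q_c=j$ and $\partial f/\partial q_d=k$; substituting into (\ref{eq:general_definition}) and (\ref{eq:conj_general_definition}) gives
\begin{eqnarray}
\frac{\partial q}{\partial q}&=&\frac{1}{4}(1-i\,i-j\,j-k\,k)=\frac{1}{4}(1+1+1+1)=1\nonumber\\
\frac{\partial q}{\partial q^{*}}&=&\frac{1}{4}(1+i\,i+j\,j+k\,k)=\frac{1}{4}(1-1-1-1)=-\frac{1}{2}\;.\nonumber
\end{eqnarray}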
On the other hand, the product rule does not hold in general, due to the
non-commutativity of quaternion products. However, it does hold for
the differentiation of quaternion-valued functions with respect to real variables.
Suppose $f(q)$ and $g(q)$ are two quaternion-valued functions of the quaternion variable $q$,
and $q_a$ is the real variable. Then we have the following result
\begin{eqnarray}
\frac{\partial f(q)g(q)}{\partial q_a}&=&\frac{\partial }{\partial q_a}(f_a+if_b+jf_c+kf_d)g\nonumber\\
&=&\frac{\partial f_a g}{\partial q_a}+i\frac{\partial f_b g}{\partial q_a}+j\frac{\partial f_c g}{\partial q_a}+k\frac{\partial f_d g}{\partial q_a}\nonumber\\
&=&(f_a\frac{\partial g}{\partial q_a}+\frac{\partial f_a}{\partial q_a}g)+i(f_b\frac{\partial g}{\partial q_a}+\frac{\partial f_b}{\partial q_a}g)\nonumber\\
&~&+j(f_c\frac{\partial g}{\partial q_a}+\frac{\partial f_c}{\partial q_a}g)+k(f_d\frac{\partial g}{\partial q_a}+\frac{\partial f_d}{\partial q_a}g)\nonumber\\
&=&(f_a+if_b+jf_c+kf_d)\frac{\partial g}{\partial q_a}\nonumber\\
&~&+(\frac{\partial f_a}{\partial q_a}+i\frac{\partial f_b}{\partial q_a}+j\frac{\partial f_c}{\partial q_a}+k\frac{\partial f_d}{\partial q_a})g\nonumber\\
&=&f(q)\frac{\partial g(q)}{\partial q_a}+\frac{\partial f(q)}{\partial q_a} g(q)
\end{eqnarray}
When the quaternion variable $q$ is replaced by a quaternion-valued vector $\textbf{w}$, given by
\begin{equation}
\textbf{w} = [w_1~w_2~\cdots~w_{M}]^{T}
\end{equation}
where $w_m = a_m+b_mi+c_mj+d_mk$, $m=1, ..., M$, the derivative of $f(\textbf{w})$ with respect to $\textbf{w}$ can be obtained by applying (\ref{eq:general_definition}) componentwise, as follows
\begin{eqnarray}
\dfrac{\partial f}{\partial \textbf{w}}=\frac{1}{4}\left[\begin{matrix}
\frac{\partial f}{\partial a_1}-\frac{\partial f}{\partial b_1} i-\frac{\partial f}{\partial c_1} j-\frac{\partial f}{\partial d_1} k\\
\frac{\partial f}{\partial a_2}-\frac{\partial f}{\partial b_2} i-\frac{\partial f}{\partial c_2} j-\frac{\partial f}{\partial d_2} k\\
\vdots \\
\frac{\partial f}{\partial a_{M}}-\frac{\partial f}{\partial b_{M}} i-\frac{\partial f}{\partial c_{M}} j-\frac{\partial f}{\partial d_{M}} k
\end{matrix}\right]
\label{eq:vector_definition}
\end{eqnarray}
Similarly, we define $\dfrac{\partial f}{\partial \textbf{w}^{*}}$ as
\begin{eqnarray}
\dfrac{\partial f}{\partial \textbf{w}^{*}}=\frac{1}{4}\left[\begin{matrix}
\frac{\partial f}{\partial a_1}+\frac{\partial f}{\partial b_1} i+\frac{\partial f}{\partial c_1} j+\frac{\partial f}{\partial d_1} k\\
\frac{\partial f}{\partial a_2}+\frac{\partial f}{\partial b_2} i+\frac{\partial f}{\partial c_2} j+\frac{\partial f}{\partial d_2} k\\
\vdots \\
\frac{\partial f}{\partial a_{M}}+\frac{\partial f}{\partial b_{M}} i+\frac{\partial f}{\partial c_{M}} j+ \frac{\partial f}{\partial d_{M}} k
\end{matrix}\right]
\label{eq:conj_vector_definition}
\end{eqnarray}
Obviously, when $M=1$, (\ref{eq:vector_definition}) and (\ref{eq:conj_vector_definition}) are reduced to (\ref{eq:general_definition}) and (\ref{eq:conj_general_definition}), respectively.
\subsection{The QLMS algorithm}
The output $y[n]$ and error $e[n]$ of a standard adaptive filter can be expressed as
\begin{eqnarray}
y[n]&=&{\textbf{w}^{T}[n]}{\textbf{x}[n]}\\
e[n]&=&d[n]-{\textbf{w}^{T}[n]}{\textbf{x}[n]},
\end{eqnarray}
where $\textbf{w}[n]$ is the adaptive weight vector with a length of $M$, $d[n]$ is the reference signal, $\textbf{x}[n]=[x[n-1], x[n-2], \cdots, x[n-M]]^{T}$ is the input sample sequence, and $\{\cdot\}^{T}$ denotes the transpose operation.
The cost function with the quaternion-valued error
is $J_0[n]=e[n]e^{*}[n]$. Its gradient is given by
\begin{eqnarray}
\nabla_{\textbf{w}^{*}}J_0[n]=\frac{\partial {J_0[n]}}{\partial \textbf{w}^{*}}
\label{eq:conj_gradient_cost_function}\\
\nabla_{\textbf{w}}J_0[n]=\frac{\partial {J_0[n]}}{\partial \textbf{w}}
\label{eq:gradient_cost_function}
\end{eqnarray}
with respect to $\textbf{w}^{*}[n]$ and $\textbf{w}[n]$, respectively. According to \cite{mandic2011a,brandwood83a},
the conjugate gradient gives the direction of steepest ascent on the optimization surface.
Therefore, the conjugate gradient $\nabla_{\textbf{w}^{*}}J_0[n]$ will be used to derive the update of the
coefficient weight vector.
First we have
\begin{eqnarray}
J_0[n]=d[n]d^{*}[n]-d[n]{\textbf{x}^{H}[n]}{\textbf{w}^{*}[n]}-{\textbf{w}^{T}[n]} {\textbf{x}[n]}d^{*}[n]\nonumber\\
+{\textbf{w}^{T}[n]} {\textbf{x}[n]}{\textbf{x}^{H}[n]}{\textbf{w}^{*}[n]}
\label{eq:extended_cost_function}
\end{eqnarray}
For different parts, we obtain the following results
\begin{equation}
\frac {\partial (d[n]d^{*}[n])}{\partial {\textbf{w}^{*}[n]}} = 0
\label{eq:part_1}
\end{equation}
\begin{equation}
\frac {\partial (d[n]{\textbf{x}^{H}[n]}{\textbf{w}^{*}[n]})}{\partial {\textbf{w}^{*}[n]}} = d[n]\textbf{x}^{*}[n]
\label{eq:part_2}
\end{equation}
\begin{equation}
\frac {\partial ({\textbf{w}^{T}[n]}{\textbf{x}[n]}d^{*}[n])}{\partial {\textbf{w}^{*}[n]}} = -\frac{1}{2}d[n]\textbf{x}^{*}[n]
\label{eq:part_3}
\end{equation}
\begin{equation}
\frac{\partial({\textbf{w}^{T}[n]} {\textbf{x}[n]}{\textbf{x}^{H}[n]}{\textbf{w}^{*}[n]})}{\partial {\textbf{w}^{*}[n]}}=\frac{1}{2}{\textbf{w}^{T}[n]}{\textbf{x}[n]}{\textbf{x}^{*}[n]}
\label{eq:part_4}
\end{equation}
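To illustrate how the identity $q+iqi+jqj+kqk=-2q^{*}$ enters these calculations, consider (\ref{eq:part_3}): writing $w_m$ and $x_m$ for the $m$-th entries of $\textbf{w}[n]$ and $\textbf{x}[n]$ and dropping the time index, the $m$-th summand becomes
\begin{eqnarray}
\frac{\partial (w_m x_m d^{*})}{\partial w_m^{*}}
&=&\frac{1}{4}\big(x_m d^{*}+i(x_m d^{*})i+j(x_m d^{*})j+k(x_m d^{*})k\big)\nonumber\\
&=&-\frac{1}{2}(x_m d^{*})^{*}=-\frac{1}{2}\,d\,x_m^{*}\;,\nonumber
\end{eqnarray}
in agreement with the right-hand side of (\ref{eq:part_3}).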
Then we have the final gradient result
\begin{equation}
\nabla_{\textbf{w}^{*}}J_0[n]=-\frac{1}{2}e[n]\textbf{x}^{*}[n].
\end{equation}
With the general update equation for the weight vector
\begin{equation}
\textbf{w}[n+1] = \textbf{w}[n]-\mu \nabla_{\textbf{w}^{*}}J_0[n],
\end{equation}
we arrive at the following update equation for the QLMS algorithm with step size $\mu$, where the constant factor $1/2$ has been absorbed into $\mu$
\begin{equation}
\textbf{w}[n+1] = \textbf{w}[n]+\mu(e[n]\textbf{x}^{*}[n]).
\label{eq:update_weight_vector}
\end{equation}
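To make the update concrete, the following minimal NumPy sketch implements one iteration of (\ref{eq:update_weight_vector}), with quaternions stored as length-4 real arrays in the order $(1,i,j,k)$ and the Hamilton product written out explicitly; all variable names and the zero initialization are illustrative choices, not part of the derivation.
\begin{verbatim}
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions p = (a,b,c,d) and q = (e,f,g,h)
    a, b, c, d = p
    e, f, g, h = q
    return np.array([a*e - b*f - c*g - d*h,
                     a*f + b*e + c*h - d*g,
                     a*g - b*h + c*e + d*f,
                     a*h + b*g - c*f + d*e])

def qconj(q):
    # Quaternion conjugate q* = a - bi - cj - dk
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qlms_step(w, x, d, mu):
    # One QLMS iteration: w, x are (M,4) arrays of taps/samples, d is (4,)
    y = np.zeros(4)
    for m in range(len(w)):        # y[n] = w^T x (no conjugate on x)
        y += qmul(w[m], x[m])
    e = d - y                      # e[n] = d[n] - y[n]
    for m in range(len(w)):        # w[n+1] = w[n] + mu e[n] x^*[n]
        w[m] = w[m] + mu * qmul(e, qconj(x[m]))
    return w, e

M = 16
w = np.zeros((M, 4))               # all-zero initialization
\end{verbatim}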
\subsection{The AQLMS algorithm}
Recently, to fully exploit the second-order statistics of the signals, an augmented formulation of the data vector has been proposed, first for complex-valued signals and then for quaternion-valued ones. For complex-valued signals, the augmented vector is composed of the original data and its conjugate, while for quaternion-valued signals, due to the existence of the three perpendicular quaternion involutions, the choice of the augmented vector is not unique. Without loss of generality, here we adopt the simplest formulation, combining the data vector $\textbf{x}[n]$ and its conjugate $\textbf{x}^{*}[n]$ into an augmented vector $\textbf{x}_{a}[n]=\big[\textbf{x}^{T}[n]~~\textbf{x}^{H}[n]\big]^{T}$~\cite{took10a}, where $\{\cdot\}^{H}$ denotes the combination of the operations $\{\cdot\}^{T}$ and $\{\cdot\}^{*}$ for a quaternion vector. For such a ``widely linear'' model, the quaternion-valued output for the conjugate part of the input is given by
\begin{equation}
\hat{y}[n]=\textbf{g}^{T}[n]\textbf{x}^{*}[n],
\label{eq:aug_output}
\end{equation}
where $\textbf{g}[n]$ denotes the weight vector for the conjugate part of the input $\textbf{x}[n]$.
As to the AQLMS algorithm, the update of the weight vector of the conjugate part $\textbf{g}[n]$
can be found with the same method as that of the QLMS in (\ref{eq:update_weight_vector}), i.e.
\begin{equation}
\textbf{g}[n+1]=\textbf{g}[n]+\mu(e[n]\textbf{x}[n]).
\label{eq:conj_update_weight_vector}
\end{equation}
With the augmented weight vector $\textbf{h}_{a}[n]$ defined as
\begin{equation}
\textbf{h}_{a}[n]=\big[\textbf{w}^{T}[n]~~\textbf{g}^{T}[n]\big]^{T},
\label{eq:aug_weight}
\end{equation}
we obtain the following update equation
\begin{equation}
\textbf{h}_{a}[n+1]=\textbf{h}_{a}[n]+\mu(e_{a}[n]{\textbf{x}_{a}}^{*}[n])
\label{eq:augmentedweight}
\end{equation}
where $e_{a}[n]=d[n]-{\textbf{h}_{a}}^{T}[n]\textbf{x}_{a}[n]$.
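The augmented update (\ref{eq:augmentedweight}) admits an equally short sketch, continuing and reusing the helpers \texttt{qmul}, \texttt{qconj} and \texttt{qlms\_step} from the previous listing (names again illustrative):
\begin{verbatim}
# Augmented QLMS: stack x[n] and its conjugate and update the
# augmented weight vector h_a = [w^T g^T]^T with the same rule.
x_a = np.concatenate([x, np.array([qconj(xm) for xm in x])], axis=0)
h_a = np.zeros((2 * M, 4))
h_a, e_a = qlms_step(h_a, x_a, d, mu)
\end{verbatim}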
\section{Application to CFD data}\label{sec:cfd}
CFD is a branch of fluid mechanics that uses numerical approaches to solve fluid flow problems.
\subsection{Fluid Dynamics Equations}\label{sec:cfd_equations}
The Navier-Stokes equations are the basis of fluid flow problems.
They are essentially the mathematical formulation of Newton's second law
applied to fluid motions.
The general expression of the equations is
\begin{equation}
\rho(\frac{\partial \textbf{u}}{\partial t}+(\textbf{u} \cdot \nabla) \textbf{u})=-\nabla{P}+\eta \Delta{\textbf{u}}
\label{eq:cfd}
\end{equation}
where $\textbf{u}$ is the fluid velocity at a particular spatial location at a
given time, $P$ is the pressure and $\rho$ is the fluid density.
The left-hand side of the equation is the acceleration of the fluid,
whilst the right-hand side collects the forces, namely the pressure gradient and the viscous force.
Together with the conservation of mass and suitable boundary conditions,
the Navier-Stokes equations can model a large class of fluid motions accurately~\cite{Ferziger2001a}.
\subsection{Turbulence}
The second term on the left hand side of equation (\ref{eq:cfd}) represents the
contribution from the advection of fluid particles to fluid acceleration,
and is customarily called
the inertial force. The second term on the right hand side represents the
viscous force.
The ratio of these two forces is defined as the Reynolds
number ($Re$).
As it turns out, when $Re$ is large, the flows tend to become unstable and
generate a spectrum of high frequency components in the velocity signal.
Such a regime of fluid motions is called turbulence.
Atmospheric flows, including
the wind fields around wind farms, are always turbulent~\cite{Pope2000a}. Due
to the presence of the high frequency components, the CFD calculation of the
velocity signal in turbulent wind fields becomes very time consuming unless
simplifying models are introduced.
\subsection{Direct numerical simulation (DNS)}
DNS solves the Navier-Stokes equations directly without any turbulence models.
The advantage of this method is that it is simple as well as accurate with complete
information. However, the computational cost can be very high if $Re$ is large.
Therefore, this method
is not yet applicable to practical situations, for example, the atmospheric
flows we will deal with \cite{Ferziger2001a}. Nevertheless, as a first step,
we choose to use DNS to generate the velocity signals in this study.
\subsection{Data Generation Using CFD}
The velocity signals are generated by DNS, where the Navier-Stokes equations are solved
using a pseudo-spectral method.
The CFD code is written in FORTRAN 90.
Running the code, we generate a time series of
three dimensional turbulent wind velocity fields in a 3-D periodic box.
We consider the flow field as an idealized wind field with the mean velocity
having been subtracted, and the signal normalized.
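To give the flavour of the pseudo-spectral method (derivatives evaluated in Fourier space, the nonlinear term in physical space), the following toy one-dimensional sketch integrates the viscous Burgers equation; it is an illustration only, unrelated to the 3-D FORTRAN code that actually generated our data, and all parameter values are chosen arbitrarily.
\begin{verbatim}
import numpy as np

# Toy 1-D pseudo-spectral solver for u_t + u u_x = nu u_xx on [0, 2*pi)
N, nu, dt = 256, 1e-2, 1e-3
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)   # spectral derivative operator
u = np.sin(x)                            # initial condition

for step in range(1000):
    u_hat = np.fft.fft(u)
    ux = np.real(np.fft.ifft(ik * u_hat))      # u_x in physical space
    rhs_hat = np.fft.fft(-u * ux)              # nonlinear term, de-aliasing omitted
    u_hat = (u_hat + dt * rhs_hat) / (1.0 - dt * nu * ik**2)  # implicit viscosity
    u = np.real(np.fft.ifft(u_hat))
\end{verbatim}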
\section{Application to Adaptive Beamforming}\label{sec:vector_sensor}
\subsection{Quaternionic array signal model}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.65\linewidth]{ula_array.eps}
\caption{A ULA with crossed-dipoles.
\label{fig:ula}}
\end{center}
\end{figure}
A uniform linear array (ULA) with $M$ crossed-dipole pairs is shown in Fig.~\ref{fig:ula}. These pairs are located along the y-axis with an adjacent spacing $d$, and at each location the two crossed components are parallel to the x-axis and y-axis, respectively. Assume there is a far-field incident signal with direction of arrival (DOA) defined by the angles $\theta$ and $\phi$ impinging upon the array from the y-z plane, so that $\phi=\pi/2$ or $\phi=-\pi/2$, and $0 \leq \theta \leq \pi/2$. As a result, the spatial steering vector for the signal is expressed as
\begin{eqnarray}
\textbf{S}_c(\theta,\phi)&=&[1,e^{-j2\pi d sin{\theta} sin{\phi}/{\lambda}},\nonumber\\
&~&\cdots, e^{-j2\pi (M-1)d sin{\theta} sin{\phi}/{\lambda}}]^{T}
\end{eqnarray}
where $\lambda$ is the wavelength of the incident signal. For a crossed dipole the spatial-polarization coherent vector can be given by~\cite{compton81a,li91a}
\begin{equation}
\textbf{S}_p(\theta,\phi,\gamma,\eta) =
\begin{cases}
[ -\cos{\gamma},\cos{\theta} \sin{\gamma} e^{j\eta} ] & \text{for $\phi=\pi/2$} \\
[ \cos{\gamma},-\cos{\theta} \sin{\gamma} e^{j\eta} ] & \text{for $\phi=-\pi/2$}
\end{cases}
\end{equation}
where $\gamma$ is the auxiliary polarization angle with $\gamma \in [0,\pi/2]$, and $\eta \in [-\pi,\pi]$ is the polarization phase difference.
The array structure can be divided into two sub-arrays. One is parallel to the x-axis and the other is parallel to the y-axis. The complex-valued steering vector of the x-axis sub-array is given by
\begin{equation}
\textbf{S}_x(\theta,\phi,\gamma,\eta) =
\begin{cases}
-\cos{\gamma}\textbf{S}_c(\theta,\phi) & \text{for $\phi=\pi/2$} \\
\cos{\gamma}\textbf{S}_c(\theta,\phi) & \text{for $\phi=-\pi/2$}
\end{cases}
\end{equation}
and for the y-axis it is expressed as
\begin{equation}
\textbf{S}_y(\theta,\phi,\gamma,\eta) =
\begin{cases}
\cos{\theta} \sin{\gamma} e^{j\eta}\textbf{S}_c(\theta,\phi) & \text{for $\phi=\pi/2$} \\
-\cos{\theta} \sin{\gamma} e^{j\eta}\textbf{S}_c(\theta,\phi) & \text{for $\phi=-\pi/2$}
\end{cases}
\end{equation}
Combining these two steering vectors together, we have a quaternion-valued composite steering vector given as below
\begin{equation}
\textbf{S}_q(\theta,\phi,\gamma,\eta)=\textbf{S}_x(\theta,\phi,\gamma,\eta)+i\textbf{S}_y(\theta,\phi,\gamma,\eta).
\end{equation}
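Numerically, $\textbf{S}_q$ can be assembled as in the sketch below (half-wavelength spacing assumed, as in our simulations; quaternions are stored as $(1,i,j,k)$ component arrays, the complex unit of $\textbf{S}_c$ plays the role of $j$, so that $\textbf{S}_x+i\textbf{S}_y$ has components $(\mathrm{Re}\,S_x,\ \mathrm{Re}\,S_y,\ \mathrm{Im}\,S_x,\ \mathrm{Im}\,S_y)$, and the function name is our own):
\begin{verbatim}
import numpy as np

def steering_quaternion(theta, phi, gamma, eta, M, d_over_lambda=0.5):
    m = np.arange(M)
    # Complex spatial steering vector S_c (complex unit acting as j)
    Sc = np.exp(-2j * np.pi * m * d_over_lambda
                * np.sin(theta) * np.sin(phi))
    sgn = 1.0 if np.isclose(phi, np.pi / 2) else -1.0
    Sx = -sgn * np.cos(gamma) * Sc
    Sy = sgn * np.cos(theta) * np.sin(gamma) * np.exp(1j * eta) * Sc
    # Quaternion S_q = S_x + i S_y as an (M,4) array of (1,i,j,k) parts
    return np.stack([Sx.real, Sy.real, Sx.imag, Sy.imag], axis=1)
\end{verbatim}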
The response of the array is
\begin{eqnarray}
r(\theta,\phi,\gamma,\eta)=\textbf{w}^{H}\textbf{S}_q(\theta,\phi,\gamma,\eta)
\end{eqnarray}
where $\textbf{w}$ is the quaternion-valued weight vector.
\subsection{Reference signal based adaptive beamforming}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.65\linewidth]{multi-time.eps}
\caption{Structure of a reference signal based adaptive beamformer.
\label{fig:multi_time_structure}}
\end{center}
\end{figure}
When a reference signal $d[n]$ is available, adaptive beamforming can be implemented by the standard adaptive filter structure, as shown in Fig.~\ref{fig:multi_time_structure}, where $x_m[n]$, $m=1, 2, \cdots, M$ are the received quaternion-valued vector sensor signals, $w_m[n]$, $m=1, 2, \cdots, M$ are the corresponding quaternion-valued coefficients, $y[n]$ is the beamformer output and $e[n]$ is the error signal.
\section{Simulation Results}\label{sec:simulations}
\subsection{Scenario one}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{time_domain.eps}
\caption{Prediction results using the QLMS algorithm.
\label{fig:time_signal}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{aug_time_domain.eps}
\caption{Prediction results using the AQLMS algorithm.
\label{fig:aug_time_signal}}
\end{center}
\end{figure}
In this part, both the QLMS and the AQLMS algorithms are applied to the wind
data generated by CFD simulations with a sampling frequency of 1 Hz.
The parameters are as follows. The step size is $\mu=2.5\times10^{-4}$ and the adaptive filter length is $L=16$. The prediction step is 2. The adaptive weight vector is initialized as an all-zero vector. Fig. \ref{fig:time_signal} and Fig. \ref{fig:aug_time_signal} show the results for the QLMS and AQLMS algorithms, respectively. As we can see from the results, both algorithms can track the change of the wind speed signal effectively.
\subsection{Scenario two}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{learning_curve_qlms.eps}
\caption{Learning curve using the QLMS algorithm for adaptive beamforming.
\label{fig:learning_curve_qlms}}
\end{center}
\end{figure}
Now we run simulations for the adaptive beamforming scenario. A vector sensor array with 10 crossed-dipoles and half-wavelength spacing is considered, and the output is obtained using the QLMS algorithm. The step size $\mu$ here is set to $1\times10^{-6}$. A desired signal with 20 dB SNR impinges from the broadside, and two interfering signals with a signal-to-interference ratio (SIR) of 0 dB arrive from $30^\circ$ and $-20^\circ$, respectively. All the signals have the same polarisation of $(\gamma, \eta)=(0,0)$. The learning curve averaged over 100 simulation runs is shown in Fig.~\ref{fig:learning_curve_qlms}, where we can see that the normalised error has reached about $-10$ dB, indicating an effective beamforming operation.
\section{Conclusion}\label{sec:conclusions}
In this paper, a general quaternion-valued gradient operator has been derived in detail, based on which two adaptive algorithms were developed: the QLMS and the AQLMS algorithms. These algorithms were applied to two different areas. One is the combination with the classic computational fluid dynamics (CFD) approach in wind profile prediction, and the other is the adaptive beamforming problem for vector sensor arrays. Simulation results have shown that the derived algorithms work effectively in different scenarios, highlighting the importance and usefulness of the derived gradient operator. One important note is that, although there have been some derivations of this operator in the literature at different levels of detail, this is the first derivation of its most general form with a solid theoretical basis.
\section{Acknowledgements}
This work is partially funded by National Grid UK.
\section{Introduction}
Optimal Mass Transport (OMT) is a well-studied problem with a variety of applications in a diverse set of fields, ranging from physics to computer vision and in particular statistics and data science.
Fueled by the appearance of OMT in transformation-based density estimation and random sampling algorithms in machine learning applications, several new numerical frameworks for solving this optimization problem (in high dimensions) have been recently proposed, see, e.g., \cite{korotin2021neural} and the references therein.
Our article adds to this expanding list by developing a new framework for the estimation of the $L^2$-optimal transport problem.
Our algorithm, which is based on Brenier's theorem, builds on recent developments in input convex neural networks and in physics-informed neural networks for solving PDEs. Before we describe the contributions of the article in more detail, we will briefly summarize the motivation of our investigations and recent developments in the field.
{\bf Density estimation and random sampling:}
Density estimation and random sampling are fundamental problems in machine learning and statistical inference. The density estimation problem is to estimate a smooth probability density based on a discrete finite set of observations. In traditional parametric density estimation techniques, we assume that the data is drawn from a known parametric family of distributions, and it only remains to best estimate these parameters. These methods require that we have a basis to believe that the data is indeed derived from a specific family of distributions and are consequently limited in their applicability to many modern tasks. One of the most ubiquitous parametric techniques is Gaussian Mixture Modeling~\citep{mclachlan1988mixture}.
Nonparametric techniques were first proposed by \cite{fix1951nonparametric} (\cite{silverman}) to move away from such rigid distributional assumptions. The most widely used approach is kernel density estimation, which dates back to~\citet{rosenblatt} and~\citet{parzen}.
Despite decades of work in this field, many challenges remain regarding the implementation and practical performance of kernel density estimators, including, in particular, bandwidth selection and the lack of local adaptivity, which results in a large sensitivity to outliers~\citep{loader}. These problems are particularly exacerbated in high dimensions by the curse of dimensionality.
Recently, diffeomorphic transformation-based algorithms have been proposed to tackle this problem~\citep{dinh2017,marzouk2016sampling,younes2020,bauer2017diffeomorphic}. The basic concept of transformation-based algorithms is to find a diffeomorphic mapping between a reference probability distribution and the unknown target distribution, from which the data is drawn. Consequently, transformation-based density estimation leads at the same time to an efficient generative model, as new samples from the estimated density can be generated at a low cost by sampling from the reference density and transforming the samples by the estimated transformation. The fundamental problem in diffeomorphic transformation-based approaches is how to estimate and select the transformation: from a theoretical point of view there exists an infinite set of transformations that map two given probability densities onto each other. Recently, several deep learning methods have been devised for this task, where Normalizing Flows (NF) stand out among these methods. Examples of such models include Real NVP \citep{dinh2017}, Masked Autoregressive Flows ~\citep{papamakarios2017masked}, iResNets ~\citep{behrmann2019invertible}, Flow++ \citep{ho2019flow++} and Glow ~\citep{kingma2018glow}. For a review
of the vast NF literature, we refer to the overview article~\citep{kobyzev2020normalizing}. Although these methods have been shown to perform well in density estimation applications, the interpretability of the obtained transformation is less clear; e.g., in Real NVP~\citep{dinh2017}, the solution selection is obtained by restricting the transformations to the class of diffeomorphisms with triangular Jacobians that are easy to invert, which is closely related to the Knothe-Rosenblatt rearrangement~\citep{knothe1957contributions,rosenblatt1952remarks}.
{\bf Optimal mass transport:} Optimal mass transport, on the other hand, formulates the transport map selection as the minimizer of a cost function~\citep{villani2008optimal,villani2003topics}. The optimal transportation cost induces a metric structure, the Wasserstein metric, on the space of probability densities and is sometimes referred to as the Earth Mover's Distance. This theory, which dates back to 1781, was originally formulated by the French mathematician Gaspard~\citet{monge1781memoire}. The difficulty in applying this framework to the proposed density estimation problem lies in solving the corresponding optimization problem, which in dimension greater than one is highly nontrivial. The fully discrete OMT problem (the optimal assignment problem) can be solved using linear programming and can be approximated by the Sinkhorn algorithm~\citep{cuturi,papadakis}. However, these algorithms do not lead to a continuous transformation map and thus cannot be used for the proposed diffeomorphic density estimation and generative modelling. Previous algorithmic solutions for the continuous OMT problem include fluid mechanics-based approaches~\citep{benamou2000computational}, finite element or finite difference-based methods~\citep{benamou2010two,benamou2019minimal} and steepest descent-based energy minimization approaches~\citep{angenent2003minimizing,carlier2010knothe,loeper2005numerical}.
In recent years, several deep learning methods have been deployed for solving the OMT problem. In these methods, the OMT problem is typically embedded in the loss function for the neural network model. Recent work by~\cite{OTICNN} proposed to approximate the OMT map as the solution of a min-max optimization using input convex neural networks (ICNNs), see~\cite{amos}. The min-max nature of this algorithm arises from the need to train an ICNN to represent both a convex function and the conjugate of that convex function. Building upon this approach, \citet{korotin2019wasserstein} imposed a cyclic regularization that converts the min-max optimization problem into a standard minimization problem: enforcing that the convex conjugate composed with the convex function itself is the identity avoids the min-max optimization.
This change results in a faster converging algorithm that scales well to higher dimensions and also prevents convergence to local saddle points and instabilities during training, as is the case in the min-max algorithm.
Another class of neural networks which have been proposed to solve OMT problems are Generative Adversarial Networks (GANs)~\citep{goodfellow2014generative}. GANs are defined through a min-max game of two neural networks, where one of the networks tries to generate new samples from a data distribution, while the other network judges whether these generated samples originate from the data population or not. Later, \citet{gulrajani2017improved} proposed using the Wasserstein-1 distance in GANs instead of the Jensen-Shannon divergence between the generated distribution and the data distribution as in the original formulation. They demonstrated that this new loss function leads to better training stability, attributed to the Wasserstein metric being well defined even when the two distributions do not share the same support.
{\bf Contributions:}
In this paper, we propose a different deep learning-based framework to approximate the optimal transport maps.
The approach we present relies on Brenier's celebrated theorem~\citep{brenier}, thereby reducing the optimal transport problem to that of solving a partial differential equation: a Monge-Ampere type equation. We frame this PDE in the recently
developed paradigm of Physics Informed Neural Networks (PINNs)~\citep{raissi}. Similar to other deep learning-based algorithms, our framework directly inherits the dimensional scalability of neural networks~\citep{shin2020convergence}, which traditional finite element or finite difference methods for solving PDEs do not possess. Brenier's theorem further states that the optimal transport map is given by the gradient of a convex function, the Brenier potential. To incorporate this information in our PINN approach, we parameterize the Brenier potential using an ICNN, thereby guaranteeing its convexity.
We test the accuracy of our OMT solver on numerous synthetic examples for which analytical solutions are known. Our experiments show that our algorithm indeed approximates the true solution well, even in high dimensions. To further quantify the performance of the new framework, we compare it to
two other deep learning-based algorithms, for which we guided the selection by the results of the recent benchmarking paper by \citet{korotin2021neural}, which evaluates the methods presented in~\cite{seguy2017large,nhan2019threeplayer,taghvaei20192,OTICNN,liu2019wasserstein,mallasto2019q,korotin2019wasserstein}. We restricted our comparison
to the algorithms of \citet{OTICNN} and \citet{korotin2019wasserstein}, as these two showed the best performance in this benchmark. Our results showed that the newly proposed method significantly outperforms these methods in terms of accuracy.
As an explicit application of our solution of OMT, we focus on the density estimation problem.
In synthetic examples, we show that we can estimate the true density based on a limited amount of samples. In the appendix we also demonstrate the generative power of our framework by combining it with a traditional autoencoder and applying it to the MNIST data set.
In accordance with the best practices for reproducible research, we are providing an open-source version of the code, which is publicly available on \href{https://github.com/4m4npr33t/PICANNs}{github}.
\section{OMT using Deep Learning}
In this section, we will present our framework for solving the Optimal Mass Transport (OMT) problem. Our approach will combine methods of deep learning with
the celebrated theorem of Brenier, which reduces the solution of the OMT problem to solving a Monge-Ampere type equation. To be more precise, we will
tackle this problem by embedding the Monge-Ampere equation into the broadly applicable concept of Physics Informed Neural Networks.
\subsection{Mathematical Background of OMT}
We start by summarizing the mathematical background of OMT, including a description of Brenier's theorem. For more information we refer to the vast literature on OMT, see e.g., \cite{villani2003topics,villani2008optimal}.
Let $\Omega$ be a convex and bounded domain of $\mathbb{R}^n$ and let $dx$ denote the standard measure on $\mathbb{R}^n$.
For simplicity, we restrict our presentation to the set $\mathcal{P}(\Omega)$ of all absolutely continuous measures on $\Omega$,
i.e., $\mathcal{P}(\Omega) \ni\mu=fdx$ with $f\in L^1(\Omega)$, such that $\int_{\Omega} f dx=1$.
From here on, we will identify the measure $\mu$ with its density function $f$.
We aim to minimize the cost
of transporting a density $\mu$ to a density $\nu$ using a (transport) map $T$, which leads to the so-called Monge Optimal Transport Problem. To keep the presentation as simple as possible, we will consider only the special case of a quadratic cost function.
\begin{dfn}[$L^2$-Monge Optimal Transport Problem] Given $\mu, \nu \in \mathcal{P}(\Omega)$,
minimize
$$\mathbb{M}(T) = \int_{\Omega} \|x- T(x)\|^2 d\mu(x)$$
over all $\mu$-measurable maps $T: \Omega \to \Omega$ subject to $\nu = T_*\mu$.
We will call an optimal $T$ an optimal transport map.
\end{dfn}
Here, the constraint is formulated in terms of the push forward action of a measurable map $T: \Omega\to \Omega$, which is defined via
\begin{equation}
T_*\mu(B)=\mu(T^{-1}(B)),
\end{equation}
for every measurable set $B\subset \Omega$.
By a change of coordinates, the constraint $T_*\mu=T_*(fdx)=\nu=g dx$ can be thus reduced to the equation
\begin{equation}\label{eq:pushforward}
f(x) = g(T(x))|\operatorname{det}(D T(x))|.
\end{equation}
The above equation can be also expressed via the pullback action as $\mu=T^*\nu$.
The existence of an optimal transport map is not always guaranteed. We will, however, see that in our situation, i.e., for absolutely continuous measures, the existence and uniqueness are indeed guaranteed. First, we will introduce a more general formulation of the Monge problem, the Kantorovich formulation of OMT.
Therefore, we define the space of all transport plans $\Pi(\mu, \nu)$, i.e., of all measures on the product space
$\Omega\times \Omega$, such that the first marginal is $\mu$ and the second marginal is $\nu$. The OMT problem in the Kantorovich formulation then reads as:
\begin{dfn}[$L^2$-Kantorovich's Optimal Transport Problem]
Given $\mu, \nu \in \mathcal{P}(\Omega)$, minimize
$$
\mathbb{K}(\pi) = \int_{\Omega \times \Omega} \|x- y\|^2d\pi(x, y)
$$
over all $\pi \in \Pi(\mu,\nu)$.
\end{dfn}
Note that the squared $L^2$-Wasserstein metric $W_{2}^2(\mu,\nu)$ between $\mu$ and $\nu$ is defined as the infimum of $\mathbb{K}$.
We will now formulate Brenier's theorem, which guarantees the existence of an optimal transport map and will be the central building block of our algorithm:
\begin{thm}[\cite{brenier}] \label{thm:Brenier} Let $\mu, \nu \in \mathcal{P}(\Omega)$. Then there exists a unique optimal transport plan $\pi^* \in \Pi(\mu, \nu)$, which is given by $\pi^* = (\operatorname{id}\times T)_*\mu$,
where $T=\nabla u$ is the gradient of a convex function $u$ that pushes $\mu$ forward to $\nu$, i.e., $(\nabla u)_*\mu= \nu$. The inverse $T^{-1}$ is also given by the gradient of a convex function that is the Legendre transform of the convex function $u$.
\end{thm}
Thus, Brenier's Theorem guarantees the existence and the uniqueness of the optimal transport map of the OMT problem.
Consequently, we can determine this optimal transport map by solving for the function $u$ in the form of a Monge-Ampère equation:
\begin{equation}\label{eq:Monge-Ampere}
\operatorname{det}(D^2(u)(x))\cdot g(\nabla u(x)) = f(x)
\end{equation}
where $D^2$ is the Hessian, $\mu=fdx$ and $\nu=gdx$.
We obtain~\eqref{eq:Monge-Ampere} directly from~\eqref{eq:pushforward} using the constraint that $T=\nabla u$ as required by Brenier's theorem. We will also refer to this map as the Brenier map. This map is a diffeomorphism as it is a gradient of a strictly convex function.
Using methods of classical numerical analysis, Brenier's theorem has been used, e.g., in~\cite{peyre2019computational} to obtain a numerical framework for the continuous OMT problem.
In the following section we will propose a new discretization to this problem, which will make use of recent advances in deep learning.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/PICANN-Fig1.jpg}
\caption{PICANN architecture. We present how a combination of two ICNN networks can be used to learn the forward and the inverse map between two distributions. Both these networks are trained independently with their respective loss functions. The inverse network uses the gradient of the output of the first network as its input.}
\label{fig:PICANN_Architecture}
\end{figure*}
\subsection{Solving OMT using PINNs} \label{sec:OMTviaPINN}
Physics Informed Neural Networks (PINNs) were proposed by~\citet{raissi} to solve general nonlinear partial differential equations (PDEs). The basic concept is to use the universal approximation property of deep neural networks to represent the solution of a PDE via a network. Using
the automatic differentiation capability of modern machine learning frameworks, a loss function is formulated, such that its minimizer solves the PDE in a weak sense. Such a loss function encodes the structured information, which results in the amplification of the information content of the data the network sees~\citep{raissi}. This formulation of the PDE results in good generalization even when only few training examples are available.
PINNs have found widespread applications in the short period of time since their introduction. These applications cover a wide variety of PDEs, including the Navier-Stokes equations~\citep{jin}, nonlinear stochastic PDEs~\citep{zhang} and Allen-Cahn equations~\citep{mcclenny}.
In this work, we propose to use the PINN approach to solve the Monge-Ampere equation, as presented in~\eqref{eq:Monge-Ampere}, and hence implicitly the OMT problem. This equation has been extensively studied and the properties of its solutions are well established. By Theorem~\ref{thm:Brenier}, we know that the solution is given by a convex function $u$. Recently, \cite{amos} proposed a new architecture of neural networks, Input Convex Neural Networks (ICNNs), that explicitly constrains the function approximated by the network to be convex. Consequently, this architecture naturally lends itself to our proposed application, as it directly encodes Brenier's theorem.
In the ICNN architecture, the activation function is a nondecreasing convex function and the internal weights ($W_n^{(x)}$) are constrained to be non-negative; see Figure~\ref{fig:PICANN_Architecture} for a schematic description of this class of networks. This architecture is derived from two simple facts: non-negative sums of convex functions are convex, and the composition of a convex nondecreasing function with a convex function is again convex.
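For illustration, the following is a minimal PyTorch sketch of such an ICNN. The layer sizes, the initialization scale and the clamping-based weight constraint are placeholder choices for this sketch and not necessarily the exact configuration used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

def softplus_alpha(s, alpha=1.1):
    # convex, non-decreasing activation (cf. the network details below)
    return F.softplus(s).pow(alpha)

class ICNN(nn.Module):
    """Input Convex Neural Network: the scalar output is convex in x."""
    def __init__(self, dim, width=128, depth=3):
        super().__init__()
        # unconstrained weights applied directly to the input x
        self.Wx = nn.ModuleList(
            [nn.Linear(dim, width) for _ in range(depth)]
            + [nn.Linear(dim, 1)])
        # weights on previous activations; clamped non-negative in forward()
        self.Wz = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(width, width))
             for _ in range(depth - 1)]
            + [nn.Parameter(0.01 * torch.randn(1, width))])

    def forward(self, x):
        z = softplus_alpha(self.Wx[0](x))
        for k, Wz in enumerate(self.Wz):
            # a non-negative sum of convex functions stays convex
            z = self.Wx[k + 1](x) + F.linear(z, Wz.clamp(min=0))
            if k < len(self.Wz) - 1:
                # convex non-decreasing composed with convex stays convex
                z = softplus_alpha(z)
        return z  # shape (batch, 1)
\end{verbatim}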
In the following equation, we assume that we are given $\mu=fdx$ and $\nu =gdx$.
The loss function corresponding to~\eqref{eq:Monge-Ampere} is then given by
\begin{equation}
\| \operatorname{det}(D^2(u))\cdot g(\nabla u) - f \|_{L^2}^2
\end{equation}
where $u$ is expressed as the output of an ICNN of sufficient depth and width.
Once we have estimated the optimal transport map, the $L^2$-Wasserstein metric between $\mu$ and $\nu$ is given by
\begin{equation}\label{L2-loss}
\int \|x-\nabla u(x)\|^2\; g(x) dx.
\end{equation}
We call this combination of the PINN approach with the ICNN structure, Physics Informed Convex Artificial Neural Networks (PICANNs).
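To make this concrete, a Monte Carlo version of this loss can be assembled with automatic differentiation as sketched below; the density callables f and g, the collocation sampling and the row-by-row Hessian assembly are choices of this sketch rather than a fixed API.
\begin{verbatim}
import torch

def picann_loss(u, x, f, g):
    # Monte Carlo estimate of || det(D^2 u) g(grad u) - f ||_{L^2}^2
    # u: ICNN mapping (batch, d) -> (batch, 1); x: collocation points
    x = x.requires_grad_(True)
    grad_u = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    d = x.shape[1]
    rows = [torch.autograd.grad(grad_u[:, i].sum(), x,
                                create_graph=True)[0] for i in range(d)]
    hess = torch.stack(rows, dim=1)            # (batch, d, d)
    residual = torch.det(hess) * g(grad_u) - f(x)
    return (residual ** 2).mean()
\end{verbatim}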
In several applications, we are interested in computing the inverse transformation
at the same time. By a duality argument, we know that this map is also given by the gradient of a convex function. Thus, we use a second ICNN to compute the inverse optimal transport map ($\nabla v$) by solving the minimization problem:
\begin{equation}\label{InverseLoss}
\| \nabla v (\nabla u (x))- x\|_{L^2},
\end{equation}
where $\nabla u$ is the optimal transport map solving $(\nabla u)_*\mu=\nu$.
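A corresponding sketch for training the second network on the loss~\eqref{InverseLoss}, reusing the conventions of the previous snippet:
\begin{verbatim}
def inverse_loss(v, u, x):
    # || grad v(grad u(x)) - x ||^2; the forward potential u is frozen
    x = x.requires_grad_(True)
    grad_u = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    y = grad_u.detach().requires_grad_(True)
    grad_v = torch.autograd.grad(v(y).sum(), y, create_graph=True)[0]
    return ((grad_v - x.detach()) ** 2).sum(dim=1).mean()
\end{verbatim}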
\subsection{Diffeomorphic Random Sampling and Density Estimation}
In many applications, such as Bayesian estimation, we can evaluate the density rather easily, but generating samples from a given density is not trivial. Traditional methods include Markov Chain Monte Carlo methods, e.g., the Metropolis-Hastings algorithm~\citep{hastings1970monte}. An alternative idea is to use diffeomorphic density matching between the given density $\nu$ and a standard density $\mu$ from which samples can be drawn easily. Once we have calculated the transport map, standard samples are transformed by the push-forward diffeomorphism to generate samples from the target density $\nu$. This approach has been followed in several articles, where the transport map selection was based on the Fisher-Rao metric~\citep{bauer2017diffeomorphic} and on the Knothe-Rosenblatt rearrangement~\citep{marzouk2016sampling}. The efficient implementation of the present paper directly leads to an efficient random sampling algorithm in high dimensions.
We now recall the density estimation problem using the OMT framework. We are given samples $x_i$ drawn from an unknown density $\mu \in \mathcal{P}(\Omega)$ that we aim to estimate. The main idea of our algorithm is to represent
the unknown density as the pullback via a (diffeomorphic) Brenier map $\nabla u$ of a given background density $\nu=gdx$, i.e., $(\nabla u)^*\nu=\mu$ or equivalently $(\nabla u)_*\mu=\nu$.
As we do not have an explicit target density, but only a finite number of samples, we need to find a replacement for the $L^2$-norm used in \eqref{L2-loss} to estimate the transport map $\nabla u$. We
do this by
maximizing the log-likelihood of the data with respect to the density $(\nabla u)^*\nu$:
\begin{equation}\label{eq:16}
\frac{1}{N} \sum_{i} \log\left(\operatorname{det}(D^2(u(x_i)))\cdot g(\nabla u(x_i)) \right).
\end{equation}
Using our PINNs framework, we represent the convex function $u$ again via an ICNN, which serves as an implicit regularizer.
This equation can be alternatively interpreted as minimizing
the empirical Kullback-Leibler divergence
between $\mu$ and the pullback of the background density $\nu$.
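In code, this maximum-likelihood objective is structurally similar to the PDE residual above; here log_g is assumed to be the log-density of the background measure (e.g., a unit Gaussian), and the batched logdet relies on the Hessian being positive definite for a strictly convex $u$.
\begin{verbatim}
def nll_loss(u, samples, log_g):
    # negative log-likelihood of the data under the pullback (grad u)^* nu
    x = samples.requires_grad_(True)
    grad_u = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    rows = [torch.autograd.grad(grad_u[:, i].sum(), x,
                                create_graph=True)[0]
            for i in range(x.shape[1])]
    hess = torch.stack(rows, dim=1)
    return -(torch.logdet(hess) + log_g(grad_u)).mean()
\end{verbatim}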
To generate new samples from the estimated density, we use the inverse map to transform samples from the background density $\nu$. We calculate the inverse map using a second neural network and the explicit loss function given by~\eqref{InverseLoss}.
\begin{figure*}[!htb]
\centering
\begin{subfigure}[t]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/New_True_Annulus.png}
\caption{Ground truth}
\label{subfig:True_Annulus}
\end{subfigure}
\begin{subfigure}[t]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/PICANN.png}
\caption{Est. pdf (PICANN)}
\label{subfig:Estimated_Annulus}
\end{subfigure}
\vspace{.1cm}
\begin{subfigure}[t]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/W2Gen.png}
\caption{Est. pdf (W2 Gen)}
\label{subfig:Diff}
\end{subfigure}
\begin{subfigure}[t]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/OT-ICNN.png}
\caption{Est. pdf (OT ICNN)}
\label{subfig:OTICNN_pdf}
\end{subfigure}
\begin{subfigure}[t]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/Annulus_True_Transport_Map.png}
\caption{Ground truth}
\label{subfig:True_Map}
\end{subfigure}
\begin{subfigure}[t]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/PICANN_Deformation.png}
\caption{Est. map (PICANN)}
\label{subfig:Estimated_Map}
\end{subfigure}
\begin{subfigure}[t]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/W2Gen_Deformation.png}
\caption{Est. map (W2 Gen)}
\label{subfig:W2Gen_Estimated_Map}
\end{subfigure}
\begin{subfigure}[t]{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/OT-ICNN_Deformation.png}
\caption{Est. map (OT ICNN)}
\label{subfig:OT_ICNN_Estimated_Map}
\end{subfigure}
\caption{Validation: Panel~(a) shows the true annulus distribution; the densities estimated with the PICANN, W2-GEN and OT-ICNN approaches are shown in Panels~(b), (c) and (d), respectively. Panel~(e) shows the analytical optimal transport map between the unit Gaussian and the annulus distribution. The estimated optimal transport map using the PICANN approach is presented in Panel~(f), and the maps estimated using the W2-GEN and OT-ICNN methods are shown in Panels~(g) and (h), respectively.}
\label{fig:Annulus_Results}
\end{figure*}
\section{Experimental Results}
In this section, we will detail our implementation and present several experiments demonstrating both the applicability and accuracy of our framework. In particular, we will compare our results in several experiments to state-of-the-art deep learning-based OMT solvers, and we will show that we outperform these methods in terms of accuracy.
\subsection{Network details} \label{sec:NetworkDetails}
As explained in Section~\ref{sec:OMTviaPINN}, we use an ICNN architecture for both the forward and the backward map in all of our experiments, cf. Figure~\ref{fig:PICANN_Architecture}. As with every deep learning approach, we need to tune the hyperparameters, including the width/depth of the network, activation functions and batch size. The width of the network needs to increase with the dimension of the ambient space of the data to ensure sufficient flexibility. For our experiments in lower dimensions, we used a network with three hidden layers with 128 neurons in each layer, whereas for the experiments in 30 dimensions, we used a network with four hidden layers with 128 neurons in each layer. To initialize the network, we first train the networks to learn the identity transformation, i.e., $\nabla u = I$, which we use as the initial starting point for all our experiments. In all our experiments, 10,000 target samples were used.
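The identity initialization mentioned above can be sketched as a short pre-training loop; the batch size, the number of steps and the sampling domain are placeholder choices of this sketch.
\begin{verbatim}
def pretrain_identity(u, dim, steps=2000, lr=1e-3):
    # warm start: fit grad u(x) ~ x, i.e., u(x) ~ ||x||^2 / 2,
    # so that training starts from the identity transport map
    opt = torch.optim.Adam(u.parameters(), lr=lr)
    for _ in range(steps):
        x = torch.randn(256, dim).requires_grad_(True)
        gu = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
        loss = ((gu - x) ** 2).sum(dim=1).mean()
        opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}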
To guarantee the convexity of the output function, the activation functions need to be convex and non-decreasing. Since simple ReLUs are not strictly convex and have a vanishing second derivative almost everywhere, we experimented with the family of Rectified Power Units (RePUs), the log-exponential family and the Softplus function. The Softplus function to the power of $\alpha$, which is defined via
$
\operatorname{Softplus}^{\alpha}(x) = \left(\log{\left(1 + \exp(x)\right)}\right)^{\alpha}
$, turned out to be best suited for our applications, where we chose $\alpha=1.1$. In particular, our experiments suggested that networks with this activation function were able to generalize well to regions where no or only limited training data were available.
\subsection{Validation and comparison to other methods}\label{sec:Valdidation}
To demonstrate the accuracy of our implementation, we present several experiments for which analytic solutions to the OMT problem are available. To further quantify the quality of our results, we compare them to results obtained with two state-of-the-art deep learning-based OMT solvers: \cite{OTICNN} and \cite{korotin2019wasserstein}. We chose these two specific algorithms among the plethora of available OMT solvers based on the recent benchmark paper by~\cite{korotin2021neural}.
Since both of these algorithms are also based on an ICNN structure, we were able to choose the same architecture with the same hyperparameters for all three algorithms, thereby ensuring a fair comparison. We want to emphasize that these parameters could be further fine-tuned for all the algorithms and specific experiments to improve the results. To demonstrate the scalability of our algorithm, we perform the same experiments in dimensions 2, 3, 5, 8, 15 and 30.
We do not present comparisons of our approach to more traditional OMT algorithms such as the Sinkhorn algorithm~\citep{cuturi2013sinkhorn} or the linear programming approaches~\citep{peyre2019computational}, as these frameworks, although they approximate the OMT distances, do not compute the continuous optimal transport map, which is essential for the proposed density estimation. While finite element or finite difference based Monge-Ampere solvers, see e.g.~\citep{benamou2019minimal,jacobs2020fast,benamou2000computational}, calculate the continuous OMT map, they are not suitable in dimensions greater than two or three.
To quantify the quality of an estimated transport map $T$, we present two quantities: the percentage error between the analytic Wasserstein distance and the approximated distance, and the $\mathcal{L}^2$ unexplained variance percentage ($\mathcal{L}^2$-UVP), which is given by
\begin{equation}\label{eq:uvp}
\mathcal{L}^2\mbox{-}\operatorname{UVP}(T) = 100 \cdot \|T - T^*\|^2_{\mathcal{L}^2(\mu)}/\operatorname{Var}(\nu),
\end{equation}
where $T^*$ denotes the (analytic) optimal transport map.
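A sample-based evaluation of~\eqref{eq:uvp} is straightforward; in the sketch below, the $\mathcal{L}^2(\mu)$ norm is estimated over samples from $\mu$, and $\operatorname{Var}(\nu)$ is taken as the total variance (trace of the covariance) of $\nu$, which is an assumption of this sketch.
\begin{verbatim}
def l2_uvp(T_hat, T_star, x_mu, var_nu):
    # x_mu: samples from the source measure mu
    err = ((T_hat(x_mu) - T_star(x_mu)) ** 2).sum(dim=1).mean()
    return 100.0 * err / var_nu
\end{verbatim}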
Our first series of experiments is the same as in the benchmark paper~\citep{korotin2021neural}:
we use the gradient of a random convex function to transport the unit Gaussian to a random density. By Brenier's theorem, the gradient of a convex function is the optimal transport map between a density and its pushforward, so the ground-truth map is known by construction. In each dimension, we repeated this experiment 20 times to compute the error statistics, which are presented in Table~\ref{tbl:L2UVP}. Whereas all three algorithms seem to work well for this experiment, PICANNs consistently outperform the other algorithms. As already observed in \cite{korotin2021neural}, this experiment favors the ICNN architecture, as the true solution was chosen to be of the same nature, which explains the nearly perfect performance of all three algorithms. Next, we turn to cases where the analytical solution is known in closed form. The first is the special case where both densities are from a family of Gaussian distributions. In that case, the OMT map is simply given by an affine transform and the OMT distance is again given in closed form. We again repeat this experiment 20 times for each dimension,
where we generate Gaussian distributions with a random mean and covariances. Here the means are sampled from a uniform distribution on $[-1,1]$. To construct the random covariance matrices, we recall that we need to enforce the matrix to be positive definite and symmetric. Therefore, we generate a random matrix $A$ of dimension $d\times 3d$, where $d$ is the dimension of the space and where the entries are i.i.d. chosen from a uniform distribution on $[0,0.75]$. Then, a random covariance matrix can be constructed by letting $\Sigma=AA^T$ (the particular form of $\Sigma$ almost surely guarantees positive definiteness).
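This construction of random Gaussians is simple to reproduce; a NumPy sketch:
\begin{verbatim}
import numpy as np

def random_gaussian(d, rng=None):
    rng = rng or np.random.default_rng()
    mean = rng.uniform(-1.0, 1.0, size=d)
    A = rng.uniform(0.0, 0.75, size=(d, 3 * d))
    cov = A @ A.T   # symmetric, almost surely positive definite
    return mean, cov
\end{verbatim}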
The results and comparisons with the other two methods are again presented in Table~\ref{tbl:L2UVP}. In general, all three algorithms still lead to a good approximation, where one can see that W2Gen and PICANNs are performing significantly better than OT-ICNN.
The experiments so far do not utilize any complex densities as target distributions. To further validate our algorithm, we choose a more challenging problem: an annulus density for which we know the transport map in closed form. The annulus distribution is given by a push forward of the Gaussian distribution by a gradient of a radially symmetric convex function. This distribution is given by
$f = g((X^TX)X)\cdot 3(X^TX)^d$, where $g$ is the unit Gaussian and $d$ is the number of dimensions. One can easily check that the transport map $X \mapsto (X^T X)X$ is the gradient of the convex function $\frac{1}{4} (X^T X)^2$. Thus, we again have access to the optimal transport map; see Figure \ref{fig:Annulus_Results} for a visualization in dimension two. In this figure and in Table \ref{tbl:L2UVP}, one can see that PICANNs outperform both other algorithms by orders of magnitude.
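Samples from this annulus density can be generated by pushing unit-Gaussian draws through the inverse Brenier map: with $T(x) = (x^Tx)x$ one obtains $T^{-1}(z) = \|z\|^{-2/3}z$, which underlies the following sketch.
\begin{verbatim}
def sample_annulus(n, d, rng=None):
    rng = rng or np.random.default_rng()
    z = rng.standard_normal((n, d))
    r = np.linalg.norm(z, axis=1, keepdims=True)
    return z / r ** (2.0 / 3.0)   # inverse of T(x) = (x^T x) x
\end{verbatim}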
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{8}{|c|}{\textbf{$\mathcal{L}^2$- Unexplained Variance Percentage ($\mathcal{L}^2$-UVP)}} \Tstrut\Bstrut \\
\hline
\multirow{3}{*}{\textbf{Experiment}} & \multirow{3}{*}{\textbf{Method}} & \multicolumn{6}{c|}{\textbf{Dimensions}} \TstrutS\\
\cline{3-8}
& & 2d & 3d & 5d & 8d & 15d & 30d \TstrutS\\
\hline
\multirow{3}{*}{
\begin{minipage}[t]{0.23\columnwidth}
\centering
Random Cvx Function
\end{minipage}
} & PICANNs & \textbf{0.004} & \textbf{0.007} &\textbf{ 0.021} & \textbf{0.034} & \textbf{0.144} & \textbf{0.38 } \TstrutS \\
& OT-ICNN & 0.043 & 0.052 & 0.145 & 0.276 & 0.746 & 3.98\\
& W2GEN & 0.040 & 0.043 & 0.046 & 0.052 & 0.150 & 0.60\\
\hhline{|=|=|=|=|=|=|=|=|}
\multirow{3}{*}{
\begin{minipage}[t]{0.23\columnwidth}
\centering
Random Gaussian
\end{minipage}
} & PICANNs & 0.33 & 0.15 & \textbf{0.15} & 0.28 & \textbf{0.30} & 1.13 \TstrutS \\
& OT-ICNN & 0.28 & 0.71 & 0.86 & 2.38 & 2.84 & 2.24 \\
& W2GEN & \textbf{0.17} & \textbf{0.14} & 0.16 & \textbf{0.23} & 0.37 & \textbf{0.67} \\
\hhline{|=|=|=|=|=|=|=|=|}
\multirow{3}{*}{
\begin{minipage}[t]{0.23\columnwidth}
\centering
Annulus
\end{minipage}
} & PICANNs & \textbf{0.29} & \textbf{0.43} & \textbf{0.63 } & \textbf{1.61} & \textbf{7.53} & \textbf{21.71} \TstrutS \\
& OT-ICNN & 23.84 & 9.98 & 28.21 & 43.87 & 44.52 & 2725.15 \\
& W2GEN & 1.33 & 6.86 & 18.31 & 20.50 & 23.28 & 34.19 \\
\hline
\multicolumn{8}{|c|}{\textbf{Avg \% error between true and approximated Wasserstein distance}} \Tstrut\Bstrut \\\hline
\multirow{3}{*}{
\begin{minipage}[t]{0.23\columnwidth}
\centering
Random Cvx Function
\end{minipage}
} & PICANNs & 0.12 & \textbf{0.05} & \textbf{0.03} & \textbf{0.03} &\textbf{ 0.02} & \textbf{0.04} \TstrutS\\
& OT-ICNN & 0.10 & 0.10 & 0.08 & 0.07 & 0.10 & 0.09 \\
& W2GEN & \textbf{0.09} & 0.067 & \textbf{0.03} & 0.04 & 0.06 & 0.52 \\
\hhline{|=|=|=|=|=|=|=|=|}
\multirow{3}{*}{
\begin{minipage}[t]{0.23\columnwidth}
\centering
Random Gaussian
\end{minipage}
} & PICANNs & \textbf{1.56} & 0.88 & \textbf{0.35} & \textbf{0.21} & \textbf{0.19} & \textbf{0.15} \TstrutS \\
& OT-ICNN & 1.66 & 1.40 & 0.93 & 0.95 & 0.27 & 0.31 \\
& W2GEN & 1.59 & \textbf{0.75} & 0.41 & 0.25 & 0.35 & 0.19 \\ \hhline{|=|=|=|=|=|=|=|=|}
\multirow{3}{*}{
\begin{minipage}[t]{0.23\columnwidth}
\centering
Annulus
\end{minipage}
} & PICANNs & \textbf{5.37} & \textbf{1.25} & \textbf{1.81} & \textbf{1.56} & 7.44 & 5.07 \TstrutS \\
& OT-ICNN & 6.54 & 25.89& 33.50 & 20.88 & 2.03 & 36.13 \\
& W2GEN & 12.36 & 19.39 & 8.10 & 3.34 & \textbf{0.96} & \textbf{0.66} \\
\hline
\end{tabular}
\caption{We present a comparison between our PICANN approach and the methods of \cite{OTICNN} and \cite{korotin2019wasserstein}. The table reports the $\mathcal{L}^2$-UVP and the percentage error between the theoretical and the approximated $W_2$ metric for three experiments. In all these experiments, the source density is the unit Gaussian. The target density in the ``Random Cvx Function'' experiment is the unit Gaussian deformed by the gradient of a random convex function. In the ``Random Gaussian'' case, the target density is another Gaussian with a randomly sampled mean and covariance matrix. In the third experiment, the target density is the annulus distribution. The results in the first two experiments are averages over 20 realizations.}
\label{tbl:L2UVP}
\end{table}
\subsection{Density Estimation Examples}
\begin{figure*}[htbp]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/GM_Training_Samples.png}
\caption{Data}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/Predicted_GM.png}
\caption{Est. density}
\label{subfig:GM_approx}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/Transport_MAP_GM.png}
\caption{Est. map}
\label{subfig:GM_Deformation}
\end{subfigure}
\caption{Density Estimation 1: In this figure we show an example for density estimation using a simple Gaussian mixture. Panel (a) shows the given data; the approximated density and the inverse map as found using our PICANN approach are shown in Panels (b) and (c).}
\label{fig:Gaussian_Mixture}
\end{figure*}
\begin{figure*}[htbp]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/FunnyDist_TrainingSamples.png}
\caption{Data}
\label{subfig:Funny_Samples}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/FunnyDist_EstimatedDensity.png}
\caption{Est. density}
\label{subfig:Funny_Approx}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/FunnyDist_Deformation.png}
\caption{Est. map}
\label{subfig:Funny_Deformation}
\end{subfigure}
\caption{Density Estimation 2: In this figure we show a second example for density estimation. Panel (a) shows the given data; the approximated density and the inverse map as found using our PICANN approach are shown in Panels (b) and (c).}
\label{fig:FunnyDist}
\end{figure*}
In this section, we present the results for using the PICANN framework for density estimation. We consider the problem of estimating a continuous density from discrete finite samples. Shown in Figure~\ref{fig:Gaussian_Mixture} are 10k random samples generated from a known Gaussian mixture model of $4$ Gaussians. We use the standard normal distribution as the reference distribution and estimate the optimal transport map between the data and the reference using our PICANN approach. The pushforward of the reference distribution by the estimated transport map and the estimated transport map are both shown in Figure~\ref{fig:Gaussian_Mixture}. We can see that the estimated density matches the original Gaussian mixture.
Next, we consider a more challenging example: Figure~\ref{fig:FunnyDist} shows 20k random samples from a nonsymmetric distribution that was constructed in~\cite{bauer2017diffeomorphic}. We again present the estimated density and transport map. As in the first example, we obtain a good match with the original distribution, even though a highly nonlinear transport map is required.
\section{Conclusion}
In this paper, we use the $L^2$-Wasserstein metric and optimal mass transport (OMT) theory to formulate a density estimation and generative modeling framework. We develop a new deep learning-based solver for the continuous OMT problem, which is rooted in Brenier's celebrated theorem. This theorem allows us to formulate the density estimation problem as the solution to a nonlinear PDE -- a Monge-Ampere equation. Recent developments in deep learning for PDEs, namely PINNs and ICNNs, allow us to develop an efficient solver. We demonstrate the accuracy of our framework by comparing our results to analytic Wasserstein distances. To further quantify the quality of our results, we compare them to the results obtained with the two best performing algorithms from the recent benchmark paper for deep learning-based OMT solvers~\citep{korotin2021neural}. Our experiments show that our approach significantly outperforms these methods in terms of accuracy.
Finally, we present examples of diffeomorphic density estimation within our framework and, in the appendix, we showcase an example of a generative model.
\section{Introduction}
Reinforcement learning (RL) is promising for solving sequential decision making problems such as robotic navigation with obstacle avoidance, as it seeks long-term optimal policies \cite{RL1998Sutton, Q-learningML1992}. Recent advances in deep reinforcement learning (DRL), which combines deep neural networks with RL, have shown the capability of achieving super-human performance in diverse complex environments \cite{natureDQN2015Mnih, DDPG2016ICLR, EndToEndVisuomotor2015Levine}. The majority of current DRL methods are designed to maximize the expectation of accumulated future returns, neglecting the risk of rare catastrophic events. However, when it comes to applying RL to safety-critical robots like drones, instead of aiming at a high expected return, dealing with risks and making decisions under uncertainty is crucial and remains a challenge.
A natural approach to risk-sensitive RL is to consider the worst case of the stochastic return rather than its expectation, but this may lead to over-conservative policies \cite{HowRoRisk}. Recent works proposed to model the distribution of the future return and to generate multiple policies with different risk-sensitivities by changing the level of a risk metric \cite{TangZS19}. While \cite{TangZS19} captures the stochasticity in accumulated returns by approximating the mean and variance of a Gaussian distribution, \textit{distributional RL} reconstructs the true intrinsic distribution of future returns \cite{C51,QR,IQN2018}. A major merit of distributional RL is that it can generate multiple policies with different levels of risk-tendency \cite{RAAC, IQN2018, DSAC}.
\begin{figure}
\centerline{\includegraphics[width=0.5\textwidth]{figures/pipeline.png}}
\caption{ART-IQN framework that enables a Crazyflie \cite{Crazyflie} nano drone navigating through a cluttered environment under partial observability with adaptive risk-tendency.}
\label{fig:drone_demo}
\end{figure}
Distributional RL has been applied to safety-critical applications such as autonomous driving at occluded intersections \cite{MinimizeCar} and mobile-robot indoor navigation \cite{DSAC-condition}. These methods learn a policy that can vary its risk-tendency during training, but they still rely on a fixed risk-tendency for each deployment task. However, a proficient pilot would not be as cautious while cruising in fair weather as when landing in stormy weather. In other words, the ideal degree of risk-tendency varies as a function of not only the task but also the real-time feedback from the environment. A step towards building intelligent robots is \textit{adapting risk-tendency} on the fly automatically.
To achieve this goal, we propose the Adaptive Risk-Tendency Implicit Quantile Network (ART-IQN), which can adapt its risk-tendency by reacting to the context. We propose to let intrinsic uncertainty \cite{uncertainty} (estimated by the lower-tail conditional variance) set the way in which the agent acts, adapting the risk-tendency by forecasting the intrinsic uncertainty.
The effectiveness of ART-IQN is validated on a safety-critical task: autonomous drone navigation in cluttered environments with constrained sensors (shown in Fig. \ref{fig:drone_demo}). Both in simulation and in real-world experiments, our method shows superior performance in the trade-off between navigation efficiency and safety in comparison with risk-neutral and risk-averse baselines. Our main contributions are:
\begin{itemize}
\item automatic adaptation of risk-tendency on the fly in accordance with intrinsic uncertainty estimation;
\item a drone navigation algorithm based on distributional RL, that can learn a variety of risk-sensitive policies;
\item a sim-to-real RL framework and a light-weight simulation environment, enabling seamless transfer from simulation to reality.
\end{itemize}
\section{Related Work}
\subsection{Risk and Uncertainty in RL-based Navigation}
RL-based robot navigation methods have surged recently due to their capability for generalization and robustness~\cite{DRL2018introduction, mobileRobotNavMapless2017iros}. Several RL-based navigation and obstacle avoidance algorithms have also emerged to address risks and uncertainties in the environment. For instance, \cite{uncertaintyAware-MPC-RL-2017} proposed to use a neural network to predict the collision probability at future steps for obstacle avoidance tasks, utilizing MC-dropout \cite{MC-dropout2016Y.Gal} and bootstrapping \cite{BootstrappedDQN} to estimate the uncertainty of the model prediction. Additionally, \cite{SafeRLWithModelUncertaintyEstimation} enabled estimation of the regional increase of uncertainty in novel dynamic scenarios by introducing an LSTM \cite{LSTM} to add memory of the historical motion of the robot. \cite{ResillientBehaviorForNavigation} resorts to a model-free policy network as the action selector, a GRU \cite{GRU} to predict uncertainty in the local observation, and uses the prediction variance to adjust the variance of the stochastic policy.
However, these methods either use MPC \cite{MPC} as the action selector, which consumes a lot of computational resources, or they require an additional predictor model to estimate the uncertainty. In our method, the risk measure and uncertainty estimation are implemented easily and efficiently on top of distributional RL (more details in Section \ref{sec:method}), requiring minimal additional computational resources.
\subsection{Distributional Reinforcement Learning}
\label{sec:drl}
Distributional RL has gained momentum recently, which takes into account the whole distribution of value functions rather than the expectation \cite{TangZS19, C51, QR, IQN2018}. Since the whole distribution contains more information beyond the first moment, one can utilize it to make more informed decisions that lead to higher rewards. Recent literature shows that similar mechanisms also exist in human brains \cite{naturedistri}.
A forerunner of distributional RL is categorical DQN \cite{C51}, which uses a categorical distribution with fixed supports to approximate the probability density function (PDF) of the return. A more flexible way to approximate the distribution is quantile regression \cite{quantileRegressionModels1998}. For instance, the quantile regression DQN (QR-DQN) algorithm \cite{QR} learns the distribution by approximating the quantile function (QF) at fixed quantile fractions. The implicit quantile network (IQN) algorithm \cite{IQN2018} further improved the flexibility and approximation accuracy compared to QR-DQN by learning quantile values for quantile fractions sampled from a uniform distribution $\mathcal{U}[0, 1]$. This is achieved with a deep neural network representing the QF by mapping quantile fractions to quantile values under the Wasserstein distance, a loss metric which indicates the minimal cost for transporting mass to make two distributions identical \cite{IQN2018}.
There are also applications of distributional RL to safety-critical environments in the literature. \cite{MinimizeCar} incorporates IQN to solve an autonomous driving task at intersections by combining risk-averse IQN with safety guarantees. Based on \cite{DSAC}, \cite{DSAC-condition} proposes a method enabling a mobile robot to navigate office scenarios with multiple risk-sensitivities.
Even though risk-tendency can be altered without retraining a policy, those methods require a fixed risk-tendency for each deployment task.
Our algorithm is able to adjust its risk-tendency by reacting to dynamic uncertainty levels rather than following a fixed manually set risk-tendency.
\section{Methodology}
\label{sec:method}
\subsection{Problem Statement}
\label{sec:method-a}
We formulate the drone navigation task as Partially Observable Markov Decision Process (POMDP) \cite{spaan2012partially}.
\subsubsection{POMDP Setup}
The POMDP can be defined as a tuple $(\mathcal{S}, \mathcal{A}, \mathcal{O}, \mathcal{P}, R, \gamma)$, where $\mathcal{S}$, $\mathcal{A}$ and $\mathcal{O}$ represent the state, action and observation spaces. The drone interacts with the environment in discrete timesteps. At each timestep $t$, it receives the observation $o_t \in \mathcal{O}$ from the environment and performs an action $a_t \in \mathcal{A}$ based on its policy function $\pi_t(a_t | o_t)$, which causes a transition of the state from $s_t$ to $s_{t+1} \sim \mathcal{P}(\cdot | s_t, a_t)$, generating a reward $r_t = R(s_t, a_t)$ and a new observation $o_{t+1} \sim \mathcal{O}(\cdot|s_{t+1}, a_t)$. Following policy $\pi$, the discounted sum of future rewards is denoted by the random variable $Z^{\pi}(s_t, a_t) = \sum_{k=0}^{\infty}\gamma^{k}R(s_{t+k}, a_{t+k})$ with $\gamma \in (0, 1)$ as the discount factor. Standard RL aims at maximizing the expectation of $Z^{\pi}$, which is known as the action-value function $Q^\pi(s_t,a_t) = \mathbb{E}[Z^\pi(s_t, a_t)]$.
\begin{figure}[!h]
\centering
\includegraphics[width=0.43\textwidth]{figures/crazyflie.jpeg}
\caption{A Crazyflie with 4 lasers to detect obstacles. [Picture by Guus Schoonewille, reprinted TU Delft]}
\label{fig:crazyflie}
\end{figure}
\subsubsection{States and Observations}
\label{sec:space}
We use the Crazyflie nano quadrotor as our experiment platform. As shown in Fig. \ref{fig:crazyflie}, the Crazyflie is equipped with four lasers along the drone's positive and negative $x$ and $y$ axes to detect obstacles. It also has an optical flow camera to estimate velocity for low-level flight control. Given a navigation task, $\mathcal{S}$ contains information about the drone itself, the goal and the obstacles. The state can be parameterized as $s_t = \langle\mathbf{p}, d_g, \mathbf{d}_{o}\rangle$, where $\mathbf{p}$ is the drone's global position, $d_g = ||\mathbf{p} - \mathbf{p}_g||_2$ is the distance from the drone to the goal, and $\mathbf{d}_{o}$ is a vector consisting of distances from the drone to surrounding obstacles.
Due to the constrained onboard sensors, $\mathcal{S}$ is not fully observable to the drone. Instead, the drone receives a partial observation, formulated as a tuple $o_t = \langle\mathbf{p}, d_g, \mathbf{d}_{l}\rangle$, where $\mathbf{d}_{l}$ denotes the laser reflections. The lasers detect obstacles at a maximum range of $4$ meters. $\mathbf{p}$ and $d_g$ are given by a global motion capture system in the real-world experiments.
\subsubsection{Action Space}
To incorporate our algorithm, $\mathcal{A}$ consists of discretized velocities with multiple magnitudes and directions.
The selectable velocity magnitudes are $m$ discretized values exponentially spaced in $(0, v_{m}]$, where $v_{m}$ is the maximum velocity. Since only obstacles that intersect the laser beams can be detected, there are 4 possible moving directions, evenly spaced in $[0, 2\pi)$ and aligned with the laser axes; one possible construction is sketched below.
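For illustration, the following sketch builds such a discrete action set; the base of the exponential spacing is an assumption of this sketch.
\begin{verbatim}
import numpy as np

def build_action_space(m=3, v_max=1.0):
    # m speed magnitudes, exponentially spaced in (0, v_max],
    # combined with 4 headings along the laser axes
    mags = v_max * np.logspace(-(m - 1), 0, num=m, base=2.0)
    angles = np.arange(4) * np.pi / 2
    return [(v * np.cos(a), v * np.sin(a)) for v in mags for a in angles]
\end{verbatim}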
\subsubsection{Reward Function}
The reward function is manually designed to award the drone for reaching the goal as fast as possible, while penalizing for collisions or getting close to obstacles:
\begin{equation}
R(s_t, a_t) = \left\{
\begin{array}{lll}
50 & d_g < d_f &\\
5(d_o-d_s) & r_d < d_o < d_{s} & \\
-25 & d_o < r_d & \\
-0.1 &\text{otherwise,} \\
\end{array}\right.
\end{equation}
where $d_f$ is the goal-reaching threshold, ${d}_{o}$ is the distance from drone to the closest obstacle, $d_s$ is the safety margin, $r_d$ represents the radius of drone.
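A direct transcription of this reward into code (threshold values taken from Table~\ref{tab:hyper}):
\begin{verbatim}
def reward(d_g, d_o, d_f=0.1, d_s=0.2, r_d=0.05):
    if d_g < d_f:     # goal reached
        return 50.0
    if d_o < r_d:     # collision
        return -25.0
    if d_o < d_s:     # inside the safety margin
        return 5.0 * (d_o - d_s)
    return -0.1       # small time penalty otherwise
\end{verbatim}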
\subsection{Adaptive Risk-Tendency Implicit Quantile Network}
To adjust the risk-tendency on the fly dynamically, we propose the Adaptive Risk Tendency Implicit Quantile Network (ART-IQN) algorithm. We introduce the key components of ART-IQN as shown in Fig. \ref{fig:drone_demo}, which are (1) the risk-sensitive IQN, (2) the intrinsic uncertainty estimation and (3) the EWAF uncertainty forecasting.
\subsubsection{Implicit Quantile Network}
\label{sec:iqn}
In distributional RL, the distributional Bellman equation \cite{C51} can be defined as
\begin{equation}
Z^\pi(s, a) \stackrel{D}{=}R(s, a) + \gamma Z^\pi(s', a'),
\label{eq:bellman}
\end{equation}
where $\stackrel{D}{=}$ denotes equality in distribution, state $s'$ and action $a'$ at next timestep are distributed according to $s'\sim \mathcal{P}(s,a), a' \sim \pi(\cdot|s')$.
We represent $Z^\pi(s, a)$ implicitly by its quantile function as in IQN \cite{IQN2018}. Concretely, the quantile function is approximated by a neural network with learnable parameters $\theta$. We express this implicit quantile function as $Z_{\theta}^\pi(s,a;\tau)$, where $\tau \in [0, 1]$ is the quantile level. To optimize $\theta$, quantile regression \cite{quantileRegressionModels1998} is used with the quantile Huber-loss as a surrogate for the Wasserstein distance \cite{IQN2018}.
A neural network with parameters $\theta'$ is used as the target distribution approximator and the temporal difference (TD) at sample $(s,a,r,s')$ is computed as
\begin{equation}
\delta_{\tau, \tau'} = r + \gamma Z_{\theta'}^\pi(s',a';\tau') - Z_{\theta}^\pi(s,a;\tau),
\end{equation}
for $\tau$, $\tau'$ independently sampled from the uniform distribution, i.e. $\tau, \tau' \sim \mathcal{U}[0,1]$.
The $\tau$-quantile Huber-loss is defined as
\begin{equation}
\centering
\begin{aligned}
& \rho_\kappa(\delta; \tau) = |\tau-\mathbb{I}\{\delta < 0\}|\frac{\mathcal{L}_\kappa(\delta)}{\kappa}, \text{with} &\\
& \mathcal{L}_{\kappa}(\delta) = \left\{
\begin{array}{ll}
\frac{1}{2}\delta^2 & \text{if } |\delta|\leq \kappa\\
\kappa(|\delta| - \frac{1}{2}\kappa) & \text{otherwise,} \\
\end{array}\right. &
\end{aligned}
\end{equation}
where $\mathbb{I}$ is an indicator operator. The threshold $\kappa$ provides smooth gradient-clipping. We approximate the quantile loss by sampling $N$ independent quantiles $\tau$ and $N'$ independent targets $\tau'$. The loss function to update $\theta$ is
\begin{equation}
\label{eq:loss}
\mathcal{L}(\theta) = \frac{1}{N\cdot N'}\sum\limits_{i=1}^{N}\sum\limits_{j=1}^{N'}\rho_{\kappa}(\delta_{{\tau_i}, {\tau'_j}};\tau_i).
\end{equation}
By backpropagating $\mathcal{L}(\theta)$ with respect to $\theta$, the Wasserstein distance is minimized between the current return distribution $Z^\pi(s, a)$ and the target $R(s, a) + \gamma Z^\pi(s', a')$.
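A PyTorch sketch of the loss in~\eqref{eq:loss} is given below; the tensor shapes are noted in the comments and the batching convention is an assumption of this sketch.
\begin{verbatim}
import torch

def quantile_huber_loss(z, z_target, tau, kappa=1.0):
    # z: (B, N) quantile estimates at levels tau: (B, N)
    # z_target: (B, N') detached targets r + gamma * Z(s', a'; tau')
    delta = z_target.unsqueeze(1) - z.unsqueeze(2)     # (B, N, N')
    huber = torch.where(delta.abs() <= kappa,
                        0.5 * delta ** 2,
                        kappa * (delta.abs() - 0.5 * kappa))
    weight = (tau.unsqueeze(2) - (delta.detach() < 0).float()).abs()
    return (weight * huber / kappa).mean()  # average over tau and tau'
\end{verbatim}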
\subsubsection{Risk-sensitive Policy and Risk Metric}
\label{sec:cvar}
Distributional RL is inherently risk-sensitive by combining \textit{risk metrics} \cite{HowRoRisk} to create a \textit{distorted expectation} \cite{DSAC} on the return distribution.
A distorted expectation is a risk-weighted expectation of the distribution under a specific distortion function, i.e., a non-decreasing function $\beta:[0,1]\rightarrow [0, 1]$ satisfying $\beta(0) = 0$ and $\beta(1) = 1$. The distorted expectation of $Z$ under $\beta$ is defined as $Q_\beta = \int_0^1F_Z^{-1}(\tau)d\beta(\tau)$, where $F_Z^{-1}(\tau)$ is the quantile function, i.e., the inverse of the cumulative distribution function. According to \cite{IQN2018}, any distorted expectation can be represented as a weighted sum over the quantiles. A corresponding sample-based risk-sensitive policy is obtained by approximating $Q_\beta$ with $K$ samples of $\tilde{\tau} \sim \mathcal{U}[0, 1]$:
\begin{equation}
\label{eq:policy}
\pi_{\beta}(s) = \argmax_{a \in \mathcal{A}}\frac{1}{K}\sum\limits_{k=1}^{K}Z_{\beta(\tilde{\tau}_k)}(s,a).
\end{equation}
Altering the sampling principle for $\tau$ creates various risk-sensitive policies. Specifically, we consider Conditional Value-at-Risk (CVaR) \cite{CVarFinance}, a \textit{coherent risk metric} \cite{HowRoRisk} as our distortion function.
CVaR is applied to IQN by modifying $\tilde{\tau} \sim \mathcal{U}[0, 1]$ to $\tilde{\tau} \sim \mathcal{U}[0, \alpha]$, where $\alpha$ is the CVaR level. We obtain risk-averse policies as $\alpha$ decreases towards zero and recover the risk-neutral policy when $\alpha=1$.
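In code, this distorted sampling amounts to a one-line change in the action-selection rule of~\eqref{eq:policy}; the call signature of the quantile network is an assumption of this sketch.
\begin{verbatim}
def cvar_action(iqn, obs, alpha, K=64):
    taus = alpha * torch.rand(K)   # tau ~ U[0, alpha] instead of U[0, 1]
    q = iqn(obs, taus)             # assumed to return shape (K, |A|)
    return q.mean(dim=0).argmax().item()
\end{verbatim}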
\subsubsection{Lower Tail Conditional Variance for Intrinsic Uncertainty Estimation}
\label{sec:rtv}
One major source of risk comes from intrinsic uncertainty, which is due to the stochasticity of the environment or partial observability. In contrast to \textit{epistemic uncertainty} \cite{uncertainty}, intrinsic uncertainty is independent of the agent's knowledge about the task.
In distributional RL, the spread of the return distribution acts as a measure of intrinsic uncertainty \cite{Tactical}.
Inspired by \cite{explordistri}, where the decaying upper-tail conditional variance of the return distribution is used for more efficient exploration, we use the lower-tail conditional variance as the intrinsic uncertainty estimate for risk-tendency adaptation. The lower-half-tail conditional variance is equivalent to the \textit{right truncated variance} (RTV):
\begin{equation}
\label{eq:rtv}
\text{RTV} = \frac{2}{N}\sum\limits_{i=1}^{\frac{N}{2}}(F_Z^{-1}(\tau_i) - F_Z^{-1}(\tau_{\frac{N}{2}}))^2,
\end{equation}
in which $\tau_i = \frac{i}{N}$ are the quantile levels. Intuitively, RTV is biased towards negative returns.
We calculate RTV with respect to the median rather than the mean due to its statistical robustness \cite{explordistri, Huber.Wiley.ea1981Robuststatistics}. Note that $F_Z^{-1}(\tau)$ is implicitly approximated by $Z_{\theta}^\pi(;\tau)$ in our method.
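Given the $N$ quantile values produced by the network, \eqref{eq:rtv} reduces to a few lines:
\begin{verbatim}
def right_truncated_variance(quantiles):
    # quantiles: tensor of shape (N,) with Z(o, a; tau_i), tau_i = i / N
    n = quantiles.shape[0]
    median = quantiles[n // 2 - 1]       # the tau_{N/2} quantile
    lower = quantiles[: n // 2]          # i = 1, ..., N/2
    return (2.0 / n) * ((lower - median) ** 2).sum()
\end{verbatim}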
\subsubsection{EWAF for Risk-Tendency Adaptation}
\label{sec:ewaf}
In our framework, the risk-tendency can be cast in the choice of CVaR.
To formulate CVaR as a function of RTV, inspired by \cite{Tactical}, we propose to model CVaR by an exponentially weighted categorical distribution. Specifically, consider a categorical distribution $C$ with two logits: $C_i = \exp{(w_i)} / \sum_{j}\exp{(w_j)}$, in which $w_i \in \mathbb{R}, i = 1, 2$. By letting $\alpha = C_1$, the CVaR is restricted to the range $(0, 1)$ and can be adjusted by altering the logit weights. Concretely, at each timestep $t$, the CVaR is adapted by updating $w_i$ with feedback $f$ and a step size $\eta$: $w_1 = w_1 - \eta f, w_2 = w_2 + \eta f$. We set $f = \text{RTV}_t - \text{RTV}_{t-1}$ as an indicator of intrinsic-uncertainty feedback.
To avoid the CVaR approaching zero, an additional term: $\sum_{i}{}\exp(w_i) / b$ is added both to the denominator of $C_i$ for $i=1,2$ and to the numerator of $C_1$, which results in a CVaR range of $(\frac{1}{b+1}, 1)$. For example, $\alpha \in (0.1, 1)$ when $b=9$.
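The resulting update rule is compact; a minimal sketch mirroring the corresponding lines of Algorithm~\ref{alg:art-iqn}:
\begin{verbatim}
import math

class AdaptiveCVaR:
    def __init__(self, eta=0.5, b=9.0, w1=0.0, w2=0.0):
        self.eta, self.b = eta, b
        self.w1, self.w2 = w1, w2

    def update(self, rtv, rtv_prev):
        f = rtv - rtv_prev            # intrinsic-uncertainty feedback
        self.w1 -= self.eta * f
        self.w2 += self.eta * f
        e1, e2 = math.exp(self.w1), math.exp(self.w2)
        # alpha stays in (1 / (b + 1), 1); b = 9 gives (0.1, 1)
        return ((self.b + 1.0) * e1 + e2) / ((self.b + 1.0) * (e1 + e2))
\end{verbatim}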
This yields ART-IQN, which adapts its risk-tendency by reacting to intrinsic-uncertainty variations, as summarized in Algorithm \ref{alg:art-iqn}. In principle, when the RTV is increasing and the current CVaR is relatively large, the agent behaves more risk-aversely by choosing a smaller CVaR.
\SetKwComment{Comment}{\triangleright}{}
\SetAlgoNoLine%
\begin{algorithm}[hbt!]
\caption{ART-IQN for Drone Navigation}
\label{alg:art-iqn}
\SetKwInput{KwInput}{Input}
\KwInput{Post-training $\texttt{IQN}_{\theta}$; $\mathcal{A}, K, N, w_1, w_2, b, \eta$}
Initialize state $s$, CVaR $\alpha$\\
\While{$d_g > d_f$ and $t \le H$}{
Observe $o_t \leftarrow \langle\mathbf{p}, d_g, \mathbf{d}_{l}\rangle$ from $s_t$\\
Get quantile function $Z_\theta(o_t, a; \tau) = \texttt{IQN}_{\theta}(o_t; \tau)$\\
Distorted sampling $\tilde{\tau_k} \sim \mathcal{U}[0, \alpha], k = \{1, \ldots, K\}$\\
Take action $a_t = \argmax\limits_{a \in \mathcal{A}}\frac{1}{K}\sum_{k=1}^{K}Z_\theta(o_t, a; \tilde{\tau_k})$\\
Calculate right truncated variance: \\
$\text{RTV}_t = \frac{2}{N}\sum_{i=1}^{\frac{N}{2}}(Z_\theta(o_t, a_t; \tau_i) - Z_\theta(o_t, a_t; \tau_{\frac{N}{2}}))^2$ \\
Obtain feedback $f_t = \text{RTV}_t - \text{RTV}_{t-1}$\\
Forecasting $w_{1} = w_1 - \eta f_t, w_{2} = w_2 + \eta f_t$\\
Adapt CVaR $\alpha = \frac{(b + 1)\exp{(w_1)} + \exp{(w_2)}}{(b+1)\sum_{i}{}\exp{(w_i) }}, i=1, 2$ \\
}
\end{algorithm}
\subsection{Training and Evaluation Pipelines}
\subsubsection{Environment}
We design an OpenAI Gym~\cite{OpenAIgym}-like 2D environment for training, with state, observation and action spaces defined in Section \ref{sec:space}. To achieve fast simulation, the drone is modelled as a point mass and the velocity command is assumed to be executed immediately. The state is updated every $T$ seconds in simulation time. We utilize \textit{domain randomization} \cite{domainRandom2017IROS} to train policies that can generalize to diverse scenarios. Specifically, the goal distance is uniformly sampled as $d_g \sim \mathcal{U}[5, 7] (m)$, with the drone initialized at a fixed start point for each training episode. The number of obstacles and the shape and position of each obstacle are randomly generated for each episode, as demonstrated in Fig. \ref{fig:obstacle_demo}. Laser beams and obstacle outlines are modelled as line segments for simplicity. Gaussian noise $\mathcal{N}(\mu, \sigma)$ with mean $\mu=0.0$ and standard deviation $\sigma=0.01$ is added to the measurement of each laser to simulate a noisy sensor.
\subsubsection{Training Process}
We follow \textit{curriculum learning} to train the agent -- the complexity of the environment increases as the training process goes on. The first stage of training uses a relatively small number of randomized obstacles, $n_{obs} \in [0, 5]$. After training for several episodes (until a navigation success rate of $0.8$ is reached), the complexity of the environment is increased by adding more obstacles, $n_{obs} \in [6, 12]$. To make sure the agent accumulates a diverse range of experiences under a variety of risk-tendencies, the CVaR value is uniformly sampled as $\alpha \sim \mathcal{U}(0, 1]$ at each episode. An episode is terminated after a collision or if the goal is not reached within $H$ timesteps. The whole training process ends once the average return has empirically converged. Training took $\approx 3.5$ hours on a $2.2$ GHz Intel Core i$7$ CPU, achieving a success rate of $0.88$.
\begin{figure}[!h]
\centering
\includegraphics[width=0.49\textwidth]{figures/obstacles_demo.png}
\caption{Randomly generated environments for training.}
\label{fig:obstacle_demo}
\end{figure}
The agent is modelled as a fully connected network with $3$ hidden layers of $512$ units each. Each fully-connected layer is followed by a ReLU \cite{ReLU2018Agarap} activation function, except the output layer. Adam \cite{Adam2015ICLR} is used as our optimizer with learning rate $lr$. In each update step, performed every $D$ episodes, a batch of $B$ samples is drawn from the experience replay buffer of size $E$. The hyper-parameters used are listed in Table \ref{tab:hyper}.
\begin{table}[!h]
\centering
\caption{\uppercase{hyper-parameters}}
\begin{tabular}{cl|cl|cl|cl}
\hline
\multicolumn{8}{c}{Hyper-parameter symbols and values} \\
\hline
$lr$ & $2 \times 10^{-4}$ & $v_{m}$ & 1 $[m/s]$ & $D$ & 5 & $N, N'$ & 16 \\
$E$ & $5 \times 10^{4}$ & $r_d$ & 0.05 $[m]$ & $K$ & 64 & $m$ & 3 \\
$\gamma$ & 0.99 & $d_f$ & 0.1 $[m]$ & $B$ & 32 & $b$ & 9 \\
$T$ & 0.1 [$s$] & $d_s$ & 0.2 $[m]$ & $H$ & 200 & $\eta$ & 0.5 \\ \hline
\end{tabular}
\label{tab:hyper}
\end{table}
\begin{table*}[]
\centering
\caption{\uppercase{Quantitative Simulation Results}}
\label{tab:eval}
\begin{tabular}{c|c|ccc|ccc|ccc|ccc}
\hline
& \multicolumn{1}{c|}{CVaR} & \multicolumn{3}{c|}{Average episodic return (mean $\pm$ std)} & \multicolumn{3}{c|}{Success rate} & \multicolumn{3}{c|}{Collision rate} & \multicolumn{3}{|c}{Navigation time $[s]$} \\ \hline
\multicolumn{1}{c|}{$n_{obs}$} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{6} & 12 & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{6} & 12 & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{6} & 12 & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{6} & 12 \\ \hline
DQN\cite{natureDQN2015Mnih} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c}{37.45 $\pm$ 9.01} & \multicolumn{1}{c}{25.60 $\pm$ 13.91} & 17.16 $\pm$ 15.85 & \multicolumn{1}{c}{0.85} & \multicolumn{1}{c}{0.65} & 0.56 & \multicolumn{1}{c}{0.13} & \multicolumn{1}{c}{0.31} & 0.39 & \multicolumn{1}{c}{5.23} & \multicolumn{1}{c}{8.11} & 8.34 \\ \hline
& 0.1 & \multicolumn{1}{c}{39.42 $\pm$ 7.28} & \multicolumn{1}{c}{33.49 $\pm$ 12.16} & 20.42 $\pm$ 12.43 & \multicolumn{1}{c}{0.87} & \multicolumn{1}{c}{0.74} & 0.59 & \multicolumn{1}{c}{\textbf{0.09}} & \multicolumn{1}{c}{\textbf{0.15}} & 0.17 & \multicolumn{1}{c}{5.41} & \multicolumn{1}{c}{12.52} & 18.52 \\
& 0.25 & \multicolumn{1}{c}{\textbf{40.36 $\pm$ 8.07}} & \multicolumn{1}{c}{32.37 $\pm$ 12.02} & 21.61 $\pm$ 12.88 & \multicolumn{1}{c}{\textbf{0.88}} & \multicolumn{1}{c}{0.72} & 0.67 & \multicolumn{1}{c}{0.10} & \multicolumn{1}{c}{0.17} & 0.25 & \multicolumn{1}{c}{5.31} & \multicolumn{1}{c}{10.23} & 13.23 \\
IQN\cite{IQN2018} & 0.5 & \multicolumn{1}{c}{38.47 $\pm$ 8.56} & \multicolumn{1}{c}{29.90 $\pm$ 13.98} & 19.76 $\pm$ 14.72 & \multicolumn{1}{c}{0.87} & \multicolumn{1}{c}{0.73} & 0.66 & \multicolumn{1}{c}{0.11} & \multicolumn{1}{c}{0.19} & 0.28 & \multicolumn{1}{c}{5.30} & \multicolumn{1}{c}{8.46} & 09.46 \\
& 0.75 & \multicolumn{1}{c}{37.62 $\pm$ 9.23} & \multicolumn{1}{c}{25.72 $\pm$ 13.71} & 18.32 $\pm$ 14.22 & \multicolumn{1}{c}{0.84} & \multicolumn{1}{c}{0.69} & 0.62 & \multicolumn{1}{c}{0.13} & \multicolumn{1}{c}{0.22} & 0.31 & \multicolumn{1}{c}{5.25} & \multicolumn{1}{c}{8.57} & 08.60 \\
& 1.0 & \multicolumn{1}{c}{38.70 $\pm$ 7.89} & \multicolumn{1}{c}{23.39 $\pm$ 14.86} & 16.29 $\pm$ 15.47 & \multicolumn{1}{c}{0.86} & \multicolumn{1}{c}{0.67} & 0.57 & \multicolumn{1}{c}{0.12} & \multicolumn{1}{c}{0.30} & 0.41 & \multicolumn{1}{c}{\textbf{5.12}} & \multicolumn{1}{c}{\textbf{7.89}} & \textbf{08.05} \\ \hline
ART-IQN & \multicolumn{1}{c|}{-} & \multicolumn{1}{c}{39.85 $\pm$ 7.23} & \multicolumn{1}{c}{\textbf{36.43 $\pm$ 13.51}} & \textbf{24.88 $\pm$ 12.31} & \multicolumn{1}{c}{0.87} & \multicolumn{1}{c}{\textbf{0.77}} & \textbf{0.70} & \multicolumn{1}{c}{0.11} & \multicolumn{1}{c}{0.17} & \textbf{0.15} & \multicolumn{1}{c}{5.32} & \multicolumn{1}{c}{8.26} & 11.76 \\ \hline
\end{tabular}
\end{table*}
\subsubsection{Evaluation in Simulation}
To show the efficacy of our algorithm, ART-IQN is compared with IQN under fixed risk-tendencies $\alpha = \{0.1, 0.25, 0.5, 0.75, 1.0\}$. In addition, we also trained a DQN \cite{natureDQN2015Mnih} agent as a baseline, following the same training procedure.
The average episodic return, success rate, collision rate and average navigation time are compared among agents across various environments. Specifically, the evaluation is executed on three sets of environments with $n_{obs} = \{2, 6, 12\}$ for each agent. Each set has $100$ diverse randomized environments.
\section{Results}
\label{sec:results}
\subsection{Simulation Results}
\subsubsection{Quantitative Analysis}
Table \ref{tab:eval} gives quantitative results comparing: (1) DQN, (2) IQN with different risk-tendencies and (3) ART-IQN. In all environments, DQN performs similarly to risk-neutral IQN. For $n_{obs}=2$, all agents finish the navigation task with a high success rate and a low collision rate, since the environment is comparatively easy. For $n_{obs}=6$, while IQN with lower CVaR values maintains a lower collision rate than with higher CVaR values, the task-completion time is longer and the timeout rate also increases.
\begin{figure}[!h]
\centering
\includegraphics[width=0.48\textwidth]{figures/behaviors.png}
\caption{Drone behavior and navigation time comparison between agents with different risk-tendencies.}
\label{fig:behavior}
\end{figure}
In contrast, ART-IQN achieves a success rate of $0.77$ and maintains a low collision rate, with an average navigation time only $0.37\,s$ longer than that of the risk-neutral policy. For $n_{obs}=12$, the average return and success rate of all agents decrease. This drop can be explained by the severe partial observability the agents face. While lower CVaR values lead to more timeouts and higher ones to more collisions, ART-IQN performs best in success rate and collision rate with a decent navigation time.
\subsubsection{Qualitative Analysis}
We demonstrate the behavior of the agents in Fig. \ref{fig:behavior} by considering a typical environment encountered during evaluation. In Fig. \ref{fig:behavior} (a), risk-neutral IQN reaches the goal as fast as possible, ignoring the risk of getting too close to obstacles, which leads to a higher collision rate. On the other hand, risk-averse policies, especially the one with $\alpha=0.1$ shown in Fig. \ref{fig:behavior} (e), are safer but sacrifice navigation efficiency. In contrast, as shown in Fig. \ref{fig:behavior} (f), ART-IQN acts adaptively: it avoids obstacles cautiously in the middle area, where more uncertainty is encountered, and flies at a higher speed when it is more certain about the current observation.
Fig. \ref{fig:tcv-cvar} shows the RTV and the adaptive CVaR along the trajectory in Fig. \ref{fig:behavior} (f). The drone starts with CVaR $\alpha=1.0$, i.e., risk-neutral, since it does not know the environment yet. From $0s$ to $2s$, the drone flies at its maximum speed with a risk-neutral policy, as the current uncertainty is relatively low. Around $2s$ to $8s$, the intrinsic uncertainty estimated by the RTV increases and stays at a high level, which results in a decrease of the CVaR value, corresponding to the risk-averse behavior in the middle area of Fig. \ref{fig:behavior} (f). When the RTV drops and stays at a low level from $8s$ until the end of the episode, the CVaR increases again, recovering a risk-neutral policy that reaches the goal point efficiently. It is clear that, as the drone navigates through the environment, it adjusts its risk-tendency to be risk-averse when the uncertainty increases and risk-neutral when there is less uncertainty in the environment.
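The following Python sketch illustrates one plausible form of this adaptation loop. The exact ART-IQN update is not restated here, so the RTV computation, the mapping from RTV to a CVaR target and the exponential-smoothing form are assumptions made for illustration only, with $\eta$ the step size from Table \ref{tab:hyper}.
\begin{verbatim}
import numpy as np

def lower_tail_variance(quantiles, tail_frac=0.5):
    # RTV proxy: variance of the lower tail of the predicted return quantiles
    q = np.sort(np.asarray(quantiles))
    tail = q[: max(1, int(tail_frac * len(q)))]
    return float(np.var(tail))

def update_cvar(alpha, rtv, eta=0.5, rtv_scale=1.0):
    # high RTV -> target near 0 (risk-averse); low RTV -> target near 1
    target = float(np.exp(-rtv / rtv_scale))
    return (1.0 - eta) * alpha + eta * target   # exponentially weighted step
\end{verbatim}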
\begin{figure}[]
\centering
\includegraphics[width=0.48\textwidth]{figures/tcv.png}
\caption{RTV and adaptive CVaR. ART-IQN can adapt its CVaR value accordingly with RTV as an estimation of intrinsic uncertainty in the environment.}
\label{fig:tcv-cvar}
\end{figure}
\begin{table*}[]
\caption{\uppercase{Real-world experiment results}}
\centering
\label{tab:real}
\begin{tabular}{c|c|ccc|ccc}
\hline
& CVaR & \multicolumn{3}{c|}{\# Success / \# Collision} & \multicolumn{3}{c}{Navigation time (mean $\pm$ std) $[s]$} \\ \hline
Environment & - & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & {3} & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & {3} \\ \hline
\multirow{2}{*}{IQN} & \multicolumn{1}{c|}{0.1} & \multicolumn{1}{c}{9 / 0} & \multicolumn{1}{c}{3 / 0} & \multicolumn{1}{c|}{3 / 0} & \multicolumn{1}{c}{14.41 $\pm$ 3.67} & \multicolumn{1}{c}{16.73 $\pm$ 0.49} & \multicolumn{1}{c}{21.52 $\pm$ 1.76} \\
& \multicolumn{1}{c|}{1.0} & \multicolumn{1}{c}{8 / 1} & \multicolumn{1}{c}{2 / 1} & \multicolumn{1}{c|}{0 / 3} & \multicolumn{1}{c}{10.12 $\pm$ 3.29} & \multicolumn{1}{c}{12.54 $\pm$ 0.58} & - \\ \hline
ART-IQN & - & \multicolumn{1}{c}{9 / 0} & \multicolumn{1}{c}{3 / 0} & \multicolumn{1}{c|}{3 / 0} & \multicolumn{1}{c}{12.32 $\pm$ 3.46} & \multicolumn{1}{c}{13.37 $\pm$ 0.47} & \multicolumn{1}{c}{17.95 $\pm$ 1.85} \\ \hline
\end{tabular}
\end{table*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/traj_adaptive.jpg}
\caption{Environment 1: ART-IQN}
\label{fig:traj_adaptive}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/traj_averse.jpg}
\caption{Environment 1: Risk-averse (CVaR=0.1)}
\label{fig:traj_averse}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/traj_neutral.jpg}
\caption{Environment 1: Risk-neutral}
\label{fig:traj_neutral}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/traj_addition.jpg}
\caption{Environment 2}
\label{fig:traj_addition}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/traj_env3.jpg}
\caption{Environment 3}
\label{fig:traj_env3}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/traj_adversarial.jpg}
\caption{Adversarial environment: ART-IQN}
\label{fig:traj_adversarial}
\end{subfigure}
\caption{Image frames from the real-world experiments. The trajectory is traced by the blue LED on the drone. For environments 2 and 3, the trajectories generated by different agents are distinguished by recolored LEDs.}
\label{fig:cyberzoo}
\end{figure*}
\subsection{Real-World Experiments}
\subsubsection{Hardware Setup}
The Crazyflie nano drone used for the real-world experiments is shown in Fig. \ref{fig:crazyflie}. It measures $92\times92\times29\,mm$ and weighs $27.5\,g$. The policy is executed on a laptop, which issues velocity commands and communicates with the Crazyflie via a radio-to-USB dongle. The velocity command period $T$ is set to the same value as in simulation. The drone navigation task is carried out in $3$ different $10 \times 10\,m$ cluttered environments. As shown in Fig. \ref{fig:cyberzoo}, we placed artificial trees, boards and cylinders, which the agent has not seen in simulation, as obstacles in the environments. Reflective markers are attached to the four propeller hubs so that the motion-capture system can track the global position of the drone.
\subsubsection{Evaluation Results}
We test IQN with $\alpha=\{0.1, 1.0\}$ and ART-IQN for comparison. In environment 1, each agent is initialized at $3$ different starting points with $3$ runs each, resulting in a total of $27$ runs. In environment 2 and the more complex environment 3, the drone takes off from the same starting point. As shown in Table \ref{tab:real}, all runs succeeded except for those of the risk-neutral policy. Although risk-neutral IQN achieves the fastest navigation in its successful runs, it ignores the risk in the environment, causing the drone to collide with obstacles. Risk-averse IQN succeeds in all experiments without collisions, but at a loss of navigation efficiency. In contrast, ART-IQN navigates through all the environments safely and efficiently.
Fig. \ref{fig:cyberzoo} (a)-(e) demonstrate the diverse behaviors of the different risk-tendencies. Unlike risk-neutral IQN, both ART-IQN and risk-averse IQN keep a safe distance from obstacles to avoid collisions. The advantage of ART-IQN over risk-averse IQN is mainly reflected in the shorter navigation times in Table \ref{tab:real}. Additionally, we designed a U-shaped adversarial environment to study the generalization capability of the IQN agents. However, all agents, including ART-IQN, get stuck in the corner of the U-shaped obstacle, as shown in Fig. \ref{fig:cyberzoo} (f). Most likely, U-shaped obstacles were rarely encountered by the agent during training; this could be addressed by generating similar situations in the training environments.
\section{Conclusion}
In conclusion, focusing on autonomous drone navigation under partial observability, we propose an adaptive risk-tendency algorithm based on distributional RL that adapts the risk-tendency according to the estimated intrinsic uncertainty. Our algorithm uses EWAF to adjust the risk-tendency, represented by the CVaR, with the lower tail conditional variance as an estimate of the intrinsic uncertainty. We show the effectiveness of our algorithm in both simulation and real-world experiments. Empirical results show that our algorithm can adaptively balance the efficiency-safety trade-off.
However, the step size $\eta$ is currently pre-specified; it would be worthwhile to optimize it as well. Nevertheless, we believe our method can serve as a first step toward risk-tendency adaptation methodologies for distributional RL applications, especially in risk-sensitive settings.
\section*{Acknowledgment}
The authors would like to thank Jinke He for the discussions and Bart Duisterhof, Yingfu Xu for the real-world experiment setup.
\bibliographystyle{./style/IEEEtran}
\section{Introduction}
Stochastic Reaction Networks (SRNs) are a class of continuous-time Markov
chains, $X{\equiv}\{X(t)\}_{t\in[0,T]}$, that take values in $\mbox{$\zset_+^d$}$, \emph{i.e.}, the
lattice of $d$-tuples of non-negative integers. SRNs are mathematical models
employed to describe the time evolution of many natural and {artificial}
systems. Among them we find biochemical reactions, spread of epidemic
diseases, communication networks, social networks, transcription and
translation in genomics, and virus kinetics.
For historical reasons, the jargon from chemical kinetics is used to describe
the elements of SRNs. The integer $d{\geq}1$ is the number of chemical species
reacting in our system. The coordinates of the Markov chain,
$X(t){=}(X_1(t),\ldots,X_d(t))$, account for the number of molecules or
individuals of each species present in the system at time $t$. The
transitions in our system are given by a finite number $J$ of
\emph{reaction channels}, $\seqof{\mathcal{R}_j}{j=1}{J}$. Each reaction channel
$\mathcal{R}_j$ is a pair formed by a vector $\nu_j$ of $d$ integer components and a non-negative function $a_j(x)$ of the state of the system. Usually, $\nu_j$ and $a_j$ are named \emph{stoichiometric vector} and
\emph{propensity function}, respectively. Because our state space is a lattice, our
system evolves in time by jumping from one state to the next, and for that reason
$X$ is a pure jump process.
The propensity functions, $a_j$, are usually derived through \emph{the mass action
principle} also known as \emph{the law of mass action}, see for instance Section 3.2.1 in \cite{Holmes}.
For that reason,
we assume that $a_j(x) {=} c_j\, g_j(x)$, where $c_j$ is a non-negative
coefficient and $g_j(x)$ is a given monomial in the coordinates of the
process, $X$. However, our results can be easily extended to polynomial
propensities.
In this work, we address the statistical inference problem of estimating the
coefficients $\theta {=} (c_1,\ldots,c_J)$ from \emph{discretely observed data}, \emph{i.e.},
data collected by observing one or more paths of the process $X$ at
a certain finite number of \emph{observational times} or epochs. This means that our data, $\mathcal{D}$, is a finite collection
$\{(t_{n,m},x(t_{n,m}))\}$, where $m{=}1,2,\ldots,M$ indicates the
observed path, $n{=}1,2,\ldots,N(m)$ indicates the $n$-th observational
time corresponding to the $m$-th path, and the datum $x(t_{n,m})$ can be considered
an observation of the $m$-th path of the process $X$ at time $t_{n,m}$.
The observational times, $t_{n,m}$,
are either deterministic or random but independent from the state of the
process $X$.
In what follows, we denote by
$X_{i,n,m}$ the $i$-th coordinate of $X(t_{n,m},\omega_m)$ and by
$X_{\cdot,n,m}$ the vector $X(t_{n,m},\omega_m)$, where
$\omega_m$ is the $m$-th path of the process $X$.
{Let us remark that we observe all the coordinates of $X$ and not only a fixed subset at each observational time $t_{n,m}$. In that sense, we are not treating the case of \emph{partially observed data} where only a fixed proper subset of coordinates of $X$ is observed.}
\begin{rem}
The partially observed case can in principle also be treated by a variant of the FREM algorithm based on \cite{Bayer} (Corollary 3.8).
\end{rem}
For further convenience, we organize the information in our data set,
$\mathcal{D}$, as a finite collection,
\begin{align}\label{def:data}
\mathcal{D} = \seqof{[s_k,t_k],x(s_k),x(t_k)}{k=1}{K},
\end{align}
such that for each $k$, $I_k:=[s_k,t_k]$ is the time interval determined by
two consecutive observational points $s_k$ and $t_k$, where the states
$x(s_k)$ and $x(t_k)$ have been observed.
{Notice that the set
$\mathcal{D}$ collects all the data corresponding to the $M$ observed paths of the process $X$.
For that reason, it is possible to have $[s_k,t_k]{=}[s_{k'},t_{k'}]$ for $k{\neq} k'$, for instance, in the case of repeated measurements.}
For technical reasons,
we need to define a sequence of \emph{intermediate times}, $\seqof{t_k^*}{k=1}{K}$;
for instance, $t_k^*$ could be the midpoint of $[s_k,t_k]$.
It turns out that the likelihood function, $\text{lik}^c(\theta)$,
corresponding to data obtained from continuously observed paths of $X$ is
relatively easy to derive (see Section \ref{sec:contobservedpaths}). It depends on the total
number of times that each reaction channel fires over the time interval
$[0,T]$ and the values of the monomials $g_j$ evaluated at the jump times of
$X$.
Since the observational times, $t_{n,m}$, are not necessarily equal to the jump times of the process $X$,
we cannot work directly with the likelihood $\text{lik}^c(\theta)$.
For that reason, we consider the Monte Carlo version of the expectation-maximization (EM) algorithm \cite{Dempster77,Casella, WatanabeYamaguchi, McLachlanEM} in which we treat the jump times of $X$ and their corresponding reactions as missing data.
The ``missing data'' can be gathered by simulating \emph{SRN bridges} of the process $X$ conditional on
$\mathcal{D}$, \emph{i.e.}, $X(s_k){=}x(s_k)$ and $X(t_k){=}x(t_k)$ for all intervals $[s_k,t_k]$.
To simulate SRN bridges, we extend the \emph{forward-reverse} technique developed by Bayer and Schoenmakers \cite{Bayer} for It\^o diffusions to the case of SRNs.
As explained in Section \ref{sec:forwardreverse}, the forward-reverse algorithm generates forward paths from $s_k$ to $t_k^*$ and backward paths from $t_k$ to $t_k^*$. An exact SRN bridge is formed when forward and backward paths meet at
$t_k^*$. Observe that the probability of producing SRN bridges strongly depends on the approximation of $\theta$ that we use to generate the forward and backward paths. In addition to exact bridges, in this work we also relax this meeting condition by using a kernel $\kappa$.
{Here, we present a two-phase algorithm that approximates the Maximum Likelihood Estimator, $\hat{\theta}_{\text{MLE}}$, of the vector $\theta$ using the collected data, $\mathcal{D}$.
Phase I is the result of a deterministic procedure while phase II is the result of a stochastic one.
The purpose of phase I is to generate an estimate of $\theta$ that will be used as initial point for phase II.
To this end, in phase I we solve a deterministic global optimization problem: at each time interval $[s_k,t_k]$, we substitute ODE approximations for the means of the forward and reverse stochastic paths, and we minimize a weighted sum of the squared Euclidean distances between these approximations at the intermediate times $t^*_k$. Using the resulting value as a starting point for phase II, we hope to simulate an acceptable number of SRN bridges in each interval $[s_k,t_k]$ without too much computational effort.
Phase I starts at ${\theta^{(0)}_{I}}$ and provides
$\theta^{(0)}_{I\!I}$.
In phase II we run a Monte Carlo EM stochastic sequence $\seqof{\hat{\theta}^{(p)}_{I\!I}}{p=1}{+\infty}$ until a certain convergence criterion is fulfilled. Here we have a schematic representation of the two-phase method:
\begin{equation*}
\theta^{(0)}_{I} \rightarrow \theta^{(0)}_{I\!I}\rightarrow
\hat{\theta}^{(1)}_{I\!I} \rightarrow \cdots \rightarrow
\hat{\theta}^{(p)}_{I\!I}\rightarrow \cdots \rightarrow \hat \theta.
\end{equation*}
During phase II, we intensively use a computationally efficient implementation of the SRN-bridge simulation algorithm for simulating the ``missing data'' that feeds the Monte Carlo EM algorithm. Details are provided in Section \ref{sec:FREM}.
Our two-phase algorithm is named FREM, an acronym for Forward-Reverse Expectation-Maximization. }
Although our FREM algorithm bears a certain similarity to the estimation methodology proposed in \cite{daigle2012accelerated}, there are also notable differences.
In terms of the similarity, in \cite{daigle2012accelerated} the authors propose a two-phase method where the first phase is intended to select a seed for the second phase, which is an implementation of the Monte Carlo EM algorithm.
While our first phase is deterministic and uses the reaction-rate ODEs as approximations of the SRN paths,
theirs is stochastic, and a number of parameters must be chosen to determine the amount of computational work and the accuracy of the estimates.
There is also a major difference in the implementation of the second phase:
while the FREM algorithm focuses on efficiently generating kernel-based SRN bridges using the novel forward-reverse technology introduced by Bayer and Schoenmakers in \cite{Bayer}, the authors of \cite{daigle2012accelerated} propose a trial-and-error shooting method for sampling SRN bridges. This shooting method can be viewed as a particular case of the FREM algorithm obtained by systematically choosing the intermediate point $t^*_k$ to be the right endpoint $t_k$, leaving no room for backward paths.
To quantify the uncertainty in our estimates, we prefer to run our algorithm from a set of over-dispersed initial points, without assuming Gaussianity of the output distribution (see \cite{Casella}).
The variance of our estimators can be easily assessed by bootstrap calculations. In our numerical experiments, we observe that the outputs lie on a low-dimensional manifold in parameter space; this is a further argument against the Gaussianity assumption.
Regarding the stopping criterion proposed in \cite{daigle2012accelerated}, we found that the condition imposed there, namely that three consecutive iterations be close to each other up to a certain tolerance, can be a rare event in some examples, and it may lead to the generation of an excessive number of Monte Carlo EM iterations. We refer to \cite{daigle2012accelerated} for comparisons against other existing statistical inference methods for SRNs.
In \cite{wang} the authors propose a method based on maximum likelihood for parameter inference. It is based on first estimating the gradient of the likelihood function with respect to the parameters by using reversible-jump Markov chain Monte Carlo sampling (RJMCMC) \cite{green95,BoysEtAl2008} and then applying a gradient descent method to obtain the maximum likelihood estimation of the parameter values. The authors provide a formula for the gradient of the likelihood function given the observations.
The idea of the RJMCMC method is to generate an initial reaction path and then generate new samples by adding or deleting a set of reactions from the path using an acceptance method. The authors propose a general method for obtaining a sampler that can work for any reaction system. This sampler can be inefficient in the case of large observation intervals. At this point, we would like to observe that their approach can be combined with ours if, instead of using the RJMCMC method for computing the gradient of the likelihood function, we use our forward-reverse method.
We think that this combination may be useful in cases in which many iterations of our method are needed (see Section \ref{ex:bd} for such an example). This is left as future work.
In the remainder of this section, we formally introduce SRNs and their reaction-rate ODE approximations, the stochastic simulation algorithm and the forward-reverse method. In Section \ref{sec:forwardreverse}, we develop the main result of this article: the extension of the forward-reverse technique to the context of SRNs. The EM algorithm for SRNs is introduced in Section \ref{sec:EM}. Next, in Section \ref{sec:FREM}, we introduce the main application of this article: the forward-reverse EM (FREM) algorithm for SRNs. In Section \ref{sec:compdetails}, we provide computational details for the practical implementation of the FREM algorithm. Later, in Section \ref{sec:numex}, we present numerical examples to illustrate the FREM algorithm and finally, we present our conclusions in Section \ref{conclusions}.
Appendix \ref{sec:algorithms} contains the pseudo-code for the implementation of the FREM algorithm.
\subsection{Stochastic Reaction Networks}
Stochastic Reaction Networks are continuous time Markov chains, $X:[0,T]\times \Omega \to\mbox{$\zset_+^d$}$, that describe the stochastic evolution of a system of $d$ interacting species.
In this context, the $i$-th coordinate of the process $X$, $X_i(t)$, can be interpreted as the
number of individuals of species $i$ present in the system at time $t$.
The system evolves randomly through $J$ different reaction channels $\mathcal{R}_j:=(\nu_j,a_j)$.
Each stoichiometric vector $\nu_j{\in}\mathbb{Z}^d$ represents a possible jump of the system, $x \rightarrow x{+}\nu_j$.
The probability that the reaction $j$ occurs during an infinitesimal interval
$(t,t+\mathrm{d} t)$ is given by
\begin{equation}\label{eq:infdefX}
\prob{\text{reaction } j \text{ fires during } (t,t+\mathrm{d} t) \bigm|
X(t) = x} =
a_j(x) \mathrm{d} t + \ordo{\mathrm{d} t},
\end{equation}
where $a_j:\mathbb{R}^d \to [0,\infty)$ are known as propensity functions.
We set $a_j(x){=}0$ for those $x$ such that $x{+}\nu_j\notin \mbox{$\zset_+^d$}$.
We assume that the initial condition of $X$, $X(0)=x_0\in\mbox{$\zset_+^d$}$ is deterministic and known.
The \emph{stoichiometric matrix} $\nu$ is defined as the matrix whose $j$-column is $\nu_j$ ($\nu^T$ denotes its transpose). The \emph{propensity vector} $a(x) \in \mathbb{R}^J$ has $a_j(x)$ as components.
\begin{ex}[Simple decay model]\label{ex:AB}
Consider the reaction $X \xrightarrow{c} \emptyset$ where one particle is
consumed. In this case,
the state vector $X(t)$ is in $\mathbb{Z}_+$, where $X$ denotes the number
of particles in the system. The stoichiometric vector for this
reaction is $\nu = -1$.
The propensity function in this case could be, for example, $a(X)= c\,X$, with $c>0$.
\end{ex}
Section \ref{sec:numex} contains more examples of stochastic reaction networks.
\subsection{ Deterministic Approximations of SRNs}
The infinitesimal generator $\mathcal{L}_X$ of the process $X$
is a linear operator defined on the set of bounded functions \cite{kurtzmp}.
In the case of SRNs, it is given by
\begin{equation}\label{eq:genX}
\mathcal{L}_X(f)(x) := \sum_j a_j(x) ({f(x+\nu_j)-f(x)}).
\end{equation}
The Dynkin formula, (see \cite{Klebaner})
\begin{equation} \label{eq:Dynkin}
\expt{f(X(t))} = f(X(0)) + \int_0^t \expt{\mathcal{L}_X(f) (s)}\mathrm{d} s,
\end{equation}
can be used to obtain integral equations describing the time evolution of any observable of the process $X$.
In particular, taking the canonical projections $f_i(x)=x_i$, we obtain a system of equations for $\expt{X_i(t)}$,
\begin{align*}
\expt{X_i(t)} = x_{0,i} + \int_0^t \sum_j \nu_{j,i} \expt{a_j(X(s))} \, \mathrm{d} s.
\end{align*}
If all the propensity functions, $a_j$, are affine functions of the state, then this system of equations leads to a closed system of ODEs.
In general, some propensity functions may not depend on the state
$x$ in an affine way, and for that reason, the integral equations for
$\expt{X_i(t)}$ obtained from the Dynkin formula depend on higher moments of
$X$. This can be treated using moment closure techniques \cite{MomentClousure, MomentClousure2} or by
taking a different approach: using a formal first-order Taylor expansion of
$f$ in \eqref{eq:genX}, we obtain the generator
\begin{equation*}
\mathcal{L}_Z(f)(x) := \sum_j a_j(x) {\partial_x f(x) \nu_j },
\end{equation*}
which corresponds to the reaction-rate ODEs (also known as the {mean field}
ODEs)
\begin{align}\label{eq:ODE}
\left\{ \!
\begin{array}{l@{\;}c@{\;}l}
dZ(t) &=& \nu a(Z(t)) dt , \,\, t \in \mathbb{R}_+, \\
Z(0) &=& x_0,
\end{array}
\right.
\end{align}
where the $j$-column of the matrix $\nu$ is $\nu_j$ and $a$ is a column vector with components $a_j$.
This derivation motivates the use of $Z(t)$ as an approximation of $\expt{X(t)}$
in phase I of our FREM algorithm.
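As a minimal illustration, the reaction-rate ODEs \eqref{eq:ODE} can be integrated with a standard numerical solver. The following Python sketch (using SciPy) is generic in the stoichiometric matrix $\nu$ and the propensity vector $a$, and is checked on Example \ref{ex:AB}, for which the exact solution is $Z(t) = x_0 e^{-ct}$; the function names are ours.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def mean_field(nu, propensities, x0, T):
    # dZ/dt = nu a(Z): right-hand side of the reaction-rate ODEs
    rhs = lambda t, z: nu @ propensities(z)
    return solve_ivp(rhs, (0.0, T), np.asarray(x0, float),
                     dense_output=True)

# Simple decay model, a(x) = c x with c = 0.5: Z(t) = x0 exp(-c t)
sol = mean_field(np.array([[-1.0]]),
                 lambda z: np.array([0.5 * z[0]]), [100.0], 10.0)
\end{verbatim}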
\subsection{The Stochastic Simulation Algorithm}
\label{sec:SSA_num_approx}
To simulate paths of the process $X$, we employ the stochastic simulation algorithm (SSA) by Gillespie \cite{Gillespie1976}.
The SSA simulates statistically exact paths of $X$, \emph{i.e.}, the probability law of any path generated by the SSA
satisfies (\ref{eq:infdefX}).
It requires sampling two independent uniform random variables per time step: one is used to find
the time of the next reaction and the other to determine which reaction fires at that time.
Concretely, given the current state of the system, $x:= X(t)$, we simulate
two independent uniform random numbers, $U_1,U_2 \sim \mathcal{U}(0,1)$
and compute:
\begin{equation*}
j = \min \Big \{ k\in \{1,\ldots,J\}: \sum_{i=1}^{k} {a_i(x)} {>}
U_1\, {a_0(x)}\Big\}
, \, \, \tau_{\min} = -\left( a_0(x)\right) ^{-1} \ln \left( U_2 \right),
\end{equation*}
where $a_0(x):=\sum_{j=1}^J a_j(x)$.
The system remains in the state $x$ until the time $t+\tau_{\min}$ when it jumps, $X(t+\tau_{\min})= x+\nu_j$.
In this way, we can simulate a full path of the process $X$.
{Exact paths can be generated using more efficient algorithms like the modified next reaction method by Anderson \cite{Anderson2007}, where only one uniform variate is needed at each step. However, in regimes where the total propensity, $a_0(x)$, is high, approximate path-simulation methods like the hybrid Chernoff tau-leap \cite{ourSL} or its multilevel versions \cite{ourML,ourMixed} may be required.}
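The scheme above translates directly into code; the following Python sketch generates one statistically exact path for a generic SRN given $\nu$ and the propensity vector $a$. It also records the index of the channel fired at each jump, which will be convenient later for computing sufficient statistics.
\begin{verbatim}
import numpy as np

def ssa_path(nu, propensities, x0, T, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=int)
    times, states, reactions = [t], [x.copy()], []
    while True:
        a = propensities(x)
        a0 = float(a.sum())
        if a0 <= 0.0:                 # absorbing state: nothing can fire
            break
        u1, u2 = rng.uniform(), rng.uniform()
        j = int(np.searchsorted(np.cumsum(a), u1 * a0, side="right"))
        tau = -np.log(u2) / a0        # exponential sojourn time
        if t + tau > T:
            break
        t, x = t + tau, x + nu[:, j]
        times.append(t); states.append(x.copy()); reactions.append(j)
    return np.array(times), np.array(states), np.array(reactions)
\end{verbatim}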
\subsection{Bridge Simulation for SDEs}\label{sec:orders}
In \cite{Bayer}, Bayer and Schoenmakers introduced the
so-called forward-reverse algorithm for computing conditional expectations of
path-dependent functionals of a diffusion process conditioned on the values of
the diffusion process at the end-points of the time interval. More precisely,
let $X = X(t)$, $0 \le t \le T$, denote the solution of a $d$-dimensional
stochastic differential equation (SDE) driven by standard Brownian motion. Under
mild regularity conditions, a \emph{stochastic representation} is provided for
conditional expectations of the form,
\begin{equation*}
\mathcal{H} \equiv \expt{\left. g(X) \ \right| \ X_0 = x, \, X_T = y },
\end{equation*}
for fixed values $x, y \in \mathbb{R}^d$ and a (sufficiently regular)
functional $g$ on the path-space.\footnote{In fact, Bayer and Schoenmakers
\cite{Bayer} require $g$ to be a smooth function of the values $X_{t_i}$
of the process $X$ along a grid $t_i$, but a closer look at the paper reveals
that more general, truly path-dependent functionals can be allowed.} More
precisely, they prove a limiting equality of the form
\begin{equation}
\label{eq:forrev-sde}
\mathcal{H} = \frac{\lim_{\epsilon \to 0} \expt{ g( X^{(f)} \circ X^{(b)})
\kappa_\epsilon(X^{(f)}(t^\ast) - X^{(b)}(t^\ast)) \mathcal{Y} }}{
\lim_{\epsilon \to 0} \expt{ \kappa_\epsilon(X^{(f)}(t^\ast) -
X^{(b)}(t^\ast)) \mathcal{Y} } }.
\end{equation}
Here, $X^{(f)}$ is the solution of the original SDE (i.e., is a copy of $X$)
started at $X^{(f)}(0) = x$ and solved until some time $0 < t^\ast <
T$. $X^{(b)}$ is the time-reversal of another diffusion process $Y$ whose
dynamics are again given by an SDE (with coefficients \emph{explicitly} given
in terms of the coefficients of the original SDEs) started at $Y(t^\ast) = y$
and run until time $T$. Hence, $X^{(b)}$ starts at $t^\ast$ and ends at
$X^{(b)}(T) = y$. We then evaluate the functional $g$ on the ``concatenation''
$X^{(f)} \circ X^{(b)}$ of the paths $X^{(f)}$ and $X^{(b)}$, which is a path
on the full interval $[0,T]$ defined by
\begin{equation*}
X^{(f)} \circ X^{(b)} (s) \equiv
\begin{cases}
X^{(f)}(s), & 0 \le s \le t^\ast, \\
X^{(b)}(s), & t^\ast < s \le T.
\end{cases}
\end{equation*}
In particular, we remark that $X^{(f)} \circ X^{(b)}$ may exhibit a jump at
$t^\ast$. Here, $\mathcal{Y}$ is an exponential weighting term of the
form $\mathcal{Y} = \exp\left( \int_{t^\ast}^T c(Y_s) ds \right)$. At last,
$\kappa_\epsilon$ denotes a \emph{kernel} with bandwidth $\epsilon >
0$. Notice that the processes $X^{(f)}$ and the pair $\left( X^{(b)},
\mathcal{Y} \right)$ are chosen to be independent.
Let us roughly explain the structure of the representation
(\ref{eq:forrev-sde}). First note that the term on the right-hand side only
contains standard (unconditional) expectations, implying that the right-hand
side (unlike the left-hand side) is amenable to standard Monte
Carlo simulation which is why we call (\ref{eq:forrev-sde}) a ``stochastic
representation''. The denominator of (\ref{eq:forrev-sde}) actually equals the transition density $p(0,x,T,y)$ of the solution $X$, and its presence
directly follows from the same term in the (analytical) definition of the
conditional expectation in terms of densities. In fact, it was precisely in
this context (i.e., in the context of density estimation) that Milstein,
Schoenmakers and Spokoiny introduced the general idea for the first time
\cite{Milstein2004}.
In essence, the \emph{reverse} process $Y$ can be thought as
an ``adjoint'' process to $X$, as its infinitesimal generator is essentially
the adjoint operator of the infinitesimal generator of $X$ (see below for a
more detailed discussion in the SRN setting).
In a nutshell, the idea is that the law of the diffusion bridge admits a
Radon-Nikodym density with respect to the law of the concatenated process
$X^{(f)} \circ X^{(b)}$ with density given by $\mathcal{Y}$, \emph{provided}
that the trajectories meet at time $t^\ast$, i.e., provided that
$X^{(f)}(t^\ast) = X^{(b)}(t^\ast)$. Of course, this happens only with zero
probability\footnote{In the SRN setting, the probability is
positive, since the state space is discrete.}, so we relax the above
equality with the help of a kernel with a positive bandwidth
$\epsilon$. Furthermore, note that by the independence of $X^{(f)}$ and
$X^{(b)}$, we can independently sample many trajectories of $X^{(f)}$ and many
trajectories of $X^{(b)}$ and then identify all pairs of trajectories
satisfying the approximate identity $X^{(f)}(t^\ast) \approx X^{(b)}(t^\ast)$
as determined by the kernel $\kappa_\epsilon$. This results in a Monte Carlo
algorithm, which, in principle, requires the calculation of a huge double sum
by summing over all pairs of $N$ samples from $X^{(f)}$ and $M$ samples from
$X^{(b)}$. A naive implementation of that algorithm would require a
prohibitive computational cost of order $O(M^2)$ operations, but
fortunately there are more efficient implementations relying on the structure
of the kernel and often reducing the complexity to $O(M \log(M))$
(see \cite{Bayer, BayerMC}). In this way, the
forward-reverse algorithm can nearly achieve the optimal Monte Carlo
convergence rate of $1/2$. More precisely, assuming enough regularity on
the density of $X$ and assuming the use of a kernel of sufficiently high order
(depending on the dimension), the root-mean-squared error of the estimator is
$O(M^{-1/2})$ with a complexity $O(M\log(M))$ and a
bandwidth of $\epsilon = O(M^{-1/d})$. These statements
assume that we can exactly solve the SDEs driving the forward and the
reverse processes. Otherwise, the error induced by, say, the Euler scheme,
will be added.
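In the SRN setting with the Kronecker kernel $\kappa_0$, the pairing step admits a particularly simple near-linear implementation: forward end-states at $t^\ast$ are grouped in a hash map and each reverse sample is matched against its bucket. The Python sketch below is illustrative only and is not the implementation of \cite{Bayer, BayerMC}.
\begin{verbatim}
from collections import defaultdict

def matched_pairs(fwd_states, rev_states, rev_weights):
    # fwd_states/rev_states: integer state tuples at t*;
    # rev_weights: the corresponding Psi weights of the reverse paths
    buckets = defaultdict(list)
    for i, s in enumerate(fwd_states):
        buckets[tuple(s)].append(i)
    for n, s in enumerate(rev_states):
        for i in buckets.get(tuple(s), ()):  # every exact bridge (i, n)
            yield i, n, rev_weights[n]
\end{verbatim}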
The structure of the construction of the forward-reverse representation
(\ref{eq:forrev-sde}) and later of the corresponding Monte Carlo estimator in
\cite{Bayer} strongly suggests that the forward-reverse approach does not
rely on the continuity of diffusion processes, but merely on the Markov
property. Hence, the approach was generalized to discrete time Markov chains
in \cite{BayerMC}, and it is generalized to the case of continuous time
Markov chains with discrete state space in the present work.
For a literature review on computational algorithms for computing conditional
expectations of functionals of diffusion processes we refer to \cite{Bayer}.
\section{Expectations of SRN-Bridge Functionals}
\label{sec:forwardreverse}
In this section, we derive the dynamics of the reverse paths and the expectation formula for SRN-bridge functionals.
The derivation follows the same scheme used in \cite{Milstein2004}, that is:
i) write the master equation, ii) manipulate the master equation to obtain a backward Kolmogorov equation, and iii) derive the infinitesimal generator of the reverse process.
\subsection{The Master Equation}
Let $X$ be a SRN defined by the intensity-reaction pairs $\seqof{(\nu_j,
a_j(x))}{j=1}{J}$. Let $p(t,x,s,y)$ be its transition probability function,
\emph{i.e.}, $p(t,x,s,y){:=}\prob{X(s){=}y\, \big| \, X(t){=}x}$ where $x,y\in\mbox{$\zset_+^d$}$ and
$0{<}t{<}s{<}T$. The function $p$ satisfies the following linear system of
ODEs known as the master equation \cite{Gardiner,Risken,Kampen}:
\begin{align}\label{eq:ME}
\left\{
\begin{array}{rl}
\partial_s p(t,x,s,y) &= \sum_{j=1}^J \left( a_j(y-\nu_j)p(t,x,s,y-\nu_j) - a_j(y)p(t,x,s,y)\right),\\
p(t,x,t,y) &= \delta_{x=y},
\end{array}
\right.
\end{align}
where $\delta$ is the Kronecker delta function.
An analytic solution of \eqref{eq:ME} is in general computationally infeasible.
Even numerical solutions are infeasible for systems with an infinite or very large number of states.
For continuous state spaces, \eqref{eq:ME} becomes a parabolic PDE known as the Fokker-Planck equation.
Next, we derive the generator of the reverse process in the SRN setting.
\subsection{Derivation of the Reverse Process}\label{sec:reverse}
Let us consider a fixed time interval $[t,T]$. For $s\in[t,T]$ and
$x,y\in\mbox{$\zset_+^d$}$, let us define $v(s,y):= \sum_x g(x) p(t,x,s,y)$ { provided
that the sum converges. We remark here that $v$ cannot in general be
interpreted as an expectation of $g$. Indeed, while $\sum_y p(t,x,s,y) = 1$,
the sum over {$x$} could, in principle, even diverge. Hence, it is not
a priori clear that $v$ admits a stochastic representation. However, }
multiplying both sides of the master equation \eqref{eq:ME} by $g(x)$ and
summing over $x$, we obtain:
\begin{align}\label{eq:sumoverx}
\left\{
\begin{array}{rl}
\partial_s v(s,y) &= \sum_{j=1}^J \left( a_j(y-\nu_j)v(s,y-\nu_j) - a_j(y) v(s,y)\right),\\
v(t,y) &= g(y).
\end{array}
\right.
\end{align}
Now, let us consider a time reversal induced by a change of variables
$\tilde s = T+t-s$ with $\tilde v(\tilde s, y) := v(T+t-\tilde s,y) = v(s,y)$ leading to the following backward equation:
\begin{align}\label{eq:sumoverxback}
\left\{
\begin{array}{rl}
-\partial_{\tilde s} \tilde v(\tilde s, y) &= \sum_{j=1}^J \left( a_j(y-\nu_j) \tilde v (\tilde s, y-\nu_j) - a_j(y) \tilde v(\tilde s, y) \right), \,\,t<\tilde s <T ,\\
\tilde v(T,y) &= v(t,y) = g(y) .
\end{array}
\right.
\end{align}
Let $\tilde \nu_j := - \nu_j$. By adding and subtracting the term $a_j(y+\tilde \nu_j)\tilde v(\tilde s, y)$, we can write the first equation of \eqref{eq:sumoverxback} as
\begin{align*}
\partial_{\tilde s}\tilde v(\tilde s, y) + \sum_{j=1}^J \left( a_j(y+\tilde \nu_j)\left( \tilde v(\tilde s, y +\tilde \nu_j) - \tilde v(\tilde s, y)\right) + \left( a_j(y+\tilde \nu_j)- a_j(y)\right) \tilde v(\tilde s, y) \right)=0.
\end{align*}
As a consequence, the system \eqref{eq:sumoverxback} can be written as
\begin{align}\label{eq:kbeprevious}
\left\{
\begin{array}{ll}
\partial_{\tilde s} \tilde v(\tilde s, y) + \sum_{j=1}^J a_j(y+\tilde \nu_j)\left( \tilde v(\tilde s, y+\tilde \nu_j) - \tilde v(\tilde s, y) \right) + c(y) \tilde v(\tilde s, y) = 0,\\
\tilde v(T,y) = g(y),
\end{array}
\right.
\end{align}
where $c(y):= \sum_{j=1}^J \left( a_j(y+\tilde \nu_j) - a_j(y) \right)$.
Let us now define $\tilde a_j(y) := a_j(y+\tilde \nu_j)$ and substitute it into \eqref{eq:kbeprevious}. We arrive at
the following backward Kolmogorov equation \cite{RogersWilliams} for the cost-to-go function $\tilde v(\tilde s, y)$,
\begin{equation}\label{eq:reverse}
\left\{
\begin{array}{ll}
\partial_{\tilde s} \tilde v(\tilde s, y) + \sum_{j=1}^J \tilde a_j(y)\left( \tilde v(\tilde s, y+\tilde \nu_j) - \tilde v(\tilde s, y) \right) + c(y) \tilde v(\tilde s, y) = 0,\\
\tilde v(T,y) = g(y) .
\end{array}
\right.
\end{equation}
We recognize in \eqref{eq:reverse} the generator
$\mathcal{L}_Y (\tilde v)(\tilde s, y):= \sum_{j=1}^J \tilde a_j(y)\left( \tilde v(\tilde s, y+\tilde \nu_j) - \tilde v(\tilde s, y)\right)$ that defines the so-called reverse process $Y\equiv \{Y(\tilde s,\omega)\}_{t \leq \tilde s \leq T}$ by
\begin{align}
\label{eq:reverse-dynamics}
\prob{Y(\tilde s+d\tilde s) = y+\tilde \nu_j \, \big| \, Y(\tilde s)=y} = \tilde a_j(y) d\tilde s
\end{align}
or equivalently by,
\begin{align}
\prob{Y(\tilde s+d\tilde s) = y-\nu_j \, \big| \, Y(\tilde s)=y} = a_j(y-\nu_j) d\tilde s.
\end{align}
The Feynman-Kac formula \cite{RogersWilliams} provides a stochastic representation of the solution of \eqref{eq:reverse},
\begin{align}
\tilde v(\tilde s, y) = \expt{g(Y(T)) \exp\left( \int_{\tilde s}^T c(Y(s))ds\right)\, \big| \, Y(\tilde s)= y}.
\end{align}
Notice that $Y$ is a SRN in its own right.
We note in passing that stochastic representations based on shifted evaluations of the propensities have been derived independently in \cite{kt,kkst} to estimate
variations and differences of the cost-to-go function.
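As a concrete check, consider the simple decay model of Example \ref{ex:AB}, with $\nu = -1$ and $a(x) = c\,x$. Then $\tilde \nu = +1$, $\tilde a(y) = a(y+1) = c\,(y+1)$ and $c(y) = c\,(y+1) - c\,y = c$, so the reverse process is a pure birth process and the Feynman-Kac weight reduces to the deterministic factor $\Psi = e^{c\,(T-\tilde s)}$. The following Python sketch builds the reverse channels of a generic SRN according to \eqref{eq:reverse-dynamics}; the function names are ours.
\begin{verbatim}
import numpy as np

def reverse_channels(nu, propensities):
    # tilde_nu_j = -nu_j and tilde_a_j(y) = a_j(y - nu_j)
    nu_rev = -nu
    def propensities_rev(y):
        return np.array([propensities(y - nu[:, j])[j]
                         for j in range(nu.shape[1])])
    def weight_rate(y):
        # c(y) = sum_j (a_j(y - nu_j) - a_j(y)), the Feynman-Kac rate
        return float(propensities_rev(y).sum() - propensities(y).sum())
    return nu_rev, propensities_rev, weight_rate
\end{verbatim}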
\subsection{The Forward-Reverse Formula for SRN}\label{sec:SRNformula}
Let us consider a time interval $[s,t]$ and assume that we only observe the
process $X$ on the end points, i.e., that we have $X(s) = x$ and $X(t) = y$
for some observed values $x,y \in \mathbb{Z}^d_+$. Fix an intermediate time
$s {<}t^\ast {<} t$, which will be considered a numerical input parameter later
on. Denote by $X^{(f)}$ the process $X$ conditioned on \emph{starting} at
$X^{(f)}(s) = x$ and restricted to the time domain $[s, t^\ast]$.
Furthermore, let $Y$ denote the reverse process constructed
in~(\ref{eq:reverse-dynamics}) on the time domain $[t^\ast, t]$ (i.e.,
inserting $t^\ast$ for $t$ and $t$ for $T$ in the above subsection) started at
$Y(t^\ast) = y$. As noted above, $Y$ is again an SRN with reaction channels
$\seqof{(-\nu_j,\tilde a_j)}{j{=}1}{J}$. For convenience, we also introduce
the notation $X^{(b)}$ for the process $Y$ run backward in time, i.e., we
define $X^{(b)}(u){:=}Y(t^\ast{+}t{-}u)$ for $u{\in}[t^*,t]$, and notice that
$X^{(b)}(t) = y$.
Recall that we aim to provide a \emph{stochastic representation},
\emph{i.e.}, a representation containing standard expectations only, for
conditional expectations of the form,
\begin{equation}
\label{eq:H-general}
\mathcal{H}(x,y) \equiv \expt{\left. \Phi\left(X, [s,t] \right) \, \right| \, X(s) =
x,\, X(t) = y},
\end{equation}
for $\Phi$ mapping $\mathbb{Z}^d_+$-valued paths to real numbers. Obviously, $\Phi$
needs to be integrable in order for $\mathcal{H}$ to be well defined, and we shall also
assume polynomial growth conditions on $\Phi$ and its derivatives
with respect
to jump-times of the underlying path. Moreover, we assume that $p(s,x,t,y) >
0$.
Once again, the fundamental idea of
the forward-reverse algorithm of Bayer and Schoenmakers \cite{Bayer} is to
simulate trajectories of $X^{(f)}$ and (independently) of $X^{(b)}$ and then
look for any pairs that are ``linked''. Since the state space is now
discrete, we may, in principle, require exact linkage in the sense that we may
only consider pairs such that $X^{(f)}(t^\ast) = X^{(b)}(t^\ast)$. However, in
order to decrease the variance of the estimator, it may once again be
advantageous to relax this condition by introducing a \emph{kernel}.
By a kernel, we understand a function $\kappa: \mathbb{Z}^d \to \mathbb{R}$ satisfying
\begin{equation*}
\sum_{x \in \mathbb{Z}^d} \kappa(x) = 1.
\end{equation*}
Moreover, we call $\kappa$ a kernel of order $r \ge 0$ if, in addition,
\begin{equation*}
\sum_{x \in \mathbb{Z}^d} x^\alpha \kappa(x) = 0
\end{equation*}
for any multi-index $\alpha$ with $1\le|\alpha| \le r$, where
$|\alpha| := \alpha_1+\cdots+\alpha_d$ and $x^{\alpha}:= x_1^{\alpha_1}\cdots x_d^{\alpha_d}$, $\alpha\in \{0,1,2,\ldots\}^d$. For instance, any
non-negative symmetric kernel has order $r=1$ in this sense.
Having fixed one such kernel $\kappa$, we define a whole family of kernels
$\kappa_\epsilon$, indexed by the \emph{bandwidth} $\epsilon \ge 0$, by
\begin{equation*}
\kappa_\epsilon(x) = C_\epsilon \kappa\left( \frac{x}{\epsilon} \right)
\end{equation*}
with the constant $C_\epsilon$ being defined by the normalization condition
$\sum_{x \in \mathbb{Z}^d} \kappa_\epsilon(x) = 1$. {Here, we implicitly assume the
kernel, $\kappa$, to be extended to $\mathbb{R}^d$, for instance in a piecewise constant way.}
As we necessarily have
$\kappa(x) \to 0$ as $|x| \to \infty$, it is easy to see that we have the
special case
\begin{equation*}
\kappa_0(x) =
\begin{cases}
1, & x = 0,\\
0, & x \neq 0.
\end{cases}
\end{equation*}
\begin{rem}
\label{rem:kernel-bandwidth}
The Kronecker kernel $\kappa_0$ can also be realized as $\kappa_0 = \kappa_{\epsilon_0}$ for some
$\epsilon_0 > 0$, which will depend on the base kernel $\kappa$, provided
that the base kernel $\kappa$ has finite support.
\end{rem}
\begin{theorem}
\label{thr:representation}
Let $\Phi$ be a continuous real-valued functional on the space of piecewise
constant functions defined on $[s,t]$ and taking values in $\mathbb{Z}^d$
(w.r.t.~the uniform topology) such that
both $\mathcal{H}$ and the right-hand side of~(\ref{eq:themain}) are finite for any
$\epsilon$.
With $\kappa_\epsilon$, $X^{(f)}$ and $X^{(b)}$ as above, we have
\begin{equation}
\label{eq:themain}
\mathcal{H}(x,y) = \lim_{\epsilon \to 0} \frac{\expt{ \Phi\left( X^{(f)} \circ
X^{(b)}, [s,t] \right) \kappa_\epsilon(X^{(f)}(t^\ast) -
X^{(b)}(t^\ast)) \Psi\left( X^{(b)}, [t^\ast,t] \right)
}}{\expt{ \kappa_\epsilon(X^{(f)}(t^\ast) -
X^{(b)}(t^\ast)) \Psi\left( X^{(b)}, [t^\ast,t] \right) }},
\end{equation}
where $X^{(f)} \circ X^{(b)}$ denotes the \emph{concatenation} of the paths
$X^{(f)}$ and $X^{(b)}$ in the sense defined by
\begin{equation*}
X^{(f)} \circ X^{(b)} (u) \equiv
\begin{cases}
X^{(f)}(u), & s \le u \le t^\ast, \\
X^{(b)}(u), & t^\ast < u \le t,
\end{cases}
\end{equation*}
and
\begin{equation*}
\Psi(Z, [a,b]) {:=} \exp\left( \int_a^b c\left( Z(u) \right) du\right).
\end{equation*}
\end{theorem}
\begin{rem}
In line with Remark~\ref{rem:kernel-bandwidth}, we note that we could easily
have avoided taking limits in Theorem~\ref{thr:representation} by replacing
$\kappa_\epsilon$ with $\kappa_0$ everywhere in~(\ref{eq:themain}).
At this stage we note that the Monte Carlo estimator based on~(\ref{eq:themain})
with positive $\epsilon$ will have considerably smaller variance than the
version with $\epsilon = 0$, potentially outweighing the increased bias.
\end{rem}
\begin{proof}[Sketch of proof of Theorem~\ref{thr:representation}]
For simplicity, we assume that the kernel $\kappa$ has finite support and that the functional $\Phi$ is uniformly bounded.
We will prove convergence of the numerator and the denominator
in~(\ref{eq:themain}) separately. Let us, hence, prove the more general
case first, i.e., the convergence
\begin{multline}
\label{eq:auxiliary-rep}
\mathrm{h}(x,y) {:=} \mathcal{H}(x,y) \,p(s,x,t,y) = \\
\lim_{\epsilon\to0} \expt{ \Phi\left(
X^{(f)} \circ X^{(b)}, [s,t] \right) \kappa_\epsilon(X^{(f)}(t^\ast) -
X^{(b)}(t^\ast)) \Psi\left( X^{(b)}, [t^\ast,t] \right) }.
\end{multline}
In the first step, we assume that $\Phi(Z, [s,t])$ only depends on the
values of $Z$ on a fixed grid, say $s = t_0 < t_1 < \cdots < t_n = t$, i.e.,
\begin{equation*}
\Phi(Z, [s,t]) = g\left(Z(t_0), \ldots, Z(t_n) \right).
\end{equation*}
Then~(\ref{eq:auxiliary-rep}) is proved (with minor modifications) in
{\cite{Bayer} (Theorem 3.4)}. Indeed, a closer look at that proof reveals
that only Markovianity of $X$ is really used.
Furthermore, note that any continuous functional $\Phi$ can be approximated
by functionals $\Phi_n$ depending only on the values of the process on a
(ever finer) finite grid $t_0, \ldots, t_n$. As, on the one side,
\begin{multline*}
\mathrm{h}(x,y) = \expt{\left. \Phi\left(X, [s,t] \right) \, \right| \, X(s) =
x,\, X(t) = y }p(s,x,t,y) = \\
\lim_{n\to\infty}
\expt{\left. \Phi_n\left(X, [s,t] \right) \, \right| \, X(s) = x,\, X(t) =
y} p(s,x,t,y)
\end{multline*}
and, on the other side,
\begin{multline*}
\lim_{\epsilon\to0} \lim_{n\to\infty} \expt{ \Phi_n\left(
X^{(f)} \circ X^{(b)}, [s,t] \right) \kappa_\epsilon(X^{(f)}(t^\ast) -
X^{(b)}(t^\ast)) \Psi\left( X^{(b)}, [t^\ast,t] \right) } =\\
\lim_{\epsilon\to0} \expt{ \Phi\left(
X^{(f)} \circ X^{(b)}, [s,t] \right) \kappa_\epsilon(X^{(f)}(t^\ast) -
X^{(b)}(t^\ast)) \Psi\left( X^{(b)}, [t^\ast,t] \right) }.
\end{multline*}
We are left to prove that
\begin{multline*}
\lim_{\epsilon\to0} \lim_{n\to\infty} \expt{ \Phi_n\left(
X^{(f)} \circ X^{(b)}, [s,t] \right) \kappa_\epsilon(X^{(f)}(t^\ast) -
X^{(b)}(t^\ast)) \Psi\left( X^{(b)}, [t^\ast,t] \right) } =\\
\lim_{n\to\infty} \lim_{\epsilon\to0} \expt{ \Phi_n\left(
X^{(f)} \circ X^{(b)}, [s,t] \right) \kappa_\epsilon(X^{(f)}(t^\ast) -
X^{(b)}(t^\ast)) \Psi\left( X^{(b)}, [t^\ast,t] \right) },
\end{multline*}
which follows as $\kappa_0 = \kappa_{\epsilon_0}$ for some $\epsilon_0 >
0$. In fact, it even follows in the general case by
dominated convergence.
Finally, the proof of convergence of the denominator is a special case of
the proof for the numerator, and therefore, the convergence of the fraction follows from the
continuity of $(a,b) \mapsto a/b$ for $b > 0$.
\end{proof}
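In practice, the representation \eqref{eq:themain} is evaluated by Monte Carlo: we simulate many forward and reverse trajectories independently, match them at $t^\ast$, and form the ratio of weighted averages. The following Python sketch (building on the helpers sketched earlier and using the Kronecker kernel) is a simplified illustration; the concatenation of paths and the evaluation of $\Phi$ are left abstract.
\begin{verbatim}
def forward_reverse_estimate(pairs, Phi, concat, fwd_paths, rev_paths):
    # pairs: iterable of (i, n, psi) from matched_pairs; the estimator is
    # the ratio of E[Phi * Psi] and E[Psi] restricted to matched bridges
    num = den = 0.0
    for i, n, psi in pairs:
        num += Phi(concat(fwd_paths[i], rev_paths[n])) * psi
        den += psi
    return num / den if den > 0.0 else float("nan")
\end{verbatim}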
\section{The EM Algorithm for SRN}
\label{sec:EM}
In this section, we present the EM algorithm for SRNs, which is the main step in computing the parameter estimates. First, we explain the EM algorithm in general, and then, we derive the log-likelihood function for a fixed realization of the process $X$. Finally, we present the EM algorithm for SRNs.
\subsection{The EM Algorithm}\label{met:EM}
{
The EM algorithm \cite{Dempster77,Casella, WatanabeYamaguchi, McLachlanEM}
is named after its two steps: expectation and maximization.
It is an iterative algorithm that, given an initial guess and a stopping rule, provides an approximation to a local maximum or saddle point of the likelihood function, $\text{lik}(\theta \, \big| \, \mathcal{D})$.
It is a data augmentation technique in the sense that the maximization of the likelihood $\text{lik}(\theta \, \big| \, \mathcal{D})$ is performed by treating the data $\mathcal{D}$ as a part of a larger data set, $(\mathcal{D},\tilde {\mathcal{D}})$, where the complete-likelihood, $\text{lik}^c(\theta \, \big| \, \mathcal{D},\tilde {\mathcal{D}})$, is amenable to maximization.
Given an initial guess $\theta^{(0)}$, the EM algorithm maps $\theta^{(p)}$ into $\theta^{(p+1)}$ by the
\bigskip
\begin{enumerate}
\item expectation step: $Q_{\theta^{(p)}}(\theta \, \big| \,\mathcal{D}) := \mathrm{E}_{\theta^{(p)}}\left[{\log(\text{lik}^c(\theta\, \big| \, \mathcal{D},\tilde {\mathcal{D}}))\, \big| \, \mathcal{D}}\right]$, and the
\item maximization step: $\theta^{(p+1)} := \arg \max_{\theta} Q_{ \theta^{(p)}}(\theta \, \big| \,\mathcal{D})$.
\end{enumerate}
\bigskip
Here, $\mathrm{E}_{\theta^{(p)}}\left[ \cdot \, \big| \, \mathcal{D}\right]$, denotes the expectation associated with the distribution of $\tilde {\mathcal{D}}$ under the parameter choice
$\theta^{(p)}$, conditional on the data, $\mathcal{D}$.
In many applications, the expectation step is computationally infeasible and $Q_{\theta^{(p)}}(\theta \, \big| \, \mathcal{D})$ should be approximated by some estimate,
\begin{align*}
\hat Q_{\theta^{(p)}}(\theta \, \big| \, \mathcal{D}) &:= \hat{\mathrm{E}}_{\theta^{(p)}} \left[\log(\text{lik}^c(\theta\, \big| \, \mathcal{D},\tilde {\mathcal{D}}))\, \big| \, \mathcal{D}\right]
.
\end{align*}
\begin{rem}[The Monte Carlo EM]\label{rem:MCEM}
If we know how to sample a sequence of $M$ independent variates $\seqof{\tilde {\mathcal{D}}_i}{i=1}{M} \sim \tilde {\mathcal{D}} \, \big| \, \mathcal{D}$, with parameter $\theta^{(p)}$, then we can define the following Monte Carlo estimator of $Q_{\theta^{(p)}}(\theta \, \big| \,\mathcal{D})$:
\begin{align*}
\hat Q_{\theta^{(p)}}(\theta \, \big| \,\mathcal{D}) &:=
\frac{1}{M} \sum_{i=1}^M
\log(\text{lik}^c(\theta\, \big| \, \mathcal{D},\tilde {\mathcal{D}}_i)).
\end{align*}
\end{rem}
In Section \ref{sec:FREM}, we describe how to simulate exact and approximate samples of $\tilde {\mathcal{D}} \, \big| \, \mathcal{D}$.
}
\subsection{The Log-Likelihood Function for Continuously Observed Paths}\label{sec:contobservedpaths}
The goal of this section is to derive an expression for the likelihood of a particular path, $(X(t,\omega_0))_{t\in[0,T]}$, of the process $X$, where $\omega_0 \in \Omega$ is a fixed realization.
An important assumption in this work is that the propensity functions $a_j$
can be written as $a_j(x) = c_j g_j(x)$ for $j{=}1,\ldots,J$ and
$x\in\mbox{$\zset_+^d$}$ {where $g_j$ are known functionals and $c_j$ are
considered the unknown parameters.} Define $\theta{:=}(c_1,\ldots,c_J)$.
Let us denote the jump times of $(X(t,\omega_0))_{t\in[0,T]}$ in $(0,T)$
by $\xi_1, \xi_2, \ldots, \xi_{N-1}$.
Define $\xi_0:=0$, $\xi_N:=T$ and $\Delta \xi_i = \xi_{i+1} - \xi_{i}$ for $i=0,1,\ldots,N-1$.
Let us assume that the system is in the state $x_0$ at time $0$.
We have that $\xi_1$ is the time of the first reaction or, equivalently, the time that the system spends at $x_0$ (the sojourn or holding time at state $x_0$).
Let us denote by $\nu_{\xi_1}$ the reaction that takes place at $\xi_1$, and therefore, the system at time $\xi_1$ is in the state $x_1:= x_0 + \nu_{\xi_1}$.
From the SSA algorithm, it is easy to see that the probability density corresponding to this transition is the product $a_{\nu_{\xi_1}}(x_0) \exp{(-a_0(x_0) \Delta \xi_0)}$.
By the Markov property we can see that the density of one path $\seqof{(\xi_i,x_i)}{i=0}{N-1}$ is given by
\begin{equation}\label{likpath}
\prod_{i=1}^{N-1} a_{\nu_{\xi_i}}(x_{i-1})\exp{(-a_0(x_{i-1})\Delta \xi_{i-1})}\times \exp{(-a_0(x_{N-1})\Delta \xi_{N-1})}.
\end{equation}
The last factor in \eqref{likpath} is due to the fact that we know that the system will remain in the state $x_{N-1}$ in the time interval $[\xi_{N-1},T)$.
Rearranging the factors in \eqref{likpath}, we obtain
\begin{equation}\label{eprod}
\exp{\left( -\sum_{i=0}^{N-1} a_0(x_i) \Delta \xi_{i}\right)} \prod_{i=1}^{N-1} a_{\nu_{\xi_i}}(x_{i-1}).
\end{equation}
Now, taking logarithms in \eqref{eprod} we have
\begin{equation*}
-\sum_{i=0}^{N-1} a_0(x_i) \Delta \xi_{i} + \sum_{i=1}^{N-1} \log(a_{\nu_{\xi_i}}(x_{i-1})),
\end{equation*}
which by the definition of $a_0$ can be written as
\begin{equation*}
-\sum_{i=0}^{N-1} \sum_{j=1}^J a_j(x_i) \Delta\xi_{i} + \sum_{i=1}^{N-1} \log(c_{\nu_{\xi_i}}g_{\nu_{\xi_i}}(x_{i-1})).
\end{equation*}
Interchanging the order in the summation and denoting the number of times that the reaction $\nu_j$ occurred in the interval $[0,T]$ by $R_{j,[0,T]}$, we have
\begin{equation}\label{lastuseless}
\sum_{j=1}^J\left( -c_j \sum_{i=0}^{N-1} g_j(x_i) \Delta \xi_{i} + \log(c_j) R_{j,[0,T]} \right) + \sum_{i=1}^{N-1} \log(g_{\nu_{\xi_i}}(x_{i-1})).
\end{equation}
Observing that the last term in \eqref{lastuseless} does not depend on
$\theta$, the complete log-likelihood of the path $(X(t,\omega_0))_{t\in[0,T]}$
is {up to constant terms} given by
\begin{equation}
\ell^c(\theta) := \sum_{j=1}^J \left( \log(c_j) R_{j,[0,T]} - c_j F_{j,[0,T]} \right),\, \text{ with } \theta {=} (c_1,\ldots,c_J),
\end{equation}
where $F_{j,[0,T]}:= g_j(x_0)\Delta \xi_0 +\cdots+ g_j(x_{N-1})\Delta
\xi_{N-1} = \int_0^T
g_j(X(s))\,ds$. {The last equality is due to $g_j$ being piecewise constant in the partition $\{\xi_0,\xi_1,\ldots,\xi_N\}$.}
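Given a simulated path, the sufficient statistics $R_{j,[0,T]}$ and $F_{j,[0,T]}$ are straightforward to accumulate. The Python sketch below assumes that the sampler also records the index of the channel fired at each jump (as in the SSA sketch above) and that \texttt{g(x)} returns the vector of monomials $(g_1(x),\ldots,g_J(x))$.
\begin{verbatim}
import numpy as np

def sufficient_statistics(times, states, reactions, g, T, J):
    # R_j: number of firings of channel j; F_j = int_0^T g_j(X(s)) ds
    R = np.bincount(reactions, minlength=J).astype(float)
    F = np.zeros(J)
    for i in range(len(times)):
        dt = (times[i + 1] if i + 1 < len(times) else T) - times[i]
        F += np.asarray(g(states[i])) * dt   # g is constant between jumps
    return R, F
\end{verbatim}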
Now let us assume that we have a collection of intervals, $\seqof{I_k =[s_k,t_k]}{k=1}{K}\subset [0,T]$, where we have continuously observed the process $(X(t,\cdot))_{t\in I_k}$ at each $I_k$.
We define the log-likelihood function as:
\begin{equation*}
\ell^c(\theta) := \sum_{j=1}^J \left( \log(c_j)\sum_{k=1}^K R_{j,I_k} - c_j \sum_{k=1}^K F_{j,I_k}\right).
\end{equation*}
\begin{rem} \label{rem:nonrandom}
Note that $R_{j,I_k}$ and $F_{j,I_k}$ are random variables, which are functions of the full paths of $X$ but not of the discretely observed paths. Hence, they are random given the data $\mathcal{D}$ as defined in \eqref{def:data}.
\end{rem}
\vspace{1mm}
\subsection{The EM Algorithm for SRNs}
Following Section \ref{met:EM}, for a particular value of the parameter $\theta$, say $\theta^{(p)}$, we define
\begin{align*}\label{loglikc}
Q_{\theta^{(p)}}(c_1,\ldots,c_J \, \big| \, \mathcal{D}) := \sum_{j=1}^J \left( \log(c_j)\sum_{k=1}^K
\mathrm{E}_{\theta^{(p)}}\left[{R_{j,I_k}\, \big| \, \mathcal{D}}\right] - c_j \sum_{k=1}^K \mathrm{E}_{\theta^{(p)}}\left[{F_{j,I_k}\, \big| \, \mathcal{D}}\right]\right),
\end{align*}
where
$\mathrm{E}_{\theta^{(p)}}\left[{R_{j,I_k}\, \big| \, \mathcal{D}}\right] =\mathrm{E}_{\theta^{(p)}}\left[{R_{j,I_k}\, \big| \, X(s_k){=}x(s_k),X(t_k){=}x(t_k)}\right]$ (by the Markov property), and analogously for $F_{j,I_k}$.
Now consider the partial derivatives of $Q_{\theta^{(p)}}(c_1,\ldots,c_J \, \big| \, \mathcal{D})$ with respect to $c_j$
\begin{align*}
\partial_{c_j} Q_{\theta^{(p)}}(c_1,\ldots,c_J \, \big| \, \mathcal{D})=
\frac{1}{c_j}\sum_{k=1}^K \mathrm{E}_{\theta^{(p)}}\left[{R_{j,I_k}\, \big| \, \mathcal{D}}\right]
-\sum_{k=1}^K \mathrm{E}_{\theta^{(p)}}\left[{F_{j,I_k}\, \big| \,\mathcal{D}}\right].
\end{align*}
Therefore, $\nabla Q_{\theta^{(p)}}(c_1,\ldots,c_J \, \big| \,\mathcal{D}) = 0$ is obtained at $\theta^*= \left( c^*_1,\ldots,c^*_J \right)$ such that
\begin{equation}
c^*_j = \frac{\sum_{k=1}^K \mathrm{E}_{\theta^{(p)}}\left[{ R_{j,I_k}\, \big| \,
\mathcal{D}}\right]}{\sum_{k=1}^K
\mathrm{E}_{\theta^{(p)}}\left[{F_{j,I_k}\, \big| \, \mathcal{D}}\right]}, \ j{=}1,
\ldots, J.
\end{equation}
This is clearly the global maximization point of the function $Q_{\theta^{(p)}}(\cdot \, \big| \, \mathcal{D})$.
The EM algorithm for this particular problem generates a {deterministic} sequence
$\seqof{\theta^{(p)}}{p=1}{+\infty}$ that starts from a deterministic initial guess
$\theta^{(0)}$ provided by phase I (see Section \ref{sec:phaseI}) and evolves by
\begin{equation}\label{eq:EMiteration1}
c^{(p+1)}_j = \frac{\sum_{k=1}^K \mathrm{E}_{\theta^{(p)}}\left[{ R_{j,I_k}\, \big| \, \mathcal{D}}\right]}{\sum_{k=1}^K \mathrm{E}_{\theta^{(p)}}\left[{F_{j,I_k}\, \big| \, \mathcal{D}}\right]},
\end{equation}
where $\theta^{(p)} = \left( c_1^{(p)},\ldots, c_J^{(p)} \right)$.
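The EM step \eqref{eq:EMiteration1} is then a simple componentwise ratio once the conditional expectations are available. A schematic Python version, assuming hypothetical callables \texttt{expected\_R} and \texttt{expected\_F} that return the $J$-vectors of conditional expectations for interval $k$ under the current parameter, reads:
\begin{verbatim}
import numpy as np

def em_update(theta, expected_R, expected_F, K):
    # c_j^(p+1) = sum_k E[R_{j,I_k}|D] / sum_k E[F_{j,I_k}|D],
    # with both expectations evaluated under the current theta.
    num = sum(expected_R(theta, k) for k in range(K))
    den = sum(expected_F(theta, k) for k in range(K))
    return np.asarray(num) / np.asarray(den)
\end{verbatim}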
\section{Forward-Reverse Monte Carlo EM Algorithm for SRNs}
\label{sec:FREM}
In this section, we present a two-phase algorithm for estimating the parameter $\theta$.
Phase I is deterministic while phase II is stochastic.
We consider the data, $\mathcal{D}$, as given by \eqref{def:data}. The main goal of this section is to provide a Monte Carlo version of formula \eqref{eq:EMiteration1}.
\subsection{Phase I: using Approximating ODEs}\label{sec:phaseI}
The objective of phase I is to find a suitable initial point ${\theta}^{(0)}_{I\!I}$ that reduces the variance (and hence the computational work) of phase II
by increasing (in some cases dramatically) the number of SRN bridges obtained from the sampled forward-reverse trajectories over all time intervals.
Let us now describe phase I. From the user-selected seed, $\theta^{(0)}_{I}$,
we solve the following deterministic optimization problem using some appropriate numerical iterative method:
\begin{align}\label{eq:seedI}
{\theta}^{(0)}_{I\!I} := \operatorname*{arg\,min}_{\theta\geq 0}
\sum_{k} w_k\,
\norm{\tilde Z^{(f)}(t_k^*;\theta)- \tilde Z^{(b)}(t_k^*;\theta)}^2
.
\end{align}
Here, $\tilde Z^{(f)}$ is the ODE approximation defined by \eqref{eq:ODE} in the interval $[s_k,t_k^*]$, to the SRN defined by the reaction channels, $\seqof{(\nu_j,a_j)}{j=1}{J}$, and the initial condition $x(s_k)$;
$\tilde Z^{(r)}$ is the ODE approximation in the interval $[t_k^*,t_k]$ to the SRN defined by the reaction channels,
$\seqof{(-\nu_j,\tilde a_j)}{j=1}{J}$, and by the initial condition $x(t_k)$.
Let us recall that in Section \ref{sec:reverse}, $\tilde a_j(x)$ was defined as $a_j(x{-}\nu_j)$.
We define $\tilde Z^{(b)}(u,\theta){:=}\tilde Z^{(r)}(t_k^*{+}t_k{-}u,\theta)$ for $u\in[t_k^*,t_k]$. Furthermore, $w_k {:=} (t_k{-}s_k)^{-1}$ and $\norm{\cdot}$ is the Euclidean norm in $\mathbb{R}^d$.
This particular choice of weight factors mitigates the effect of very large time intervals, where the evolution of the process, $X$, may be more uncertain. A better (but more costly) choice would be the inverse of the maximal variance of the SRN bridge.
\begin{rem}[An alternative definition of ${\theta}^{(0)}_{I\!I}$]\label{rem:alternative} In some cases, convergence issues arise when solving the problem \eqref{eq:seedI}. We found it useful to solve a set of simpler problems whose answers can be combined to provide a reasonable seed for phase II:
more precisely, we solve $K$ deterministic optimization problems, one for each time interval $[s_k,t_k]$:
\begin{align*}
\lambda_{k} := \operatorname*{arg\,min}_{\theta\geq 0} \norm{\tilde Z^{(f)}(t_k^*;\theta)-\tilde Z^{(b)}(t_k^*;\theta)},
\end{align*}
all of which were solved iteratively with the same seed, $\theta^{(0)}_{I}$. Then, we define
\begin{align}\label{eq:seedIalt}
{\theta}^{(0)}_{I\!I}:= \frac{\sum_k w_k \lambda_{k} }{\sum_k w_k}.
\end{align}
\end{rem}
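A minimal Python sketch of the per-interval variant \eqref{eq:seedIalt}, using SciPy and assuming the fluid-limit ODE of \eqref{eq:ODE}, could look as follows. Here we simply integrate the same reaction-rate ODE backward in time from $x(t_k)$, which agrees with the reverse ODE up to the shift $\tilde a_j(x)=a_j(x-\nu_j)$, negligible at the fluid-limit scale; all function names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def phase_one_seed(theta0, intervals, nu, g):
    # intervals: list of (s_k, t_k, x(s_k), x(t_k));
    # nu: J x d stoichiometric matrix; g(z): vector of g_j(z).
    def rhs(t, z, c):
        return nu.T @ (c * g(z))        # dz/dt = sum_j nu_j c_j g_j(z)

    def mismatch(c, s, t, xs, xt):
        tstar = 0.5 * (s + t)
        fwd = solve_ivp(rhs, (s, tstar), xs, args=(c,)).y[:, -1]
        bwd = solve_ivp(rhs, (t, tstar), xt, args=(c,)).y[:, -1]
        return np.sum((fwd - bwd) ** 2)  # match at the midpoint t_k^*

    lams, ws = [], []
    for (s, t, xs, xt) in intervals:
        res = minimize(mismatch, theta0, args=(s, t, xs, xt),
                       bounds=[(1e-8, None)] * len(theta0))
        lams.append(res.x)
        ws.append(1.0 / (t - s))         # w_k = (t_k - s_k)^{-1}
    w = np.asarray(ws)
    return (w[:, None] * np.asarray(lams)).sum(axis=0) / w.sum()
\end{verbatim}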
\subsection{Phase II: the Monte Carlo EM}
In our statistical estimation approach, the Monte Carlo EM Algorithm uses data (pseudo-data) generated by those forward and backward simulated paths that result in exact or approximate SRN bridges. In Figure \ref{fig:frpaths}, we illustrate this idea for the wear example data presented in Section \ref{ex:wear}. Phase II implements the Monte Carlo EM algorithm for SRNs.
\begin{figure}[h!]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[scale=0.40]{Wear_Cilindri_T1_FR_paths.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[scale=0.3]{Wear_Cilindri_T1_FR_paths_zoom.pdf}
\end{minipage}
\caption{Left: Illustration of the forward-reverse path simulation in Phase II. The plot corresponds to a given interval for the wear data, presented in Section \ref{ex:wear}. The observed values are marked with a black circle (beginning and end of the interval). On the y-axis we plot the thickness process $X(t)$, derived from the wear process of the cylinder liner. Observe that every forward path that ends up at a certain value will be joined with every backward path that ends up at the same value when using the Kronecker kernel. For example, this happens at value 58, where several forward paths end and several backward paths start. Right: Zoom near value 58.}
\label{fig:frpaths}
\end{figure}
\subsubsection{Simulating Forward and Backward Paths}\label{sec:clouds}
This phase starts with the simulation of forward and backward paths at each time interval $I_k$, for $k{=}1,...,K$. More specifically, given an estimation of the true parameter $\theta$, say, $\hat{\theta} = (\hat{c}_1, \hat{c}_2,\ldots, \hat{c}_J)$, the first step is to simulate $M_k$ forward paths with reaction channels $\seqof{\nu_j,\hat{c}_j g_j(x)}{j=1}{J}$ in $[s_k,t_k^*]$, all of them starting at $s_k$ from $x(s_k)$ (see Section \ref{sec:Mk} for details about the selection of $M_k$). Then, we simulate $M_k$ backward paths with reaction channels $\seqof{-\nu_j,\hat{c}_j g_j(x-\nu_j)}{j=1}{J}$ in $[t_k^*, t_k]$, all starting at $t_k$ from $x(t_k)$.
Let $\seqof{\tilde X^{(f)}(t_k^*,\tilde{\omega}_m)}{m=1}{M_k}$ and $\seqof{\tilde X^{(b)}(t_k^*,\tilde{\omega}_{m'})}{m'=1}{M_k}$ denote the values of the simulated forward and backward paths at the time $t_k^*$, respectively. If the intersection of these two sets of points is nonempty, then, there exists at least one $m$ and one $m'$ such that the forward and backward paths can be linked as one SRN path that connects $x(s_k)$ and $x(t_k)$ data values.
When the number of simulated paths $M_k$ is large enough, and an appropriate guess of the parameter ${\theta}$ is used to generate those paths, then, due to the discrete nature of our state space $\mbox{$\zset_+^d$}$, we expect to generate a sufficiently large number of \emph{exact SRN bridges} to perform statistical inference.
However, at early stages of the Monte Carlo EM algorithm, our approximations to the unknown parameter ${\theta}$ are not expected to provide a large number of exact SRN bridges.
In such a case, we can use kernels to relax the notion of an exact SRN bridge (see Section \ref{sec:SRNformula}).
Notice that in the case of exact SRN bridges, we are implicitly using a Kronecker kernel in the formula
\eqref{eq:themain},
that is, $\kappa$ takes the value $1$ when $\tilde X^{(f)}(t_k^*,\tilde{\omega}_m) = \tilde X^{(b)}(t_k^*,\tilde{\omega}_{m'})$ and $0$ otherwise.
We can relax this condition to obtain \emph{approximate SRN bridges}.
To make computationally efficient use of kernels, we sometimes transform the endpoints of the forward and backward paths generated in the interval $I_k$,
\begin{align}\label{eq:cloudX}
\mathcal{X}_k := (&
\tilde X^{(f)}(t_k^*,\tilde{\omega}_1),
\tilde X^{(f)}(t_k^*,\tilde{\omega}_2),\ldots,
\tilde X^{(f)}(t_k^*,\tilde{\omega}_{M_k}),\\ \nonumber
&
\tilde X^{(b)}(t_k^*,\tilde{\omega}_{M_k+1}),
\tilde X^{(b)}(t_k^*,\tilde{\omega}_{M_k+2}),\ldots,
\tilde X^{(b)}(t_k^*,\tilde{\omega}_{2M_k})
),
\end{align}
into
\begin{align}\label{eq:cloudY}
H(\mathcal{X}_k) := (&
\tilde Y^{(f)}(t_k^*,\tilde{\omega}_1),
\tilde Y^{(f)}(t_k^*,\tilde{\omega}_2),\ldots,
\tilde Y^{(f)}(t_k^*,\tilde{\omega}_{M_k}),\\ \nonumber
&
\tilde Y^{(b)}(t_k^*,\tilde{\omega}_{M_k+1}),
\tilde Y^{(b)}(t_k^*,\tilde{\omega}_{M_k+2}),\ldots,
\tilde Y^{(b)}(t_k^*,\tilde{\omega}_{2M_k})
),
\end{align}
by a linear transformation $H$ with the aim of eliminating possibly high correlations in the components of $\mathcal{X}_k$.
The original cloud of points $\mathcal{X}_k$ formed by extremes of the forward and backward paths is then transformed into
$H(\mathcal{X}_k)$, which hopefully has a covariance matrix close to a multiple of the $d$-dimensional identity matrix $\alpha I_d$. Ideally, the coefficient $\alpha$ should be chosen in such way that each $d$-dimensional unitary cube centered at $\tilde Y^{(f)}(t_k^*,\tilde{\omega}_m)$ contains on average one element of $\cup_{m'}\{\tilde Y^{(b)}(t_k^*,\tilde{\omega}_{m'})\}$.
Note that this transformation changes (generally slightly) the variances of our estimators (see Section \ref{sec:trasf} for details about the selection of $\alpha$ and $H$).
In our numerical examples, we use the Epanechnikov kernel
\begin{equation}\label{eq:epa}
\kappa(\eta) := \left( \frac{3}{4} \right)^d \,\prod_{i=1}^{d} (1-\eta_i^2) \indicator{\abs{\eta_i}\leq1},
\end{equation}
where $\eta$ is defined as
\begin{align}\label{eq:eta}
\eta \equiv \eta_k(m,m') := \tilde Y^{(f)}(t_k^*,\tilde{\omega}_m)-\tilde Y^{(b)}(t_k^*,\tilde{\omega}_{m'}).
\end{align}
{This choice is motivated by the way in which we compute $\eta_k(m,m')$, avoiding, whenever possible, $M_k^2$ calculations. The support of $\kappa$ is perfectly adapted to our strategy of dividing $\mathbb{R}^d$ into unitary cubes with vertices in $\mathbb{Z}^d$.}
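A direct Python implementation of \eqref{eq:epa} is immediate; on the transformed cloud it is evaluated on the endpoint differences \eqref{eq:eta}:
\begin{verbatim}
import numpy as np

def epanechnikov(eta):
    # kappa(eta) = (3/4)^d prod_i (1 - eta_i^2) 1{|eta_i| <= 1}
    eta = np.atleast_1d(eta)
    if np.any(np.abs(eta) > 1.0):
        return 0.0
    return (0.75 ** eta.size) * np.prod(1.0 - eta ** 2)
\end{verbatim}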
\subsubsection{Kernel-weighted Averages for the Monte Carlo EM}
As we previously mentioned, the only available data in the interval $I_k$ correspond to the observed values of the process, $X$, at its extremes.
Therefore, the expected values $\mathrm{E}_{\theta^{(p)}}\left[{ R_{j,I_k}\, \big| \, \mathcal{D}}\right]$ and $\mathrm{E}_{\theta^{(p)}}\left[{ F_{j,I_k}\, \big| \, \mathcal{D}}\right]$ in the formula \eqref{eq:EMiteration1} must be approximated by SRN-bridge simulation.
{To this end, we generate a set of $M_k$ forward paths in the interval $I_k$ using $\hat{\theta}_{I\!I}^{(p)}$ as the current guess for the unknown parameter $\theta^{(p)}$. Having generated those paths, we record $R^{(f)}_{j,I_k}(\tilde{\omega}_m)$ and $F^{(f)}_{j,I_k}(\tilde{\omega}_m)$ for all $j=1,2,\ldots,J$ and $m=1,2,\ldots,M_k$
as defined in Section \ref{sec:contobservedpaths}.
Analogously, we record $R^{(b)}_{j,I_k}(\tilde{\omega}_{m'})$ and $F^{(b)}_{j,I_k}(\tilde{\omega}_{m' })$ for all $j=1,2,\ldots,J$ and $m'=1,2,\ldots,M_k$.}
Consider the following $\kappa$-weighted averages, where $\kappa=\kappa_{\epsilon}$ for an appropriate choice of the bandwidth $\epsilon$, which approximate
$\mathrm{E}_{\theta^{(p)}}\left[{ R_{j,I_k}\, \big| \, \mathcal{D}}\right]$ and
$\mathrm{E}_{\theta^{(p)}}\left[{ F_{j,I_k}\, \big| \, \mathcal{D}}\right]$, respectively:
\begin{align}\label{eq:averagesIk}
\avgsub{ R_{j,I_k}\, \big| \, \mathcal{D}}{\kappa}{\hat{\theta}_{I\!I}^{(p)}}&:=\frac{\sum_{m,m'} \left( R^{(f)}_{j,I_k}(\tilde{\omega}_m) + R^{(b)}_{j,I_k}(\tilde{\omega}_{m'})\right)\kappa(\eta_k(m,m'))\psi_k(m')}{\sum_{m,m'} \kappa(\eta_k(m,m'))\psi_k(m')} \text{ and}\\
\nonumber \avgsub{ F_{j,I_k}\, \big| \, \mathcal{D}}{\kappa}{\hat{\theta}_{I\!I}^{(p)}}&:=\frac{\sum_{m,m'} \left( F^{(f)}_{j,I_k}(\tilde{\omega}_m) + F^{(b)}_{j,I_k}(\tilde{\omega}_{m'})\right)\kappa(\eta_k(m,m'))\psi_k(m')}{\sum_{m,m'} \kappa(\eta_k(m,m'))\psi_k(m')},
\end{align}
where $\eta_{k}(m,m')$ has been defined in \eqref{eq:eta}, $m,m'=1,2,\ldots,M_k$, and
$\psi_k(m') := \exp\left( \int_{t^*_k}^{t_k} c_j(\tilde X^{(b)}(s,\tilde{\omega}_{m'}))ds \right)$ (according to Theorem \ref{thr:representation}).
Observe that we generate $M_k$ forward and reverse paths in the interval $I_k$ but we do not directly control the number of exact or approximate SRN bridges that are formed. The number $M_k$ is chosen using a coefficient of variation criterion, as explained in Section \ref{sec:Mk}.
In Section \ref{sec:complexity}, we indicate an algorithm to reduce the computational complexity of computing those $\kappa$-weighted averages from $O(M_k^2)$ to $O(M_k \log (M_k) )$.
Finally, the Monte Carlo EM algorithm for this particular problem generates a {stochastic} sequence
$\seqof{\hat{\theta}_{I\!I}^{(p)}}{p=1}{+\infty}$ starting from the initial guess
${\theta}^{(0)}_{I\!I}$ provided by phase I (see \eqref{eq:seedI}) and evolving by
\begin{equation}\label{eq:MCEMiteration}
\hat{c}^{(p+1)}_{j} = \frac{\sum_{k=1}^K \avgsub{ R_{j,I_k}\, \big| \, \mathcal{D}}{\kappa}{\hat{\theta}_{I\!I}^{(p)}}}{\sum_{k=1}^K \avgsub{ F_{j,I_k}\, \big| \, \mathcal{D}}{\kappa}{\hat{\theta}_{I\!I}^{(p)}}},
\end{equation}
where $\hat{\theta}_{I\!I}^{(p)} = \left( \hat{c}_{1}^{(p)},\ldots,\hat{c}_{J}^{(p)} \right)$. In Section \ref{sec:stopping}, a stopping criterion based on techniques widely used in Monte Carlo Markov chains is applied.
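The following Python sketch assembles the $\kappa$-weighted averages \eqref{eq:averagesIk} for one interval $I_k$ with a plain $O(M_k^2)$ double loop (the $O(M_k\log M_k)$ variant is described in Section \ref{sec:complexity}); array names are illustrative.
\begin{verbatim}
import numpy as np

def weighted_averages(R_f, R_b, F_f, F_b, Yf, Yb, psi, kernel):
    # R_f, F_f: (M, J) statistics of the forward paths;
    # R_b, F_b: (M, J) statistics of the backward paths;
    # Yf, Yb:   (M, d) transformed endpoints at t_k^*;
    # psi:      (M,)   reverse-path weights psi_k(m').
    M, J = R_f.shape
    num_R, num_F, den = np.zeros(J), np.zeros(J), 0.0
    for m in range(M):
        for mp in range(M):
            w = kernel(Yf[m] - Yb[mp]) * psi[mp]
            if w == 0.0:
                continue
            num_R += (R_f[m] + R_b[mp]) * w
            num_F += (F_f[m] + F_b[mp]) * w
            den += w
    return num_R / den, num_F / den  # approximate E[R|D], E[F|D]
\end{verbatim}
Summing the two outputs over $k$ and taking their componentwise ratio yields the update \eqref{eq:MCEMiteration}.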
\section{Computational Details}
\label{sec:compdetails}
This section presents computational details omitted in Section \ref{sec:FREM}.
Here, we explain why and how we transform the clouds $\mathcal{X}_k$ consisting of endpoints of forward and reverse paths in the time interval $I_k$ at the time $t_k^*$, for $k{=}1,...,K$.
Then, we explain how to choose the number of simulated forward and backward paths, $M_k$, in the time interval $I_k$ to obtain accurate estimates of the expected values of $R_{j,I_k}$ and $F_{j,I_k}$ for $j=1,2,\ldots,J$.
Next, we show how to reduce the computational cost of computing approximate SRN bridges from $O(M_k^2)$ to $O(M_k \log (M_k))$ using a strategy introduced by Bayer and Schoenmakers \cite{BayerMC}.
Finally, we indicate how to choose the initial seeds for phase I and a stopping criterion for phase II.
\subsection{On the Selection of the Number of Simulated Forward-Backward Paths}
\label{sec:Mk}
The selection strategy of the number of sampled forward-backward paths, $M_k$, for interval $I_k$, is determined by the following sampling scheme:
\begin{enumerate}
\item First sample $M$ forward-reverse paths (in the numerical examples we use $M{=}100$).
\item If the number of joined forward-reverse paths using the Kronecker kernel is less than a certain threshold, $\gamma$, we transform the data as described in Section \ref{sec:trasf}. This data transformation allows us to use the Epanechnikov kernel \eqref{eq:epa}; in this way, we are likely to obtain a larger number of joined paths.
\item We then compute, for $j{=}1,...,J$, the coefficient of variation of the sample mean of the pair sums $R^{(f)}_{j,I_k} {+} R^{(b)}_{j,I_k}$ and $F^{(f)}_{j,I_k} {+} F^{(b)}_{j,I_k}$, where $R_{j,I_k}$ counts the firings of reaction $j$ in the interval $I_k$, $F^{(f)}_{j,I_k}= \int_{I_k} g_j(X^{(f)}(s))\,ds$ and $F^{(b)}_{j,I_k}= \int_{I_k} g_j(X^{(b)}(s))\,ds$. Further details can be found in Section \ref{sec:contobservedpaths}.
The coefficient of variation ($cv$) of a random variable is defined as the ratio of its standard deviation $\sigma$ over its mean $\mu$,
$cv := \frac{\sigma}{\abs{\mu}}$.
In this case, for the reaction channel $j$ in the interval $I_k$, we have:
$$cv_{\bar R}(I_k,j) = L_k^{-1/2}\, \frac{\sdev{R^{(f)}_{j,I_k}(\tilde{\omega}_m) {+} R^{(b)}_{j,I_k}(\tilde{\omega}_{m})}{L_k}}{{\avg{R^{(f)}_{j,I_k}(\tilde{\omega}_m) {+} R^{(b)}_{j,I_k}(\tilde{\omega}_{m})}{L_k}}}$$
and
$$cv_{\bar F}(I_k,j) = L_k^{-1/2}\, \frac{\sdev{F^{(f)}_{j,I_k}(\tilde{\omega}_m) {+} F^{(b)}_{j,I_k}(\tilde{\omega}_{m})}{L_k}}{{\avg{F^{(f)}_{j,I_k}(\tilde{\omega}_m) {+} F^{(b)}_{j,I_k}(\tilde{\omega}_{m})}{L_k}}} ,$$
where $\sdev{Y}{L}{:=} \left(\avg{Y^2}{L}-\avg{Y}{L}^2\right)^{1/2}$ is the sample standard deviation of the random variable $Y$ over an ensemble of size $L$ and $\avg{Y}{L}{:=}\frac{1}{L}\sum_{m=1}^L Y(\omega_m)$ is its sample average. Here, $L_k$ denotes the number of joined paths in the interval $I_k$, which is bounded by $M_k^2$.
In the case that $L_k$ is small, we compute a bootstrapped coefficient of variation.
The idea is that, by controlling both coefficients of variation, we can control the variability of the $p$-th iteration estimate, $\hat \theta_{I\!I}^{(p)}$. Our numerical experiments confirm this.
\item If each coefficient of variation is less than a certain threshold, then the sampling for interval $I_k$ finishes with $M_k$ being the total number of sampled paths, and we accept the quantities computed in step 3 as well as the quantities $\kappa(\eta_k(m,m'))\psi_k(m')$, $m,m'=1,...,L$, as defined in Section \ref{sec:contobservedpaths}. Otherwise, we sample additional forward-reverse paths (increasing the number of sampled paths by $M$ at each iteration) and go to step 2.
\end{enumerate}
This selection procedure is implemented in Algorithm \ref{alg:fr_path}.
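A compact version of the acceptance test in step 4 can be sketched as follows, assuming the pair sums over the $L_k$ joined paths are stored as arrays; variable names are illustrative.
\begin{verbatim}
import numpy as np

def cv_of_mean(samples):
    # coefficient of variation of the sample mean:
    # L^{-1/2} * sdev / |mean|
    L = len(samples)
    return np.std(samples, ddof=1) / (abs(np.mean(samples)) * np.sqrt(L))

def enough_samples(R_pairs, F_pairs, tol=0.1):
    # R_pairs, F_pairs: (L_k, J) arrays of the pair sums
    # R^(f) + R^(b) and F^(f) + F^(b) over the joined paths.
    cvs = [cv_of_mean(R_pairs[:, j]) for j in range(R_pairs.shape[1])]
    cvs += [cv_of_mean(F_pairs[:, j]) for j in range(F_pairs.shape[1])]
    return max(cvs) < tol
\end{verbatim}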
\subsection{On the Complexity of the Path Joining Algorithm}
\label{sec:complexity}
In this section, we describe the computational complexity of Algorithm \ref{alg:fr_join} for joining paths in phase II, and show that this complexity is $O(M \log (M))$ on average.
Let us describe the idea.
First, fix a time interval $I_k$ and a reaction channel $j$. We use
the following double sum as an example
\begin{equation*}
\sum_{m=1}^{M} \sum_{m'=1}^{M} \left( R^{(f)}_{j,I_k}(\tilde{\omega}_m) + R^{(b)}_{j,I_k}(\tilde{\omega}_{m'})\right)\kappa_{m,m'}.
\end{equation*}
A double sum like this one appears in the numerator of \eqref{eq:averagesIk}.
Instead of running a double loop, which always takes $O(M^2)$ steps (many of which contribute $0$ to the sum), we take the following alternative approach:
let
$\times_{i=1}^d [A_i,B_i]$ be the smallest hyperrectangle of sides $[A_i,B_i]$, $i=1,...,d$, that contains the cloud
$H(\mathcal{X}_k)$, defined in \eqref{eq:cloudY}. Let us also assume that
$A_i,B_i$, $i=1,...,d$, are integers.
The length $B_i-A_i$ depends on how sparse the cloud is in its $i$-th dimension. Given the cloud, it is easy to check that the values $A_i,B_i$, $i=1,...,d$, can be computed in $O(M)$ operations. Now, we subdivide the hyperrectangle into sub-boxes of side-length 1, with sides parallel to the coordinate axes.
Since we have a finite number of those sub-boxes, we can associate an index with each one in such a way that it is possible to directly retrieve each one using a suitable data structure (for example, an efficient sparse matrix or a hash table). The average access cost of such a structure is constant with respect to $M$. For each sub-box, we store the list of \emph{forward} points that ended up in that sub-box. It is also direct to see that the construction of such a structure takes $O(M)$ steps on average.
Then, instead of evaluating the double sum, which has $O(M^2)$ terms, we evaluate only the nonzero ones. This is possible because, for the kernel $\kappa$ used here, $\kappa(x,y) \neq 0$ only if $x$ and $y$ lie in neighboring sub-boxes.
That is,
\begin{align*}
\sum_{m=1}^{M} \sum_{m'=1}^{M}& \left( R^{(f)}_{j,I_k}(\tilde{\omega}_m) + R^{(b)}_{j,I_k}(\tilde{\omega}_{m'})\right) \kappa_{m,m'} \\
&=\sum_{m'=1}^{M} \sum_{i=1}^{3^d} \sum_{l=1}^{n(b_i)} \left( R^{(f)}_{j,I_k}(\tilde{\omega}_{\ell(l)}) + R^{(b)}_{j,I_k}(\tilde{\omega}_{m'})\right)\kappa_{\ell(l),m'} ,
\end{align*}
where $n(b_i)$ is the total quantity of \emph{reverse} end points associated with the $i$-th neighbor of the sub-box to which the \emph{forward} end-point, $\tilde Y^{(f)}(t_k^*,\tilde{\omega}_{m})$, belongs, whereas $\ell(l)$ indexes one of those reverse end points.
Note that the constant of this complexity depends exponentially on the dimension ($3^d$).
The cost that dominates the triple sum on the right-hand side is the expected maximum number of reverse points that can be found in a sub-box. This size can be proved to be $O(\log (M))$, which makes the whole joining algorithm of order $O(M \log (M))$. For additional details we refer to \cite{Bayer}.
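A Python sketch of this box-hashing idea, assuming the endpoints have already been transformed so that unit cubes are the natural neighborhoods, is as follows (names illustrative):
\begin{verbatim}
import numpy as np
from collections import defaultdict

def join_paths(Yf, Yb, kernel):
    # Hash each forward endpoint to its integer sub-box; for each
    # backward endpoint, scan only the 3^d neighboring sub-boxes.
    d = Yf.shape[1]
    boxes = defaultdict(list)
    for m, y in enumerate(Yf):
        boxes[tuple(np.floor(y).astype(int))].append(m)
    offsets = np.array(np.meshgrid(*([[-1, 0, 1]] * d))).reshape(d, -1).T
    pairs = []
    for mp, y in enumerate(Yb):
        base = np.floor(y).astype(int)
        for off in offsets:
            for m in boxes.get(tuple(base + off), ()):
                w = kernel(Yf[m] - y)
                if w > 0.0:
                    pairs.append((m, mp, w))
    return pairs  # nonzero (forward, backward, weight) triples
\end{verbatim}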
\subsection{A Linear Transformation for the Epanechnikov Kernel}
\label{sec:trasf}
Our numerical experiments show that clouds formed by the endpoints of simulated paths, $\mathcal{X}$, usually have a shape similar to the \emph{cloud $\mathcal{Z}$} shown in the left panel of Figure \ref{fig:CloudZ}.
It turns out that partitioning the space into $d$-dimensional cubes with sides parallel to the coordinate axes is not ideal for selecting kernel domains and, consequently, for finding SRN bridges.
It is more natural to divide the space into a system of parallelepipeds whose sides are parallel to the principal directions of the cloud $\mathcal{Z}$ and proportional to the lengths of its corresponding semi-axes, and to use these as supports for our kernels.
Another way of proceeding (related but not totally equivalent) is to transform the original cloud $\mathcal{Z}$ to obtain another cloud $T(\mathcal{Z})$ with a near-spherical shape, and then scale it to have on average one point of the cloud in each $d$-dimensional cube (with sides parallel to the coordinate axes). In this new cloud, $H(\mathcal{Z})$, we can naturally find neighbors using the algorithm described in Section \ref{sec:complexity}, and we can use the Epanechnikov kernel to assign weights.
This is why in Section \ref{sec:FREM} we wanted to transform the data $\mathcal{X}_k$ into an isotropic cloud, such that every unitary cube centered at
$\tilde Y^{(f)}(t_k^*,\tilde{\omega}_m)$
contains, on average, one point of the cloud
$\cup_{m'} \{\tilde Y^{(b)}(t_k^*,\tilde{\omega}_{m'})\}$.
We will now describe the details of the aforementioned transformations.
First, we show a customary procedure in statistics to motivate the transformation.
Let $\Sigma := \text{cov}(\mathcal{Z})$ be the sample covariance matrix computed from a cloud of points $\mathcal{Z}$.
To obtain a decorrelated version of $\mathcal{Z}$, the linear transformation $T(z) = \Sigma ^{-1/2} \, z$ is widely used in statistics.
For example, consider a cloud $\mathcal{Z}$ of points obtained by sampling $10^3$ independent copies of a highly correlated bivariate Gaussian random variable. The corresponding cloud $T(\mathcal{Z})$, depicted in the right panel of Figure \ref{fig:CloudZ}, has the aspect of a sphere of radius $3$ units.
\begin{figure}[h!]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{CloudZ1}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{CloudZ2}
\end{minipage}
\caption{Left: A bivariate Gaussian cloud, $\mathcal{Z}$. Right: Its corresponding decorrelated and scaled version $T(\mathcal{Z})$.}
\label{fig:CloudZ}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth]{CloudZ3}
\caption{Cloud $H(\mathcal{Z})$.}
\label{fig:CloudZ3}
\end{figure}
The next step is to obtain a radius $\alpha$ such that the volume of a $d$-dimensional sphere of radius $3\alpha$ equals the volume of $M$ unitary $d$-dimensional cubes.
From the equation $M = (3\alpha)^d\, V_d$,
we obtain $\alpha = \frac{1}{3} \,(\frac{M}{V_d})^{1/d}$, where $V_d = \frac{\pi^{d/2}}{\Gamma(d/2+1)}$ is the volume of the unitary sphere in $\mathbb{R}^d$. Therefore, the linear transformation $H$ is defined by $H(x):= \alpha T(x)$.
The result of this transformation is depicted in Figure \ref{fig:CloudZ3} in our Gaussian example.
In general, we do not expect to have a Gaussian-like distribution for $\mathcal{X}_k$; however, it seems to be a good approximation in our numerical examples. At this point, it is worth mentioning that in examples with several dimensions (species), the number of approximate SRN bridges we obtain by using the transformation may be of the order of $M^2$. This indicates that the bandwidth is too large and, consequently, the bias introduced in the estimation may be large. In these cases, we expand $\alpha$ by a factor of $1.5$, for example, until $O(M)$ approximate bridges are formed. Generally, one or two expansions are enough.
A motivation for the Gaussian approximation is that, for short time intervals and in certain activity regimes of the system, especially when the total propensity, $a_0$, is high enough, a Langevin approximation of our SRN yields an Ornstein-Uhlenbeck process, which can potentially be close in distribution to our SRN (see \cite{ourInf}).
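A minimal Python sketch of the transformation $H$, assuming the cloud is stored as a $(2M)\times d$ array, reads:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def isotropic_transform(cloud, M):
    # H(x) = alpha * Sigma^{-1/2} x, with alpha solving
    # M = (3 alpha)^d V_d,  V_d = pi^(d/2) / Gamma(d/2 + 1).
    Z = np.asarray(cloud)                    # (2M, d) endpoints
    d = Z.shape[1]
    Sigma = np.cov(Z, rowvar=False).reshape(d, d)
    w, U = np.linalg.eigh(Sigma)             # Sigma is SPD
    T = U @ np.diag(w ** -0.5) @ U.T         # Sigma^{-1/2}
    V_d = np.pi ** (d / 2) / gamma(d / 2 + 1)
    alpha = (M / V_d) ** (1.0 / d) / 3.0
    return Z @ (alpha * T).T, alpha * T      # transformed cloud, H
\end{verbatim}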
\begin{rem}
According to the transformation $H$, the kernel used in our case is approximately equal to
\begin{equation*}
\kappa_H(z) := \frac{1}{\det(H)}\,\kappa\left( H^{-1}(z)\right),
\end{equation*}
where $\kappa$ is the Epanechnikov kernel defined in \eqref{eq:epa}; the equality is only approximate since this formula corresponds to the continuous case and not to the lattice case.
\end{rem}
\begin{rem}
We can even consider a perturbed version of $T$, say $T_c$, obtained by adding a multiple of the diagonal matrix formed by the diagonal elements of $\Sigma$, \emph{i.e.}, $T_c = (\Sigma + c\,\text{diag}(\Sigma))^{-1/2}$, where $c$ is a positive constant of order $O(1)$. The linear transformation $T_c$ can be considered a regularization of $T$ that does not change the scale of the transformation $T$.
\end{rem}
\subsection{On the Stopping Criterion}
\label{sec:stopping}
A well-known fact about the EM algorithm is that, given a starting point, it converges to a saddle point or a local maximum of the likelihood function. Unless we know beforehand that the likelihood function has a unique global maximum, we cannot be sure that the output of the EM Algorithm is the MLE we are looking for.
The same phenomenon occurs in the case of the Monte Carlo EM algorithm, and for that reason Casella and Robert \cite{Casella} recommend generating a set of $N$ (usually around five) parallel independent Monte Carlo EM sequences starting from a set of overdispersed initial guesses. Usually, we do not even know the scale of the coordinates of our unknown parameter $\theta = (c_1,c_2,\ldots,c_d)$. For that reason, we recommend running only phase I of our algorithm over a set of uniformly distributed random samples drawn from a $d$-dimensional hyperrectangle $\prod_{i=1}^d (0,C_i]$, where $C_i$ is a reasonable, case-dependent, upper bound for each reaction rate parameter $c_i$.
We observed in our numerical experiments that the result of this procedure is a number of points lying on a low-dimensional manifold.
Once this manifold is identified, $N$ different initial guesses are taken as overdispersed seeds for phase II.
Note that the stochastic iterative scheme given by formula \eqref{eq:MCEMiteration} may be easily adapted to produce $N$ parallel stochastic sequences, $\seqof{\hat{\theta}_{I\!I,i}^{(p)}}{p=1}{+\infty}$, where, for each $i=1,2,\ldots,N$, the distribution of the random variable $\hat{\theta}_{I\!I,i}^{(p+1)}$ depends on its history of realizations, $\seqof{\hat{\theta}_{I\!I,i}^{(k)}}{k=1}{p}$, only through its previous value, $\hat{\theta}_{I\!I,i}^{(p)}$. In this sense, the $N$ sequences, $\seqof{\hat{\theta}_{I\!I,i}^{(p)}}{p=1}{+\infty}$, are MCMC sequences \cite{Norris, Casella}.
There is a number of convergence assessment techniques or convergence diagnostic tools in the MCMC literature; in this article, we adopt the $\hat R$ criterion by Gelman and Rubin \cite{Rhat, BayesianData}, which monitors the convergence of $N$ parallel random sequences $\seqof{\psi_i^{(p)}}{p=1}{+\infty}$, where $i=1,2,\ldots,N$.
Compute:
\begin{align*}
B_p &:= \frac{1}{N-1} \sum_{i=1}^N \left( \bar{\psi}_{p, i} - \dbar{\psi}_p\right)^2, \text{ where } \bar{\psi}_{p, i} := \frac 1 p \sum_{k=1}^p \psi_i^{(k)} \text{ and } \dbar{\psi}_p := \frac 1 N \sum_{i=1}^N \bar{\psi}_{p, i}, \text{ and}\\
W_p &:= \frac{1}{N} \sum_{i=1}^N s^2_{p,i}, \text{ where } s^2_{p,i} := \frac{1}{p-1} \sum_{k=1}^p \left( \psi_i^{(k)} - \bar{\psi}_{p, i}\right)^2.
\end{align*}
Then define
\begin{align}
V_p &:= \frac{p-1}{p} W_p + B_p \text{ and } \hat{R}_p := \sqrt{\frac{V_p}{W_p}} .
\end{align}
$B$ and $W$ are known as the between-chain and within-chain variances, respectively.
It is expected that $\hat R$ (potential scale reduction) declines to $1$ as $p\to+\infty$. In our numerical experiments we use $1.4$ as a threshold.
Observe that if, for all $p$, the values $\bar{\psi}_{p, i}$ are grouped in a very small cluster, \emph{i.e.}, $\bar{\psi}_{p, i} \approx \dbar{\psi}_p$, so that we essentially have only one Markov chain, then $B_p$ is close to zero and $\hat{R}_p \approx \sqrt{\frac{p-1}{p}} \to 1$ as $p \to +\infty$, independently of the behavior of the chain. To avoid this undesirable situation, we propose to also monitor the behavior of the moving averages of order $L$, that is,
\begin{align}\label{eq:philags}
\tilde{\psi}_{p}:= \frac{1}{N} \sum_{i=1}^N \left( \tilde{\psi}_{p,i}-\tilde{\psi}_{p-1,i}\right)^2 \text{ where } \tilde{\psi}_{p,i} := \frac{1}{L}\sum_{\ell=0}^{L-1} {\psi}^{(p-\ell)}_{i} .
\end{align}
We stop when $\tilde{\psi}_{p}$ is sufficiently small.
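For a single scalar component of $\theta$, the quantities above translate directly into Python; here \texttt{chains} is an $N\times p$ array holding the values $\psi_i^{(k)}$ (a sketch):
\begin{verbatim}
import numpy as np

def gelman_rubin(chains):
    # chains: (N, p) array; returns R_hat = sqrt(V_p / W_p) with
    # V_p = (p-1)/p * W_p + B_p, as defined above.
    N, p = chains.shape
    B = chains.mean(axis=1).var(ddof=1)    # between-chain variance B_p
    W = chains.var(axis=1, ddof=1).mean()  # within-chain variance W_p
    V = (p - 1) / p * W + B
    return np.sqrt(V / W)
\end{verbatim}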
Once we stop iterating after $p^*$ iterations, the individual outputs \begin{equation*}
\hat{\theta}_{I\!I,1}^{(p^*)}, \hat{\theta}_{I\!I,2}^{(p^*)},\ldots, \hat{\theta}_{I\!I,N}^{(p^*)}
\end{equation*}
form a small cluster.
Although we cannot be certain that this cluster is near the MLE, we do have at least some confidence. Therefore, we can use the mean of this small cluster as an MLE estimate of our unknown parameter, $\theta$.
Otherwise, if we have two or more clusters or overdispersed results, we should perform a more careful analysis.
\begin{rem}
The $\hat R$ stopping criterion only works if the overdispersed seeds obtained in phase I lie in the basin of attraction of one local maximum of the likelihood function. Otherwise, $\hat R$ may not decrease to 1 or, even worse, it may diverge to $+\infty$. For that reason, it is advisable to monitor the evolution of $\hat R$. In our numerical examples, $\hat R$ is decreasing, and we stop the algorithm using $\hat R_0=1.4$ as a threshold.
\end{rem}
\section{Numerical Examples}
\label{sec:numex}
In this section, we present numerical results that show the performance of our FREM algorithm.
In phase I, we use the alternative definition of ${\theta}_{I\!I,i}^{(0)}$ described in Remark \ref{rem:alternative}. For phase II, we run $N=4$ parallel sequences using $1.4$ as a threshold for $\hat R$ (described in Section \ref{sec:stopping}).
The moving-average order used in all numerical examples is $L=3$ (see formula \eqref{eq:philags}), and the associated tolerance is $0.05$.
As a point estimator of $\theta$, we provide the cluster average of the sequence
$\hat{\theta}_{I\!I,1}^{(p^*)}, \hat{\theta}_{I\!I,2}^{(p^*)},\ldots, \hat{\theta}_{I\!I,N}^{(p^*)}$.
For each example, we report i) the number of iterations of phase II, $p^*$; ii) a table containing a) the initial points, ${\theta}_{I,i}^{(0)}$, b) the outputs of phase I, ${\theta}_{I\!I,i}^{(0)}$, and c) the outputs of phase II, $\hat{\theta}_{I\!I,i}^{(p^*)}$; and iii) a figure with all those values.
For the examples where we generate synthetic data, we provide the seed parameter $\theta_G$ used to generate the observations. It is important to stress that the distance from our point estimator to $\theta_G$ depends on the number of generated observations.
\newcommand\rsp{\rule[10pt]{0pt}{0pt}}
\subsection{The Decay Process}
We start with a simple decay model with only one species and two reaction channels.
Its stoichiometric matrix and propensity function are:
\begin{align*}
\nu^T = \left(
\begin{array}{r} -1 \\
-4
\end{array}
\right) \mbox{ and } a(X) = \left( \begin{array}{l} c_1 X \\ c_2 X \cdot \indicator{X \geq 4} \end{array} \right), \mbox{ respectively} .
\end{align*}
We set $X_0{=}100$, $T{=}1$ and consider synthetic data observed in uniform time intervals of size $\Delta t {=}\frac{1}{16}$. This determines a set of $17$ observations generated from a single path using the parameter $\theta_G {=}(3.78,7.20)$. The data trajectory is shown in Figure \ref{fig:dataDec}.
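For reference, a minimal Gillespie-SSA sketch in Python that generates such a path for this decay model could read as follows (names illustrative):
\begin{verbatim}
import numpy as np

def ssa_decay(c1, c2, x0=100, T=1.0, seed=0):
    # Jumps -1 and -4 with propensities c1*x and c2*x*1{x >= 4}.
    rng = np.random.default_rng(seed)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        a = np.array([c1 * x, c2 * x if x >= 4 else 0.0])
        a0 = a.sum()
        if a0 == 0.0:
            break                       # absorbed: no more reactions
        t += rng.exponential(1.0 / a0)
        if t > T:
            break
        x += (-1, -4)[rng.choice(2, p=a / a0)]
        path.append((t, x))
    return path
\end{verbatim}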
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{Decay_two_reactions_T1_data_path.pdf}
\caption{Data trajectory for the decay example. This is obtained by observing the values of an SSA path at uniform time intervals of size
$\Delta t {=}1/16$.}
\label{fig:dataDec}
\end{figure}
For this example, we use $N{=}4$ FREM sequences starting at $\theta_{I,1}^{(0)}{=}(1, 5)$, $\theta_{I,2}^{(0)}{=}(6, 5)$, $\theta_{I,3}^{(0)}{=}(1, 9)$, and $\theta_{I,4}^{(0)}{=}(6, 9)$. In this and the following examples, for each interval we run a minimum of $M=100$ forward-reverse sample paths and we set a coefficient of variation threshold of $0.1$ (see Section \ref{sec:Mk}).
We illustrate one run of the FREM algorithm in the left panel of Figure \ref{fig:dec2} and in Table \ref{tab:dec}. For that run, the cluster average is
$\hat{\theta} {=} (3.68, 7.50)$, and it took $p^*{=}3$ iterations to converge for a $\hat{R}$ threshold equal to 1.4.
We take $\hat{\theta}$ as an MLE point estimate of the unknown parameters.
\begin{figure}[h!]
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{Decay_two_reactions_T1_phase2.pdf}
\end{minipage}
\begin{minipage}{0.51\textwidth}
\includegraphics[width=\textwidth]{Decay_two_reactions_T1_phase2_ens.pdf}
\end{minipage}
\caption{Left: One FREM estimation (phase I and phase II) for the decay example. The $N$ final values of this particular run are shown as circles. Right: We show 30 independent runs of the FREM algorithm.}
\label{fig:dec2}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{cccc}
$i$ & $\square {=} {\theta}_{I,i}^{(0)}$ & $\Diamond {=} {\theta}_{I\!I,i}^{(0)}$ & $\bigcirc {=} \hat{\theta}_{I\!I,i}^{(p^*)}$ \\
\hline \rsp
1 &(1, 5) & (1.35, 10.67) & (3.65, 7.52) \\
2 &(6, 5) & (7.85, 9.11) & (3.80, 7.46) \\
3 & (1, 9) & (1.20, 10.71) & (3.63, 7.50) \\
4 &(6, 9) & (7.06, 9.30) & (3.65, 7.50) \\
\end{tabular}
\caption{Values computed by one run of the FREM Algorithm for the decay example, corresponding to the left panel of Figure \ref{fig:dec2}.}
\label{tab:dec}
\end{table}
We computed an ensemble of 30 independent runs (and obtained 30 cluster averages). The result is shown in the right panel of Figure \ref{fig:dec2}. We observe that the variability of the cluster average is indeed very small, indicating the robustness of the method and that 1.4 is a reasonable choice as a threshold for $\hat R$. Details are shown in Table \ref{tab:dec_ens}.
\begin{table}[h!]
\centering
\begin{tabular}{c|cccc}
& Average & Average CI at $95\%$ & Min Value & Max Value \\
\hline \rsp
$\hat{c}_1$ & 3.69 &(3.681, 3.699) & 3.66 & 3.77 \\
$\hat{c}_2$ & 7.50 &(7.495, 7.505) & 7.48 & 7.51
\end{tabular}
\caption{Values computed for an ensemble of 30 independent runs of the FREM algorithm for the decay example.
In each run, we obtain a cluster average, $\hat{\theta}^{(i)}$, as an MLE point estimate. Define $\mathcal{C} {:=} \seqof{\hat{\theta}^{(i)}}{i=1}{30}$.
For each unknown coefficient $c_j$ in $\theta$, we show i) the average of $\mathcal{C}$, ii) a $95\%$ confidence interval for the mean of $\mathcal{C}$, and iii) the minimum and maximum values of $\mathcal{C}$. }
\label{tab:dec_ens}
\end{table}
\begin{rem}
Recall that the distance between the value $\theta_G$ used to generate synthetic data and the estimation $\hat{\theta}$ is meaningless for small data sets. The relevant distance in this estimation problem is the one between our FREM estimate, $\hat{\theta}$, and the estimate
$\hat{\theta}_{\text{MLE}}$ based on maximizing the true likelihood function; however, the latter is not available in most cases.
\end{rem}
\subsection{Wear in Cylinder Liners}
\label{ex:wear}
We now test our FREM algorithm by using real data.
The data set $\mathbf{w} = \{w_i\}_{i=1}^n$, taken from \cite{gio2011}, consists of wear levels observed on $n= 32$ cylinder liners of eight-cylinder SULZER engines as measured by a caliper with a precision of
$\Delta = 0.05$ mm. Data are presented in Figure \ref{fig:data}.
\begin{figure}[htp!]
\centering
\includegraphics[scale=0.5]{data}
\caption{Data set from {\rm \cite{gio2011}}. Data refer to cylinder liners used in ships of the Grimaldi Group.
}
\label{fig:data}
\end{figure}
The finite resolution of the caliper allows us to represent the set of possible measurements using a finite lattice.
Let $X(t)$ be the \textit{thickness process} derived from the wear of the cylinder liners up to time $t$, i.e., $X(t) = X_0 - W(t)$, where $W$ is the wear process and $X_0$ is the initial thickness. The final time of some observations is close to $T{=}60,000$ hours.
We model $X(t)$ as a decay process with two reaction channels and $\Delta = 0.05$, since a simple decay process is not enough to explain the data. The two considered intensity-jump pairs are $(a_1(x),\nu_1) = (c_1x, -\Delta)$ and $(a_2(x),\nu_2) = (c_2x, -4\Delta)$.
Here, $c_1$ and $c_2$ are coefficients with dimension $(\text{mm}\cdot \text{hour})^{-1}$.
The linear propensity functions, the value $X_0{=}5$ mm and the initial values for phase I: $\theta_{I,1}^{(0)}{=}(1,1)$, $\theta_{I,2}^{(0)}{=}(10,1)$, $\theta_{I,3}^{(0)}{=}(1,10)$ and $\theta_{I,4}^{(0)}{=}(10,10)$, are motivated by previous studies of the same data set (see \cite{ourInf} for details).
In our computations, we rescaled the original problem by setting
$\Delta{=}1$ and $T{=}1$.
We illustrate one run of our FREM algorithm in the left panel of Figure \ref{fig:dec2b} and in Table \ref{tab:dec2b}. For that run, we obtained a cluster average of
$\hat{\theta} {=} (8.91 , 5.74)$, which corresponds to $\hat{{\theta}}_o {=} (1.5 \cdot 10^{-4} , 0.97 \cdot 10^{-4})$ in the unscaled model.
The algorithm converged after $p^*{=}93$ iterations using 1.4 as a threshold for $\hat{R}$.
We take that cluster average as an MLE point estimation of the unknown parameters.
\begin{figure}[h!]
\centering
\begin{minipage}{0.48\textwidth}
\includegraphics[width=\textwidth]{Wear_Cilindri_T1_phase2.pdf}
\end{minipage}
\begin{minipage}{0.51\textwidth}
\includegraphics[width=\textwidth]{Wear_Cilindri_T1_phase2_ens.pdf}
\end{minipage}
\caption{Left: FREM estimation (phase I and phase II) for the wear example. The $N$ final values of this particular run are shown as circles. Right: We show 30 independent runs of the FREM algorithm.}
\label{fig:dec2b}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{cccc}
$i$ & $\square {=} {\theta}_{I,i}^{(0)}$ & $\Diamond {=} {\theta}_{I\!I,i}^{(0)}$ & $\bigcirc {=} \hat{\theta}_{I\!I,i}^{(p^*)}$ \\
\hline
1 &(1, 1) & (2.81, 9.90) & (8.56, 5.83) \\
2 &(10, 1) & (36.88, 1.58) & (9.07, 5.71) \\
3 & (1, 10) & (1.13, 10.31) & (8.68, 5.80) \\
4 &(10, 10) & (11.44, 7.79) & (9.34, 5.62) \\
\end{tabular}
\caption{Values computed by one run of the FREM algorithm for the wear example corresponding to the left panel of Figure \ref{fig:dec2b}.}
\label{tab:dec2b}
\end{table}
We computed an ensemble of 30 independent runs (and obtained 30 cluster averages). The result is shown in the right panel of Figure \ref{fig:dec2b}. We observe that there is a small variability in the estimates indicating the robustness of the method. Details are shown in Table \ref{tab:dec2b_ens}.
\begin{table}[h!]
\centering
\begin{tabular}{c|cccc}
& Average & Average CI at $95\%$ & Min Value & Max Value \\
\hline \rsp
$\hat{c}_1$ & 8.94 &(8.90, 8.98) & 8.71 & 9.22 \\
$\hat{c}_2$ & 5.73 &(5.72, 5.74) & 5.66 & 5.79
\end{tabular}
\caption{Values computed for an ensemble of 30 independent runs of the FREM algorithm for the wear example.
In each run, we obtain a cluster average, $\hat{\theta}^{(i)}$, as an MLE point estimate. Define $\mathcal{C} {:=} \seqof{\hat{\theta}^{(i)}}{i=1}{30}$.
For each unknown coefficient $c_j$ in $\theta$, we show i) the average of $\mathcal{C}$, ii) a $95\%$ confidence interval for the mean of $\mathcal{C}$, and iii) the minimum and maximum values of $\mathcal{C}$. }
\label{tab:dec2b_ens}
\end{table}
\begin{figure}[h!]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{oldthetaWEAR.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{newthetaWEAR.pdf}
\end{minipage}
\caption{Left: confidence band with the parameter $\tilde{\theta}$ obtained in \cite{ourInf} for the wear example. Right: the confidence band obtained with the FREM algorithm.}
\label{fig:twoCI}
\end{figure}
\begin{rem}
In this particular example, the data set was obtained using a caliper with finite precision. Therefore, our likelihood should also incorporate the distribution of the measurement errors, which may be assumed Gaussian, independent, and identically distributed with mean zero and variance equal to the caliper's precision. We omitted this step in our analysis for the sake of simplicity and brevity.
\end{rem}
\begin{rem}
Comparing our FREM estimate, $\hat{{\theta}}_o {=} (1.5 \cdot 10^{-4} , 0.97 \cdot 10^{-4})$, with the value obtained in \cite{ourInf} for the same data set and the same model, ${\tilde{\theta}} {=} (0.63 \cdot 10^{-4} , 1.2 \cdot 10^{-4})$, we obtain the same scale in the coefficients and a quite similar confidence band; see Figure \ref{fig:twoCI}.
\end{rem}
\subsection{Birth-Death Process}\label{ex:bd}
This model has one species and two reaction channels:
\begin{align*}
\emptyset \xrightarrow{c_1} X,& \ \ X \xrightarrow{c_2} \emptyset
\end{align*}
described by the stoichiometric matrix and the propensity function
\begin{align*}
\nu^T = \left(
\begin{array}{r}
1 \\
-1
\end{array}
\right) \mbox{ and } a(X) = \left( \begin{array}{l} c_1 \\ c_2 \,X \end{array} \right), \text{ respectively}.
\end{align*}
Since we are not continuously observing the paths of $X$,
an increment of size $k$ in the number of particles in a time interval
$[t_1,t_2]$ may be the consequence of any combination of $n{+}k$ firings of channel 1 and $n$ firings of channel 2 in that interval.
This fact makes the estimation of $c_1$ and $c_2$ nontrivial.
We set $X_0{=}17$, $T{=}200$ and consider synthetic data observed in uniform time intervals of size $\Delta t {=}5$. This determines a set of $41$ observations generated from a single path using the parameter $\theta_G {=}(1, 0.06)$. The data trajectory is shown in Figure \ref{fig:dataBD}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{Birth-death_T200_data_path.pdf}
\caption{Data trajectory for the Birth-death example. This is obtained by observing the values of an SSA path at uniform time intervals of size
$\Delta t {=}5$.}
\label{fig:dataBD}
\end{figure}
For this example, we ran $N{=}4$ FREM sequences starting at $\theta_{I,1}^{(0)}{=}(0.5,0.04)$, $\theta_{I,2}^{(0)}{=}(0.5,0.08)$, $\theta_{I,3}^{(0)}{=}(1.5,0.04)$, and $\theta_{I,4}^{(0)}{=}(1.5,0.08)$. Those points were chosen after a previous exploration with phase I.
We illustrate one run of our FREM algorithm in the left panel of Figure \ref{fig:bd} and Table \ref{tab:bd}. For that run, we obtained a cluster average of $\hat{\theta} {=} (1.22,0.065)$.
The FREM algorithm took $p^*{=}95$ iterations to converge using a threshold of 1.4 for $\hat{R}$.
We take that cluster average as an MLE estimate of the unknown parameters.
\begin{figure}[h!]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{Birth-death_T200_phase2.pdf}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{Birth-death_T200_phase2_ens.pdf}
\end{minipage}
\caption{Left: FREM estimation (phase I and phase II) for the birth-death example. The $N$ final values of this particular run are shown as circles. Right: We show 30 independent runs of the FREM algorithm.}
\label{fig:bd}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{cccc}
$i$ & $\square = {\theta}_{I,i}^{(0)}$ & $\Diamond = {\theta}_{I\!I,i}^{(0)}$ & $\bigcirc = \hat{\theta}_{I\!I,i}^{(p^*)}$ \\
\hline
1 &(0.5, 0.04) & (6.24e-01, 3.29e-02) & (1.24e+00, 6.55e-02) \\
2 &(0.5, 0.08) & (7.68e-01, 4.07e-02) & (1.29e+00, 6.67e-02) \\
3 & (1.5, 0.04) & (1.01e+00, 5.25e-02) & (1.18e+00, 6.27e-02) \\
4 &(1.5, 0.08) & (1.53e+00, 7.97e-02) & (1.20e+00, 6.34e-02) \\
\end{tabular}
\caption{Values computed by one run of the FREM Algorithm for the birth-death example corresponding to the left panel of Figure \ref{fig:bd}.}
\label{tab:bd}
\end{table}
We computed an ensemble of 30 independent runs (and obtained 30 cluster averages); the result is shown in the right panel of Figure \ref{fig:bd}. We observe a moderate variability in the estimates. This may indicate that the $\hat{R}$ threshold needs to be decreased and, consequently, that more iterations of the algorithm may be needed. Details are shown in Table \ref{tab:bd_ens}.
\begin{table}[h!]
\centering
\begin{tabular}{c|cccc}
& Average & Average CI at $95\%$ & Min Value & Max Value \\
\hline \rsp
$\hat{c}_1$ & 1.243 &(1.237, 1.249) & 1.213 & 1.284 \\
$\hat{c}_2$ & 0.0659 &(0.0655, 0.0663) & 0.0643 & 0.0681
\end{tabular}
\caption{Values computed for an ensemble of 30 independent runs of the FREM algorithm for the birth-death example.
In each run, we obtain a cluster average, $\hat{\theta}^{(i)}$, as an MLE point estimate. Define $\mathcal{C} {:=} \seqof{\hat{\theta}^{(i)}}{i=1}{30}$.
For each unknown coefficient $c_j$ in $\theta$, we show i) the average of $\mathcal{C}$, ii) a $95\%$ confidence interval for the mean of $\mathcal{C}$, and iii) the minimum and maximum values of $\mathcal{C}$. }
\label{tab:bd_ens}
\end{table}
\subsection{SIR Epidemic Model}
In this section we consider the SIR epidemic model, where $X(t)=(S(t),I(t),R(t))$ (susceptible-infected-removed individuals) and the total population is constant, $S{+}I{+}R=N$ (see \cite{SIR}). The importance of this example lies in the fact that it has a nonlinear propensity function and a two-dimensional effective state space.
This model has two reaction channels
\begin{align*}
S {+} I \xrightarrow{\beta} 2I, \ \ I \xrightarrow{\gamma} R
\end{align*}
described by the stoichiometric matrix and the propensity function
\begin{align*}
\nu^T = \left( \begin{array}{rr} -1 & 0 \\ 1 & -1 \\ 0 & 1 \end{array} \right) \mbox{ and } a(X) = \left( \begin{array}{l} \beta \,S I \\ \gamma \,I \end{array} \right).
\end{align*}
We set $X_0{=}(300,5)$, $T{=}10$ and consider synthetic data generated using the parameters
$\theta_G {=} (1.66, 0.44)$ by observing $X$ at uniform time intervals of size $\Delta t {=}1$. The data trajectory is shown in Figure \ref{fig:dataSIR}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{SIR2D_T10_data_path.pdf}
\caption{Data trajectory for the SIR example. This is obtained by observing the values of an SSA path at uniform time intervals of size
$\Delta t {=}1$.
}
\label{fig:dataSIR}
\end{figure}
For this example, we ran $N{=}4$ FREM sequences starting at $\theta_{I,1}^{(0)}{=}(0.40, 0.05)$, $\theta_{I,2}^{(0)}{=}(0.40, 1.00)$, $\theta_{I,3}^{(0)}{=}(3.00, 0.05)$, and $\theta_{I,4}^{(0)}{=}(3.00, 1.00)$. Those points were chosen after some previous exploration with phase I.
We illustrate one run of the FREM algorithm in the left panel of Figure \ref{fig:sir}. Our MLE point estimation is obtained as the cluster average of the values shown in Table \ref{tab:sir}, that is
$\hat{\theta} {=} (1.65, 0.39)$. The FREM algorithm took $p^*{=}3$ iterations to converge, using 1.4 as a threshold for $\hat{R}$.
\begin{figure}[h!]
\centering
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{SIR2D_T10_phase2.pdf}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\includegraphics[width=\textwidth]{SIR2D_T10_phase2_ens.pdf}
\end{minipage}
\caption{Left: FREM estimation (phase I and phase II) for the SIR example. The $N$ final values of this particular run are shown as circles.
In this particular case, where the results of phase I collapse to a single point, $N=4$ FREM sequences seem to be unnecessary, but we note that the $\hat R$ criterion needs at least 2 sequences.
Right: We show 30 independent runs of the FREM algorithm.}
\label{fig:sir}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{cccc}
$i$ & $\square = {\theta}_{I,i}^{(0)}$ & $\Diamond = {\theta}_{I\!I,i}^{(0)}$ & $\bigcirc = \hat{\theta}_{I\!I,i}^{(p^*)}$ \\
\hline
1 &(0.40, 0.05) & (1.50, 0.38) & (1.65, 0.39) \\
2 &(0.40, 1.00) & (1.50, 0.38) & (1.65, 0.39) \\
3 & (3.00, 0.05) & (1.50, 0.38) & (1.66, 0.39) \\
4 &(3.00, 1.00) & (1.50, 0.38) & (1.66, 0.39) \\
\end{tabular}
\caption{Values computed by one run of the FREM Algorithm for the SIR example corresponding to the left panel of Figure \ref{fig:sir}.}
\label{tab:sir}
\end{table}
We computed an ensemble of 30 independent runs (and obtained 30 cluster averages); results are shown in the right panel of Figure \ref{fig:sir}. We observe a very small variability in our estimates; details are shown in Table \ref{tab:sir_ens}.
\begin{table}[h!]
\centering
\begin{tabular}{c|cccc}
& Average & Average CI at $95\%$ & Min Value & Max Value \\
\hline \rsp
$\hat{c}_1$ & 1.6784 &(1.6764, 1.6804) & 1.6648 & 1.6891 \\
$\hat{c}_2$ & 0.3942 &(0.3939, 0.3945) & 0.3920 & 0.3956
\end{tabular}
\caption{Values computed for an ensemble of 30 independent runs of the FREM algorithm for the SIR example.
In each run, we obtain a cluster average, $\hat{\theta}^{(i)}$, as an MLE point estimate. Define $\mathcal{C} {:=} \seqof{\hat{\theta}^{(i)}}{i=1}{30}$.
For each unknown coefficient $c_j$ in $\theta$, we show i) the average of $\mathcal{C}$, ii) a $95\%$ confidence interval for the mean of $\mathcal{C}$, and iii) the minimum and maximum values of $\mathcal{C}$. }
\label{tab:sir_ens}
\end{table}
\subsection{Auto-Regulatory Gene Network}
The following model, taken from \cite{daigle2012accelerated}, has eight reaction channels and five species,
\begin{align*}
DNA + P_2 &\xrightarrow{c_1} DNA{-}P_2, \ \ &
DNA{-}P_2 &\xrightarrow{c_2} DNA + P_2\\
DNA &\xrightarrow{c_3} DNA + mRNA, \ \ &
mRNA &\xrightarrow{c_4} \emptyset\\
P+P &\xrightarrow{c_5} P_2, \ \ &
P_2 &\xrightarrow{c_6} P+P\\
mRNA &\xrightarrow{c_7} mRNA +P, \ \ &
P&\xrightarrow{c_8}\emptyset
\end{align*}
is described respectively by the stoichiometric matrix and the propensity function
\begin{align*}
\nu^T = \left(
\begin{array}{rrrrr}
-1 & 1 & 0 & 0 & -1 \\
1 & -1 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0\\
0 & 0 & -1 & 0 & 0\\
0 & 0 & 0 & -2 & 1\\
0 & 0 & 0 & 2 & -1\\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 & 0
\end{array}
\right) \mbox{ and } a(X) = \left( \begin{array}{l} c_1\,DNA \cdot P_2 \\ c_2\,DNA{-}P_2 \\ c_3\,DNA\\ c_4\,mRNA \\
c_5 \,P(P{-}1)\\ c_6\,P_2\\c_7\,mRNA\\c_8\,P \end{array} \right),
\end{align*}
Quoting \cite{daigle2012accelerated}, ``$DNA$, $P$, $P_2$, and $mRNA$ represent $DNA$ promoters,
protein gene products, protein dimers, and messenger $RNA$ molecules, respectively.''
This model has been selected to test the robustness of our FREM algorithm to deal with several dimensions and several reactions.
Following the cited work, we also set the initial state of the system at $$X_0 = (DNA,DNA{-}P_2,mRNA, P, P_2) = (7, 3, 10, 10, 10),$$ and run the system to the final time $T = 50$. Synthetic data are gathered by observing a single trajectory generated using
$\theta_G = (0.1, 0.7, 0.35, 0.3, 0.1, 0.9,0.2, 0.1)$ at uniform time intervals of size $\Delta t {=}\frac{1}{2}$. The data trajectory is shown in Figure \ref{fig:dataARG}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{Auto-regulatory_gene_network_T50_data_path.pdf}
\caption{Data trajectory for the auto-regulatory gene network example obtained by observing the values of an SSA path at uniform time intervals of size
$\Delta t {=}\frac{1}{2}$.
}
\label{fig:dataARG}
\end{figure}
For this example we ran $N{=}2$ FREM sequences starting at
$\theta_{I,1}^{(0)}=0.1\,v$ and
$\theta_{I,2}^{(0)}=0.5\,v$, respectively, where $v$ is the vector of $\mathbb{R}^8$ with all its components equal to one.
The FREM algorithm took, on average, $p^*{=}169$ iterations to converge, requiring 2 days on our workstation: a 12-core Intel GLNXA64 architecture running MATLAB version R2014a.
We computed an ensemble of 10 independent runs and obtained 10 cluster averages. We observe very small variability. Details are shown in Table \ref{tab:arg_ens}.
\begin{table}[h!]
\centering
\begin{tabular}{c|cccc}
& Average & Average CI at $95\%$ & Min Value & Max Value \\
\hline \rsp
$\hat{c}_1$ & 0.1011 &(0.1001, 0.1021) &0.0984 & 0.1033 \\
$\hat{c}_2$ &0.6207 &(0.6135, 0.6279) & 0.6005 & 0.6328 \\
$\hat{c}_3$ & 0.3398 &(0.3380, 0.3416) & 0.3358 & 0.3441 \\
$\hat{c}_4$ & 0.3182 &(0.3166, 0.3198) & 0.3139 & 0.3213 \\
$\hat{c}_5$ & 0.0637 &(0.0622, 0.0652) & 0.0595 & 0.0687 \\
$\hat{c}_6$ & 0.5891 &(0.5742, 0.6040) & 0.5485 & 0.6357 \\
$\hat{c}_7$ & 0.1444 &(0.1426, 0.1462) & 0.1392 & 0.1483 \\
$\hat{c}_8$ & 0.0630 &(0.0623, 0.0637) & 0.0618 & 0.0652
\end{tabular}
\caption{Values computed for an ensemble of 10 independent runs of the FREM algorithm for the auto-regulatory gene network example.
In each run, we obtain a cluster average, $\hat{\theta}^{(i)}$, as an MLE point estimate. Define $\mathcal{C} {:=} \seqof{\hat{\theta}^{(i)}}{i=1}{10}$.
For each unknown coefficient $c_j$ in $\theta$, we show i) the average of $\mathcal{C}$, ii) a $95\%$ confidence interval for the mean of $\mathcal{C}$, and iii) the minimum and maximum values of $\mathcal{C}$. }
\label{tab:arg_ens}
\end{table}
\begin{rem}
Observe that, in the examples where the stoichiometric vectors are linearly dependent, the results of phase I, ${\theta}_{I\!I,i}^{(0)}$, $i=1,2,3,4$, lie on a hyperplane, which reflects a certain amount of indifference in the coefficient estimations. This does not happen in the SIR example, where all the phase I estimates are essentially the same.
\end{rem}
\section{Conclusions}
\label{conclusions}
In this work, we addressed the problem of efficiently computing approximations of expectations of functionals of bridges in the context of stochastic reaction networks by extending the forward-reverse technique developed by Bayer and Schoenmakers in \cite{Bayer}.
We also showed how to apply this technique to the statistical problem of inferring the set of coefficients of the propensity functions.
We presented a two-phase approach, namely the Forward-Reverse Expectation-Maximization (FREM) algorithm, in which the first phase, based on reaction-rate ODEs is deterministic and is intended to provide a starting point that reduces the computational work of the second phase, namely, the Monte Carlo EM Algorithm.
Our novel algorithm for generating bridges provides a clear advantage over shooting methods and methods based on acceptance-rejection techniques.
Our work is illustrated with numerical examples.
In the future, we plan to incorporate higher-order kernels and multilevel Monte Carlo methods in the FREM algorithm.
\section*{Acknowledgments}
\thx{
A. Moraes, R. Tempone and P. Vilanova are members of the KAUST SRI Center for
Uncertainty Quantification at the Computer, Electrical and Mathematical Sciences and Engineering Division at King Abdullah University of Science and Technology (KAUST).
}
\newpage
\label{int}
In the canonical quantization approach to quantum field theory (QFT), states of the quantum field containing particles are built up from the vacuum state using particle creation operators.
The definition of particle states therefore relies on the definition of a vacuum state (a state with no particles).
On a curved space-time, in general there is no unique definition of vacuum state, although there may be one or more natural, physically motivated, choices of vacuum.
This can be understood by considering the expansion of a free quantum field in terms of a complete orthonormal set of field modes.
Each mode can be classified as either a positive (or negative) frequency mode,
whose expansion coefficient is an annihilation (or a creation) operator respectively.
Since the vacuum state is defined as the state annihilated by all the annihilation operators,
its definition therefore depends on the split into positive and negative frequency field modes.
In the case of a scalar field, the choice of split into positive and negative frequency field modes is constrained by the fact that positive (or negative) frequency modes must have positive (or negative) Klein-Gordon norm respectively.
The consequences of this constraint on the definition of a vacuum state for a quantum scalar field can be illustrated by the simple toy model of Minkowski space as seen by an observer rotating about the polar axis.
In this case the rotating vacuum is identical to the Minkowski vacuum \cite{L&P}.
The constraint on the definition of the vacuum state also has an impact on the definition of states containing particles.
In particular, rotating thermal states for scalar fields in Minkowski space are ill-defined everywhere unless the system is enclosed inside a time-like boundary sufficiently close to the axis of rotation \cite{L&P,D&O}.
Motivated by the results of \cite{L&P,D&O}, in this letter we consider a quantum scalar field on $n$-dimensional global anti-de Sitter space-time ($adS\!^{}_{n}$). We study the constraints on the definition of an appropriate vacuum state as seen by an observer rigidly rotating about the polar axis.
We find that, as in Minkowski space, the global rotating vacuum is identical to the global nonrotating vacuum.
However, if the angular velocity of the rotating observer is sufficiently small and $n\ge 4$, all the field modes defining this global vacuum have positive frequency as seen by the rotating observer.
\section{Anti-de Sitter space-time in rotating co-ordinates}
\label{ads}
A convenient dimensionless coordinate system for global $adS\!^{}_{n}$ is the set of hyperspherical coordinates\footnote{Throughout this paper we use units in which $c=G=\hbar=1$.},
\be \lble{cs}
\begin{array}{rcll}
-\pi<&\tau&\leq\pi,&\quad\tau=-\pi\text{ and }\tau=\pi\text{ identified,}\\
0\leq&\rho&<\tfrac{\pi}{2},&\\
0\leq&\tno{j}&\leq\pi,&\quad j=1,2,\ldots,n-3,\\
0\leq&\varphi&<2\pi,&
\end{array}
\ee
parametrizing the temporal, radial, polar and azimuthal directions respectively. The coordinate system \eqr{cs} covers $adS\!^{}_{n}$, excluding polar singularities. In terms of the coordinates \eqr{cs}, the metric on $adS\!^{}_{n}$ takes the form
\be
\lble{nonrot}
ds^{2}_{}=a^{2}_{}\left(\sec\rho\right)^{2}_{}\left[-d\tau^{2}+d\rho^{2}_{}+\left(\sin\rho\right)^{2}_{} d\Sigma _{n-2}^{2} \right] ,
\ee
where $a$ is the radius of curvature of $adS\!^{}_{n}$ and $d\Sigma_{n-2}^{2}$ is the metric on the $(n-2)$-sphere.
Since $\tau = - \pi$ and $\tau = \pi $ are identified, $adS\!^{}_{n}$ admits closed timelike curves.
To remedy this, we work on the covering space $CadS\!^{}_{n}$ where $-\infty < \tau < +\infty $.
We now consider global $CadS\!^{}_{n}$ as seen by an observer rotating with a constant angular velocity $\Omega $ about the polar axis.
The line-element for the rotating space-time is found from \eqr{nonrot} by the change of co-ordinates
\be
\tau\mapsto \tilde{\tau},\qquad
\varphi\mapsto \tilde{\varphi}\isdef\varphi-\Omega a\tau
\lble{sqphi}
\ee
and takes the form
\begin{align}
\lble{rot}
ds^{2} = & \; a^{2}_{} \left( \sec \rho \right) ^{2}_{}
\left[-\left( 1 - \Omega ^{2}a^{2}{\mathcal {D}}^{2} \left( \frac {\sin \rho }{\rho }\right) ^{2} \right) d{\tilde {\tau }}^{2}
\right.
\nonumber
\\
& \;
\left.
+2\Omega a {\mathcal {D}}^{2} \left( \frac {\sin \rho }{\rho } \right) ^{2} d{\tilde {\tau }} \, d{\tilde {\varphi }}
+d\rho^{2}_{}+\left(\sin\rho\right)^{2}_{} d{\tilde {\Sigma }}_{n-2}^{2} \right] ,
\end{align}
where $d{\tilde {\Sigma }}_{n-2}^{2}$ is the metric on the $(n-2)$-sphere with $d\varphi $ replaced by $d{\tilde {\varphi }}$
and ${\mathcal {D}}$ is the distance from the rotation axis:
\be
\lble{D}
{\mathcal {D}}= \rho\sin\tno{1}\sin\tno{2}\ldots\sin\tno{n-3}.
\ee
The speed of a rotating observer who has angular speed $\Omega $ about the polar axis increases as the distance ${\mathcal {D}}$ from the polar axis increases, and becomes equal to the speed of light when $g_{{\tilde {\tau}}{\tilde {\tau }}}=0$.
This surface is known as the speed-of-light surface (SOL).
At the SOL we have $\Omega ^{2}a^{2}{\mathcal {D}}^{2} \rho ^{-2} \sin ^{2} \rho =1$, and so, from \eqr{D}: if $\Omega a<1$ there is no SOL; if $\Omega a =1$ the SOL is on the equator at the boundary of the space-time; and if $\Omega a >1$ the SOL moves closer to the rotation axis as $\Omega $ increases.
Sketches of the SOL can be found in Fig.~1 of \cite{Ambrus:2014fka}.
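As a concrete illustration (ours, not drawn from \cite{Ambrus:2014fka}), consider the equatorial plane, where $\tno{1}=\ldots=\tno{n-3}=\tfrac{\pi}{2}$ and hence ${\mathcal {D}}=\rho$ by \eqr{D}. The condition $g_{{\tilde {\tau}}{\tilde {\tau }}}=0$ then reduces to
\be
\sin\rho^{}_{\text{SOL}}=\frac1{\Omega a}\,,
\ee
which has a solution with $\rho^{}_{\text{SOL}}\in(0,\tfrac{\pi}{2})$ if and only if $\Omega a>1$, in agreement with the three cases just listed.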
\section{Scalar field on global anti-de Sitter space-time}
\label{scalar}
The equation of motion for a real massive free scalar field $\Phi(x)$ coupled to $g^{}_{\mu\nu}$, the metric tensor of global $CadS\!^{}_{n}$, is
\be \lble{hkge}
\left(\Box-M^{2}_{}-\xi\mathcal{R}\right)\Phi=0,
\ee
where
\be
\Box\isdef g^{\mu\nu}_{}\nabla^{}_{\mu}\nabla^{}_{\nu}
\ee
is the $n$-dimensional curved-space Laplacian, $M$ is the mass of the field quanta, and the constant $\xi$ is the coupling between $\Phi$ and $\mathcal{R}$, the Ricci scalar curvature.
Solving the Klein-Gordon equation \eqr{hkge} on the nonrotating global $CadS\!^{}_{n}$ metric \eqr{nonrot}, the mode solutions take the form \cite{Cota1}
\be
\lble{nfm}
\Phi^{}_{r\ell}=N^{}_{r\ell}e^{-i\omega\tau}_{}R(\rho)Y^{}_{\ell}(\theta,\varphi),
\ee
where $N^{}_{r\ell }$ is a normalization constant.
The hyperspherical harmonics $Y^{}_{\ell }(\theta , \varphi )$ are normalized eigenfunctions of the Laplacian on the $(n-2)$-sphere, whose eigenvalues depend
on the angular quantum number $\ell $, which takes the values $\ell = 0,1,2,\ldots $.
For each $\ell $ there are ${\mathcal {M}}^{}_{\ell }$ eigenfunctions, where the multiplicity ${\mathcal {M}}^{}_{\ell }$ is
\cite{Erd2,Muller}
\be \lble{mol}
{\mathcal {M}}^{}_{\ell }=(2\ell+n-3)\frac{(\ell+n-4)!}{\ell!(n-3)!}.
\ee
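As a quick numerical sanity check on \eqr{mol} (a sketch of ours, not part of the original derivation), the multiplicity can be tabulated and compared with the familiar degeneracies of the low-dimensional spheres:
\begin{verbatim}
from math import factorial

def multiplicity(l, n):
    # Degeneracy M_l of hyperspherical harmonics on the (n-2)-sphere, n >= 4
    return ((2 * l + n - 3) * factorial(l + n - 4)
            // (factorial(l) * factorial(n - 3)))

# n = 4: harmonics on the 2-sphere, so M_l = 2l + 1
assert all(multiplicity(l, 4) == 2 * l + 1 for l in range(20))
# n = 5: harmonics on the 3-sphere, so M_l = (l + 1)^2
assert all(multiplicity(l, 5) == (l + 1) ** 2 for l in range(20))
\end{verbatim}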
It will be convenient for our later analysis to separate out the dependence of $Y^{}_{\ell }(\theta , \varphi)$ on the azimuthal angle $\varphi $, so we write
\be \lble{hh}
Y^{}_{\ell}(\theta,\varphi)=e^{\pm im\varphi}_{}\Theta^{}_{\ell m}(\theta),
\ee
where $m\ge 0$ is the azimuthal quantum number and
\be
\theta\isdef\left(\tno{1},\tno{2},\ldots\tno{n-3}\right).
\ee
The function $\Theta ^{}_{\ell m}(\theta )$ also depends on additional quantum numbers associated with the angles $\theta _{2},\ldots , \theta _{n-3}$,
which we denote $m_{1},\ldots ,m_{n-4}$. For compactness of notation, we do not explicitly write out this dependence.
These additional quantum numbers satisfy the inequalities \cite{Erd2,Muller}
\be
\lble{aqn}
\ell \geq m^{}_{1}\geq\ldots\geq m^{}_{n-4}\geq m \ge 0.
\ee
Although $CadS\!^{}_{n}$ does not have any closed time-like curves, it is not a globally hyperbolic space-time because of the time-like boundary at $\rho =\tfrac {\pi }{2}$.
In order to have a well-defined QFT in the next section, appropriate boundary conditions have to be placed on the scalar field $\Phi $ \cite{AS&I}.
We consider regular modes \cite{B&F} which satisfy reflective boundary conditions $\Phi =0$ on $\rho = \tfrac {\pi }{2}$.
These modes exist provided
\be
\lble{k}
k = {\sqrt {M_{}^{2}a_{}^{2}+\xi\mathcal{R}a^{2}_{}+\frac{(n-1)_{}^{2}}{4} }} + \frac {n-1}{2}
\ee
is real.
With this assumption, the radial function in \eqr{nfm} takes the form
\be
\lble{rm}
R(\rho)\isdef(\sin\rho)^{\ell}_{}(\cos\rho)^{k}_{}P^{\,\left(\ell+\frac{n-3}{2},\,k-\frac{n-1}{2}\right)}_{r}\left(\cos(2\rho)\right),
\ee
where $P^{\,\left(\ell +\frac{n-3}{2},\,k-\frac{n-1}{2}\right)}_{r}$ is a Jacobi polynomial of degree $r$ and we have introduced the radial quantum
number $r=0,1,\ldots $.
The modes \eqr{nfm} are normalized according to the Klein-Gordon inner product:
\be
\lble{KGinner}
\langle\Phi^{}_{r\ell},\Phi^{}_{r'\ell'}\rangle^{}_{\text{KG}}=-\int^{}_{H}d^{n-1}_{}\boldsymbol{x}\sqrt{g}g^{\tau\tau}_{}\cc{\Phi^{}_{r\ell}} \overset{\leftrightarrow}{\partial^{}_{\tau}}\Phi^{}_{r'\ell'},
\ee
evaluated on some space-like hypersurface of simultaneity $H$, with
\be
A\overset{\leftrightarrow}{\partial}^{}_{\mu }B\isdef A\partial^{}_{\mu}B-(\partial^{}_{\mu}A)B,
\ee
and
\be
g\isdef\abs{\det g^{}_{\mu\nu}}.
\ee
The normalization constant $N^{}_{r\ell }$ is then found to be \cite{Cota1}
\be \lble{N}
N^{}_{r\ell}=a_{}^{\frac{2-n}{2}}\,\sqrt{\frac{r!\gf{r+\ell+k}}{\gf{r+\ell+\frac{n-1}{2}}\gf{r+k-\frac{n-3}{2}}}}.
\ee
\section{Defining a global nonrotating vacuum}
\label{vac}
As outlined in the Introduction, the first step in defining a global vacuum state is to split the field modes into positive and negative frequency.
We start by considering the nonrotating modes \eqr{nfm}.
These modes have frequency $\omega $ as seen by a static observer in global $CadS\!^{}_{n}$.
Computing their Klein-Gordon inner product \eqr{KGinner}, we find
\be
\langle\Phi^{}_{r\ell},\Phi^{}_{r'\ell'}\rangle^{}_{\text{KG}}=\frac {\omega }{\left| \omega \right|} \delta^{}_{rr'}\delta^{}_{\ell\ell'}.
\lble{KGnorm}
\ee
Therefore modes with positive $\omega $ have positive Klein-Gordon norm, while those with negative $\omega $ have negative norm.
We therefore take $\omega >0$ as our definition of positive frequency.
With this assumption, for $n\ge 4$ the frequency $\omega $ is given in terms of the radial and angular quantum numbers \cite{Cota1}:
\be \lble{o}
\omega=k+\ell+2r ,
\ee
which is manifestly positive, since $k$ \eqr{k} is positive and $\ell $, $r$ are nonnegative.
We discuss the $n=3$ case in the next section.
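As an orientation (an illustrative check of ours), take $n=4$ with a massless, conformally coupled field, $M=0$ and $\xi=\tfrac16$, and use the standard value $\mathcal{R}=-n(n-1)/a^2=-12/a^2$. Then \eqr{k} gives $k={\sqrt{-2+\tfrac94}}+\tfrac32=2$, so that \eqr{o} becomes $\omega=2+\ell+2r$, the familiar integer spectrum of the conformal scalar on $CadS\!^{}_{4}$.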
The quantum scalar field is expanded in terms of these modes as
\be
\Phi = \sum _{r=0}^{\infty }\sum _{\ell =0}^{\infty} \sum _{m,m_{1},\ldots, m_{n-4}}
\left[
b^{}_{r\ell }\Phi ^{}_{r\ell } + b^{\dagger }_{r\ell }{\cc {\Phi ^{}_{r\ell }}}
\right] ,
\lble{exp}
\ee
where $m_{1},\ldots ,m_{n-4}$ are additional quantum numbers arising in the spherical harmonics \eqr{hh}.
We have suppressed the dependence of $\Phi _{r\ell }$ and $b^{}_{r\ell }$ on these additional quantum numbers just to keep the notation compact.
Quantizing the field, the coefficients $b^{}_{r\ell }$ and $b^{\dagger }_{r\ell }$ are promoted to operators satisfying the usual
commutation relations:
\be
[ b^{}_{r\ell }, b^{\dagger }_{r'\ell'} ] = \delta _{rr'}\delta _{\ell \ell'} \delta \left(m, m'\right) ,
\quad
[ b^{}_{r\ell }, b{}_{r'\ell'}] = 0 = [ b^{\dagger }_{r\ell }, b^{\dagger }_{r'\ell '} ],
\ee
where we have introduced the notation
\be
\delta \left (m,m' \right) = \delta _{mm'}\delta _{m_{1},m_{1}'} \ldots \delta _{m_{n-4},m_{n-4}'}.
\ee
The global nonrotating vacuum state $\left| 0 \right\rangle $ is then defined as that state annihilated by all the $b_{r\ell }$ operators:
\be
b_{r\ell }\left| 0 \right\rangle = 0.
\lble{vac}
\ee
This vacuum state has been studied in detail in \cite{KW-pI}, where the expectation values of the renormalized quadratic field fluctuations and
stress-energy tensor are computed.
\section{Defining a global rotating vacuum}
\label{rot}
Now we turn to the definition of a global rotating vacuum state. Scalar field modes on the rotating global $CadS\!^{}_{n}$ metric \eqr{rot} are easily found from those on the nonrotating metric \eqr{nonrot} by making the coordinate transformation \eqr{sqphi} in the modes \eqr{nfm}, yielding
\be \lble{rfm}
\tilde{\Phi}^{}_{r\ell}(x)=N_{r\ell}e^{- i\tilde{\omega}\tilde{\tau}}_{}R(\rho)e^{im\tilde{\varphi}}_{}\Theta^{}_{\ell m}(\theta),
\ee
where
\be
\lble{sqo}
\tilde{\omega}\isdef\omega-\Omega a m.
\ee
An observer rotating about the polar axis with angular velocity $\Omega $ measures the frequency of the modes \eqr{nfm} to be ${\tilde {\omega }}$ \eqr{sqo}.
In this case, it is natural to consider the modes in the alternative form \eqr{rfm}.
However, our choice of positive frequency is restricted by the fact that positive frequency modes must have positive Klein-Gordon norm.
From \eqr{KGnorm}, the only possible choice of positive frequency is $\omega >0$.
We therefore expand the field as in the nonrotating case \eqr{exp}, and end up with the global nonrotating vacuum $\left| 0 \right\rangle $ \eqr{vac}.
In Minkowski space, the set of modes with positive Klein-Gordon norm always contains some modes which have negative frequency as seen by an observer rotating
about the polar axis.
This has serious consequences for the construction of states containing particles, and, in particular, rotating thermal states.
The rotating observer measures energy ${\tilde {\omega }}$ for the field modes, and so the natural definition of a rotating thermal state will have
energy ${\tilde {\omega }}$ in the Planck factor \cite{Vilenkin:1980zv}.
However, this definition leads to rotating thermal states for a quantum scalar field being ill-defined everywhere in Minkowski space-time \cite{Vilenkin:1980zv,D&O,A&Wro}.
The only solution to this problem is to enclose the system inside a time-like boundary which is sufficiently close to the axis of rotation \cite{Vilenkin:1980zv,D&O}.
The inclusion of the boundary solves the problem by ensuring that modes with positive Klein-Gordon norm also have positive frequency as seen by the
rotating observer.
Given that $CadS\!^{}_{n}$ has a time-like boundary at $\rho = \tfrac {\pi }{2}$, the question arises as to whether modes with positive Klein-Gordon norm on
$CadS\!^{}_{n}$ can have negative frequency as seen by an observer rotating about the polar axis with angular velocity $\Omega $.
In other words, are there field modes \eqr{rfm} which have $\omega >0$ but ${\tilde {\omega }}<0$?
For $n\ge 4$, we note that $\omega >0$ is given in terms of the quantum numbers $r$ and $\ell $ \eqr{o}, and from this the inequalities \eqr{aqn}
imply that, for $\omega >0$, we have
\be
\omega \ge k + 2r + m > m ,
\ee
since $k>0$ \eqr{k}.
Hence, from \eqr{sqo}
\be
{\tilde {\omega }} = \omega - \Omega a m > m \left( 1- \Omega a \right) .
\ee
Therefore, if $\Omega a <1$, it will be the case that modes with positive Klein-Gordon norm also have positive frequency as seen by the rotating observer.
If $\Omega a <1$, then, from the discussion in Sec.~\ref{ads}, the rotating space-time does not have a SOL.
Our results on global $CadS\!^{}_{n}$ for $n\ge 4$ therefore agree with those in rotating Minkowski space \cite{Vilenkin:1980zv,D&O}: if there is no SOL, then modes with
positive Klein-Gordon norm have positive frequency as seen by the rotating observer.
In Minkowski space showing this result depends on the properties of the zeros of Bessel functions \cite{D&O}, whereas in $CadS\!^{}_{n}$ it comes from the relationship between the mode frequency and the quantum numbers, and the inequalities \eqr{aqn} satisfied by the angular quantum numbers.
The situation on global $CadS\!^{}_{3}$ is slightly different. In order that positive frequency modes have positive
Klein-Gordon norm, we must still have $\omega >0$ as the definition of positive frequency.
This means that the only choice of global vacuum state remains the global nonrotating vacuum.
However, for $n=3$ the frequency $\omega $ depends on the azimuthal quantum number $m\ge 0$ as follows \cite{Parikh1}:
\be
\omega = k + 2r \pm m,
\lble{o3}
\ee
so that
\be
{\tilde {\omega }} = \omega - ma\Omega = k + 2r - m \left( a\Omega \mp 1 \right) .
\lble{o3t}
\ee
Therefore there exist, for sufficiently large $m$, counter-rotating modes (corresponding to the lower signs in \eqr{o3} and \eqr{o3t}) which have
$\omega >0$ but ${\tilde {\omega }}<0$ \cite{Parikh1}.
Such modes have negative frequency as seen by the rotating observer and, as discussed above, are anticipated to render
rotating thermal states ill-defined.
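These statements are straightforward to probe numerically. The following sketch (ours; the value $k=2$ is purely illustrative, taken from the conformal $CadS\!^{}_{4}$ example above) scans the low-lying modes for counter-rotating frequencies, using $\omega=k+\ell+2r$ with $m\le\ell$ for $n\ge4$, and $\omega=k+2r\pm m$ for $n=3$:
\begin{verbatim}
def counter_rotating_modes(k, Omega_a, n, q_max=40, r_max=40):
    """Modes with omega > 0 but tilde_omega = omega - Omega*a*m < 0."""
    found = []
    for r in range(r_max + 1):
        if n >= 4:
            for l in range(q_max + 1):
                omega = k + l + 2 * r         # always positive here
                for m in range(l + 1):        # inequality (aqn): 0 <= m <= l
                    if omega - Omega_a * m < 0:
                        found.append((r, l, m, omega))
        else:                                 # n = 3: omega = k + 2r +/- m
            for m in range(q_max + 1):
                for omega in (k + 2 * r + m, k + 2 * r - m):
                    if omega > 0 and omega - Omega_a * m < 0:
                        found.append((r, m, omega))
    return found

print(counter_rotating_modes(2, 0.9, n=4))  # [] : no such modes if Omega*a < 1
print(counter_rotating_modes(2, 0.9, n=3))  # nonempty, e.g. (r, m, omega) = (1, 3, 1)
\end{verbatim}
For $n\ge4$ and $\Omega a<1$ the list is empty, as guaranteed by the inequality above, whereas for $n=3$ counter-rotating modes appear once $m$ is large enough.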
In \cite{Parikh1} an alternative vacuum state is defined when $n=3$ for a rotating anti-de Sitter space-time which has a cylindrical region near the axis of rotation removed.
There is also a family of alternative vacua on rotating Rindler-$adS\!^{}_{3}$ space-time \cite{Parikh1}.
Rotating Rindler-$adS\!^{}_{3}$ possesses an event horizon and corresponds to a portion of the global
$adS\!^{}_{3}$ space-time in the same way that the usual Rindler space-time is only a part of global Minkowski space-time.
In this paper we are considering the entire global $CadS\!^{}_{n}$ space-time and the alternative vacuum states from \cite{Parikh1} cannot be defined in this case.
\section{Conclusions}
\label{conc}
In this paper we have studied a quantum scalar field on global $CadS\!^{}_{n}$ as seen by an observer rotating about the polar axis with angular velocity $\Omega $.
We found that the requirement that positive frequency modes must have positive Klein-Gordon norm (to ensure that the particle annihilation and creation operators satisfy the correct commutation relations) restricts our choice of vacuum state, so that the only possibility is the global nonrotating vacuum.
If $n\ge 4$ and the angular velocity satisfies the inequality $\Omega a <1$ (where $a$ is the radius of curvature of $adS\!^{}_{n}$), then scalar field modes with positive Klein-Gordon norm also have positive frequency as seen by the rotating observer.
In this case the global nonrotating vacuum is the natural state to use for constructing states which contain particles as seen by the rotating observer.
It is of note that if $\Omega a<1$ then the rotating $CadS\!^{}_{n}$ space-time does not have a speed-of-light surface (SOL).
Our results are in accordance with previous work on quantum scalar fields on rotating Minkowski space-time, in particular (i) the global rotating vacuum is identical to the global nonrotating vacuum, and (ii) if the space-time does not have a SOL (in Minkowski space this is achieved by enclosing the system in a boundary sufficiently close to the axis of rotation) then modes with positive Klein-Gordon norm have positive frequency as seen by the rotating observer.
In Minkowski space, if the boundary is inside the SOL, then it is possible to define rotating thermal states for a quantum scalar field, but such states are ill-defined everywhere if the boundary is either outside the SOL or absent \cite{D&O}.
We expect that similar results will be true in $CadS\!^{}_{n}$ with $n\ge 4$: that if $\Omega a <1$ then rotating thermal states are well-defined for a quantum scalar field, but they are not if $\Omega a \ge 1$.
We will investigate this in detail in a future publication \cite{KWaip}.
\section*{Acknowledgments}
The work of C.K. is supported by EPSRC UK, while that of E.W. is supported by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC grant ST/L000520/1.
\section{Introduction}
The concentration--compactness method is nowadays a basic tool in applied mathematics for the analysis of variational problems with a lack of compactness or, more specifically, for proving existence of solutions of non-linear partial differential equations which are invariant under a group of transformations. In this review we explore the applicability of the concentration--compactness method to the $X^\alpha$-Schr\"odinger-Poisson model. We will also highlight some related questions, which raise a number of open issues.
Our purpose is to study the existence of steady states of the so-called $X^\alpha$-Schr\"odinger-Poisson ($X^\alpha$-SP) model or Maxwell-Schr\"odinger-Poisson system:
\begin{eqnarray}
&&i\,\frac{\partial \psi}{\partial t}=-\Delta_x\psi+V(x,t)\,\psi-C\,|\psi(x,t)|^{2\alpha}\,\psi\,,\nonumber\\
&&-\Delta_xV=\epsilon\,4\pi\,|\psi|^2,\label{XASP}\\
&&\psi(x,t=0)=\phi(x)\,,\nonumber
\end{eqnarray}
with $\phi\in \L^2(\mathbb R^3)$, $x\in\mathbb R^3$, $t\ge 0$. The self-consistent Poisson potential $V$ is explicitly given by $V(x,t)=\epsilon\,|\psi(x,t)|^2\star|x|^{-1}$, where $\star$ refers to the convolution with respect to $x$ on $\mathbb R^3$ and where $\epsilon$ takes the value $+1$ or $-1$, depending on whether the interaction between the particles is repulsive or attractive. The system \eqref{XASP} can therefore be reduced to a single non-linear and non-local Schr\"odinger-type equation
\begin{eqnarray}\label{eq:XASP}
&&i\,\frac{\partial \psi}{\partial t}=-\Delta_x\psi+\epsilon\,\Big(|\psi|^2 \star|x|^{-1}\Big)\,\psi-C\,|\psi|^{2\alpha}\,\psi\,,\\
&&\psi(x,t=0)=\phi(x)\,.\nonumber
\end{eqnarray}
Such a model appears in various frameworks, such as black holes in gravitation \hbox{($\epsilon=-1$)}~\cite{RuSo}, one-dimensional reduction of electron density in plasma physics \hbox{($\epsilon=+1$)}, as well as in semiconductor theory ($\epsilon=+1$), as a correction to the Schr\"odinger-Poisson system (which is $X^\alpha$-SP with $C=0$): see~\cite{BoLoSo,LiSi,Mauser} and references therein.
In the plasma physics case, the $X^\alpha$-SP correction takes into account a nonlinear, although local, correction to the Poisson potential
of opposite sign given by $-\,C\,\vert\psi\vert^{2\alpha}$, where $C$ is a positive constant and where the parameter $\alpha$, responsible for the name of the model, takes values in the range $0<\alpha\le\frac23$. Some relevant values are for example $\alpha=\frac13$, which is called the Slater correction, or $\alpha=\frac23$, which gives rise to the so-called Dirac correction. The idea is to balance the Poisson potential (also called Coulombian potential in the electrostatic case) with a local potential term of opposite sign. This generates a competition between the two potential energies and the kinetic energy that, depending on the values of the constant $C$, can modify the typically dispersive dynamics of the Schr\"{o}dinger-Poisson system~\cite{IlSwLa,SaSo1} in the plasma physics case. The local nonlinear term also modifies the properties of the solutions in the gravitational case, thus leading to a richer behaviour~\cite{BoLoSaSo}. Note that the physical constants have been normalized to unity here for the sake of simplicity.
Throughout the paper we focus our attention on the plasma physics case. Similar techniques can be used for extending our results to the gravitational case. Notice that when $\epsilon=-1$ (gravitational case), the sign of the energy associated with the Poisson potential (also called Newtonian potential) makes it possible to introduce symmetric rearrangements that simplify some computations~\cite{Lieb,LiebLoss}. In this paper, we shall therefore assume that
\[
\epsilon=+1\,.
\]
We will be concerned with the existence of standing waves, that is, solutions to \eqref{eq:XASP} of the form
\begin{eqnarray*}
\psi(x,t)=e^{i\ell_M t}\,\varphi(x)
\end{eqnarray*}
with $\ell_M>0$ and $\varphi$ in $\L^2(\mathbb R^3)$ solving
\be{eq:sw}
-\Delta \varphi+\epsilon\,\big(|\varphi|^2 \star|x|^{-1}\big)\,\varphi-C\,|\varphi|^{2\,\alpha}\,\varphi\,+\ell_M\,\varphi=0\,.
\end{equation}
Equation~\eqref{eq:sw} is a special case of Schr\"odinger-Maxwell equations~\cite{DAMu}.
The existence and stability analysis of such solutions relies on some preserved physical quantities. The total \emph{mass} (which is also the total electronic charge in the repulsive case, when $\epsilon=+1$)
\[
M[\psi]:=\int_{\mathbb R^3}|\psi(x,t)|^2\,dx
\]
and the \emph{energy} functional
\[
\mathrm E[\psi]:=\mathrm E_{\mathrm{kin}}[\psi]+\mathrm E_{\mathrm{pot}}[\psi]
\]
are invariant quantities for any solution of $X^\alpha$-SP along the time evolution, where the \emph{kinetic} and \emph{potential} energies are defined by
\[
\mathrm E_{\mathrm{kin}}[\psi]:=\frac12\ir{|\nabla\psi(x,t)|^2}\,,\quad\mathrm E_{\mathrm{pot}}[\psi]:=\frac{\epsilon}4\,\D\psi-\frac C{2\alpha+2} \int_{\mathbb R^3} |\psi(x,t)|^{2\alpha+2}\,dx
\]
and
\[
\D\psi:=\iint_{\mathbb R^3\times\mathbb R^3}\frac{|\psi(x,t)|^2\,|\psi(x',t)|^2}{|x-x'|}\,dx\,dx'\,.
\]
The existence of standing waves has been carried out from various perspectives in the vast mathematical literature devoted to this topic. Either one investigates the existence of critical points of the functional $\mathrm E[\varphi]+\ell_M\,M[\varphi]$ on the whole space $\mathrm H^1(\mathbb R^3)$, with the parameter $\ell_M$ being given and fixed, and in that case the $\L^2(\mathbb R^3)$ norm of the solution is not prescribed (see for instance~\cite{DR} and references therein); or one looks for critical points of the energy functional $\mathrm E[\varphi]$ with prescribed $\L^2(\mathbb R^3)$ norm, and then the parameter $\ell_M$ enters into the game as a Lagrange multiplier of the constrained minimization problem. From a physical point of view, the most interesting critical points, the so-called \emph{steady states,} are the minimizers of the problem
\be{minienergy}
I_M:=\inf\big\{\mathrm E[\varphi]\,:\,\varphi\in\Sigma_M\big\}\,,\quad\Sigma_M:=\big\{\varphi\in\mathrm H^1(\mathbb R^3)\,:\,\|\varphi\|_{\L^2(\Real^3)}^2=M\big\}\,.
\end{equation}
Their interest lies in \emph{stability} properties stated in terms of the energy and the mass. Such a feature is of course well known in the literature, see for instance~\cite{CaLi}, and it provides an easier approach than other methods, which are anyway needed when elaborate variational methods are required like in~\cite{BeJeLu}. The energy functional is not bounded from below when $\alpha>\frac23$. When $\alpha>2$, the exponent $2\alpha+2$ lies outside of the interval $(2,6)$ and then $\mathrm H^1(\mathbb R^3)$ is not embedded in $L^{2\alpha+2}(\mathbb R^3)$. We therefore restrict our analysis to the range $\alpha$ in $(0,2)$.
Concerning the existence of steady states, let us make the following observations. First of all, the energy and mass functionals are translation invariant, that is, for every $y\in \mathbb R^3$,
\[
\mathrm E[\varphi(\cdot +y)]=\mathrm E[\varphi]\,, \quad M[\varphi(\cdot +y)]=M[\varphi]\,.
\]
Therefore the concentration--compactness method~\cite{bi:PLL-CC1cras,bi:PLL-CC1,bi:PLL-CC2} is the natural framework for the study of the existence of a minimizer and for the analysis of the behavior of the minimizing sequences to \eqref{minienergy} and their possible lack of compactness. According to the terminology of the concentration--compactness principle, from any minimizing sequence $\{\varphi_n\}_{n\ge 1}$ in $\Sigma_M$ we can extract a subsequence (denoted in the same way for simplicity) that either \emph{vanishes,} that is,
\be{vanishing}
\limsup_{n\to\infty}\;\sup_{y\in \mathbb R^3}\int_{y+B_R}\varphi_n^2\,dx=0\quad\forall\,R>0\,,
\end{equation}
or satisfies the property
\be{nonvanishing}
\exists\,R_0>0\,,\;\exists\,\varepsilon_0>0\,,\;\exists\,\{y_n\}_{n\ge 1}\subset\mathbb R^3\quad\mbox{such that}\quad\int_{y_n+B_{R_0}}\varphi_n^2\,dx\ge\varepsilon_0\,.
\end{equation}
In the first case, for any sequence $\{y_n\}_{n\geq 1}$ in $\mathbb R^3$, $\{\varphi_n(\cdot+y_n)\}_{n\geq 1}$ converges to zero weakly in $\mathrm H^1(\mathbb R^3)$. In the second case, up to the extraction of a subsequence, the sequence $\{\varphi_n(\cdot+y_n)\}_{n\geq 1}$ converges weakly towards a nonzero function $\varphi_*$ such that
\[
\int_{\mathbb R^3} \varphi_*^2\,dx=\mu > 0\,.
\]
If $\mu=M$, then compactness (\emph{i.e.,} the strong convergence of subsequences) holds. In the opposite case, $\mu <M$, then \emph{dichotomy} occurs, that is, the splitting of the functions in at least two parts that are going away from each other: see~\cite{bi:PLL-CC1cras,bi:PLL-CC1,bi:PLL-CC2} for more details.
The concentration--compactness method yields the strict inequalities
\be{ineqstrict}
I_M<I_{M'}+I_{M-M'}\quad\forall\,M\,,\;M'\quad\mbox{such that}\quad 0<M'<M
\end{equation}
as necessary and sufficient conditions for the \emph{relative compactness} up to translations of all minimizing sequences. In this case, we deduce the existence of a minimizer and its orbital stability under the flow \eqref{XASP}. The proof of this equivalence is based on the fact that the only possible loss of compactness for minimizing sequences occurs either from vanishing or from dichotomy. Note that the so-called large inequalities
\be{ineqlarge}
I_M\le I_{M'}+I_{M-M'}\quad\forall\,M\,,\;M'\quad\mbox{such that}\quad 0<M'<M
\end{equation}
always hold true due to the translation invariance. For any $\varepsilon>0$, one may indeed find $C^\infty$ functions $\phi_\varepsilon \in\Sigma_{M'}$ and $\psi_\varepsilon\in\Sigma_{M-M'}$, both with compact supports, such that $I_{M'}\le\mathrm E[\phi_\varepsilon]\le I_{M'}+\varepsilon$ and $I_{M-M'}\le\mathrm E[\psi_\varepsilon]\le I_{M-M'}+\varepsilon$. Then, for any unit vector $e$ in $\mathbb R^3$ and for $n\in \mathbb N$ large enough such that $\phi_\varepsilon$ and $\psi_\varepsilon(\cdot+n\,e)$ have disjoint supports, we have $\phi_\varepsilon+\psi_\varepsilon(\cdot+n\,e)\in \Sigma_M$ and
\[
I_M\le\limsup_{n\to+\infty} \mathrm E[\phi_\varepsilon+\psi_\varepsilon(\cdot+n\,e)]\le I_{M'}+I_{M-M'}+2\varepsilon\,.
\]
The conclusion follows since $\varepsilon$ can be made arbitrarily small. For our particular problem, it can be easily proved that vanishing cannot hold for any minimizing sequence of \eqref{minienergy} if $I_M <0$, although it might hold when $I_M=0$. This is based on Lemma I.1 in~\cite{bi:PLL-CC2} that ensures that vanishing minimizing sequences converge to zero strongly in $L^{2\alpha+2}(\mathbb R^3)$. When $I_M=0$, vanishing has to be avoided by considering particular sequences.
Furthermore, when relative compactness up to translations can be proved for any minimizing sequence, it can also be stated that the minimizing steady state solution is orbitally stable in the sense developed in~\cite{CaLi}, thanks to the fact that mass and energy are time preserved quantities for solutions to \eqref{XASP}. In this sense, let us mention that the well-posedness of the $X^\alpha$-SP system was proved in~\cite{Caze} (Remark~6.5.3) for $\alpha\in (0,\frac23)$. For the case $\alpha=\frac23$, the existence of global solutions was proved~\cite{Caze} only for initial data with $\|\phi\|_{\mathrm H^1(\mathbb R^3)}$ small enough. A theory of existence of $\L^2(\Real^3)$ mixed-state solutions was developed in~\cite{BoLoSo} for the Slater case, $\alpha=\frac 13$. Stability properties have been proved to be false for other kinds of standing waves, see for instance~\cite{BeJeLu}.
Our aim is to discuss the applicability of the concentration--compactness method to the problem \eqref{minienergy} for proving the existence of $X^\alpha$-SP \emph{steady states.} Recall that such solutions are minimizers of the energy functional under mass constraint. Let us summarize the results presented in this work in Table \ref{table:resultRef}, with some references for previously known results.
\begin{table}[ht]
\begin{tabular}{|c|c|c|c|}
\hline
$\alpha$ & Energy infimum & Existence of steady states & Ref.\\
\hline
$0$ & $I_M <0$ & No &~\cite{IlSwLa,SaSo1}\\
\hline
$(0,\frac12)$ & $I_M <0$ & Yes, for small $M$ &~\cite{CL1,SaSo,BeSiJFA,BeSiZAMP}\\
& & Open for large $M$ &\\
\hline
$\frac12$ &$I_M=0$ if $C<\frac3{\sqrt2\,\mathrm C_{1/2}}$ & No &~\cite{JeanLuo2012}\\
& $I_M=0$ if $C=\frac3{\sqrt2\,\mathrm C_{1/2}}$ & Open &\\
& $I_M <0$ if $C> \frac3{\sqrt2\,\mathrm C_{1/2}}$ & Yes &\\
\hline
$(\frac12,\frac23)$ & $I_M=0$ if $C\,M^{4\alpha-2}<V_c(\alpha)$ & No &\\
& $I_M=0$ if $C\,M^{4\alpha-2}=V_c(\alpha)$ & Yes &~\cite{JeanLuo2012}\\
& $I_M <0$ if $C\,M^{4\alpha-2}> V_c(\alpha)$ & Yes &~\cite{BeSiZAMP}\\
\hline
$\frac23$ & $I_M=0$ if $C\,M^{\frac23}\le\frac{5}{3\,\mathrm C_{2/3}}$ & No &\\
& $I_M=-\infty$ if $C\,M^{\frac23} > \frac{5}{3\,\mathrm C_{2/3}} $& No &\\
\hline
$(\frac23,2)$ & $I_M=-\infty$ & No &~\cite{BeJeLu}\\
\hline
\end{tabular}
\caption{Table of existence results of steady states and related references.}
\label{table:resultRef}
\end{table}
In this table, the constant $\mathrm C_\alpha$ denotes the optimal constant in the inequality
\[
\nrm u{2\alpha+2}^{2\alpha+2}\le\mathrm C_\alpha\,\nrm u2^{8\alpha-4}\,\D u^{2-3\alpha}\,\nrm{\nabla u}2^{6\alpha-2}\quad\forall\,u\in\mathrm H^1(\mathbb R^3)\,,
\]
with $\D u=4\pi\ir{u^2\,(-\Delta)^{-1}\,u^2}$. The constant
\be{Vc}
V_c(\alpha):=\frac{\alpha+1}{\mathrm C_\alpha}\left(\frac1{3\alpha-1}\right)^{3\alpha-1}\left(\frac1{2\,(2-3\alpha)} \right)^{{2-3\alpha}}
\end{equation}
will appear in Proposition \ref{nonnegative}.
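For later reference, a direct computation of the endpoint values of \eqref{Vc} (with the convention, used throughout, that $x^x=1$ at $x=0$) gives
\[
V_c(\tfrac12)=\frac3{\sqrt2\,\mathrm C_{1/2}}\quad\mbox{and}\quad V_c(\tfrac23)=\frac5{3\,\mathrm C_{2/3}}\,,
\]
which are precisely the thresholds appearing in Table~\ref{table:resultRef}.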
In this review, we emphasize that many partial results can be found in various papers and, concerning variational approaches, particularly in~\cite{BeSiZAMP,BeSiJFA,JeanLuo2012,BeJeLu}. For other existence and non-existence results with the Lagrange parameter taken as a parameter, we refer to~\cite{MR1986248,DAMu,MR1896096,DR,MR2318269}. For solutions satisfying a \emph{Pohozaev constraint} (see Proposition~\ref{Prop:Phozaev}) and in particular the so-called \emph{ground state} solutions, we refer to \cite{MR2422637,DAMu,DR}. Our contribution mostly lies in a unified framework based on the concentration--compactness method. Results corresponding to the ranges $0<\alpha<\frac 12$, $\alpha=\frac 12$ and $\frac 12<\alpha<\frac 23$ have been collected respectively in Propositions~\ref{prop:below-half}, \ref{prop:half}, and \ref{prop:above}. Our main original contribution deals with the threshold case $\alpha=\frac 12$. We also invite the reader to pay attention to the remarks of Section~\ref{Sec:steady} and to Proposition~\ref{prop:onlyVanishing} for some open problems.
In the range $\alpha\in (0,\frac12)$, we are going to prove that the strict inequalities~\eqref{ineqstrict} hold at least for $M$ small enough. The strategy of proof is inspired by~\cite{CL1} (Appendix~3) and is reproduced here for the reader's convenience. The same result has been derived in~\cite{BeSiZAMP,SaSo} for $\alpha=\frac13$ and $0<M <M_c$, and in~\cite{BeSiJFA,BeSiZAMP,CL1,SaSo} for any $\alpha\in (0,\frac12)$ and any small positive $M$. As far as the authors know, the critical case ($\alpha=\frac12$) has been treated only in~\cite{JeanLuo2012} in the specific case $C=1$, where $I_M=0$; in that case the non-existence of a minimizer has been established. We will show here that there exists a critical value for $C$, namely $3/(\sqrt2\,\mathrm C_{1/2})$, such that minimizers exist for larger values of~$C$ but not for smaller ones. The existence of minimizers for the critical value of $C$ is still an open problem, equivalent to the existence of optimal functions for the above inequality with $\alpha=\frac12$. When $\alpha\in\left(\frac12,\frac23\right)$, existence holds if and only if $M$ is large enough. The result of existence of steady states was previously obtained in~\cite{BeSiZAMP}. No steady states exist in the cases $\alpha=0$ or $\alpha\in\left[\frac23,2\right)$. The result for $\alpha=0$ is in agreement with the general dispersion property verified by the solutions to the repulsive Schr\"odinger-Poisson system proved in~\cite{IlSwLa,SaSo1}. It is also one of the motivations for introducing the local, nonlinear correction to the model. Although the existence of minimizers cannot be expected in the case $\alpha\in\left(\frac23,2\right)$ because $I_M=-\infty$, the existence and instability of other standing waves has recently been proved in~\cite{BeJeLu}. Also see \cite{MR2422637,DAMu,DR} for \emph{ground state} solutions.
For completeness, let us mention that symmetry breaking issues are not completely understood~\cite{LopesMaris,MR2926239}. In this direction, new approaches could be useful like those developed in~\cite{FelliSchneider} and subsequent papers. Stability of minimizers with null energy also raises a number of open questions.
\section{\emph{A priori} estimates and consequences}\label{sec:energy}
Before tackling the existence of steady states, we have to make sure that the minimization problem is well-posed for $\alpha\in[0,\frac23)$, and for small masses $M$ in the case $\alpha=\frac 23$. Let us first recall the Gagliardo-Nirenberg inequality
\be{GNinequality}
\nrm u{2\alpha+2}^{2\alpha+2}\le\C_{\mathrm{GN}}(\alpha)\,\nrm{\nabla u}2^{3\alpha}\,\nrm u2^{2-\alpha}\quad\forall\,u\in\mathrm H^1(\mathbb R^3)
\end{equation}
where $\C_{\mathrm{GN}}(\alpha)$ is the optimal constant, depending only on $\alpha\in[0,2]$.
\begin{lemma}\label{lem:apriori} For any $\alpha\in[0,\frac 12]$, there is a positive constant $\mathrm K_\alpha$ such that, for any $u\in\mathrm H^1(\mathbb R^3)$, we have
\be{App:Interpolation1}
\nrm u{2\alpha+2}^{2\alpha+2}\le\mathrm K_\alpha\,\nrm u2^{2-4\alpha}\,\D u^\alpha\,\nrm{\nabla u}2^{2\alpha}
\end{equation}
and for any $\alpha\in[\frac 12,\frac23]$, there is a positive constant $\mathrm C_\alpha$ such that, for any $u\in\mathrm H^1(\mathbb R^3)$, we have
\be{App:Interpolation2}
\nrm u{2\alpha+2}^{2\alpha+2}\le\mathrm C_\alpha\,\nrm u2^{8\alpha-4}\,\D u^{2-3\alpha}\,\nrm{\nabla u}2^{6\alpha-2}\,.
\end{equation}
\end{lemma}
The case $\alpha=\frac12$ has been established by P.-L.~Lions~\cite{Lions2} in Formula (55) page~54 and is common to the two inequalities, with $\mathrm K_{1/2}=\mathrm C_{1/2}$. The case $\alpha=\frac23$ is a special case of \eqref{GNinequality}, with $\mathrm C_{2/3}=\C_{\mathrm{GN}}(2/3)$. For completeness, let us give a proof.
\begin{proof} We recall that $\D u=4\pi\ir{u^2\,(-\Delta)^{-1}\,u^2}$. By expanding the square and integrating by parts, we get that
\begin{multline*}
0\le\ir{|\nabla u-a\,\nabla(-\Delta)^{-1}\,u^2|^2}\\
=\ir{|\nabla u|^2}+a^2\ir{u^2\,(-\Delta)^{-1}\,u^2}-2a\ir{u^3}\,,
\end{multline*}
that is, for an arbitrary positive parameter $a$,
\[
\ir{u^3}\le\frac1{2a}\ir{|\nabla u|^2}+\frac a2\ir{u^2\,(-\Delta)^{-1}\,u^2}\,.
\]
After optimizing on $a$, we obtain that
\be{estimlions}
\nrm u3^6\le\frac1{4\pi}\,\nrm{\nabla u}2^2\,\D u\,.
\end{equation}
This proves \eqref{App:Interpolation1} and \eqref{App:Interpolation2} when $\alpha=\frac12$. The range $\alpha\in[0,\frac 12]$ is then covered by H\"older's inequality $\nrm u{2\alpha+2}^{2\alpha+2}\le\nrm u2^{2-4\alpha}\,\nrm u3^{6\alpha}$.
For $\alpha=\frac23$, \eqref{App:Interpolation2} coincides with \eqref{GNinequality}, namely
\[
\nrm u{10/3}^{10/3}\le\C_{\mathrm{GN}}(\tfrac23)\,\nrm{\nabla u}2^2\,\nrm u2^{4/3}\,.
\]
Hence the case $\alpha\in[\frac 12,\frac23]$ is covered by H\"older's inequality
\[
\nrm u{2\alpha+2}^{\alpha+1}\le\nrm u3^{3(2-3\alpha)}\,\nrm u{10/3}^{5(2\alpha-1)}\,.
\]
\end{proof}
Notice that from~\eqref{estimlions} we know that
\[
\mathrm C_{1/2}\le\frac1{2\,\sqrt\pi}\,.
\]
\begin{lemma}\label{boundedness} The energy functional $\mathrm E$ is bounded from below in $\Sigma_M$, if either $\alpha\in[0,\frac23)$ or $\alpha=\frac23$ and $C\,\C_{\mathrm{GN}}(\frac 23)\,M^{2/3}\le\frac53$. If either $\alpha\in[0,\frac23)$ or $\alpha=\frac23$ and $C\,\C_{\mathrm{GN}}(\frac 23)\,M^{2/3}<\frac53$, any minimizing sequence for $I_M$ is uniformly bounded in $\mathrm H^1(\mathbb R^3)$.\end{lemma}
\begin{proof} As a direct consequence of \eqref{GNinequality}, for every $\varphi\in \Sigma_M$ we have the estimate
\[
\mathrm E[\varphi]\ge\frac 12\,\nrm{\nabla\varphi}2^2-\frac{C\,\C_{\mathrm{GN}}(\alpha)}{2\alpha+2}\,M^{\frac{2-\alpha}2}\,\nrm{\nabla\varphi}2^{3\alpha}\,.
\]
For $\alpha\in[0,\frac23)$ we have $3\alpha<2$, so the right-hand side is bounded from below and coercive in $\nrm{\nabla\varphi}2$, which yields both claims. For $\alpha=\frac23$ the right-hand side equals $\nrm{\nabla\varphi}2^2\big(\frac12-\frac3{10}\,C\,\C_{\mathrm{GN}}(\frac23)\,M^{2/3}\big)$, which is nonnegative under the stated condition and coercive when the inequality is strict.
\end{proof}
One of the main ingredients in our analysis is the scaling properties of the terms involved in the functional $\mathrm E$.
\begin{lemma}\label{scaling} Let $\varphi \in \mathrm H^1(\mathbb R^3)$. Assume that $\lambda>0$, let $p$ and $q$ be real numbers and define $\varphi_\lambda^{p,q}(x):=\lambda^p\,\varphi(\lambda^q\,x)$. Then we have
\begin{eqnarray*}
&&\ir{|\varphi_\lambda^{p,q} (x)|^2}=\lambda^{2p-3q}\,\ir{|\varphi(x)|^2}\,,\\
&&\mathrm E[ \varphi_\lambda^{p,q}]=\tfrac12\,\lambda^{2p-q}\ir{|\nabla\varphi|^2}+\tfrac14\,\lambda^{4p-5q}\,\D\varphi-\tfrac{\lambda^{(2\alpha+2)p-3q}}{2\alpha+2}\,C\ir{|\varphi|^{2\alpha+2}}\,.
\end{eqnarray*}
In the particular case $\varphi_\lambda(x):=\lambda^{\frac 32}\,\varphi(\lambda\,x)$, the mass is preserved,
\begin{multline*}
\ir{|\nabla\varphi_\lambda|^2}=\lambda^2\ir{|\nabla\varphi|^2}\,,\quad\D{\varphi_\lambda}=\lambda\,\D\varphi\,,\\
\mbox{and}\quad\ir{|\varphi_\lambda|^{2\alpha+2}}=\lambda^{3\alpha}\ir{|\varphi|^{2\alpha+2}}\,.
\end{multline*}
As a consequence, we have that $M\mapsto I_M$ is non-increasing and
\[
I_M\le 0\quad\forall\,M\ge 0\,,
\]
with $I_M=-\infty$ when $\alpha>\frac 23$, for every $M>0$.
\end{lemma}
\begin{proof} The reader is invited to check the changes of variables. Let $\varphi$ be any function in $\Sigma_M$. Then, we have
\[
I_M\le\mathrm E[\varphi_\lambda]=\frac{\lambda^2}2\int_{\mathbb R^3}|\nabla\varphi|^2\,dx+\frac{\lambda}4\,\D\varphi-\frac{\lambda^{3\alpha}\,C}{2\alpha+2}\int_{\mathbb R^3}|\varphi|^{2\alpha+2}\,dx
\]
for all $\lambda>0$, and one concludes by letting the scaling parameter $\lambda$ go to zero that $I_M\le 0$. As a consequence of \eqref{ineqlarge}, the function $M\mapsto I_M$ is non-increasing. The last claim follows by assuming that $\alpha>\frac 23$ and by letting $\lambda$ go to infinity.\end{proof}
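The exponent bookkeeping in Lemma~\ref{scaling} can also be verified symbolically. The following sketch (ours) records, for each term, the power of $\lambda$ produced by the change of variables $y=\lambda^q x$ in $\mathbb R^3$:
\begin{verbatim}
import sympy as sp

p, q = sp.symbols('p q', real=True)
alpha = sp.symbols('alpha', positive=True)

# phi_lambda(x) = lambda^p phi(lambda^q x); dx = lambda^{-3q} dy in R^3,
# each derivative contributes lambda^q, and |x - x'|^{-1} contributes lambda^q.
mass     = 2 * p - 3 * q
gradient = 2 * p + 2 * q - 3 * q
poisson  = 4 * p - 6 * q + q
power    = (2 * alpha + 2) * p - 3 * q

subs = {p: sp.Rational(3, 2), q: 1}          # the mass-preserving case
print([e.subs(subs) for e in (mass, gradient, poisson, power)])
# -> [0, 2, 1, 3*alpha], matching Lemma "scaling"
\end{verbatim}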
\begin{remark}\label{Rem:Vanishing} If $I_M=0$ for some $M>0$, we may build a minimizing sequence that converges to zero weakly in $\mathrm H^1(\mathbb R^3)$ by using the scaling properties. In fact, Lemma~I.1 in~\cite{bi:PLL-CC2} can be applied to any minimizing sequence in order to prove that vanishing cannot hold in the opposite case, $I_M<0$. Therefore, the condition $I_M<0$ is necessary to ensure the relative compactness up to translations of any minimizing sequence. This is the motivation for characterizing the situations in which~$\mathrm E$ reaches negative values.\end{remark}
\begin{lemma}\label{lemaequiv} Let $M>0$ and $\alpha\in[\frac13,\frac23]$. Then $\mathrm E$ takes negative values in $\Sigma_M$ if and only if the functional
\[
\varphi\mapsto\left(\frac1{3\alpha-1}\int_{\mathbb R^3}|\nabla\varphi|^2\,dx\right)^{3\alpha-1}\left(\frac{\D\varphi}{2\,(2-3\alpha)}\right)^{2-3\alpha}-\frac C{\alpha+1}\int_{\mathbb R^3}|\varphi|^{2\alpha+2}\,dx
\]
also takes negative values in $\Sigma_M$. Moreover, if $\alpha\in(\frac13,\frac23)$, then
\be{LowerBoundEnergy}
\mathrm E[\varphi]\ge\frac 14\,\lambda[\varphi]\,\D\varphi\left[1-\left(\frac{C\,M^{4\alpha-2}}{V_c(\alpha)}\right)^\frac1{3-2\alpha}\right]\quad\forall\,\varphi\in\Sigma_M
\end{equation}
with $\lambda[\varphi]:=\left(\frac{3\alpha-1}{\alpha+1}\,C\,\frac{\ir{|\nabla\varphi|^2}}{\ir{\varphi^{2\alpha+2}}}\right)^\frac1{2-3\alpha}$ and $V_c(\alpha)$ given by \eqref{Vc}. \end{lemma}
Here we adopt the convention that $x^x=1$ whenever $x=0$, in order to include the endpoints of the interval.
\begin{proof} Let $\varphi \in \Sigma_M$. Consider the family $\{\varphi_\lambda\}_{\lambda>0}$ associated with $\varphi$, such that $\nrm{\varphi_\lambda}2^2=M$ for any $\lambda>0$, as in Lemma~\ref{scaling}. We are interested in the sign of
\[
\frac 1\lambda\,\mathrm E[\varphi_\lambda]=\frac{\lambda}2\int_{\mathbb R^3}|\nabla\varphi|^2\,dx+\frac14\,\D\varphi-\lambda^{3\alpha-1}\frac C{2\alpha+2}\int_{\mathbb R^3}|\varphi|^{2\alpha+2}\,dx\,.
\]
In the case $\alpha=\frac13$, both potential terms in the r.h.s.~are scale invariant and we obviously have that $\mathrm E$ reaches negative values if and only if
\[
\frac14\,\D\varphi-\frac{3\,C}8 \int_{\mathbb R^3}|\varphi|^{\frac83}\,dx <0\,.
\]
If $\alpha\in (\frac13,\frac23)$, the minimum of the r.h.s. with respect to $\lambda$ is achieved by $\lambda=\lambda[\varphi]$ and it is negative when
\[
-(2-3\alpha)\left(\frac C{2\alpha+2}\int_{\mathbb R^3}|\varphi|^{2\alpha+2}\,dx\right)^{\frac1{2-3\alpha}}{\left(\frac1{2(3\alpha-1)}\int_{\mathbb R^3}|\nabla\varphi|^2\,dx\right)^{\frac{1-3\alpha}{2-3\alpha}}}+\frac14\,\D\varphi<0\,.
\]
Inequality~\eqref{LowerBoundEnergy} is then a consequence of the definition of $V_c(\alpha)$. Finally, for $\alpha=\frac23$ we have that
\be{ener23}
\mathrm E[\varphi_\lambda]=\lambda^2\left(\frac12 \int_{\mathbb R^3}|\nabla\varphi|^2\,dx-\frac{3\,C}{10} \int_{\mathbb R^3}|\varphi|^{\frac{10}3}\,dx \right)+\lambda\,\frac14\,\D\varphi
\end{equation}
takes negative values if and only if the leading order coefficient w.r.t.~$\lambda$,
\[
\frac12 \int_{\mathbb R^3}|\nabla\varphi|^2\,dx-\frac{3\,C}{10} \int_{\mathbb R^3}|\varphi|^{\frac{10}3}\,dx\,,
\]
is negative. We conclude the proof by observing that the three different conditions obtained above correspond to the precise statement of the lemma. \end{proof}
\begin{remark} In the case $\alpha=\frac23$, the functional \eqref{ener23} is not bounded from below in~$\Sigma_M$ when the leading order coefficient w.r.t.~$\lambda$ takes negative values. This remark shows the optimality of the condition on the mass stated in Lemma \ref{boundedness} for $\alpha=\frac23$. \end{remark}
In the range $\frac12<\alpha<\frac23$, we will need an additional estimate to handle the critical case corresponding to $C\,M^{4\alpha-2}=V_c(\alpha)$, that goes as follows.
\begin{corollary}\label{cor:EstimCrit} Let $\alpha\in\left(\frac12,\frac23\right)$. Then, for any $\varphi \in \Sigma_M$,
\[
\nrm\varphi{2\alpha+2}^{2\alpha+2}\le\mathrm C_{1/2}^{2-2\alpha}\,\C_{\mathrm{GN}}(1)^{2\alpha-1}\,M^{\alpha-\frac 12}\,\nrm{\nabla\varphi}2^{4\alpha-1}\,\D\varphi^{1-\alpha}\,.
\]
\end{corollary}
\begin{proof} Let $\varphi \in \Sigma_M$. If $\alpha\in\left(\frac12,\frac23\right)$, then we have that $3<2\alpha+2<\frac{10}3<4$. Using H\"older's inequality we get
\[
\|\varphi \|_{\L^{2\alpha+2}(\mathbb R^3)}^{2\alpha+2}\le\|\varphi \|_{\L^3(\mathbb R^3)}^{3(2-2\alpha)}\,\|\varphi \|_{\L^4(\mathbb R^3)}^{4(2\alpha-1)}\,.
\]
{}From~\eqref{App:Interpolation2} written for $\alpha=\frac 12$, we know that
\[
\nrm\varphi3^3\le\mathrm C_{1/2}\,\D\varphi^\frac12\,\nrm{\nabla\varphi}2\,.
\]
On the other hand, \eqref{GNinequality} with $\alpha=1$ gives
\[
\nrm\varphi4^4\le\C_{\mathrm{GN}}(1)\,\nrm{\nabla\varphi}2^3\,M^\frac12\,.
\]
Altogether, these estimates provide the result.\end{proof}
We split the analysis of the strict negativity of $I_M$ into two results, from which we will conclude that this property depends on $\alpha$ and in some cases also on the mass. Let us start with $\alpha<\frac 12$.
\begin{proposition}\label{tramo1} Let $M>0$. If $\alpha\in[0,\frac 12)$, then the functional $\mathrm E$ always reaches negative values in $\Sigma_M$. As a consequence, $I_M <0$ for all $M>0$ if $\alpha\in[0,\frac 12)$. \end{proposition}
\begin{proof} For $\alpha\in[0,\frac 13)$ the result is a trivial consequence of the mass-preserving scaling in Lemma \ref{scaling}, since we have that
\[
\lambda^{-3\alpha}\,\mathrm E[\varphi_\lambda]=\frac12\,{\lambda^{2-3\alpha}}\ir{|\nabla\varphi|^2}+\frac14\,{\lambda^{1-3\alpha}}\,\D\varphi-\frac C{2\alpha+2} \int_{\mathbb R^3} |\varphi|^{2\alpha+2}\,dx
\]
is negative for any non-trivial $\varphi\in\mathrm H^1(\mathbb R^3)$ if $\lambda>0$ is chosen small enough.
To complete the proof for $\alpha\in[\frac13,\frac12)$, it remains to find a particular test function $\varphi\in\Sigma_M$ with negative energy for any $M>0$. We follow a classical approach in the literature on the concentration--compactness method, see for instance~\cite{Lions1992}. Consider $M>0$ and $\eta \in \Sigma_M$ such that ${\hbox{\rm{supp}}}(\eta) \subset B(0,1)$, where $B(0,1)$ denotes the unit ball centered at $0$. For any positive integer $n$, define $\eta_n(x):=\eta(n^\frac13 x)$. Then the support of $\eta_n$ is contained in $B(0,1)$ and by direct calculations we have
\begin{eqnarray*}
&\|\eta_n\|_{\L^2(\Real^3)}^2=\frac1n\,\|\eta\|_{\L^2(\Real^3)}^2\,,\quad\D{\eta_n}=\frac1{n^{5/3}}\,\D\eta\,,&\\
&\|\eta_n\|_{\L^{2\alpha+2}(\mathbb R^3)}^{2\alpha+2}=\frac1n\,\|\eta\|_{\L^{2\alpha+2}(\mathbb R^3)}^{2\alpha+2}\,,\quad\int_{\mathbb R^3}|\nabla\eta_n|^2\,dx=\frac1{n^{1/3}}\int_{\mathbb R^3}|\nabla\eta|^2\,dx\,.&
\end{eqnarray*}
Let $n$ be a given integer bigger than $1$ and let us consider the test function $\varphi(x):=\sum_{i=1}^n \eta_n(x-x_i)$, where the points $x_i\in\mathbb R^3$, $i=1,\dots,n$, are chosen such that
\[
|x_i-x_j|\ge\frac{{M^2}}{\D\eta}\,n^{2/3}+2\quad\forall\,i\neq j\,.
\]
By definition $\varphi$ verifies $\|\varphi\|_{\L^2(\Real^3)}^2=\|\eta\|_{\L^2(\Real^3)}^2=M$, $\|\varphi\|_{\L^{2\alpha+2}(\mathbb R^3)}^{2\alpha+2}=\| \eta\|_{\L^{2\alpha+2}(\mathbb R^3)}^{2\alpha+2}$ and $\int_{\mathbb R^3}|\nabla\varphi|^2\,dx=n^{2/3}\int_{\mathbb R^3}|\nabla\eta|^2\,dx$. Now, we estimate $\D{\varphi}$ as follows:
\begin{eqnarray*}
\D{\varphi} &=&\sum_{i,\,j=1}^n\iint_{\mathbb R^3\times\mathbb R^3}{|\eta_n(x-x_i)|^2\,|\eta_n(x'-x_j)|^2}\frac{dx\,dx'}{|x-x'|}\\
&=&n\,\D{\eta_n}+\sum_{j \neq i} \iint_{\mathbb R^3\times\mathbb R^3}\frac{|\eta_n(x)|^2\,|\eta_n(x')|^2}{|x+x_i-x'-x_j|}\,dx\,dx'\\
&\le&\frac{\D\eta}{n^{2/3}}+\sum_{j\neq i}\iint_{\mathbb R^3\times\mathbb R^3}\frac{|\eta_n(x)|^2\,|\eta_n(x')|^2}{|x_i-x_j|-2}\,dx\,dx'\\
&\le &\frac{\D\eta}{n^{2/3}}+\frac{\D\eta}{{M^2}\,n^{2/3}}\,\frac{{M^2\,n(n-1)}}{2\,n^2}=\frac{2\,\D\eta}{n^{2/3}}\,.
\end{eqnarray*}
Combining these estimates and Lemma \ref{lemaequiv} with the fact that $(3\alpha-1)-(2-3\alpha)<0$ if $\alpha<\frac12$, we are done with the proof. \end{proof}
If $\alpha\in [\frac12,\frac23]$, the functional $\mathrm E$ might not reach negative values, depending on the value of the mass $M$ and the constant~$C$, as stated in the following result.
\begin{proposition}\label{nonnegative} In the case $\alpha\in[\frac12,\frac23]$, $I_M=0$ if and only if
\be{condposi}
C\,M^{{4\alpha-2}}\le V_c(\alpha)
\end{equation}
holds, where the constant $V_c(\alpha)$ is given in \eqref{Vc}. On the contrary, if \eqref{condposi} does not hold, then $I_M$ is negative.\end{proposition}
We recall that $V_c(\alpha)=\frac{\alpha+1}{\mathrm C_\alpha}\left(\frac1{3\alpha-1}\right)^{3\alpha-1}\left(\frac1{2\,(2-3\alpha)} \right)^{{2-3\alpha}}$ where $\mathrm C_\alpha$ is the optimal constant in \eqref{App:Interpolation2}.
\begin{proof} According to Lemma \ref{lemaequiv}, $I_M=0$ for $\alpha\in[\frac12,\frac23]$ if and only if
\[
\nrm\varphi{2\alpha+2}^{2\alpha+2}\le\frac{\alpha+1}C\left(\frac{\nrm{\nabla\varphi}2^2}{3\alpha-1}\right)^{3\alpha-1}\left(\frac{\D\varphi}{2\,(2-3\alpha)}\right)^{2-3\alpha}
\]
for all $\varphi \in \Sigma_M$. Comparing with the definition of $\mathrm C_\alpha$ in \eqref{App:Interpolation2}, this clearly entails that $I_M=0$ if and only if \eqref{condposi} holds. According to Lemma~\ref{scaling}, $I_M$ is negative (possibly $-\infty$) otherwise.\end{proof}
Although our problem is originally set in the framework of complex valued functions, we finally observe that we can reduce it to non-negative real valued functions.
\begin{lemma} Consider a complex valued minimizer $\psi$ to the problem \eqref{minienergy}. Then, the real function $|\psi|$ is also a minimizer for \eqref{minienergy}.\end{lemma}
\begin{proof} It is well known that if $\psi \in \Sigma_M$, then $|\psi|$ also belongs to $\Sigma_M$. Since the potential energy only depends on $|\psi|^2$, it takes the same value on $\psi$ and $|\psi|$. On the other hand, the kinetic energy verifies
\[
\int_{\mathbb R^3} \big|\nabla|\psi|\,\big|^2\,dx\le\int_{\mathbb R^3}\Big(\big|\nabla\,{\hbox{\rm{Re}}}\,\psi\big|^2+\big|\nabla\,{\hbox{\rm{Im}}}\,\psi\big|^2\Big)\,dx=\int_{\mathbb R^3}|\nabla\psi|^2\,dx
\]
as a consequence of the convexity inequality for gradients~\cite{LiebLoss}, where equality holds if and only if $|{\hbox{\rm{Re}}}\,\psi(x)|=c\,|{\hbox{\rm{Im}}}\,\psi(x)|$ for some constant $c$. Hence, $|\psi|$ is also a minimizer. \end{proof}
If $I_M$ is achieved, we can then prove the \emph{Virial Theorem} relation for the terms of the energy functional by using their scaling properties.
\begin{proposition}\label{Prop:Phozaev} Assume that $0<\alpha<\frac23$. Any minimizer $\varphi_M$ of $I_M$ satisfies
\be{eqnvirial}
\int_{\mathbb R^3}|\nabla\varphi_M|^2\,dx+\frac14\,\D{\varphi_M}-\frac{3\,\alpha\,C }{2\alpha+2} \int_{\mathbb R^3}|\varphi_M|^{2\alpha+2}\,dx=0\,.
\end{equation}
\end{proposition}
\begin{proof} Let us assume that there exists a minimizer $\varphi_M\in\Sigma_M$ of $I_M$. According to Lemma~\ref{scaling}, for every $\lambda>0$ the rescaled function $\varphi_{M,\lambda}=\lambda^{3/2}\,\varphi_M(\lambda\,\cdot)$ also lies in~$\Sigma_M$. The function $\lambda\mapsto \mathrm E[\varphi_{M,\lambda}]$ attains its minimal value at $\lambda=1$. Since
\[
\mathrm E[\varphi_{M,\lambda}]=\frac12\,\lambda^2\int_{\mathbb R^3}|\nabla\varphi_M|^2\,dx+\lambda\,\frac14\,\D{\varphi_M}-\lambda^{3\alpha}\,\frac C{2\alpha+2}\int_{\mathbb R^3}|\varphi_M|^{2\alpha+2}\,dx\,,
\]
the cancellation of the derivative with respect to $\lambda$ at $\lambda=1$ provides with \eqref{eqnvirial}. \end{proof}
At this stage, we can write down the Euler-Lagrange equation corresponding to the minimization problem $I_M$ and deduce an energy identity.
\begin{lemma}\label{Lem:3.1} Assume that $0<\alpha<\frac23$. Any minimizer $\varphi_M$ of $I_M$ satisfies~\eqref{eq:sw} and
\be{intEL}
\int_{\mathbb R^3}|\nabla\varphi_M|^2\,dx+\D{\varphi_M}-C\int_{\mathbb R^3}|\varphi_M|^{2\alpha+2}\,dx+\ell_M\,M=0\,.
\end{equation}
In particular, at least for $\alpha \in (0,\frac15] \cup (\frac12,\frac23)$, we have $\ell_M>0$. If $\alpha=\frac12$, then $\ell_M=-\frac6M\,I_M\geq 0$.
\end{lemma}
\begin{proof} Identity~\eqref{intEL} is obtained by multiplying the Euler-Lagrange equation~\eqref{eq:sw} by $\varphi_M$ and integrating by parts. If we eliminate $\D{\varphi_M}$ and $\nrm{\varphi_M}{2\alpha+2}$ from~\eqref{eqnvirial}, \eqref{intEL} and use
\[
E[\varphi_M]=\frac 12\int_{\mathbb R^3}|\nabla\varphi_M|^2\,dx+\frac 14\D{\varphi_M}-\frac C{2\alpha+2}\int_{\mathbb R^3}|\varphi_M|^{2\alpha+2}\,dx=-|I_M|\,,
\]
we complete the proof using
\[
\ell_M=\frac 2M\left(\frac{2\alpha-1}{3\alpha-1}\int_{\mathbb R^3}|\nabla\varphi_M|^2\,dx+\frac{5\alpha-1}{3\alpha-1}\,|I_M|\right)\,.
\]
\end{proof}
\begin{corollary}\label{Cor:CriticalMass} Assume that $\alpha\in(0,\frac 12)\cup(\frac 12,\frac23)$. Any minimizer $\varphi_M$ of $I_M$ is such that
\begin{eqnarray*}
&&\ir{|\nabla\varphi_M|^2}=\frac 12\,(3\alpha-1)\,\varepsilon_M-(5\alpha-1)\,\eta_M\\
&&\D{\varphi_M}=(2-3\alpha)\,\varepsilon_M-2\,(2-\alpha)\,\eta_M\\
&&\frac C{2\alpha+2}\ir{|\varphi_M|^{2\alpha+2}}=\frac 14\,\varepsilon_M-\frac 32\,\eta_M
\end{eqnarray*}
where
\[
\varepsilon_M:= \frac{M\,\ell_M}{2\alpha-1}\quad\mbox{and}\quad\eta_M:=\frac{I_M}{1-2\alpha}\,.
\]
\end{corollary}
\begin{proof} The proof is a straightforward consequence of $\mathrm E[\varphi_M]=I_M$, \eqref{eqnvirial} and~\eqref{intEL}.\end{proof}
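Since this kind of algebra is error-prone, a symbolic verification may be helpful. In the following sketch (ours), we write $K=\ir{|\nabla\varphi_M|^2}$, $D=\D{\varphi_M}$ and $Q=C\ir{|\varphi_M|^{2\alpha+2}}$, and solve the linear system formed by $\mathrm E[\varphi_M]=I_M$, \eqref{eqnvirial} and \eqref{intEL}:
\begin{verbatim}
import sympy as sp

a, I, ell, M = sp.symbols('alpha I_M ell_M M', real=True)
K, D, Q = sp.symbols('K D Q')

sol = sp.solve([
    sp.Eq(K / 2 + D / 4 - Q / (2 * a + 2), I),     # E[phi_M] = I_M
    sp.Eq(K + D / 4 - 3 * a * Q / (2 * a + 2), 0), # virial identity
    sp.Eq(K + D - Q + ell * M, 0),                 # Euler-Lagrange identity
], [K, D, Q])

eps = M * ell / (2 * a - 1)                        # epsilon_M
eta = I / (1 - 2 * a)                              # eta_M
assert sp.simplify(sol[K] - ((3*a - 1)*eps/2 - (5*a - 1)*eta)) == 0
assert sp.simplify(sol[D] - ((2 - 3*a)*eps - 2*(2 - a)*eta)) == 0
assert sp.simplify(sol[Q] - (2*a + 2)*(eps/4 - 3*eta/2)) == 0
\end{verbatim}
The last assertion is the identity for $\frac C{2\alpha+2}\ir{|\varphi_M|^{2\alpha+2}}$ stated in the corollary.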
Lemma~\ref{Lem:3.1} has interesting consequences concerning the decay of the minimizers, that can be derived from Lemma 19 and Theorem 6 in~\cite{BoMe2011}, as shown in the following result. Also see Theorem~1.3 in~\cite{BeJeLu} and Theorem~6.1 in~\cite{doi:10.1142/S0219199712500034} for related results.
\begin{lemma} Consider a nonnegative solution to \eqref{eq:sw} such that
\[
\frac12\int_{\mathbb R^3}|\nabla\varphi_M|^2\,dx+\,\frac14\,\D{\varphi_M}+\ell_M\,\ir{\varphi_M^2}<\infty\,,
\mbox{ with }\ell_M\ge 0\,.
\]
Then, there exist positive constants $K$ and $\delta$ such that
\[
\varphi_M(x)\le K\,e^{-\delta\,\sqrt{1+|x|}}\quad\forall\,x\in\mathbb R^3\,.
\]
\end{lemma}
In the case $\ell_M=0$, this result ensures that the above solution belongs to $\mathrm H^1(\mathbb R^3)$, since the exponential decay also guarantees that the minimizer is in $\L^2(\mathbb R^3)$.
\medskip\noindent {\bf The rescaled problem.} Given that our main tool in proving the existence of minimizers will consist in checking the strict inequalities \eqref{ineqstrict}, we are going to study the infimum value $I_M$ as a function of the mass $M$. For this purpose, we fix a function $\varphi_1\in \Sigma_1$ and apply the scaling properties in Lemma~\ref{scaling} with $2p-3q=1$ and $\lambda=M$. We denote by $\varphi_{M,p}$ the corresponding rescaled function. Then, according to Lemma \ref{scaling} we have that $\varphi_{M,p}\in \Sigma_M$ and
\be{eq:enerscal2}
\mathrm E[\varphi_{M,p}]=\tfrac12\,M^{\frac{4p+1}3}\,\nrm{\nabla\varphi_1}2^2+\tfrac14\,M^\frac{2p+5}3\,\D{\varphi_1}-\tfrac C{2\alpha+2}\,M^{2\alpha p+1}\,\nrm{\varphi_1}{2\alpha+2}^{2\alpha+2}\,,
\end{equation}
for any real number $p$.
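For later use (a small computation left implicit above), note that in Subsection~\ref{ssec:below} the relevant choice equates the exponents of the gradient and power terms, $\frac{4p+1}3=2\alpha p+1$, that is $p=\frac1{2-3\alpha}$; both terms then scale as $M^{\frac{2-\alpha}{2-3\alpha}}$, while the Poisson term carries the extra factor $M^{\frac{2(1-2\alpha)}{2-3\alpha}}$.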
\section{Existence and non-existence of steady states}\label{Sec:steady}
In this section we analyze the existence of minimizers for the variational problem~\eqref{minienergy}.
\subsection{Non-existence results when \texorpdfstring{$\alpha=0$ or $\alpha=2/3$}{alpha=0 or alpha=2/3}}\label{ssec:nonexistence}
In the case $\alpha=0$, the minimization problem reduces to
\begin{eqnarray*}
I_M&=&\inf\left\{\frac12 \int_{\mathbb R^3}|\nabla\varphi|^2\,dx
+\frac14\,\D\varphi-\frac C2\int_{\mathbb R^3}|\varphi|^2\,dx\,:\,\varphi\in\Sigma_M\right\}\\
&=& \inf\left\{\frac12 \int_{\mathbb R^3}|\nabla\varphi|^2\,dx
+\frac14\,\D\varphi\,:\,\varphi\in\Sigma_M\right\}-\frac C2\,M=-\frac C2\,M\,,
\end{eqnarray*}
by a scaling argument. Therefore, $I_M$ is never achieved when $M>0$ (even though it is always negative): any possible minimizer would make the gradient term vanish, and hence would vanish identically on $\mathbb R^3$.
In the case $\alpha=\frac23$, either $I_M=0$ or $I_M=-\infty$, and in both cases there are no minimizers. Actually, $I_M=0$ if and only if
\[
\frac12 \int_{\mathbb R^3}|\nabla\varphi|^2\,dx-\frac{3\,C}{10}\int_{\mathbb R^3}|\varphi|^{\frac{10}3}\,dx\ge0\,.
\]
See Lemma \ref{lemaequiv} and its proof for details. Hence the infimum cannot be attained: any minimizer would have to satisfy $\D{\varphi}=0$, which is absurd.
\vskip6pt
{}From now on we shall assume that $0<\alpha<\frac23$. We first examine the range $0<\alpha<\frac 12$ in Subsection~\ref{ssec:below}. Subsection~\ref{ssec:half} is devoted to the special limiting case $\alpha=\frac12$. Finally, the range $\frac12<\alpha<\frac 23$ is analyzed in Subsection~\ref{ssec:above}.
\subsection{The interval \texorpdfstring{$0<\alpha<\frac 12$}{0<alpha<1/2}}\label{ssec:below}
We prove the following~:
\begin{proposition}\label{prop:below-half} Let $0<\alpha<\frac 12$. Then, for $M>0$ small enough, the strict inequalities
\[
I_M <I_{M'}+I_{M-M'}
\]
hold for every $M'$ such that $0<M'<M$. In particular, all minimizing sequences are compact in $\mathrm H^1(\mathbb R^3)$ up to translations and the extraction of a subsequence. Therefore, $I_M$ is attained for $M$ small enough.
\end{proposition}
\begin{proof} In Proposition~\ref{tramo1} we have proved that
$I_M<0$ for every $M>0$. In the range $\alpha\in(0,\frac12)$, we may choose the parameter $p$ in the rescaled problem \eqref{eq:enerscal2} such that
\begin{equation*}
0\le\,\frac{4p+1}3= 2\alpha p+1\,<\frac{2p+5}3\,,
\end{equation*}
\emph{i.e.,} such that the gradient and the power term are of the same order for small $M$ and dominate the Poisson energy in this regime. With this choice we can deduce
\be{eq:defJ}
I_M=M^{\frac{2-\alpha}{2-3\alpha}}\,J_1^M
\end{equation}
where, for every $\mu>0$,
\[
J_\mu^M=\inf\left\{\frac12\,\nrm{\nabla\varphi}2^2-\frac C{2\alpha+2}\,\nrm\varphi{2\alpha+2}^{2\alpha+2}+M^{\frac{2(1-2\alpha)}{2-3\alpha}}\,\frac14\,\D\varphi\,:\,\varphi\in\Sigma_\mu\right\}\,.
\]
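For the reader's convenience, let us record the elementary computation behind this choice of $p$~: the condition $\frac{4p+1}3=2\alpha p+1$ forces
\[
p=\frac1{2-3\alpha}\,,\qquad 2\alpha p+1=\frac{2-\alpha}{2-3\alpha}\,,\qquad \frac{2p+5}3-(2\alpha p+1)=\frac{2\,(1-2\alpha)}{2-3\alpha}\,,
\]
which accounts for the exponent in \eqref{eq:defJ} and for the power of $M$ multiplying the Poisson term in the definition of $J_\mu^M$.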
Note that the same scaling argument shows that
\be{eq:scalJ}
J_\mu^M=\mu^{\frac{2-\alpha}{2-3\alpha}}\,J_1^{\mu M}\,.
\end{equation}
With $\mu=\frac{M'}M$ and using \eqref{eq:defJ} and \eqref{eq:scalJ}, it is easily proved that the strict inequalities of Proposition~\ref{prop:below-half} are equivalent to
\be{ineqstrictJ}
J_1^M<J_\mu^M+J_{1-\mu}^M\quad\forall\,\mu\in(0,1)\,.
\end{equation}
We are going to prove that the above strict inequalities hold for $M$ small enough.
Observe now that $M^{\frac{2(1-2\alpha)}{2-3\alpha}}$ goes to zero as $M$ does, and $\lim_{M\rightarrow 0}J_1^M=J_1^0$. The key point is that for every $\lambda>0$, $J_\lambda^0$ satisfies the strict inequalities of the concentration--compactness principle, namely
\be{CCJ}
J_\lambda^0<J_\mu^0+J_{\lambda-\mu}^0\,,\quad\forall\,\mu\in(0,\lambda)\,.
\end{equation}
This is an immediate consequence of the fact that $J_\lambda^0=\lambda^{\frac{2-\alpha}{2-3\alpha}}\,J_1^0$, with
$J_1^0<0$ and $\frac{2-\alpha}{2-3\alpha}>1$. The sign of $J_1^0$ is deduced from the scaling argument in Lemma~\ref{scaling}, by observing that the negative term dominates the gradient contributions for $3\alpha<2$. We now
prove that $J^M_1$ satisfies the strict inequalities \eqref{ineqstrictJ} for $M$ small enough. We argue by contradiction
assuming that this is not the case. Then, there exist a sequence $\{M_n\}_{n\ge 1}$ going to $0$ and a sequence $\{\lambda_n\}_{n\ge 1}$ in $(0,1)$ such that
\be{pasCC}
J_1^{M_n}=J_{\lambda_n}^{M_n}+J_{1-\lambda_n}^{M_n}\,.
\end{equation}
Assume that $\frac 12\le\lambda_n<1$ (if $\lambda_n\in(0,\frac 12)$, we may exchange the roles of $\lambda_n$ and $1-\lambda_n$). By continuity with respect to $M$ we conclude that $\lambda_n \to 1$, otherwise we get a contradiction with~\eqref{CCJ}. In addition, we may choose as $\lambda_n$ the infimum of the set $\{\lambda\in[\frac 12,1)\,:\,J_1^{M_n}=J_{\lambda}^{M_n}+J_{1-\lambda}^{M_n}\}$.
We now claim that, for $n$ large enough, $J_{\lambda_n}^{M_n}$ satisfies the strict inequalities of the concentration--compactness principle
\be{CCJn}
J_{\lambda_n}^{M_n}<J_{\mu}^{M_n}+J_{\lambda_n-\mu}^{M_n}\quad\forall\,\mu\in(0,\lambda_n)\,.
\end{equation}
If not, there exists a sequence $\{\mu_n\}_{n\ge1}$ with $\mu_n\in(\frac12\,\lambda_n,\lambda_n)$ such that
\be{contra}
J_{\lambda_n}^{M_n}=J_{\mu_n}^{M_n}+J_{\lambda_n-\mu_n}^{M_n}\,.
\end{equation}
Then, from \eqref{pasCC} and \eqref{contra} we find
\begin{eqnarray*}
J_{\mu_n}^{M_n}+J_{1-\mu_n}^{M_n}&\ge &J_1^{M_n}=J_{\mu_n}^{M_n}+J_{\lambda_n-\mu_n}^{M_n}+J_{1-\lambda_n}^{M_n}\\
&\ge&J_{\mu_n}^{M_n}+J_{1-\mu_n}^{M_n}\,,
\end{eqnarray*}
since the reverse non-strict inequalities $J_{1-\mu_n}^{M_n}\le J_{1-\lambda_n}^{M_n}+J_{1-\mu_n-(1-\lambda_n)}^{M_n}$ always hold true. Hence, the equality
\begin{equation}\label{contra2}
J_{\mu_n}^{M_n}+J_{1-\mu_n}^{M_n}=J_1^{M_n}
\end{equation}
is verified. By definition of $\lambda_n$ and since $\mu_n\geq \frac{\lambda_n}2$ with $\lambda_n\geq \frac12$, we must have $\frac 14\leq \mu_n<\frac 12$. Extracting a subsequence if necessary, we may assume that $\mu_n$ converges to $\mu$ with $\frac 14\leq \mu\leq\frac 12$. Passing to the limit in \eqref{contra2}, we get $J_{\mu}^{0}+J_{1-\mu}^{0}=J_1^{0}$ with $\mu\in (0,1)$ thereby reaching a contradiction with \eqref{CCJ}. So far we have proved that the strict inequalities \eqref{CCJn} hold.
In particular, for $n$ large enough, there exists a minimizer $\varphi_n$ of $J_{\lambda_n}^{M_n}$ such that $\{(\lambda_n)^{-1/2}\varphi_n\}_{n\ge 1}$ is a minimizing sequence for $J_1^0$. Since \eqref{CCJ} holds, this sequence converges strongly in $\mathrm H^1(\mathbb R^3)$ up to translations to a minimizer $\varphi_\infty$ of $J_1^0$. The same holds for $\{\varphi_n\}_{n\ge 1}$, given that $\lambda_n\to 1$. Without loss of generality we may assume that $\varphi_n>0$ and $\varphi_\infty>0$ satisfy the respective Euler--Lagrange equations in $\mathbb R^3$
\[
-\Delta\varphi_n-C\,\varphi_n{}^{2\alpha+1}+M_n^{\frac{2(1-2\alpha)}{2-3\alpha}}\,\Big(\varphi_n^2\ast\frac 1{|x|}\Big)\varphi_n+\theta_n\,\varphi_n=0\,,
\]
with $\nrm{\varphi_n}2^2=\lambda_n$ and
\[
-\Delta\varphi_\infty-C\,\varphi_\infty^{2\alpha+1}+\theta_1\,\varphi_\infty=0\,,
\]
with $\nrm{\varphi_\infty}2^2=1$ and $\theta_1>0$. With the aim of contradicting \eqref{pasCC}, we argue as follows. We first write
\[
\frac{J_1^{M_n}-J_{\lambda_n}^{M_n}}{1-\lambda_n}=\frac{J_{1-\lambda_n}^{M_n}}{1-\lambda_n}\,.
\]
As $\lambda_n$ goes to $1$, the left-hand side can be bounded from above by $-\,\theta_1$, while from~\eqref{eq:scalJ} the quotient
\begin{equation*}
\frac{J_{1-\lambda_n}^{M_n}}{1-\lambda_n}=(1-\lambda_n)^{\frac{2\alpha}{2-3\alpha}}\,J_1^{(1-\lambda_n)M_n}
\end{equation*}
converges to $0$ because $J_1^{(1-\lambda_n)M_n}$ converges to $J_1^0$ as $\lambda_n\to 1$, and $\frac{2\alpha}{2-3\alpha}$ is positive. Since $\theta_1>0$, the two sides cannot agree for $n$ large enough, which contradicts \eqref{pasCC} and concludes the proof.\\
\end{proof}
\begin{remark} The general case for any $M$ is still an open problem. The possibility of dichotomy is the delicate case to be analyzed since vanishing is easily ruled out by the fact that $I_M$ is negative. \end{remark}
\subsection{The limiting case \texorpdfstring{$\alpha=\frac 12$}{alpha1/2}}\label{ssec:half}
Our main result is the following.
\begin{proposition}\label{prop:half} Let $\mathrm C_{1/2}$ be the best constant in \eqref{App:Interpolation2} with $\alpha=\frac12$.
\begin{itemize}
\item[(i)] If $\frac3{\sqrt2\,\mathrm C_{1/2}}> C$, then $I_M=0$ and $I_M$ is not achieved for any $M>0$.
\item[(ii)] If $\frac3{\sqrt2\,\mathrm C_{1/2}}<C$, then $I_M<0$ and $I_M$ is achieved for every $M>0$. In addition, all
minimizing sequences are relatively compact in $\mathrm H^1(\mathbb R^3)$ up to translations. \end{itemize}
\end{proposition}
\begin{remark} In the remaining case $\frac3{\sqrt2\,\mathrm C_{1/2}}=C$, where $I_1=0$, the infimum is attained if and only if P.-L.~Lions' inequality \eqref{estimlions} has an optimal function in $\Sigma_1$. This is, for the moment, an open question. \end{remark}
\begin{proof} As an immediate consequence of the scaling formulae of Lemma~\ref{scaling}, by taking $p=2$ in \eqref{eq:enerscal2}, we have that
\be{eq:rel-half}
I_M=M^3\,I_1\,,
\end{equation}
for every $M>0$, and $I_M$ is achieved if and only if $I_1$ is also achieved. This is the only case in which all powers of $M$ appearing in the right-hand side of \eqref{eq:enerscal2} are identical. When $I_1<0$, it is a well-known fact~\cite{bi:PLL-CC1cras,bi:PLL-CC1} that the relation \eqref{eq:rel-half} implies the strict inequalities \eqref{ineqstrict}, hence the result. Indeed, the strict inequalities \eqref{ineqstrict} hold as a consequence of the convexity of $M\mapsto M^3$.
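Explicitly, for every $M'\in(0,M)$, strict convexity gives $M'^3+(M-M')^3<M^3$, so that, $I_1$ being negative,
\[
I_{M'}+I_{M-M'}=\big(M'^3+(M-M')^3\big)\,I_1>M^3\,I_1=I_M\,.
\]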
Assume now that $I_1=0$, so that $I_M=0$ for every $M>0$. We assume that $I_1$ is achieved by some function $\varphi_1$ in $\mathrm H^1(\mathbb R^3)$. Then $\varphi_1$ satisfies the Euler--Lagrange equation \eqref{eq:sw} with a zero Lagrange multiplier (since it is also a minimizer without any constraint on the $\L^2(\mathbb R^3)$ norm). Also see~Lemma~\ref{Lem:3.1} for a direct proof. If we multiply the corresponding equation by $\varphi_1$, integrate over $\mathbb R^3$ and use the information $I_1=\mathrm E[\varphi_1]=0$, we deduce
\begin{equation*}
\frac12\int_{\mathbb R^3}|\nabla\varphi_1|^2\,dx=\frac14\,\D{\varphi_1}=\frac C6\int_{\mathbb R^3}|\varphi_1|^3\,dx\,.
\end{equation*}
Hence, by definition of $\mathrm C_{1/2}$ we obtain
\[
\frac1{\mathrm C_{1/2}}\le\frac{C\,\sqrt2}3\,,
\]
or equivalently
\[
\frac3{\sqrt2\,\mathrm C_{1/2}}\le C\,.
\]
Therefore, using \eqref{condposi}, the equality $I_1=0$ can be achieved only when
\[
\frac3{\sqrt2\,\mathrm C_{1/2}}=C\,.
\]
As a consequence, $I_1$ (and, up to a scaling, $I_M$) is attained if and only if the optimal constant in~\eqref{estimlions} is attained by a minimizer in $\L^2(\mathbb R^3)$.
\end{proof}
\medskip We conclude this section by examining the critical case $\alpha=\frac 12$ with the limiting constant $C=\frac3{\sqrt2\,\mathrm C_{1/2}}$. The problem is open, but we can prove that lack of compactness may occur only by vanishing as shown by the following result.
\begin{proposition}\label{prop:onlyVanishing} Assume that $\alpha=\frac12$ and $C=\frac3{\sqrt2\,\mathrm C_{1/2}}$. Let $\{\phi_n\}_{n\ge 1}$ be a minimizing sequence for $I_1$. If \emph{vanishing does not occur,} that is, if~\eqref{nonvanishing} holds,
then there exists a minimizer for $I_1$.
\end{proposition}
By Lemma \ref{boundedness}, $\{\phi_n\}_{n\ge 1}$ is bounded in $\mathrm H^1(\mathbb R^3)$. Since $I_1$ is invariant by translation, relative compactness in $\mathrm H^1(\mathbb R^3)$ may only be expected up to translations. Also, since $I_\lambda=0$ for every $\lambda>0$, concentration--compactness type inequalities turn into equalities. In particular, there exist minimizing sequences that are not relatively compact in $\mathrm H^1(\mathbb R^3)$, up to any translations. According to the concentration--compactness terminology~\cite{bi:PLL-CC1cras,bi:PLL-CC1,bi:PLL-CC2}, either $\{\phi_n\}_{n\ge 1}$ fulfills \eqref{vanishing} and \emph{vanishing} occurs, or \eqref{nonvanishing} holds. If there exists some minimizing sequence for which vanishing does not occur, we will now prove that existence of a minimizer is guaranteed.
\begin{proof} We first show that \eqref{nonvanishing} ensures the existence of a minimizer. Indeed, the new minimizing sequence $\{\phi_n(\cdot+y_n)\}_{n\ge 1}$ converges (up to a subsequence) to a function $\phi$ in $\mathrm H^1(\mathbb R^3)$, weakly in $\mathrm H^1(\mathbb R^3)$ and in $\L^p(\mathbb R^3)$ for every $2\le p\le 6$, strongly in $\L^p_{\rm loc}(\mathbb R^3)$ for every $1\le p<6$ (by the Rellich-Kondrachov theorem); consequently, it also converges almost everywhere in $\mathbb R^3$. The condition \eqref{nonvanishing} guarantees that $\phi\neq 0$ since $\int_{B_{R_0}}\phi^2\,dx\ge\varepsilon_0$ by passing to the limit as $n$ goes to infinity. Let $\mu=\int_{\mathbb R^3}\phi^2\,dx$ with $0<\mu\le 1$.
If $\mu=1$, we are done~: $\{\phi_n(\cdot+y_n)\}_{n\ge 1}$ converges to $\phi$ strongly in $\L^2(\mathbb R^3)$, and therefore in $\L^p(\mathbb R^3)$ for every $2\le p<6$ by H\"{o}lder's inequality. In particular, the convergence is also strong in $\L^3(\mathbb R^3)$ and $0=\liminf_{n\to+\infty}\mathrm E[\phi_n]\ge\mathrm E[\phi]\geq I_1$. Hence, $\mathrm E[\phi]=0$ and $\phi$ is a minimizer of $I_1$. In addition, the convergence is strong in $\mathrm H^1(\mathbb R^3)$ since all above inequalities turn into equalities.
If $\mu<1$, we are in the so-called dichotomy case. We shall prove that $\phi$ is a minimizer of $I_\mu$. Then, according to Proposition~\ref{prop:half}, $I_1$ is also achieved. Let us define $r_n:=\phi_n(\cdot+y_n)-\phi$. Then, $\{r_n\}_{n\ge 1}$ is bounded in $\mathrm H^1(\mathbb R^3)$. Up to a subsequence, it converges to $0$ weakly in $\mathrm H^1(\mathbb R^3)$ and in $\L^p(\mathbb R^3)$ for every $2\le p<6$, strongly in $\L^p_{\rm loc}(\mathbb R^3)$ for every $1\le p<6$, and almost everywhere in $\mathbb R^3$. In addition, by taking weak limits we find
\begin{eqnarray}\label{eq:dic-grad}
&&\lim_{n\to+\infty}\int_{\mathbb R^3} r_n^2\,dx=1-\mu\,, \nonumber \\ &&\int_{\mathbb R^3} \vert \nabla\phi_n\vert ^2\,dx=\int_{\mathbb R^3} \vert \nabla r_n\vert ^2\,dx+\int_{\mathbb R^3} \vert \nabla\phi \vert ^2\,dx+o_n(1)\,,
\end{eqnarray}
where $o_n(1)$ is a shorthand for a quantity that goes to $0$ when $n$ goes to infinity. Using Theorem~1 in~\cite{bi:BrL1}, we have
\be{eq:dic-power}
\int_{\mathbb R^3}\vert \phi_n\vert^3\,dx=\int_{\mathbb R^3}\vert r_n\vert^3\,dx+\int_{\mathbb R^3}\vert \phi\vert^3\,dx+o_n(1)\,.
\end{equation}
We first check as in~\cite{CL1} that
\be{eq:conv-0}
\lim_{n\to+\infty} \Vert \phi\,r_n\Vert_{\L^p(\mathbb R^3)}=0\quad\forall\,p\in[1,3)\,.
\end{equation}
We just argue for $p=1$, as the analysis for the other powers follows by interpolation. Since $\{r_n\}_{n\ge 1}$ converges strongly to $0$ in $\L^2_{\rm loc}(\mathbb R^3)$, $\{\phi\,r_n\}_{n\ge 1}$ converges strongly to~$0$ in $\L^1_{\rm loc}(\mathbb R^3)$ as $n \to \infty$. Next, for every $R>0$ we have
\[
\int_{|x|\ge R} \vert \phi\,r_n\vert\,dx\le\Big(\int_{|x|\ge R} \vert \phi\vert^2\,dx\Big)^{1/2}\,\Big(\int_{|x|\ge R} \vert r_n\vert^2\,dx\Big)^{1/2}.\]
The first term in the right-hand side may be taken arbitrarily small for $R$ large enough since $\phi \in \L^2(\Real^3)$, while the second one is bounded independently of $n$ and~$R$ since $\{r_n\}_{n\ge 1}$ is bounded in $\L^2(\Real^3)$. Writing $\int_{\mathbb R^3} \vert \phi\,r_n\vert\,dx=\int_{|x|\le R} \vert \phi\,r_n\vert\,dx+\int_{|x|\ge R} \vert \phi\,r_n\vert\,dx$ we get the result. By writing down
\[
\int_{\mathbb R^3}\vert \phi_n\vert^3\,dx-\int_{\mathbb R^3}\vert r_n\vert^3\,dx-\int_{\mathbb R^3}\vert \phi\vert^3\,dx=3\int_{\mathbb R^3}\vert \phi\,r_n\vert\,(\vert r_n\vert+\vert \phi\vert)\,dx\,,
\]
we obtain \eqref{eq:dic-power} since $\{\vert r_n\vert+\vert \phi\vert\}_{n\ge 1}$ is bounded in $\L^2(\Real^3)$ and $\{\phi\,r_n\}_{n\ge 1}$ converges to $0$ in $\L^2(\mathbb R^3)$. Finally, we check that
\be{eq:dic-D}
\liminf_{n\to +\infty}\D{\phi_n}\geq \D\phi+\liminf_{n\to +\infty}\D{r_n}\,.
\end{equation}
On the one hand, since $\{\phi_n\}_{n\ge 1}$ is bounded in $\mathrm H^1(\mathbb R^3)$, $\{\phi_n^2\star \frac 1{|x|}\}_{n\ge 1}$ is bounded in $\L^\infty(\mathbb R^3)$ thanks to
\begin{eqnarray*}
\Big\|\phi^2\star\frac 1{|x|}\Big\|_{\L^\infty(\mathbb R^3)}=\sup_{x\in \mathbb R^3}\int_{\mathbb R^3}
\frac{\phi^2(y)}{|x-y|}\,dy&\le&\sup_{x\in \mathbb R^3}\Big(\int_{\mathbb R^3} \frac{\phi^2(y)}{|x-y|^2}\,dy\Big)^{1/2}\,\nrm\phi2\\
&\le& 2\,\|\nabla\phi\|_{\L^2(\Real^3)}\,\|\phi\|_{\L^2(\Real^3)}\,,
\end{eqnarray*}
where we have used Cauchy-Schwarz' inequality and Hardy's inequality.
Then, we have
\[
\Big\vert \int_{\mathbb R^3}\Big(\phi_n^2\star\frac 1{|x|}\Big)\,(\phi\,r_n)\,dx\Big\vert\le\Big\Vert \phi_n^2\star \frac1{|x|}\Big\Vert_{\L^\infty(\mathbb R^3)}\,\Vert \phi\,r_n\Vert_{\L^1(\mathbb R^3)}\,,
\]
and hence
\[
\lim_{n\to\infty}\int_{\mathbb R^3}\Big(\phi_n^2\star\frac 1{|x|}\Big)\,(\phi\,r_n)\,dx=0\,,
\]
because of \eqref{eq:conv-0}. On the other hand, $\int_{\mathbb R^3}\big((\phi\,r_n)\star\frac 1{|x|}\big)\,(\phi\,r_n)\,dx \geq 0$. Actually it is also converging to $0$ as $n\to\infty$, and \eqref{eq:dic-D} follows. Gathering together \eqref{eq:dic-grad}, \eqref{eq:dic-power} and \eqref{eq:dic-D}, we obtain
\begin{eqnarray*}
0=\limsup_{n\to+\infty}\mathrm E[\phi_n]=\liminf_{n\to+\infty}\mathrm E[\phi_n]&=&\mathrm E[\phi]+\liminf_{n\to+\infty}\mathrm E[r_n]\\
&&\quad\ge\mathrm E[\phi]\ge I_\mu=0
\end{eqnarray*}
since $\mathrm E[r_n]$ is nonnegative for every $n\ge1$. Then, all above inequalities turn into equalities. In particular, $I_\mu=\mathrm E[\phi]=0$ is attained. By Proposition~\ref{prop:half}, $I_1$ is also attained. This concludes the proof of Proposition~\ref{prop:onlyVanishing}.
\end{proof}
\subsection{The region \texorpdfstring{$\frac 12<\alpha<\frac 23$}{1/2<alpha<2/3}}\label{ssec:above}
We recall from Proposition \ref{nonnegative} the existence of a critical value $V_c$ such that $I_M=0$ if and only if $C\,M^{4\alpha-2}\le V_c$, and $I_M<0$ otherwise. Let us define
\[
M_c:=\left(\frac{V_c}C\right)^{\frac1{4\alpha-2}}
\]
and notice that $4\alpha-2$ is positive if $\frac 12<\alpha<\frac 23$. The main result in this region is stated in the following proposition.
\begin{proposition}\label{prop:above} Assume that $\alpha\in(\frac12,\frac23)$. The following assertions hold true:
\begin{enumerate}
\item[(i)] If $M<M_c$, then $I_M$ is not achieved.
\item[(ii)] If $M=M_c$, then there exists a minimizer.
\item[(iii)] If $M>M_c$, then the strict inequalities \eqref{ineqstrict} always hold, and in particular there exists a minimizer.
\end{enumerate}
\end{proposition}
In the critical case $M=M_c$, the strict inequalities \eqref{ineqstrict} do not hold. As a consequence, the stability of such a solution cannot be ensured by the usual arguments.
\begin{proof} We first assume that $M<M_c$, so that $I_M=0$ by Proposition~\ref{nonnegative}. Define
\[
\mathrm E_M[\varphi]:=\frac12\int_{\mathbb R^3}|\nabla\varphi|^2\,dx+\,\frac14\,\D\varphi-M^{4\alpha-2}\,\frac C{2\alpha+2}\int_{\mathbb R^3}|\varphi|^{2\alpha+2}\,dx\,.
\]
We may observe that $\mathrm E_1=\mathrm E$. By applying Lemma~\ref{scaling} with $p=2$ and $q=1$ (or, equivalently, \eqref{eq:enerscal2} with $p=2$), we get
\[
\mathrm E\big[M^2\,\varphi(M\cdot)\big]=M^3\,\mathrm E_M[\varphi]\quad\forall\,\varphi\in\Sigma_1\,.
\]
We argue by contradiction. Assume that $I_M$ is achieved. Then, there exists a minimizer $\varphi_M$ of
\[
\inf\{\mathrm E_M[\varphi]\,:\,\varphi\in\Sigma_1\}=M^{-3}\,I_M\,.
\]
In this way, $\varphi_M$ is a test function for $M_c^{-3}\,I_{M_c}$ and $\mathrm E_{M_c}[\varphi_M]<\mathrm E_M[\varphi_M]=0$ since $M_c>M$. This contradicts the fact that $I_{M_c}=0$, thus proving (i).
\medskip Next, we assume that $M>M_c$. In order to prove the strict inequalities \eqref{ineqstrict} and establish (iii) in Proposition~\ref{prop:above}, the key point is the following.
\begin{lemma}\label{lem:keyscaling} If $M> M_c$, then we have
\be{keyscaling}
I_{M'}\le\Big(\frac{M'}M\Big)^3\,I_M\quad\forall\,M'>M\,.
\end{equation}
In particular, the function $M\mapsto I_M$ is decreasing on $[M_c,+\infty)$. Furthermore,
\be{strictm}
I_M<I_m+I_{M-m}\quad\forall\,m\in(0,M)\,.
\end{equation}
\end{lemma}
\begin{proof} Consider $\varphi\in\Sigma_M$ and let $\tilde\varphi:=\big(\frac{M'}M \big)^2\,\varphi\big(\frac{M'}M\cdot\big)$. We notice that $\tilde\varphi\in\Sigma_{M'}$~and
\begin{eqnarray*}
I_{M'}\le\mathrm E[\tilde\varphi]=\left(\tfrac{M'}M\right)^3\,\left[\tfrac12\int_{\mathbb R^3}|\nabla\varphi|^2\,dx+\,\tfrac14\,\D\varphi \right.-\left.\tfrac C{2\alpha+2}\,\left(\tfrac{M'}M\right)^{4\alpha-2}\int_{\mathbb R^3}|\varphi|^{2\alpha+2}\,dx\right]&&\\
\le\left(\tfrac{M'}M\right)^3\,\mathrm E[\varphi]\,.&&
\end{eqnarray*}
We deduce \eqref{keyscaling} by taking the infimum of the right-hand side over all functions $\varphi$ in $\Sigma_M$ and the monotonicity of $M\mapsto I_M$ on $[M_c,+\infty)$ follows.
We now turn our attention to the proof of \eqref{strictm}. If $I_m=I_{M-m}=0$, inequality \eqref{strictm} obviously holds for any $M>M_c$, since $I_M$ is negative. If $I_m<0$ but $I_{M-m}=0$ (so that $M_c<m$ and $M-m\le M_c$), then \eqref{strictm} reduces to $I_M<I_m$, which is again guaranteed by \eqref{keyscaling}. If both $I_m$ and $I_{M-m}$ are negative (this is equivalent to $m>M_c$ and $M-m>M_c$, and therefore it may occur only if $M>2M_c$), then we have
\[
I_M\le\Big(\frac Mm\Big)^3\,I_m<\frac Mm\,I_m\quad\mbox{and}\quad I_M\le\Big(\frac M{M-m}\Big)^3\,I_{M-m}<\frac M{M-m}\,I_{M-m}
\]
by using \eqref{keyscaling}. Hence, $I_M=\frac m M\,I_M+\frac {M-m}M\,I_M<I_m+I_{M-m}$. This concludes the proof of Lemma~\ref{lem:keyscaling}.\end{proof}
Let us come back to the proof of Proposition \ref{prop:above}. In order to prove the existence of minimizers in the limiting case $C\,M^{4\alpha-2}=V_c$, that is $M=M_c$, we follow the arguments in~\cite{JeanLuo2012}, where a proof for the case $C=1$ is given. As noted in Remark~\ref{Rem:Vanishing}, relative compactness (up to translations) of all minimizing sequences cannot be proved in this case, since $I_{M_c}=0$. We build a particular minimizing sequence as follows.
Let $M_n=M_c+\frac1n$, for every positive integer $n$, and assume that $\varphi_n$ is a minimizer of $I_{M_n}$ in $\Sigma_{M_n}$, which is already known to exist since $M_n>M_c$ and therefore $I_{M_n}<0$ for any $n\ge1$. Since $\{M_n\}_{n\ge1}$ converges towards $M_c$, it can be deduced that $\lim_{n\to\infty}\mathrm E[\varphi_n]=\lim_{n\to\infty}I_{M_n}=I_{M_c}=0$. With the notations of Corollary~\ref{Cor:CriticalMass}, this means that $\lim_{n\to\infty}\eta_{M_n}=0$. If we combine the results of Corollary~\ref{cor:EstimCrit} and Corollary~\ref{Cor:CriticalMass}, then we obtain
\begin{multline*}
\frac 14\,\varepsilon_{M_n}-\frac 32\,\eta_{M_n}\le\mathrm C_{1/2}^{2-2\alpha}\,\C_{\mathrm{GN}}(1)^{2\alpha-1}\,M^{\alpha-\frac 12}\,\left[\frac 12\,(3\alpha-1)\,\varepsilon_{M_n}-(5\alpha-1)\,\eta_{M_n}\right] ^{4\alpha-1}\\
\Big[(2-3\alpha)\,\varepsilon_{M_n}-2\,(2-\alpha)\,\eta_{M_n}\Big]^{1-\alpha}\,.
\end{multline*}
By passing to the limit as $n\to\infty$, we find that
\[
\frac 14\le\mathrm C_{1/2}^{2-2\alpha}\,\C_{\mathrm{GN}}(1)^{2\alpha-1}\,M^{\alpha-\frac 12}\,\left[\frac 12\,(3\alpha-1)\right] ^{4\alpha-1}\Big[(2-3\alpha)\Big]^{1-\alpha}\liminf_{n\to\infty}\varepsilon_{M_n}^{3\alpha-1}\,,
\]
thus proving that $\liminf_{n\to\infty}\varepsilon_{M_n}>0$ and hence
\[
\liminf_{n\to\infty}\ir{|\varphi_n|^{2\alpha+2}}>0\,.
\]
Then, by Lemma I.1 in~\cite{bi:PLL-CC2} the sequence $\{\varphi_n\}_{n\ge 1}$ satisfies the non-vanishing condition \eqref{nonvanishing}. Consequently, up to translations, there exists a subsequence that converges weakly in $\mathrm H^1(\mathbb R^3)$, strongly in $\L^2_{\rm loc} (\mathbb R^3)$ and pointwise almost everywhere, towards a nonzero function $\varphi_\infty$. Thanks to~\cite{bi:BrL1} and Lemma 2.2 in~\cite{ZaZa}, we get
\[
0=\lim_ {n \to \infty}I_{M_n}=\lim_ {n \to \infty}\mathrm E[\varphi_n]=\mathrm E[\varphi_\infty]+\lim_ {n \to \infty}\mathrm E[\varphi_n-\varphi_\infty]\,.
\]
Since $0 <\|\varphi_\infty \|_{\L^2(\Real^3)}^2\le M_c$, we have $\mathrm E[\varphi_\infty]\geq 0$. Moreover, since
\[
\lim_ {n \to \infty} \| \varphi_n-\varphi_\infty \|_{\L^2(\Real^3)}^2=M_c-\|\varphi_\infty \|_{\L^2(\Real^3)}^2 <M_c\,,
\]
we also have $\lim_ {n \to \infty}\mathrm E[\varphi_n-\varphi_\infty]\ge 0$. Therefore, $\mathrm E[\varphi_\infty]=0$. To conclude the proof of Proposition \ref{prop:above} we observe that $\|\varphi_\infty \|_{\L^2(\Real^3)}^2=M_c$. Otherwise, $\varphi_\infty$ would be a minimizer of $I_M$ for some $M<M_c$, and we would reach a contradiction with the first statement in Proposition \ref{prop:above}.\end{proof}
\medskip\noindent{\scriptsize\copyright~2013 by the authors. This paper may be reproduced, in its entirety, for non-commercial~purposes.}
\bibliographystyle{siam}\small
\subsection{The mysteries of LLL}
The LLL algorithm (\cite{LLL82}) is one of the most celebrated algorithmic inventions of the twentieth century, with countless applications to pure and computational number theory, computational science, and cryptography. It is also the most fundamental of lattice reduction algorithms, in that nearly all known reduction algorithms are generalizations of LLL in some sense, and they also utilize LLL as their subroutine. (We refer the reader to \cite{NV10} for a thorough survey on LLL and these related topics.) Thus it is rather curious that many of the salient features of LLL in practice are left totally unexplained, not even in a heuristic, speculative sense, even to this day.
The most well-known among the mysteries of LLL is the gap between its worst-case root Hermite factor (RHF) and the observed average-case, as documented in Nguyen and Stehl\'e (\cite{NS06}). It is a theorem from the original LLL paper (\cite{LLL82}) that the shortest vector of an LLL-reduced basis (in the theoretical sense) in dimension $n$, with its determinant normalized to $1$, has length at most $(4/3)^{\frac{n-1}{4}} \approx 1.075^n$, whereas in practice one almost always observes $\approx 1.02^n$, regardless of the way in which the input is sampled. This is a strange phenomenon in the light of the works of Kim (\cite{K15}) and Kim and Venkatesh (\cite{KV17}), which provide experimental and theoretical evidence that, for almost every lattice, nearly all of its LLL bases have RHF close to the worst bound. It is as though the algorithm is consciously dodging that plethora of inferior bases every time it is run. This leads to the suspicion that LLL must be operating in a complex manner that belies the simplicity of its code.
There are also many other LLL phenomena that remain unaccounted for. One is the geometric series assumption (GSA), originally proposed by Schnorr (\cite{S03}), and its partial failure at the boundaries, both of which are observed in other blockwise reduction algorithms as well, e.g. BKZ (\cite{SE94}). Despite being an indispensable component of numerous cryptanalyses of lattice-based systems (e.g. see \cite{DY17}, \cite{BSW18}), the current understanding of GSA is not much better than that of the RHF gap problem above: there is not even a heuristic explanation or a precise formulation, only vague empirical observations. There are also open questions regarding the time complexity of LLL. Nguyen and Stehl\'e (\cite{NS06}) suggest that, in most practical situations, the average time complexity is much lower than the worst-case, hinting that there may be an average-worst case gap phenomenon here as well. The optimal LLL algorithm --- i.e. with the parameter $\delta$ equal to $1$ --- is not proven to run in polynomial time, although observations suggest that it does (see Akhavi (\cite{A00}) and references therein).
This lack of understanding of the practical behavior of LLL --- and reduction algorithms in general --- may incur a hefty price, especially when it comes to cryptographic applications. To put it somewhat bluntly: simply by running LLL, we managed to ``improve'' the RHF of LLL from $1.075$ to $1.02$; what keeps one from entertaining the possibility that a cheap trick might improve it further to, say, $1.005$, and thereby cripple all lattice-based cryptosystems? As unrealistic --- and perhaps even outrageous --- as this may sound, our current understanding of reduction algorithms is severely unequipped to address this question.
\subsection{This paper}
The theme of the present paper is that statistical physics may enable a scientific approach to the empirical behavior of the LLL algorithm, by studying it as a kind of sandpile model. As demonstrated throughout this paper, for each LLL phenomenon there is a corresponding sandpile phenomenon, most of which are either already familiar to physicists or captured by well-known methods in physics. Some aspects of our work seem to present challenges to physics, and we hope these will motivate rich and fruitful interdisciplinary interactions revolving around the LLL algorithm, and lattice reduction algorithms in general.
In Section 2, we justify this perspective by presenting stochastic sandpile models that are both impressively close to LLL and mathematically accessible. Specifically, we propose two models of LLL, which we name \emph{LLL-SP} and \emph{SSP}, respectively. LLL-SP (Algorithm \ref{alg:lllsp} below) is a nonabelian stochastic model that exhibits nearly identical quantitative behavior to LLL in numerous aspects, both in terms of output statistics, such as the distribution of RHF, and in terms of dynamics. This provides compelling evidence that the two algorithms operate under the same principles, or, to put it formally, that they are in the same universality class. SSP (Algorithm \ref{alg:ssp}) is an abelian stochastic model that is mathematically far more tractable than LLL-SP, and still imitates the most important aspects of the output statistics of LLL.
In Sections 3 and 4, we prove on these models some of the most desired statements regarding LLL. On the RHF gap phenomenon, we have the following
\begin{theorem} \label{thm:intro_ssprhf}
For all sufficiently large system sizes (the analogue of the lattice dimension for LLL), there exists a gap between the worst-case and the average-case RHFs of SSP.
\end{theorem}
Theorem \ref{prop:ssprhf} below provides a more precise quantitative statement, after the necessary definitions are set up. We mention that the mathematical study of SSP and the proof of this theorem are announced in the companion paper \cite{Kprep}, separated from the present paper so that they can be considered in a purely physical context. Hence Section 3, where we introduce Theorem \ref{thm:intro_ssprhf}, is expository, included for the completeness of the presentation of our perspective on LLL. We expect that a key idea in the proof of Theorem \ref{thm:intro_ssprhf} can be extended to yield the same result for LLL-SP; see Conjecture \ref{conj:mass}.
We are able to prove some fairly strong statements regarding the time complexity of LLL-SP (which also apply to SSP):
\begin{theorem} \label{thm:intro_time}
Choose an input basis $\{\mathbf{b}_1, \ldots, \mathbf{b}_n\} \subseteq \mathbb{R}^n$, and let $E = n^2 \log \max_i\|\mathbf{b}_i\|$. Then
\begin{itemize}
\item (Lower bound on complexity) There exists a constant $C$ such that, with probability $1 - CE^{-1/2}$, LLL-SP takes at least $E/4$ swaps to terminate.
\item (Polynomial-time complexity of the optimal LLL) With probability $1 - \eta$, the optimal LLL-SP --- that is, with the maximal $\delta$ parameter --- terminates within $O_\eta(E)$ swaps.
\end{itemize}
\end{theorem}
See Theorems \ref{thm:lowertime} and \ref{thm:optimal} for precise statements. The lower bound is of particular interest from the cryptographic perspective, since it sets a certain limit on the strength of lattice reduction algorithms. We expect that this result is also valid for LLL assuming a certain conjecture on its dynamical property that is well-supported by our experiments; see Conjecture \ref{conj:mu} below.
In Section 5, we further develop the connection between LLL and sandpile models by ``applying'' the finite-size scaling theory (FSS) to LLL. FSS is a theory in physics that studies critical phase transitions, such as water freezing into ice, and metals being magnetized. Although there is no critical phenomenon to discuss for LLL, the analogy with sandpile models motivates us to investigate if some observables in LLL scale with dimension in a similar way to what is seen in physics in the finite-size scaling theory of critical phenomena.
Denote by $y_n$ the natural log of the ``average RHF'' of LLL in dimension $n$, and set $y_\infty := \lim_{n \rightarrow \infty} y_n$. Also, for an (LLL-reduced) basis $\mathcal{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\}$ and its Gram-Schmidt orthogonalization $\{\mathbf{b}_1^*, \ldots, \mathbf{b}_n^*\}$, write $r(i) = \log \|\mathbf{b}_i^*\|/\|\mathbf{b}_{i+1}^*\|$. Then the formulas from FSS that would normally apply to (abelian) sandpiles translate to the following for LLL: there exists a single constant $\sigma$ such that
\begin{enumerate}[(i)]
\item $y_\infty = y_n + \frac{D}{n^\sigma} + \mbox{(smaller errors)}$, for some constant $D$.
\item $\mathrm{Var}(y_n) \sim n^{-2\sigma}$.
\item $2y_\infty - \mathbb{E}(r(i)) \sim i^{-\sigma} \mbox{ or } (n+1-i)^{-\sigma}$, depending on whether $i$ is near $1$ or $n$.
\end{enumerate}
All three statements are clearly interesting: (i) and (ii) are self-explanatory, and (iii) provides the correct formulation of the GSA (which says that $r(i)$ are nearly constant) and its partial failures near the boundaries.
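As an illustration of how (i) is used in practice, here is a minimal Python sketch of the extrapolation; the data below are synthetic placeholders generated from the ansatz itself (they are \emph{not} the measurements reported in Section 5), and all variable names are ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fss(n, y_inf, D, sigma):
    # finite-size ansatz (i): y_n = y_inf - D / n**sigma
    return y_inf - D / n ** sigma

# synthetic placeholder data, NOT our measurements:
rng = np.random.default_rng(0)
dims = np.array([40.0, 80.0, 120.0, 160.0, 200.0, 240.0, 300.0])
y = fss(dims, 0.0224, 0.01, 0.75) + 1e-5 * rng.standard_normal(dims.size)

popt, _ = curve_fit(fss, dims, y, p0=[0.02, 0.005, 0.5])
y_inf, D, sigma = popt
print(np.exp(y_inf))  # the extrapolated limiting average RHF
\end{verbatim}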
Our data on dimensions up to $300$ --- summarized in Tables \ref{table:1st} and \ref{table:2nd}, and Figures \ref{fig:1st}-\ref{fig:3rd_b} below --- fit robustly with all of the above formulas with $\sigma \approx 0.75$. Accordingly, we obtain a numerical estimate
\begin{equation}\label{eq:FSSprediction}
\mbox{(average RHF of LLL)} \rightarrow 1.02265\ldots, \mbox{ as $n \rightarrow \infty$.}
\end{equation}
It may be of interest that Grassberger, Dhar, and Mohanty (\cite{GDM16}) numerically obtained the same value of $\sigma \approx 0.75$ for a sandpile model with a very different toppling rule. In physics, different systems with the same critical exponents (such as $\sigma$ here) that govern their behavior in the system size limit are said to belong to the same \emph{universality class}. It is expected that there exist not too many distinct universality classes.
There exists some subtlety regarding (iii), arising from the fact that LLL is nonabelian as a sandpile model. It does hold on one end with $\sigma \approx 0.75$ for the first 8-10 values of $i$, but on the other end, it holds with a different exponent $\approx 1.05$. At this point, we do not know how to explain this phenomenon in a satisfactory manner; it could be the size of our data --- which is quite large from the lattice reduction perspective, but tiny from the physical one --- or the authors' shortcomings in physics. At the very least, we obtain a neat extrapolation of $\mathbb{E}(r(i))$ on both ends, which has been of some recent cryptographic interest (\cite{BSW18}, \cite{DY17}).
\subsection{Comparison with previous works}
This paper is not the first to compare LLL, and blockwise reduction algorithms in general, to a sandpile model. The formal similarity seems to have been first noticed in Madritsch and Vall\'ee (\cite{MV10}) --- see also Vall\'ee (\cite{V16}). This idea has been, and is being, more vigorously applied to the simulation of BKZ, an algorithm that may be viewed as a generalization of LLL and is used in practice to challenge lattice-based cryptosystems. We refer the reader to \cite{CN11}, \cite{HPS11}, and the more recent \cite{BSW18} for examples.
The present work most importantly differs in motivation from the above-mentioned works, and other related works in the cryptographic literature. In cryptography, often the goal is to craft what is called a \emph{simulator} of BKZ, an algorithm of very small temporal and spatial complexity that aids the practitioners in predicting the outcome of BKZ, with a particular interest in the RHF and the output profile. On the other hand, our goal is to search for a scientific theory that matches the observed behavior of LLL. It is one of our hopes that our work serves as a contribution to the construction of a better simulator, but we do not claim to be part of that competition.
This difference in our motivation is what leads us to investigate LLL in ways that have not been tried in the previous works, which are nearly exclusively focused on cryptographic applications. We subject our models to far more severe challenges --- running tens of thousands of tests, applying tweaks, comparing more observables than just the RHF --- than is done for the simulators. We do come up with a high-quality simulator of LLL as a result, yet that is the bare minimum necessity, not a sufficiency, to convince anyone that LLL may be governed by the laws of statistical physics, like the sandpile models are. Furthermore, adopting the well-developed ideas of physics such as the operator algebra method (Sections 3 and 4), and the finite-size scaling theory (Section 5), we question the statements that have often been taken for granted, such as whether the number $1.02$ is not a mere anomaly of the small dimensions, and whether the GSA is really the ideal description of the output shape of LLL.
We again stress that we are not pitting our work against the literature on BKZ simulators, and ask the reader to avoid a mistake of the same kind. Rather, we hope our work to be understood as an attempt to see LLL in a different light. Yes, LLL has been viewed as a sandpile model in the sense of an algorithm, but it has never been viewed as a sandpile model in the sense of an object subject to the principles of statistical mechanics. In that respect our work is the first of its kind.
\subsection{Assumptions and notations}
In Sections 2-4, instead of the original LLL reduction from \cite{LLL82}, we work with its Siegel variant, a slight simplification of LLL. The Siegel reduction shares with LLL all the same qualitative features, but is easier to handle theoretically, making it a reasonable starting point for our study. However, in Section 5 (the section on FSS), we revert to the original LLL, since it would be more interesting to extrapolate its RHF than that of the Siegel variant. Either way, our numerous smaller experiments suggest that the choice of LLL or Siegel affects the outcomes marginally at best.
$n$ always means the dimension of the relevant Euclidean space. Our lattices in $\mathbb{R}^n$ always have full rank. A basis $\mathcal{B}$, besides its usual definition, is an \emph{ordered} set, and we refer to its $i$-th element as ${\mathbf b}_i$. Denote by ${\mathbf b}_i^*$ the component of ${\mathbf b}_i$ orthogonal to all vectors preceding it, i.e. ${\mathbf b}_1, \ldots, {\mathbf b}_{i-1}$. Also, for $i > j$, define $\mu_{i,j} = \langle {\mathbf b}_i, {\mathbf b}_j^* \rangle / \langle {\mathbf b}_j^*, {\mathbf b}_j^* \rangle$. Thus the following equality holds in general:
\begin{equation*}
{\mathbf b}_i = {\mathbf b}_i^* + \sum_{j=1}^{i-1} \mu_{i,j}{\mathbf b}_j^*.
\end{equation*}
We will write for shorthand $\alpha_i := \|{\mathbf b}_i^*\| / \|{\mathbf b}_{i+1}^*\|$, and $Q_i = (\alpha_i^{-2} + \mu_{i+1,i}^2)^{-1/2}$. When discussing lattices, $r_i := \log \alpha_i$, and when discussing sandpiles, $r_i$ refers to the ``amount of sand'' at vertex $i$.
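For concreteness, all of the quantities above can be computed with a few lines of numpy; the following is our own minimal sketch (the textbook quadratic-loop version, not the floating-point machinery of an actual lattice reduction library).
\begin{verbatim}
import numpy as np

def gso(B):
    """Gram-Schmidt data of a basis B (rows b_1, ..., b_n): returns Bstar,
    whose rows are the b_i^*, and mu with
    mu[i, j] = <b_i, b_j^*> / <b_j^*, b_j^*> for j < i."""
    n = B.shape[0]
    Bstar = np.zeros_like(B, dtype=float)
    mu = np.eye(n)
    for i in range(n):
        Bstar[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] -= mu[i, j] * Bstar[j]
    return Bstar, mu
\end{verbatim}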
\subsection{Data for the experiments}
The original codes for the experiment are made available on SK's website https://sites.google.com/view/seungki/home. For the data, please consult one of the authors --- the raw data is of several gigabytes in size.
\subsection{Acknowledgments}
JD and SK are partially supported by NSF CNS-2034176. BY is supported by Sinica Investigator Award AS-IA-109-M01, and Executive Yuan Project AS-KPQ-109-DSTCP. TT and YW are supported by JSPS KAKENHI Grant Number JP20K23322.
We are hugely indebted to Deepak Dhar, who patiently explained much of the underlying physics over a long period of time, and directed us to the relevant works in physics. We also thank Deepak Dhar (again), Nick Genise, and Phong Nguyen for their careful reading and comments, and Shi Bai for his extensive help with parts of the experiments in Section 5.
\section{Modeling LLL by a sandpile}
\subsection{The LLL algorithm}
We briefly review the LLL algorithm; for details, we recommend \cite{LLL82}, in which it was first introduced, and also \cite{JS98} and \cite{NV10}. A pseudocode for the LLL algorithm is provided in Algorithm \ref{alg:lll}. In Line 3, we have deliberately left the choice algorithm, that is, the method for choosing $k$, unprescribed. The standard choice is to pick the lowest $k$ satisfying the inequality.
\begin{algorithm}
\caption{The LLL algorithm (Siegel variant)}\label{alg:lll}
\begin{enumerate}[1.]
\item[0.] Input: a basis $\mathcal{B} = \{{\mathbf b}_1, \ldots, {\mathbf b}_n\}$ of $\mathbb{R}^n$, a parameter $\delta < 0.75$
\item while true, do:
\item \hspace{4mm} Size-reduce $\mathcal{B}$.
\item \hspace{4mm} (Lov\'asz test) choose a $k \in \{1, \ldots, n-1\}$ such that $\delta\|{\mathbf b}_{k}^*\|^2 > \|{\mathbf b}_{k+1}^*\|^2$
\item \hspace{4mm} if there is no such $k$, break
\item \hspace{4mm} swap ${\mathbf b}_{k}$ and ${\mathbf b}_{k+1}$ in $\mathcal{B}$
\item Output $\mathcal{B} = \{{\mathbf b}_1, \ldots, {\mathbf b}_n\}$, a $\delta$-reduced LLL basis.
\end{enumerate}
\end{algorithm}
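To make the pseudocode concrete, here is a deliberately naive Python sketch of Algorithm \ref{alg:lll} with the standard choice of $k$ (floating-point arithmetic, recomputing the Gram--Schmidt data from scratch at every step, and reusing the \texttt{gso} routine sketched in Section 1.4); it is meant for illustration only and is no substitute for fpLLL.
\begin{verbatim}
import numpy as np

def size_reduce(B):
    # enforce |mu_{i,j}| <= 1/2 by integer translations (B has float entries)
    n = B.shape[0]
    for i in range(1, n):
        for j in range(i - 1, -1, -1):
            _, mu = gso(B)
            B[i] -= np.round(mu[i, j]) * B[j]
    return B

def siegel_lll(B, delta=0.74):
    B = B.astype(float)
    while True:
        B = size_reduce(B)
        Bstar, _ = gso(B)
        norms2 = np.sum(Bstar ** 2, axis=1)
        # Lovasz test (Siegel variant): delta*||b_k^*||^2 > ||b_{k+1}^*||^2
        bad = [k for k in range(B.shape[0] - 1)
               if delta * norms2[k] > norms2[k + 1]]
        if not bad:
            return B
        k = bad[0]                      # the standard choice: lowest such k
        B[[k, k + 1]] = B[[k + 1, k]]   # swap b_k and b_{k+1}
\end{verbatim}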
\begin{proposition} \label{prop:post_swap}
After carrying out Step 5 in Algorithm \ref{alg:lll}, the following changes occur:
\begin{enumerate}[(i)]
\item $\alpha_{k-1}^{new} = Q_k\alpha_{k-1}$
\item $\alpha_k^{new} = Q_k^{-2}\alpha_k$
\item $\alpha_{k+1}^{new} = Q_k\alpha_{k+1}$
\item $\mu_{k, k-1}^{new} = \mu_{k+1, k-1}$
\item $\mu_{k+1, k}^{new} = Q_k^2\mu_{k+1,k}$
\item $\mu_{k+2, k+1}^{new} = \mu_{k+2,k} - \mu_{k+2,k+1}\mu_{k+1,k}$
\item $\mu_{k,l}^{new} = \mu_{k+1,l}, \mu_{k+1,l}^{new} = \mu_{k,l}$ for $1 \leq l \leq k-1$
\item $\mu_{l,k}^{new} = \mu_{l,k+1} - \mu_{l,k+1}\mu_{k+1,k}\mu_{k+1,k}^{new} + \mu_{l,k}\mu_{k+1,k}^{new}$ for $l \geq k+2$
\item $\mu_{l,k+1}^{new} = \mu_{l,k} - \mu_{l,k+1}\mu_{k+1,k}$ for $l \geq k+2$
\end{enumerate}
and there are no other changes. The superscript ``new'' refers to the corresponding variable after the swap.
\end{proposition}
\begin{proof}
Straightforward calculations (see e.g. \cite{LLL82}).
\end{proof}
\subsection{Sandpile basics}
We also briefly review the basics of the sandpile models. For references, see Dhar (\cite{D99}, \cite{D06}) or Perkinson (\cite{P14}).
A sandpile model is defined on a finite graph $\mathcal{G}$, with one distinguished vertex called the \emph{sink}. In the present paper, we only concern ourselves with the cycle graph, say $A_n$, consisting of vertices $\{v_1, \ldots, v_n\}$ and one unoriented edge for each adjacent pair $v_i$ and $v_{i+1}$. We also consider $v_1$ and $v_n$ as adjacent. We designate $v_n$ as the sink.
A \emph{configuration} is a function $r : \{v_1, \ldots, v_n\} \rightarrow \mathbb{R}$. Just as reduction algorithms work with bases, sandpile models work with configurations. We write for short $r_i = r(v_i)$. One may think of $r_i$ as the amount or \emph{height} of the pile of sand placed on $v_i$.
\begin{figure}
\includegraphics[scale=1.2]{toppling}
\caption{An illustration of a (legal) toppling $T_i$.} \label{fig:toppling}
\end{figure}
Just as LLL computes a reduced basis by repeatedly swapping neighboring basis vectors, sandpiles compute a \emph{stable configuration} by repeated \emph{toppling.} Let $T, I \in \mathbb{R}_{>0}$. A configuration is \emph{stable} if $r_i \leq T$ for all $i \neq n$. A \emph{toppling operator} $T_i$ ($i \neq n$) replaces $r_i$ by $r_i - 2I$, and $r_{i-1}$ by $r_{i-1} + I$ and $r_{i+1}$ by $r_{i+1} + I$. An illustration is provided in Figure \ref{fig:toppling}. Applying $T_i$ when $r_i > T$ is called a \emph{legal toppling}. By repeatedly applying legal topplings, all excess ``sand'' will eventually be thrown away to the sink, and the process will terminate.
In our paper, $T$ --- \emph{threshold} --- will always be a fixed constant, but $I$ --- \emph{increment} --- could be a function of the current configuration, or a random variable, or both. In the former case, we say that the model is \emph{nonabelian} --- otherwise \emph{abelian}. In the second case, we say that the model is \emph{stochastic}. The (non-stochastic) abelian sandpile theory is quite well-developed, with rich connections to other fields of mathematics --- see e.g. \cite{L10}. Other sandpile models are far less understood, especially the nonabelian ones.
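A minimal stabilization loop for this setup might look as follows (our own sketch: the sink $v_n$ is left implicit, so increments pushed past the boundary vertices are simply discarded, and the parameter \texttt{choose} encodes the toppling order).
\begin{verbatim}
def stabilize(r, T, I, choose=min):
    """Apply legal topplings to r = [r_1, ..., r_{n-1}] until stable."""
    m = len(r)
    while True:
        unstable = [i for i in range(m) if r[i] > T]
        if not unstable:
            return r
        k = choose(unstable)
        r[k] -= 2 * I
        if k - 1 >= 0:
            r[k - 1] += I
        if k + 1 < m:
            r[k + 1] += I
\end{verbatim}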
\subsection{The LLL sandpile model}
Motivated by Proposition \ref{prop:post_swap}, especially the formulas (i) -- (iii), we propose the following Algorithm \ref{alg:lllsp}, which we call the \emph{LLL sandpile model}, or LLL-SP for short.
\begin{algorithm}
\caption{The LLL sandpile model (LLL-SP)}\label{alg:lllsp}
\begin{enumerate}[1.]
\item[0.] Input: $\alpha_1, \ldots, \alpha_{n-1} \in \mathbb{R}$, $\mu_{2,1}, \ldots, \mu_{n,n-1} \in [-0.5,0.5]$, a parameter $\delta < 0.75$
\item Rewrite $r_i := \log \alpha_i$, $\mu_i := \mu_{i+1,i}$, $T := -0.5\log \delta$
\item while true, do:
\item \hspace{4mm} choose a $k \in \{1, \ldots, n-1\}$ such that $r_k > T$
\item \hspace{4mm} if there is no such k, break
\item \hspace{4mm} subtract $2\log Q_k$ from $r_k$
\item \hspace{4mm} add $\log Q_k$ to $r_{k-1}$ (if $k-1 \geq 1$) and $r_{k+1}$ (if $k+1 \leq n-1$)
\item \hspace{4mm} (re-)sample $\mu_{k-1}, \mu_k, \mu_{k+1}$ uniformly from $[-0.5,0.5]$
\item Output: real numbers $r_1, \ldots, r_{n-1} \leq T$
\end{enumerate}
\end{algorithm}
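In code, LLL-SP amounts to only a few lines. The following minimal Python sketch (sequential choice of $k$ by default; the variable names are ours) implements Algorithm \ref{alg:lllsp}; passing \texttt{random.choice} as \texttt{choose} gives the random rule considered below.
\begin{verbatim}
import math, random

def lll_sp(r, delta=0.74, choose=min):
    """LLL-SP on the profile r = [r_1, ..., r_{n-1}]."""
    T = -0.5 * math.log(delta)
    m = len(r)
    mu = [random.uniform(-0.5, 0.5) for _ in range(m)]
    while True:
        unstable = [i for i in range(m) if r[i] > T]
        if not unstable:
            return r
        k = choose(unstable)
        logQ = -0.5 * math.log(math.exp(-2 * r[k]) + mu[k] ** 2)
        r[k] -= 2 * logQ                      # Step 5
        if k - 1 >= 0:                        # Step 6
            r[k - 1] += logQ
        if k + 1 < m:
            r[k + 1] += logQ
        for j in (k - 1, k, k + 1):           # Step 7: resample the mu's
            if 0 <= j < m:
                mu[j] = random.uniform(-0.5, 0.5)
\end{verbatim}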
The only difference between LLL (Algorithm \ref{alg:lll}) and LLL-SP (Algorithm \ref{alg:lllsp}) lies in the way in which the $\mu$'s are replaced after each swap or topple. Our experimental results below demonstrate that this change hardly causes any difference in their behavior. A theoretical perspective is discussed at the end of this section.
The increment $I = \log Q_i = -\frac{1}{2}\log (e^{-2r_i} + \mu^2_i)$ is not as unnatural as it might seem --- see Figure \ref{fig:incr}. The dashed lines there represent the graph of
\begin{equation*}
I_\mu(r) = \begin{cases} r &\mbox{if $r \le -\log \mu$} \\ -\log \mu &\mbox{otherwise} \end{cases}
\end{equation*}
for comparison.
The decision to sample $\mu_i$'s uniformly is largely provisional, though some post hoc justification is provided in Figure \ref{fig:museq}. If desired, one could refine the model by adopting part of Proposition \ref{prop:post_swap} for updating $\mu_i$.
\begin{figure}
\centering
\includegraphics[scale=0.2]{increment}
\caption{Graphs of $\log Q_i$ as a function of $r_i$, for $\mu = 0.01, 0.1, 0.2, 0.3, 0.4, 0.5$, from top to bottom. The graph corresponding to $\mu = 0.5$ crosses the $x$-axis at $x = T \approx 0.1438$. } \label{fig:incr}
\end{figure}
\subsection{Numerical comparisons}
For each dimension $n = 80, 100, 120$, we ran LLL and LLL-SP 5,000 times with the same set of input bases of determinant $\approx 2^{10n}$, generated using the standard method suggested in Section 3 of \cite{NS06}. We used fpLLL (\cite{FPLLL}) for the LLL algorithm. We remind the reader that we have used the Siegel variant here.
In addition, we also ran the same experiment with the following two other choice algorithms, to see how they affect the outcome:
\begin{itemize}
\item \emph{random}: randomly and uniformly choose an index from those on which swapping/toppling is available, and swap/topple on that index.
\item \emph{greedy}: swap/topple on the index with the greatest increment $\log Q_k$.
\end{itemize}
Figure \ref{fig:output} shows the average shape of the output bases and configurations by LLL and LLL-SP. One easily observes that the algorithms yield nearly indistinguishable outputs (except possibly for the greedy; see the discussion below). In particular, since RHF can be computed directly from the $r_i$'s by the formula
\begin{equation} \label{eq:rhf}
\mbox{RHF} = \exp\left(\frac{1}{n^2}\sum_{i=1}^{n-1}(n-i)r_i\right),
\end{equation}
we expect both to yield about the same RHF. Indeed, Table \ref{table:rhf} and Figure \ref{fig:RHFdist} show that the RHF distributions of LLL and LLL-SP are in excellent agreement (again except for greedy, for which the averages differ by $\approx 0.0011$).
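For reference, \eqref{eq:rhf} translates into the following small Python helper, which can be applied to the output profile of either algorithm (a sketch with our own naming).
\begin{verbatim}
import math

def rhf(r):
    """Root Hermite factor from the profile r = [r_1, ..., r_{n-1}]."""
    n = len(r) + 1
    return math.exp(sum((n - i) * ri for i, ri in enumerate(r, 1)) / n ** 2)
\end{verbatim}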
\begin{figure}
\centering
\includegraphics[scale=0.30]{80s} \includegraphics[scale=0.30]{80r} \includegraphics[scale=0.30]{80g}
\includegraphics[scale=0.30]{100s} \includegraphics[scale=0.30]{100r} \includegraphics[scale=0.30]{100g}
\includegraphics[scale=0.30]{120s} \includegraphics[scale=0.30]{120r} \includegraphics[scale=0.30]{120g}
\caption{Average output of LLL (orange square) and LLL-SP (blue circle). Graphs on each column, from left to right, correspond to the original, random, and greedy choice algorithms, respectively. Graphs on each row represent the results in dimensions 80, 100, and 120, respectively. Within each graph, the horizontal and vertical axes represent the index $k$ on vertices and the average height of the piles $r_k$, respectively.
} \label{fig:output}
\end{figure}
\begin{table}[]
\begin{tabular}{r|l|l|l|l|l|l|}
\cline{2-7}
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{original} & \multicolumn{2}{c|}{random} & \multicolumn{2}{c|}{greedy} \\ \hline
\multicolumn{1}{|c|}{dim} & \multicolumn{1}{c|}{LLL} & \multicolumn{1}{c|}{LLL-SP} & \multicolumn{1}{c|}{LLL} & \multicolumn{1}{c|}{LLL-SP} & \multicolumn{1}{c|}{LLL} & \multicolumn{1}{c|}{LLL-SP} \\ \hline
\multicolumn{1}{|r|}{80} & \begin{tabular}[c]{@{}l@{}}1.0276\\ 0.00218\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0273\\ 0.00223\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0268\\ 0.00206\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0264\\ 0.00209\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0267\\ 0.00197\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0256\\ 0.00197\end{tabular} \\ \hline
\multicolumn{1}{|r|}{100} & \begin{tabular}[c]{@{}l@{}}1.0285\\ 0.00182\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0282\\ 0.00183\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0277\\ 0.00172\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0272\\ 0.00177\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0276\\ 0.00161\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0265\\ 0.00167\end{tabular} \\ \hline
\multicolumn{1}{|r|}{120} & \begin{tabular}[c]{@{}l@{}}1.0291\\ 0.00157\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0288\\ 0.00160\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0283\\ 0.00151\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0279\\ 0.00153\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0282\\ 0.00142\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.0271\\ 0.00142\end{tabular} \\ \hline
\end{tabular}
\caption{Averages and standard deviations of RHF, rounded up to appropriate digits.}
\label{table:rhf}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.30]{g120s}
\includegraphics[scale=0.30]{g120r} \includegraphics[scale=0.30]{g120g}
\caption{Probability distributions of RHFs of LLL and LLL-SP in dimension 120.} \label{fig:RHFdist}
\end{figure}
The reason that LLL and LLL-SP differ slightly with respect to the greedy choice algorithm has to do with the fact that, unlike the original and the random rules, greedy ``probes'' one step ahead before making its toppling choice, which has an effect on the $\mu_i$-distribution --- indeed, see Figure \ref{fig:museq} below. We expect this difference to disappear if LLL-SP is modified to simulate the $\mu_i$-distribution more carefully, using parts of Proposition \ref{prop:post_swap}. Still, it is remarkable that the difference in the average RHF, $\approx 0.0011$, is independent of dimension, and that the standard deviations remain nearly identical.
The resemblance of the two algorithms runs deeper than the level of output statistics. See Figures \ref{fig:seq} and \ref{fig:museq}, which depict the plots of the points $(i, Q_{k(i)}^{-2})$ and $\mu_{k(i)+1, k(i)} = \mu_{k(i)}$ as we ran LLL and LLL-SP in dimension 80, where $k(i)$ is the index $k$ chosen at the $i$-th iteration. The two plots are again indistinguishable, yet more evidence that LLL and LLL-SP possess nearly identical dynamics. Although too cumbersome to present here, we have the same results in higher dimensions as well.
\begin{figure}
\centering
\includegraphics[scale=0.47]{pot_lll80s} \includegraphics[scale=0.47]{pot_sp80s}
\includegraphics[scale=0.47]{pot_lll80r} \includegraphics[scale=0.47]{pot_sp80r}
\includegraphics[scale=0.47]{pot_lll80g} \includegraphics[scale=0.47]{pot_sp80g}
\caption{Plots of $i$ versus $Q_{k(i)}^{-2}$ during a typical run of LLL(left) and LLL-SP(right), with respect to the sequential, random, and greedy choice algorithms, respectively from top to bottom.} \label{fig:seq}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.47]{mu_lll80s} \includegraphics[scale=0.47]{mu_sp80s}
\includegraphics[scale=0.47]{mu_lll80r} \includegraphics[scale=0.47]{mu_sp80r}
\includegraphics[scale=0.47]{mu_lll80g} \includegraphics[scale=0.47]{mu_sp80g}
\caption{Plots of $i$ versus $\mu_{k(i)}$ for LLL(left) and LLL-SP(right), with respect to the sequential, random, and greedy choice algorithms, respectively from top to bottom.} \label{fig:museq}
\end{figure}
\subsection{Discussion}
The only difference between LLL and LLL-SP has to do with the way they update the $\mu_k (= \mu_{k+1,k})$'s. For LLL-SP, the $\mu_k$-variables are i.i.d. and independent of the $r_k$-variables. For LLL, $\mu_k$ is determined by a formula involving its previous value and $r_k$. However, it seems plausible that the $\mu_k$'s in LLL form a \emph{mixing} stochastic process, which roughly means that they are close to being i.i.d., in the sense that a small perturbation in $\mu_k$ causes the next value $\mu_k^{new}$ to become nearly unpredictable. Numerically, this is robustly supported by the graphs at the bottom of Figure \ref{fig:museq}. Theoretically, our intuition comes from the fact that the formula $\mu_{k}^{new} = \mu_{k}/(\mu_k^2 + \alpha_k^{-2})$ (mod 1) is an approximation of the Gauss map $x \mapsto \{1/x\}$, which is well-known to have excellent mixing properties (see e.g. Rokhlin (\cite{R61}) and the references in Bradley (\cite{B05}) for more recent works).
The above discussion can be summarized and formulated in the form of a mathematical conjecture, which can then be considered a rigorous version of the statement ``LLL is essentially a sandpile model.'' Below is our provisional formulation of such a conjecture.
\begin{conjecture} \label{conj:mu}
Choose a distribution $\mathcal{D}$ on the set of bases in $\mathbb{R}^n$, to be used to sample inputs for LLL. Define $k(i)$, as earlier, to be the index of the pile toppled at the $i$-th iteration. Then $k(i)$ is a random variable depending on the input distribution, and so is $\mu_{k(i)}$. If $\mathcal{D}$ is ``generic,'' then
\begin{enumerate}[(i)]
\item $(|\mu_{k(i)}|)_{i = 1, 2, \ldots}$ is strongly mixing as a stochastic process. (Roughly speaking, this means $|\mu_{k(N)}|$ is nearly independent of $|\mu_{k(M)}|$ when $N-M$ is large; see the text \cite{B95} for a precise definition.)
\item the probability density of each $|\mu_{k(i)}|$ is contained in a compact subset $S$ of the set of all probability density functions on $[0, 0.5]$ with respect to the $L^\infty$-norm. $S$ is independent of the dimension, the input distribution, or any other variable.
\end{enumerate}
\end{conjecture}
The design intent of Conjecture \ref{conj:mu} is that whatever is provable for LLL-SP should also be provable for LLL by an analogous argument (e.g. the theorems in Section 4), while retaining flexibility as to what the correct distribution of $\mu_k$ might be. It is to be updated as our understanding of LLL and LLL-SP progresses, in the hope that Conjecture \ref{conj:mu} may come within reach at some point.
\section{Abelian sandpile analogue of LLL, and its RHF gap}
The drawback of LLL-SP as a model of LLL is that, being nonabelian, it is difficult to study theoretically; indeed, there are few proved results on nonabelian sandpile models. In this section, we introduce a certain abelian stochastic sandpile model that we named SSP, which is in a sense an abelianized version of LLL-SP. At first glance, SSP seems rather removed from LLL, but the shapes of their average output are surprisingly similar. Moreover, SSP admits a mathematical theory that is analogous to that of ASM due to Dhar (\cite{D90}, see also \cite{D06}). This allows us to prove statements such as the average-worst case gap in RHF (Theorem \ref{prop:ssprhf}), suggesting that SSP may be a good starting point for investigating the RHF distributions of reduction algorithms.
We again mention that this section is in fact an exposition of a concurrently written work (\cite{Kprep}) by SK and YW, slightly rearranged to emphasize the connection to LLL. Although we transferred much of our work on SSP to a separate paper in order to properly treat it from the physical perspective, we offer a detailed summary here for the completeness of our narrative.
\subsection{Background on ASM}
To facilitate the reader's understanding, we briefly describe the abelian sandpile model (ASM), the most basic of sandpile models, and the parts of its theory that are relevant to us. Its pseudocode is provided in Algorithm \ref{alg:asm}. See Dhar (\cite{D90}), where the theory is originally developed, or the presentation slides by Perkinson (\cite{P14}).
\begin{algorithm}
\caption{Abelian sandpile model (ASM)}\label{alg:asm}
\begin{enumerate}[1.]
\item[0.] Input: $r_1, \ldots, r_{n-1} \in \mathbb{Z}$, parameters $T, I \in \mathbb{Z}$, $0 < I \leq T/2$
\item while true, do:
\item \hspace{4mm} choose a $k \in \{1, \ldots, n-1\}$ such that $r_k > T$
\item \hspace{4mm} if there is no such k, break
\item \hspace{4mm} subtract $2I$ from $r_k$
\item \hspace{4mm} add $I$ to $r_{k-1}$ and $r_{k+1}$
\item Output: integers $r_1, \ldots, r_{n-1} \leq T$
\end{enumerate}
\end{algorithm}
The important ASM concepts for us are those of the \emph{recurrent configurations} and the \emph{steady state}. Let $M$ be the set of all stable (non-negative) configurations of ASM. Given two configurations $r, s \in M$, we have the operation
\begin{equation*}
r \oplus s = \mbox{(stabilization of $r+s$)},
\end{equation*}
which is the outcome of ASM with input being the configuration $r+s$ defined by $(r+s)_i = r_i + s_i$ for each $i$. Unlike LLL, the output of ASM is independent of the choice of toppling order --- hence the term ``abelian'' --- and thus $\oplus$ is well-defined. This operation makes $M$ into a commutative monoid.
Define $g \in M$ to be the configuration with $g_1 = 1$ and $g_2 = \ldots = g_{n-1} = 0$. We call $r \in M$ \emph{recurrent} if
\begin{equation*}
\underbrace{g \oplus \ldots \oplus g}_{m\ \mathrm{times}} = r \quad \mbox{for infinitely many $m$}.
\end{equation*}
One can actually take any $g$ for which at least one $g_i$ is coprime to the g.c.d. of $T$ and $I$ (this condition merely avoids concentration on a select few congruence classes). Equivalently, with LLL in mind, we can also say that $r$ is recurrent if infinitely many input configurations stabilize to $r$. It is a theorem that the set $R$ of recurrent configurations of ASM forms a group under $\oplus$.
One may ask: given an $r \in R$, what is the proportion of $m \in \mathbb{Z}_{>0}$ satisfying $g \oplus \ldots \oplus g\, (m\ \mathrm{times}) = r$? It turns out that the answer is $1/|R|$ for every $r \in R$; that is, each element of $R$ appears with equal frequency. This distribution $\rho$ on $R$ is called the \emph{steady state} of the system, and the phrase \emph{average output shape}, which we have been using in the empirical sense, acquires a formal definition as $\sum_{r \in R} \rho(r)r$. The steady state is unique in the following sense: choose an $r \in R$ according to $\rho$ and take any configuration $s$; then $r \oplus s$ is again distributed according to $\rho$.
\subsection{Introduction to SSP}
A pseudocode for SSP is provided in Algorithm \ref{alg:ssp}. This is exactly the same as ASM, except for Step 4, which determines the amount of sand to be toppled at random. The decision to sample from the uniform distribution is an arbitrary one; we could have chosen any compactly supported distribution, and much of the discussion below would still apply.
\begin{algorithm}
\caption{Stochastic sandpile (SSP)}\label{alg:ssp}
\begin{enumerate}[1.]
\item[0.] Input: $r_1, \ldots, r_{n-1} \in \mathbb{Z}$, parameters $T, I \in \mathbb{Z}$, $0 < I \leq T/2$
\item while true, do:
\item \hspace{4mm} choose a $k \in \{1, \ldots, n-1\}$ such that $r_k > T$
\item \hspace{4mm} if there is no such $k$, break
\item \hspace{4mm} sample $\gamma$ uniformly from $\{1, \ldots, I\}$
\item \hspace{4mm} subtract $2\gamma$ from $r_k$
\item \hspace{4mm} add $\gamma$ to each of $r_{k-1}$ and $r_{k+1}$
\item Output: integers $r_1, \ldots, r_{n-1} \leq T$
\end{enumerate}
\end{algorithm}
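A minimal Python sketch of Algorithm \ref{alg:ssp}, under the same assumed open boundary conditions as the ASM sketch above; the random increment is the only change, and averaging the outputs over random large inputs gives a crude version of the shape in Figure \ref{fig:sspoutput}.
\begin{verbatim}
import random

def stabilize_ssp(r, T, I):
    # Like ASM, except each toppling moves a uniformly random amount
    # gamma in {1, ..., I} instead of the fixed increment I.
    r = list(r)
    while True:
        unstable = [k for k in range(len(r)) if r[k] > T]
        if not unstable:
            return r
        k = random.choice(unstable)
        gamma = random.randint(1, I)  # the only change from ASM
        r[k] -= 2 * gamma
        if k > 0:
            r[k - 1] += gamma
        if k < len(r) - 1:
            r[k + 1] += gamma

# Crude average output shape over random large inputs (parameters are
# illustrative and much smaller than those of the figure).
n1, T, I = 30, 40, 20
trials = 100
avg = [0.0] * n1
for _ in range(trials):
    out = stabilize_ssp([random.randint(2 * T, 4 * T) for _ in range(n1)], T, I)
    avg = [a + o / trials for a, o in zip(avg, out)]
print([round(a) for a in avg])  # roughly flat in the middle, lower at the ends
\end{verbatim}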
The average output shape of this stochastic sandpile model (SSP) is shown in Figure \ref{fig:sspoutput}. Figure \ref{fig:sspoutput} shares all the major characteristics of Figure \ref{fig:output}: flat in the middle, and diminishing at both ends. In the cryptographic literature these features have been referred to, respectively, as the geometric series assumption (GSA) and its failure at the boundaries. In Section 5, we will see that finite-size scaling theory provides a far more quantitatively robust description of the output shape.
\begin{figure}
\centering
\includegraphics[scale=0.6]{ssp}
\caption{Average output of SSP, $n = 100$, $I = 200$ and $T = 400$.} \label{fig:sspoutput}
\end{figure}
\subsection{Mathematical properties of SSP}
A mathematical theory of SSP closely analogous to that of ASM has recently been developed (\cite{Kprep}), largely motivated by the experimental result of the previous section. Every aspect of the above-mentioned ASM theory carries over to the SSP theory, except that, due to the stochastic nature of SSP, one works with distributions on the set of configurations instead of individual configurations. For configurations $r_1, \ldots, r_k$ and $p_i \in (0,1]$ such that $p_1 + \ldots + p_k = 1$, we write
\begin{equation} \label{eq:sspelt}
\sum_{i=1}^k p_i[r_i]
\end{equation}
to represent a distribution that assigns probability $p_i$ to the configuration $r_i$. For instance, if $r$ is a configuration unstable at vertex $k$, and if $v_k = (0, \ldots, -1, 2, -1, \ldots, 0)$ with $2$ in the $k$-th entry, then for the toppling operator $T_k$ we have
\begin{equation} \label{eq:ssptopple}
T_k[r] = \sum_{\gamma=1}^I \frac{1}{I}[r - \gamma v_k].
\end{equation}
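For instance, with $I = 2$ a single toppling splits the mass evenly between the two possible increments,
\begin{equation*}
T_k[r] = \tfrac{1}{2}[r - v_k] + \tfrac{1}{2}[r - 2v_k],
\end{equation*}
so a pure configuration becomes a mixed one after one toppling.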
We say a configuration of form \eqref{eq:sspelt} is \emph{mixed} if $k \geq 2$ and \emph{pure} otherwise, \emph{stable} if all $r_i$'s are stable, and \emph{nonnegative} if all $r_i$'s are nonnegative.
The most important property of SSP is that, like ASM, it possesses a unique steady state, that is, a (generally mixed) configuration $g$ such that
\begin{equation*}
g \oplus f = g
\end{equation*}
for any nonnegative $f$. It is clear that if we understand the steady state, then we understand the RHF distribution. The following is easy to prove:
\begin{theorem}\label{prop:ssprhf}
The worst-case $\log\mathrm{(RHF)}$ of SSP is $T/2 + o_n(1)$. The average $\log\mathrm{(RHF)}$ of SSP is bounded from above by $T/2 - I/(2e^2) + o_n(1)$.
\end{theorem}
We note that empirically one observes $\log\mathrm{(RHF)} \approx T/2 - I/8$ on average.
\begin{proof}[Sketch (and discussion) of proof]
This is essentially Proposition 8 of \cite{Kprep}. We present a sketch of the proof for completeness.
\begin{figure}
\centering
\includegraphics[scale=0.3]{ssp_init} \vspace{3mm}
\includegraphics[scale=0.3]{ssp_move} \vspace{3mm}
\includegraphics[scale=0.3]{ssp_final} \vspace{2mm}
\caption{The parallelepiped argument.} \label{fig:par}
\end{figure}
Take an unstable (pure) configuration $r$. If $r$ is sufficiently far away from the origin in the configuration space, we must topple each and every vertex at least once --- in fact, arbitrarily many times --- in the course of stabilizing $r$. So consider $T_1T_2 \ldots T_{n-1}[r]$, where $T_i$ is the toppling operator on vertex $i$. By repeated applications of \eqref{eq:ssptopple}, $T_1T_2 \ldots T_{n-1}[r]$ is a distribution on the configuration space supported on a parallelepiped-shaped cluster, as illustrated at the top of Figure \ref{fig:par} for the case $n = 3$ and $I = 4$; the upper-right vertex of the parallelogram is $r - (1, 1, \ldots, 1)$.
Applying $T_i$ to this parallelepiped-shaped distribution amounts to ``pushing'' the parallelepiped in the direction of $i$, resulting in another parallelepiped-shaped distribution. The middle graph in Figure \ref{fig:par} illustrates this process, by indicating with x marks the outcome of applying $T_1$ to the original distribution (assuming that the horizontal axis represents $r_1$). Repeating, we eventually reach the situation as in the bottom of Figure \ref{fig:par}, where none of the $T_i$ would preserve the shape of the parallelepiped, since $(T, T, \ldots, T)$ is already a stable configuration and thus $T_i$ leaves it there. From this point on, the action of $T_i$ can no longer be easily described.
However, we claim that, for any $r$ sufficiently far from the origin, the distribution on the parallelepiped obtained by the time the upper-right corner reaches $(T, \ldots, T)$ is arbitrarily close to a certain limiting distribution $\wp$. To see this, consider the action of $T_i$ on the distribution on the parallelepiped, forgetting where the parallelepiped is located in the configuration space. One notices that each $T_i$ then acts as a linear operator on the space of such distributions. Simultaneously diagonalizing all the $T_i$'s --- possible because they pairwise commute --- one finds that $1$ is the largest eigenvalue, of multiplicity one, and that its corresponding eigenvector is $\wp$. Upon repeated applications of the $T_i$'s, the components corresponding to the lesser eigenvalues converge to zero, proving the claim.
(This is actually the proof that SSP has a unique steady state.)
In fact, $\wp$ can be computed explicitly, and using it we can show that the maximum point density of the steady state occurs at $(T, \ldots, T)$, with density $\approx (I/2)^{-(n-1)}$. This is enough to deduce a nontrivial upper bound on the average RHF, as follows. Estimate the number $N(\alpha)$ of stable configurations whose $\log\mbox{(RHF)}$ is greater than $\alpha$, and take $\alpha$ such that $N(\alpha) \cdot (I/2)^{-(n-1)}$ vanishes as $n \rightarrow \infty$. It turns out we can choose $\alpha = T/2 - I/(2e^2)$.
\end{proof}
There are a couple of difficulties in directly applying the same idea to LLL or LLL-SP. For instance, because the increment depends on the $r_i$'s for those systems, the effect of $T_i$ is not as neat as illustrated in Figure \ref{fig:par}. It would push the side of the parallelepiped with ``uneven force,'' skewing the shape of the parallelepiped and the distribution lying on it.
This makes proving the existence of the steady state for LLL or LLL-SP difficult.
However, for the purpose of bounding the average RHF away from the worst-case, all we need to show is that the maximum density of the output distribution cannot be too large. This seems feasible yet quite vexing; we state it as a conjecture below for future reference. As in the SSP case, we expect that the maximum density is attained on the upper-right corner.
\begin{conjecture} \label{conj:mass}
For a generic distribution $\mathcal{D}$ on the set of bases of $\mathbb{R}^n$, the probability density function of the corresponding output distribution $\mathcal{D}^\circ$ of LLL (or LLL-SP) is bounded from above by a constant $C$ that depends only on $n$.
\end{conjecture}
It may also be interesting to try to deduce other statements on the RHF of SSP, e.g. a lower bound on the average RHF, or why the RHF distribution appears to be Gaussian, as in Figure \ref{fig:RHFdist}.
\section{Regarding time complexity}
Although expanding the SSP theory, and Theorem \ref{prop:ssprhf} in particular, to LLL-SP seems challenging for the time being, we are able to prove some attractive statements for LLL-SP with respect to its complexity, which we present below. We also consider their extensions to LLL assuming the truth of Conjecture \ref{conj:mu}.
\subsection{A lower bound}
The theorem below gives a probabilistic \emph{lower} bound on the complexity of LLL-SP, which agrees up to a constant factor with the well-known upper bound. There are two ingredients in the proof: (i) measuring the progress of the LLL algorithm by the quantity \emph{energy}, a well-known idea from the original LLL paper (\cite{LLL82}); (ii) bounding the performance of LLL-SP by a related SSP.
\begin{theorem} \label{thm:lowertime}
Consider LLL-SP, and an input configuration $r$ whose \emph{log-energy} $E = E(r)$, defined by
\begin{equation*}
E(r) = \sum_{j=1}^{n-1} \sum_{i=j}^{n-1} (n-i)r_i,
\end{equation*}
is sufficiently large --- in fact, $E > 10H$ works, with $H$ defined as in \eqref{eq:enbound}. Then the probability that LLL-SP is not terminated in $E/4$ steps is at least $1 - CE^{-1/2}$ for an absolute constant $C > 0$.
\end{theorem}
Observe that the familiar upper bound $O(n^2\log \max_i\|\textbf{b}_i\|)$ on the number of required steps is equivalent to $O(E)$, with the implicit constant depending on $\delta$.
\begin{proof}
If the algorithm has terminated, then $E$ must have become less than
\begin{equation*}
\sum_{i=1}^{n} (n-i+1)(n-i)T/2,
\end{equation*}
where $T := -\log \delta^{1/2} > 0$, which equals
\begin{equation} \label{eq:enbound}
H:= \frac{T}{6}(n^3 - n).
\end{equation}
Taking the contrapositive, we see that if $E$ is greater than \eqref{eq:enbound}, then LLL-SP has not yet terminated. At the $i$-th toppling, $E$ decreases by at most $\log \mu_{k(i)}^{-2}$, where $k(i)$ is the index of the vertex at which the $i$-th toppling occurred. After $N$ topplings, the total decrease in $E$ is therefore at most $F_N := \sum_{i=1}^N \log \mu_{k(i)}^{-2}$. In sum,
\begin{equation} \label{eq:lowertimegoal}
\mathrm{Prob}(E - F_N > H)
\end{equation}
gives a lower bound on the probability that LLL-SP has not terminated after $N$ swaps. Hence, it suffices to show that \eqref{eq:lowertimegoal} is bounded from below by $1-CE^{-1/2}$ when $N = E/4$.
The central limit theorem is applicable to $F_N$, since the $\mu_{k(i)}$ are i.i.d.
More precisely, we apply the Berry--Esseen theorem, which asserts the following. Suppose we have i.i.d. random variables $X_1, X_2, \ldots$ such that $m = \mathbb{E}(X_1)$, $\sigma = (\mathbb{E}(X_1^2) - \mathbb{E}(X_1)^2)^{1/2}$, and $\rho = \mathbb{E}(|X_1|^3)$ are all finite. Furthermore, let $Y_N = \sum_{i=1}^N X_i$, let $G_N(x)$ be the cumulative distribution function of $Y_N$, and let $\Phi_N(x)$ be the cumulative distribution function of the normal distribution $N(Nm, N\sigma^2)$. Then for all $x$ and $N$,
\begin{equation*}
\left| G_N(x) - \Phi_N(x) \right| = O(N^{-1/2}),
\end{equation*}
where the implied constant depends on $m, \sigma, \rho$ only.
We let $X_i = \log \mu_{k(i)}^{-2}$, so that $F_N = Y_N$, and apply the Berry--Esseen theorem. It is easy to compute and check that $m, \sigma, \rho$ are all finite, e.g. $m = 2(1+\log 2) \approx 3.386$ and $\sigma = 2$. Then, for a random variable $\mathcal{N}_N \sim N(Nm, N\sigma^2)$, \eqref{eq:lowertimegoal} equals
\begin{equation*}
\mathrm{Prob}(E - \mathcal{N}_N > H)
\end{equation*}
up to an error of $O(N^{-1/2})$.
Now choose $N = E/4$, so that $\mathcal{N}_N \sim N((1+\log 2)E/2, E)$. Using Chebyshev's inequality we can prove
\begin{equation*}
\mathrm{Prob}(\mathcal{N}_N \geq 0.9E) \leq O(E^{-1}),
\end{equation*}
where the implied constant is absolute. Thus if $E$ is large enough so that $E - H > 0.9E$, we have that \eqref{eq:lowertimegoal} is at least $1 - CE^{-1/2}$ for some $C > 0$, as desired.
\end{proof}
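The moment computations in the proof are easy to check numerically; a minimal sketch (the values of $E$ and $H$ below are illustrative, chosen so that $E > 10H$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Moments of X_i = log mu^{-2} with mu uniform on [0, 1/2].
mu = rng.uniform(0.0, 0.5, size=10**6)
X = np.log(mu ** -2.0)
print(X.mean(), X.std())  # ~ 2(1 + log 2) = 3.386 and ~ 2

# Direct estimate of Prob(E - F_N > H) with N = E/4 topplings.
E, H = 4000.0, 300.0
N = int(E / 4)
F_N = np.log(rng.uniform(0.0, 0.5, size=(2000, N)) ** -2.0).sum(axis=1)
print(np.mean(E - F_N > H))  # very close to 1, as the theorem asserts
\end{verbatim}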
\begin{remark}
1. We can use the same idea to obtain a lower bound on the average RHF of LLL-SP, but the bound turns out to be slightly less than $1$, which is useless in our context.
2. There exists a central limit theorem for strongly mixing processes (\cite{B95}), and also a central limit theorem for sequences of independent but non-identically distributed random variables (e.g. the Lyapunov CLT). Conjecture \ref{conj:mu} states that the $|\mu_{k(i)}|$ of LLL are strongly mixing (weaker than independent) and non-identically distributed (though contained in a compact set). We do not know whether there exists a central limit theorem that applies in this context, though we suspect that there should be.
\end{remark}
\subsection{The optimal LLL problem}
The optimal LLL problem (see e.g. \cite{A00}) asks whether LLL with the optimal parameter $\delta = 3/4$ terminates in polynomial time. The following theorem, while crude, shows that this is true for LLL-SP with arbitrarily high probability.
\begin{theorem} \label{thm:optimal}
For any $\eta > 0$ small, LLL-SP with $\delta = 3/4$ terminates after $O_\eta(E)$ steps with probability $1 - \eta$.
\end{theorem}
\begin{proof}
Write $\mu$ for a random variable uniformly distributed in $[0, 1/2]$. In the case $\delta < 3/4$, the complexity bound of LLL is established via the observation that, with each swap, the energy $E$ decreases by at least $c := \log(\delta + 1/4)^{-1} > 0$, and thus the algorithm must terminate within $E/c$ steps. Similarly, in the case $\delta = 3/4$, we show that the minimum energy decrease $\log(\delta + \mu^2)^{-1}$ is strictly bounded away from zero almost all the time.
(If $I$ is the increment for a given toppling operation, it is easy to show that the energy decreases by $2I$ after such a step.)
Choose a small $\varepsilon > 0$, and let $p = \mathrm{Prob}(\mu \leq (1-\varepsilon)/2) = 1- \varepsilon$. Let $d = \log(3/4 + p^2/4)^{-1}$, which is the minimum possible change in energy provided $\mu \leq (1-\varepsilon)/2$. Now take $10E/d$ samples $\mu_1, \mu_2, \ldots$ of $\mu$ (there is nothing special about the constant $10$ here). If at least $E/d$ of those samples are less than $(1-\varepsilon)/2$, LLL-SP would terminate. Proving that this probability is arbitrarily close to $1$ is now a simple exercise with the binomial distribution.
\end{proof}
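The final step is a routine binomial tail computation; a minimal sketch with illustrative parameters (here $E/d = 100$):
\begin{verbatim}
from math import comb

def prob_at_least(k, n, p):
    # P(Bin(n, p) >= k), computed exactly.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Each sample satisfies mu <= (1 - eps)/2 with probability p = 1 - eps;
# among 10E/d samples we need at least E/d such successes.
eps = 0.1
m = 100  # E/d, illustrative
print(prob_at_least(m, 10 * m, 1 - eps))  # essentially 1
\end{verbatim}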
Observe that the above proof carries over to the case of LLL assuming Conjecture \ref{conj:mu}; the compactness condition on the $\mu_{k(i)}$ distributions allows control on the probability that they are all simultaneously bounded away from $(1-\varepsilon)/2$.
\section{Finite-size scaling theory}
Finite-size scaling (FSS) is a theory in statistical physics used to study critical phenomena. Such phenomena are often studied via models on finite graphs, analyzing a quantity $\chi$ of interest as the system size $L$ --- the number of vertices of the graph --- goes to infinity. Roughly speaking, FSS asserts that, upon a proper rescaling of the variables, $\chi$ becomes nearly independent of $L$ for $L \gg 0$; FSS also provides a description of the asymptotic behavior of $\chi$ as $L \rightarrow \infty$.
For sandpile models, FSS implies asymptotic formulas that would be particularly interesting if they also applied to the LLL algorithm, as discussed in Section 1.2 above. Although it would be inappropriate to say ``apply FSS to LLL,'' as LLL has no underlying critical phenomenon, the formulas themselves, isolated from the context of the original theory, can certainly be tried. We ran a long experiment on LLL analogous to the one in Section III of Grassberger, Dhar, and Mohanty (\cite{GDM16}), in which the authors employ FSS to study the Oslo model, a sandpile model with a toppling rule entirely different from the ones we have considered so far. This section presents the results of this experiment.
\subsection{A brief introduction to FSS}
We start with a brief introduction to FSS and its predictions that are pertinent to our work. For readers who are unfamiliar with physics but wish to gain some quick basic knowledge, we recommend browsing the theory of one- and two-dimensional Ising models. Also see Section III of \cite{GDM16}, which states the formulas \eqref{eq:1st}-\eqref{eq:3rd} that we will introduce below. For more serious general treatises on FSS, see \cite{C88} or \cite{G18}.
In the theory of critical phase transitions in physics --- e.g. the transition of a magnetic material from a magnetized to an unmagnetized state --- one finds that the quantity $\chi$ of interest, for example the magnetic susceptibility, diverges near the critical point, or critical temperature; see Figure \ref{fig:fss1}. Furthermore, this divergence is often described by a power law, e.g.
\begin{equation*}
\chi \sim \frac{C}{(\epsilon-\epsilon_\mathrm{crit})^\gamma} + \frac{C_1}{(\epsilon-\epsilon_\mathrm{crit})^{\gamma_1}} + \frac{C_2}{(\epsilon-\epsilon_\mathrm{crit})^{\gamma_2}} + \ldots, \quad \mbox{with $\gamma > \gamma_1 > \gamma_2 > \ldots$,}
\end{equation*}
where $\epsilon = \epsilon(T)$ is an appropriate normalization of the temperature $T$, and $\epsilon_\mathrm{crit}$ is the normalized critical temperature. The theory of critical phase transitions is a systematic understanding of these exponents and the relations between them, mainly through the apparatus of the renormalization group (see \cite{G18}).
However, this kind of divergence only occurs for systems that are much larger than the size of atoms. For equilibrium systems such as the Ising model, this is reflected in the partition function $Z(L, \beta)$ of the system, where $L$ is the system size and $\beta$ is the inverse temperature: for any finite $L$, the partition function is a smooth function of $\beta$, and there are no singularities, hence no phase transitions. In practice, if the system has a large but finite size $L$, the singularities are ``rounded off'' by an amount that decreases as $L$ becomes larger, as illustrated on the left side of Figure \ref{fig:fss2}.
\begin{figure}
\includegraphics[scale=0.4]{fss1}
\caption{$\chi$ (when $L = \infty$) as a function of normalized temperature $\epsilon$, diverging near $\epsilon_\mathrm{crit}$.} \label{fig:fss1}
\end{figure}
\begin{figure}
\includegraphics[scale=0.4]{fss2}
\caption{Left: $\chi(L, \epsilon)$ for different system sizes $L_1 > L_2 > L_3$. Right: upon a suitable scaling of the coordinates, $\chi$ becomes nearly identical for any $L$.} \label{fig:fss2}
\end{figure}
Remarkably, it is found that these curves of $\chi(L, T)$ for different $L$ near the critical point can be made to collapse onto each other by scaling both the $x$- and $y$-axes by factors depending on $L$, so that one has
\begin{equation*}
\chi(L,T) \sim L^af((\epsilon-\epsilon_\mathrm{crit}) L^b),
\end{equation*}
for some function $f$ and constants $a,b$ --- see Figure \ref{fig:fss2}. This scaling collapse is called \emph{finite-size scaling}. In addition, for each $\epsilon$ away from $\epsilon_\mathrm{crit}$, $\chi$ converges to a finite value as $L \rightarrow \infty$; from this it must be that
\begin{equation*}
f(x) \sim \frac{1}{x^{a/b}} \mbox{ for $x$ near $\infty$.}
\end{equation*}
Hence, for each $\epsilon \neq \epsilon_\mathrm{crit}$, $\chi \sim (\epsilon-\epsilon_\mathrm{crit})^{-a/b}$ as $L \rightarrow \infty$. On the other hand, by making $T$ approach the critical temperature at a rate such that $(\epsilon-\epsilon_\mathrm{crit}) L^b$ is large but constant, we obtain $\chi(L, \epsilon_{\mathrm{crit}}) \sim L^a$ for $L \gg 0$. These relations can be used to study $\chi(\infty, T)$ by looking at $\chi(L, T)$ for finite values of $L$, for example.
In non-equilibrium systems such as sandpile models, the temperature is no longer a parameter that an external observer controls; rather, as the dynamics unfolds, the system approaches the critical temperature on its own (hence the term \emph{self-organized critical} (SOC) systems, as they are sometimes called). Therefore, the above story needs some tweaking, but similar statements hold. For sandpile models, one interprets $\epsilon = z_L$ and $\epsilon_\mathrm{crit} = z_c$, where $z_L = \mathbb{E}(z(r))$ is the average of $z(r):=(1/L)\sum_i r(i)$ taken over the steady state of the size-$L$ system, and $z_c = \lim_{L \rightarrow \infty} z_L$ is the critical ``temperature.'' Then one has the relation
\begin{equation} \label{eq:1st}
z_c = z_L + \frac{C}{L^{\sigma}} + \mbox{(smaller errors)}
\end{equation}
for some constants $C$ and $\sigma$, akin to what one would obtain by putting together the two relations $\chi \sim (\epsilon-\epsilon_\mathrm{crit})^{-a/b}$ and $\chi \sim L^a$ discussed earlier. Moreover, FSS also predicts that
\begin{equation} \label{eq:2nd}
\mathrm{Var}(z(r)) \sim L^{-2\sigma}
\end{equation}
with the same $\sigma$. In the literature, for each system, the letter $\sigma$ is reserved to denote the constant such that \eqref{eq:1st} or \eqref{eq:2nd} holds.
There also exists an FSS theory of boundary behavior --- see e.g. Diehl (\cite{D97}). In the case of the Ising model, write $m(T)$ for the bulk magnetization at temperature $T$ and $m(i,T)$ for the mean magnetization at distance $i$ from the surface. Then, for system size $L \gg 0$, there is a relation
\begin{equation*}
m(T) - m(i,T) \sim i^{-a}f((\epsilon - \epsilon_\mathrm{crit})^b i)
\end{equation*}
for some exponents $a$ and $b$, where $f(x) \sim \exp(-cx)$ for a constant $c>0$ and $x$ large. Similarly, for sandpile models, the average of the $i$-th pile $r(i)$ satisfies
\begin{equation} \label{eq:3rd}
z_c - \mathbb{E}(r(i)) \sim i^{-a_1} \mbox{ or } (L+1-i)^{-a_2},
\end{equation}
for some $a_1$ and $a_2$, depending on whether $i$ is closer to $1$ or to $L$. For abelian models, thanks to their inherent left-right symmetry, it can be argued theoretically and experimentally that $a_1 = a_2 = \sigma$. For nonabelian models, it is possible that $a_1 \neq a_2$.
Recall that the root Hermite factor (RHF) of a configuration $r$ is defined as
\begin{equation}\label{eq:logrhf}
\log \mbox{RHF$(r)$} = \frac{1}{(L+1)^2}\sum_{i=1}^L (L+1-i)r(i).
\end{equation}
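As a quick sanity check of \eqref{eq:logrhf}, the flat configuration $r \equiv T$ recovers the worst-case value $\log\mathrm{RHF} = TL/(2(L+1)) \approx T/2$; a one-line computation:
\begin{verbatim}
import numpy as np

def log_rhf(r):
    # log RHF of a configuration r = (r(1), ..., r(L)).
    L = len(r)
    i = np.arange(1, L + 1)
    return float(np.sum((L + 1 - i) * np.asarray(r))) / (L + 1) ** 2

T, L = 0.1, 299
print(log_rhf(np.full(L, T)))  # ~ T/2 = 0.05
\end{verbatim}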
Write $y_L$ for the (empirical) average of the $\log\mathrm{(RHF)}$ of LLL in dimension $n = L+1$, and $y_c = \lim_{L \rightarrow \infty} y_L$. The analogues of \eqref{eq:1st} and \eqref{eq:2nd} for the RHF then become
\begin{align}
& y_c = y_L + \frac{D}{L^\sigma} + \mbox{(smaller errors)} \tag{\ref{eq:1st}'} \label{eq:1st'} \\
& \mathrm{Var}(y_L) \sim L^{-2\sigma}. \tag{\ref{eq:2nd}'} \label{eq:2nd'}
\end{align}
\subsection{Design}
We ran extensive experiments on dimensions $100, 150, 200, 250, 300$, with at least 50,000 iterations for each dimension, to test the formulas \eqref{eq:1st}, \eqref{eq:1st'}, \eqref{eq:2nd}, \eqref{eq:2nd'}, \eqref{eq:3rd} on the LLL algorithm. It was quite a sizable experiment, involving more than 300 cores for over four months. Unlike in the previous sections, we use the original LLL here, with $\delta = 0.999$.
We tried a couple of different methods to generate random bases: the same method as in Section 2 above, with determinant $2^{10n}$ and also with determinant $2^{5n}$, and knapsack-type bases. We found that they all yield the same results in the lower dimensions, so for dimensions $\geq 200$ we only used the knapsack-type bases with parameter $20n$, which are $n \times (n+1)$ matrices of the form
\begin{equation*}
\begin{pmatrix}
x_1 & 1 & & & \\
x_2 & 0 & 1 & & \\
\vdots & \vdots & \vdots & \ddots & \\
x_n & 0 & \cdots & 0 & 1
\end{pmatrix}
\end{equation*}
where $x_1, \ldots, x_n$ are integers sampled from $[0, 2^{20n})$ uniformly.
\subsection{Average and variance of RHF}
Table \ref{table:1st} (graphically depicted in Figure \ref{fig:1st}) summarizes our data on the averages $z_L$ and $y_L$. It demonstrates that our data fit \eqref{eq:1st} and \eqref{eq:1st'} very well with $\sigma = 0.75$. Accordingly, we obtain the numerical estimates
\begin{equation} \label{eq:zy_pred1}
z_L \approx 0.0448 - 0.194L^{-3/4}, \qquad y_L \approx 0.0224 - 0.09L^{-3/4},
\end{equation}
and thus
\begin{equation} \label{eq:rhf_extrapolate1}
\mathrm{RHF_L} \approx \exp(0.0224 - 0.09L^{-3/4}) \rightarrow 1.02265\ldots \mbox{ as $L \rightarrow \infty$},
\end{equation}
which is close to, but slightly higher than, the folklore ``1.02.''
\begin{table}[]
\begin{tabular}{|l|lllll|}
\hline
dim($=L+1$) & \multicolumn{1}{c}{100} & \multicolumn{1}{c}{150} & \multicolumn{1}{c}{200} & \multicolumn{1}{c}{250} & \multicolumn{1}{c|}{300} \\ \hline
$z_L$ & 0.03866 & 0.04028 & 0.04115 & 0.04172 & 0.04211 \\ \hline
$z_L - CL^{-\sigma}$ & 0.04479 & 0.04480 & 0.04480 & 0.04480 & 0.04480 \\ \hline
$y_L$ & 0.01957 & 0.02032 & 0.02072 & 0.02098 & 0.02116 \\ \hline
$y_L - DL^{-\sigma}$ & 0.02242 & 0.02242 & 0.02241 & 0.02241 & 0.02240 \\ \hline
\end{tabular}
\caption{Results on $z_L$ and $y_L = \mathbb{E}(\log\mathrm{RHF})$, with $\sigma = 3/4$, $C = -0.194$ and $D = -0.09$.}
\label{table:1st}
\end{table}
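The constants in Table \ref{table:1st} can be reproduced by a simple least-squares fit to the tabulated $z_L$; a minimal sketch (whether $L$ is taken to be the dimension or the dimension minus one changes the fitted constants only slightly --- here we take the dimension):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

dims = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
z_L = np.array([0.03866, 0.04028, 0.04115, 0.04172, 0.04211])

def model(L, z_c, C, sigma):
    return z_c + C * L ** (-sigma)

(z_c, C, sigma), _ = curve_fit(model, dims, z_L, p0=[0.045, -0.2, 0.75])
print(z_c, C, sigma)  # roughly 0.0448, -0.194, 0.75
\end{verbatim}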
\begin{figure}
\includegraphics[scale=0.5]{Ez}
\includegraphics[scale=0.5]{Er}
\caption{Left: dimension versus $z_L + 0.194L^{-3/4}$. Right: dimension versus $y_L + 0.09L^{-3/4}$.}
\label{fig:1st}
\end{figure}
Table \ref{table:2nd} and Figure \ref{fig:2nd} show our data on the variances of $z_L$ and $y_L$. They also fit \eqref{eq:2nd} and \eqref{eq:2nd'} quite well, with the same $\sigma = 0.75$, though to a slightly lesser extent.
\begin{table}[]
\begin{tabular}{|l|lllll|}
\hline
dim($=L+1$) & \multicolumn{1}{c}{100} & \multicolumn{1}{c}{150} & \multicolumn{1}{c}{200} & \multicolumn{1}{c}{250} & \multicolumn{1}{c|}{300} \\ \hline
$V(z_L)$ & $2.24 \times 10^{-6}$ & $1.21 \times 10^{-6}$ & $7.84\times 10^{-7}$ & $5.62 \times 10^{-7}$ & $4.21\times 10^{-7}$ \\ \hline
$V(z_L)/L^{-2\sigma}$ & 0.00224 & 0.00222 & 0.00222 & 0.00222 & 0.00219 \\ \hline
$V(y_L)$ & $1.05 \times 10^{-6}$ & $5.44\times10^{-7}$ & $3.49\times10^{-7}$ & $2.45\times10^{-7}$ & $1.84\times10^{-7}$ \\ \hline
$V(y_L)/L^{-2\sigma}$ & 0.00105 & 0.00100 & 0.00099 & 0.00097 & 0.00096 \\ \hline
\end{tabular}
\caption{Results on the variances of $z_L$ and $y_L$, with $\sigma = 3/4$.}
\label{table:2nd}
\end{table}
\begin{figure}
\includegraphics[scale=0.5]{Vz}
\includegraphics[scale=0.5]{Vr}
\caption{Left: dimension versus $V(z_L)/L^{-1.5}$. Right: dimension versus $V(y_L)/L^{-1.5}$.}
\label{fig:2nd}
\end{figure}
\subsection{Boundary statistics}
Figures \ref{fig:3rd_f} and \ref{fig:3rd_b} present comparisons of our data with \eqref{eq:3rd}, with Figure \ref{fig:3rd_f} examining the left boundary (i.e. $i$ near $1$) and Figure \ref{fig:3rd_b} the right boundary (i.e. $i$ near $L$). Here we used $z_c = 0.0448$, obtained in the previous subsection.
From Figure \ref{fig:3rd_b}, on the right boundary we do find that $z_c -\mathbb{E}(r(L-i)) \sim i^{-0.75}$ for the first $10$ points or so. However, Figure \ref{fig:3rd_f}, and also the rest of the points in Figure \ref{fig:3rd_b}, make matters more subtle: on the left end, and for many points on the right end, $z_c - \mathbb{E}(r(i)) \sim i^{-1.05}$ appears to be the correct description.
\begin{figure}
\includegraphics[scale=0.5]{rifor}
\caption{$i$ versus $\log(z_c - \mathbb{E}(r(i)))$.}
\label{fig:3rd_f}
\end{figure}
\begin{figure}
\includegraphics[scale=0.5]{riback}
\caption{$i$ versus $\log(z_c - \mathbb{E}(r(L-i)))$.}
\label{fig:3rd_b}
\end{figure}
\subsection{Summary and discussions}
Typically in physics, experiments of this kind are carried out up to $L$ close to a million, if not more. An experiment of such magnitude is clearly infeasible for lattice reduction, and hence our experiments have been severely constrained from the physical perspective. In addition, our estimates of the critical exponent $\sigma$ and the other constants very likely leave much room for improvement through more extensive and elaborate numerical techniques. Despite these limitations, our experiments reveal clear patterns in the empirical output statistics of LLL, robustly described by formulas from statistical mechanics.
We obtain two particularly notable implications. First, the folklore number ``1.02'' is not too far from the limiting behavior of LLL. One could reasonably suspect that the average-worst case RHF gap is only a peculiarity of the low dimensions, and that it would disappear in the dimension limit, citing the result of \cite{KV17} for instance; but we found evidence that the gap is a real phenomenon. Second, Figures \ref{fig:3rd_f} and \ref{fig:3rd_b} provide a neat formula for the average output statistics of LLL, via an appropriate normalization of graphs such as Figure \ref{fig:output}. This is a vast refinement of the GSA, at least for the LLL algorithm. Of course, the same set of experiments can be carried out for BKZ, and our pilot experiments with BKZ-20 look promising; these results will appear in a forthcoming paper.
It remains a mystery how to explain the boundary phenomenon observed here. It is not entirely surprising for nonabelian models to behave differently at the left and right ends, but the particular shape of Figure \ref{fig:3rd_b} is, to the best of our knowledge, not often seen even in physics. It is possible that a more familiar pattern will emerge with more data.
\section{Introduction}
\label{s:intro}
Phase I dose-finding clinical trials in oncology seek to find the maximum tolerated dose (MTD) in order to obtain reliable information regarding the safety profile of a drug or a combination of drugs, its pharmacokinetics, and its mechanism of action \cite{chevret_2006}\cite{crowley_2005}. In this phase, the endpoint is the dose-limiting toxicity (DLT), which is mainly based on the National Cancer Institute (NCI) Common Toxicity Criteria for Adverse Events \cite{CTCAE_2017}. Standard algorithm-based or model-based dose-escalation methods usually aim to find the MTD while treating the entire cycle dosing as a single administration \cite{storer_1989}\cite{oquigley_1990}. Most methods assume that toxicity increases with the dose; however, estimating the relationship between toxicity and multiple doses over a cycle remains elusive, as nonlinear dose-response profiles can be observed \cite{musuamba_2017}\cite{bullock_2017}\cite{schmoor_schumacher_1992}. We hypothesize that considering the complete cycle dosage could improve treatment safety while maintaining future potential efficacy.
To account for dosage repetition over the treatment cycle, some authors have considered either the dose-schedule or the dose-regimen relationship. The National Cancer Institute defines “schedule” as “A step-by-step plan of the treatment that a patient is going to receive […] It also includes the amount of time between courses of treatment and the total length of time of treatment.” Moreover, the NCI defines “regimen” as “A treatment plan that specifies the dosage, the schedule, and the duration of treatment”. Following these definitions, we considered the dose-regimen relationship, as it includes the dosage, the repetition scheme and the duration.
For some molecules, it has been observed that, in the same patient, starting a dose-regimen with a lower lead-in dose and increasing the dose step-by-step before reaching the steady-state dose can reduce the occurrence of acute toxicities \cite{chen_2019}. However, a dose-regimen starting with higher lead-in doses can increase the efficacy.
Dose-finding trials can aim to study different dose-regimens, with the same or different total cumulative doses, to determine the most appropriate regimen supported by PK/PD profiles. Several methodological papers have addressed prospective dose- and schedule-finding methods. Braun et al., Liu and Braun, and Zhang and Braun proposed considering the time-to-toxicity rather than the usual binary outcome to optimize the dose and schedule, including the timing of administration \cite{braun_2005}\cite{braun_2007}\cite{liu_braun_2009}\cite{zhang_braun_2013}. Wages et al. proposed considering dose-schedule finding as a 2-dimensional problem and extended the partial order continual reassessment method developed for combination trials \cite{wages_2014}. Other authors proposed dose-schedule-finding methods that jointly model toxicity and efficacy outcomes \cite{li_2008}\cite{thall_2013}\cite{guo_2016}. Lyu et al. proposed a hybrid design, partially algorithm-based and partially model-based, for sequences of doses over multiple cycles when few doses are under study \cite{lyu_2018}.
Only a few methods consider PK/PD data in the prospective dose-allocation design. Ursino et al. compared multiple methods that enable the use of PK measures in sequential Bayesian adaptive dose-finding designs, including a dose-AUC-toxicity model combining two models to recommend the dose \cite{ursino_2017}. Gunhan et al. proposed a Bayesian time-to-event pharmacokinetic adaptive model for multiple regimens using PK latent profiles to measure drug exposure \cite{gunhan_2018}. Our aim is to extend these propositions by modeling the dose-regimen toxicity relationship using PK/PD.
\section{Motivation}
This work was motivated by the ongoing first-in-human dose-escalation study of SAR440234 administered as a single agent to patients with relapsed or refractory acute myeloid leukemia, high-risk myelodysplastic syndrome or B-cell acute lymphoblastic leukemia (NCT03594955) \cite{NCT03594955}. SAR440234 is a novel bispecific T-cell engager antibody that activates and redirects cytotoxic T lymphocytes (CTLs) to enhance the CTL-mediated elimination of CD123-expressing tumor cells. CTL activation induces the release of inflammatory cytokines, which can potentially cause cytokine release syndrome (CRS). CRS is a systemic inflammatory response and among the most commonly observed toxicities of T-cell engaging bispecific antibodies, such as blinatumomab, which is a bispecific anti-CD19/CD3 antibody \cite{Shimabukuro-Vornhagen_2018}. Several cytokines, such as IL6, IL10 and INF$\gamma$, are consistently found to be elevated in serum from patients with CRS. The association between the peak of cytokine and CRS has been evaluated by Teachey et al. \cite{teachey_2016}. It has been shown that repeating the dosing of the drug can decrease CRS, particularly when the first administration is divided into several steps progressively \cite{chen_2019}. Therefore, intra-patient dose-escalation with a dose-regimen consisting of lower initial doses followed by a higher maintenance dose was implemented in this study to reduce the occurrence of CRS \cite{boissel_2018}.
The aim of the trial was to find the MTD of SAR440234 using the 3+3 design as the dose-escalation design. However, the 3+3 design and more general dose-finding designs do not consider intra-patient escalation information to decrease PD toxicity outcomes (CRS); these designs transform the dose-regimen received by the patient into a single dose-level. This approach is inefficient for achieving the trial goal.
In conclusion, we propose to model the binary toxicity endpoint (CRS) and the continuous PD response (cytokine profile) at the end of the trial, once all data have been collected, to characterize the dose-regimen toxicity relationship. This dose-regimen assessment method (DRtox) allows the determination of the maximum tolerated dose-regimen (MTD-regimen), as illustrated in Figure \ref{trial_scheme}.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=16cm]{figures/trial_scheme.png}}
\end{center}
\caption{Trial scheme: the DRtox method is applied at the end of the dose-escalation stage of a phase I trial.
\label{trial_scheme}}
\end{figure}
\section{Model}
\label{model}
Let $\boldsymbol{\mathcal{D}}=\{d_1,...,d_L\}$ be the set of doses that can be administered to patients, where $d_{l}<d_{l+1}$. Let $\boldsymbol{\mathcal{S}}=\{\boldsymbol{S_1},...,\boldsymbol{S_K}\}\subset \boldsymbol{\mathbb{S}}$ be the panel of dose-regimens to be studied in the trial. The dose-regimen $\boldsymbol{S_k} \in \boldsymbol{\mathcal{S}}$, where $k \in \{1,...,K\}$, is defined as the sequence of $J$ doses, $\boldsymbol{S_k}=(d_{k,1},d_{k,2},...,d_{k,J})$, administered at times $\boldsymbol{t}=(t_1,t_2,...,t_J)$, where $d_{k,j} \in \boldsymbol{\mathcal{D}}$ for $j \in \{1,...,J\}$. To simplify the notations, we assumed that all dose-regimens have the same number of drug administrations at the same times, but this assumption can be relaxed. Let $\boldsymbol{S_{k,j}}$ be the subregimen of $\boldsymbol{S_k}$ until the $j^\text{th}$ administration, $\boldsymbol{S_{k,j}}=(d_{k,1},d_{k,2},...,d_{k,j})$, for $j<J$. Let $n \in \mathbb{N}$ be the number of patients included in the trial. Let $Y_{i,j}$ be the binary toxicity response of patient $i$ observed exactly after the $j^\text{th}$ administration, and let $Y_i$ be his/her global toxicity response at the end of the administrations.
Let $\boldsymbol{\widetilde{s}_{i}}=(d_{i,1},d_{i,2},...,d_{i,J}) \in \boldsymbol{\mathcal{S}}$ be the dose-regimen planned for the $i^\text{th}$ patient. We assume that the drug administration is stopped if toxicity occurs; thus let $j_i$ denote the last administration of patient $i$. We denote the actual regimen received by patient $i$ as $\boldsymbol{s_{i}}=(d_{i,1},d_{i,2},...,d_{i,j_{i}}) \subset \boldsymbol{\widetilde{s}_{i}}$, where $\boldsymbol{s_{i}}=\boldsymbol{\widetilde{s}_{i}}$ if no toxicity is observed. Let $\boldsymbol{s_{i,j}}$ be the subregimen until $j$ of $\boldsymbol{s_i}$, where $j \leq j_i$.
The aim is to estimate the MTD-regimen at the end of the trial, which is defined as the dose-regimen with the toxicity probability closest to the target toxicity rate $\delta_{T}$, i.e. the MTD-regimen is the regimen $\boldsymbol{S_{k^\star}}$, where $k^\star=\displaystyle\operatornamewithlimits{argmin}_{k} \left | p_T(\boldsymbol{S_k})- \delta_{T} \right | $.
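For instance, if $\delta_T = 0.30$ and the estimated toxicity probabilities of three candidate regimens are $0.12$, $0.26$ and $0.45$, then the second regimen, whose probability is closest to the target, is declared the MTD-regimen.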
We assume that a PD endpoint extracted from the continuous PD profile of a biomarker related to toxicity plays an intermediate role in the dose-regimen toxicity relationship. We propose a dose-regimen assessment method (DRtox) in which the first model is built for the dose-regimen and the PD endpoint, and the second model is built for the PD endpoint and the toxicity response. Therefore, integrating both models links the dose-regimen to the toxicity response to find the MTD-regimen. In the following section, the structure of the PK/PD models is described, two approaches between the PD endpoint and toxicity response are proposed, as well as a practical method for their integration.
\subsection{Dose-regimen PD response model}
Let $C(t)$ be the continuous drug concentration and $E(t)$ be the continuous PD response related to toxicity measured at time $t$.
We assume that $C(t)$ and $E(t)$ can be modeled using nonlinear mixed-effects models as follows:
\begin{equation}
\left\{
\begin{array}{l}
C(t)=f^{(1)}\left(\boldsymbol{\theta_i^{(1)}},t\right)+g^{(1)}\left(\boldsymbol{\theta_i^{(1)}},t,\boldsymbol{\xi_1}\right)\epsilon^{(1)} \\
E(t)=f^{(2)}\left(\boldsymbol{\theta_i^{(2)}},t\right)+g^{(2)}\left(\boldsymbol{\theta_i^{(2)}},t,\boldsymbol{\xi_2}\right)\epsilon^{(2)} \\
\end{array}
\right.
\end{equation}
where $f^{(1)}$ and $f^{(2)}$ represent the structural models, which are usually solutions of differential equations based on biological knowledge. $\boldsymbol{\theta_i}=\left(\boldsymbol{\theta_i^{(1)}},\boldsymbol{\theta_i^{(2)}}\right)$ represents the $i$th patient's specific parameter vector, where usually, $\boldsymbol{\theta_i}=\boldsymbol{\mu} e^{\boldsymbol{\eta_i}}$, with $\boldsymbol{\mu}$ denoting the fixed effects vector, and $\boldsymbol{\eta_i}$ denoting the random effects vector defined as $\boldsymbol{\eta_i} \sim \mathcal{N}(\boldsymbol{0},\boldsymbol{\Omega})$, with $\boldsymbol{\Omega}$ denoting the variance-covariance matrix.
$g^{(1)}$ and $g^{(2)}$ represent the error models, which depend on the additional parameters $\boldsymbol{\xi_1}$ and $\boldsymbol{\xi_2}$, and $\epsilon^{(1)}$ and $\epsilon^{(2)}$ are standard Gaussian variables. The usual error models are the constant model where $g^{(l)}\left(\boldsymbol{\theta_i^{(l)}},t,\xi_l=a\right)=a$, the proportional model where $g^{(l)}\left(\boldsymbol{\theta_i^{(l)}},t,\xi_l=b\right)=bf^{(l)}\left(\boldsymbol{\theta_i^{(l)}},t\right)$ and combinations of the constant and proportional models.
\subsection{PD endpoint toxicity model}
$r(\boldsymbol{\theta_i},\boldsymbol{s_{i,j}}) $ is defined as the function derived from the PK/PD models that returns the value of the PD endpoint (such as the peak of a biomarker) exactly after the administration of the dose-regimen $\boldsymbol{s_{i,j}}$ with individual PK/PD parameters $\boldsymbol{\theta_i}$. Let $\boldsymbol{R(\theta_i,s_{i,j}}) = (r(\boldsymbol{\theta_i},s_{i,1}),...,r(\boldsymbol{\theta_i},\boldsymbol{s_{i,j}}))$ be the function derived from the PK/PD models that returns the vector of all PD endpoints (such as all biomarker peaks) observed after the administration of the regimen $\boldsymbol{s_{i,j}}$ with individual PK/PD parameters $\boldsymbol{\theta_i}$. For patient $i$, we can simplify the notations considering $r_{i,j}=r\left(\boldsymbol{\theta_i},\boldsymbol{s_{i,j}}\right)$, $\boldsymbol{R_{i,j}}=\boldsymbol{R\left(\theta_i,s_{i,j}\right)}$ and the vector of all PD endpoints $\boldsymbol{R_{i}}=\boldsymbol{R_{i,j_i}}$.
Then, let $r^{M}_{i}=\displaystyle\max\limits_{l \in \{1,...,j_i\}}(r_{i,l})$ be the summary PD endpoint (such as the highest peak) observed in patient $i$, which we assume is related to toxicity.
To define the prior distributions, let $(\overline{r}^M_1,\overline{r}^M_2,...\overline{r}^M_K)$ denote the reference values of the summary endpoint of all dose-regimens of the trial $(\boldsymbol{S_1},...,\boldsymbol{S_k})$; for example we can consider population values $\overline{r}^M_k=\max \left\{ r\left(\boldsymbol{\mu},\boldsymbol{S_{k,1}}\right),...,r\left(\boldsymbol{\mu},\boldsymbol{S_k}\right) \right\}$ with $\boldsymbol{\mu}$ as the PK/PD vector of fixed effects.
In the following section, two statistical models between the PD endpoint and toxicity response are shown.
\subsubsection{Logistic-DRtox}
We propose a Bayesian logistic model to link the global binary toxicity response of patient $i$ receiving $\boldsymbol{s_i}$ to his summary PD endpoint related to toxicity as follows:
\begin{equation}\label{eq:logitmodel}
\text{logit}\left\{\mathbb{P}\left(\displaystyle Y_{i}=1\right)\right\}=\beta_0+\beta_1 \log\left(\displaystyle\frac{r^M_i}{\overline{r}^M_{k_{T}}}\right)
\end{equation}
where $\beta_1>0$ so that the toxicity probability increases with the value of the summary PD endpoint. We normalize the PD endpoint for prior elicitation using $\overline{r}^M_{k_{T}}$, the reference value of the dose-regimen $\boldsymbol{S_{k_T}}$ that we initially guess to have a toxicity probability of $\delta_T$. In this model, we do not consider the longitudinal values of the biomarker, as we assume that toxicity is not due to the cumulative effect of the biomarker profile; however, previous drug administrations are taken into account in the construction of the biomarker through the PK/PD model. Let $\pi_1\left\{\left(\beta_0,\beta_1\right),r^M_i\right\}=\text{logit}^{-1}\left\{\beta_0+\beta_1 \log\left(\displaystyle\frac{r^M_i}{\overline{r}^M_{k_{T}}}\right)\right\}$.
Regarding prior distributions, we consider a normal distribution for the intercept, $\beta_0 \sim \mathcal{N}(\overline{\beta}_0,\sigma_{\beta_0}^2) $ and a gamma distribution for the slope to ensure positivity, $\beta_1 \sim \gamma(\alpha_1,\displaystyle\frac{\alpha_1}{\overline{\beta}_1})$, where $\alpha_1$ is the shape parameter, $\overline{\beta}_0=\mathbb{E}[\beta_0]$ and $\overline{\beta}_1=\mathbb{E}[\beta_1]$.
By construction, we have $\overline{\beta}_0=\text{logit}\left(\delta_T\right)$, obtained via Eq.~\ref{eq:logitmodel} with $r^M_i = \overline{r}^M_{k_{T}}$. Then, let $(p_1,...,p_K)$ be the initial guesses of the toxicity probabilities of regimens $(\boldsymbol{S_1},...,\boldsymbol{S_K})$, where $p_{k_T}=\delta_T$. We can determine $\overline{\beta}_1$ using either a single regimen different from the reference regimen $\boldsymbol{S_{k_T}}$, by solving $\pi_{1}\left\{\left(\overline{\beta}_0,\overline{\beta}_1\right),\overline{r}^M_k\right\}=p_k$ for some $k \in \{1,...,K\}$ with $k \neq k_T$, or multiple regimens, such as the neighbors of the reference regimen, as follows:
\begin{equation}
\overline{\beta}_1=\operatornamewithlimits{argmin}_{\beta_1} \sum_{k=k_T-1}^{k_T+1} \left[p_k-\pi_{1}\left\{\left(\overline{\beta}_0,\beta_1\right),\overline{r}^M_k\right\} \right]^2
\end{equation}
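A minimal sketch of this elicitation step (the reference values $\overline{r}^M_k$ and the initial guesses $p_k$ below are hypothetical):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

delta_T = 0.30
r_ref = np.array([50.0, 80.0, 130.0])   # reference endpoints around S_{k_T} (hypothetical)
p_guess = np.array([0.15, 0.30, 0.45])  # initial toxicity guesses, middle one = delta_T

beta0 = np.log(delta_T / (1 - delta_T))  # logit(delta_T)

def sse(beta1):
    # squared distance between the guesses and the logistic model
    pi = 1 / (1 + np.exp(-(beta0 + beta1 * np.log(r_ref / r_ref[1]))))
    return np.sum((p_guess - pi) ** 2)

beta1_bar = minimize_scalar(sse, bounds=(1e-6, 50.0), method="bounded").x
print(beta0, beta1_bar)
\end{verbatim}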
\subsubsection{Hierarchical-DRtox}
In this approach, we assume that patients experience toxicity if their PD response exceeds an unknown threshold specific to each patient. To consider inter-individual variability in toxicity, we introduce a patient-specific continuous latent variable, $Z_i$, which represents the toxicity threshold of the PD response. In contrast to the previous approach, we model toxicity after each administration using a modification of the hierarchical probit model as follows \cite{berry_2010}:
\begin{equation}
\left\{
\begin{array}{l}
Y_{i,j}=\left\{
\begin{array}{l}
0 \quad \text{if } Z_i>\log\left(\displaystyle\frac{r_{i,j}}{\overline{r}^M_{k_{50}}}\right) \\
1 \quad \text{if } Z_i \leq \log\left(\displaystyle\frac{r_{i,j}}{\overline{r}^M_{k_{50}}}\right) \\
\end{array}
\right. \\
Z_i \sim \mathcal{N}(\mu_z,\tau_z^2) \\
\end{array}
\right.
\label{bayesian_hierarchical}
\end{equation}
where $\overline{r}^M_{k_{50}}$ is the reference value of the dose-regimen $\boldsymbol{S_{k_{50}}}$ that we initially guess to have a toxicity probability of 0.5. Through the added random effect, this Bayesian hierarchical model shares common features with the probit model, where $\tau_z^2$ represents the between-subject variance and controls the extent of borrowing across patients.
If we consider a new patient $i$ with a vector of biomarker endpoints $\boldsymbol{R_i}$, we can predict his probability of toxicity by $\mathbb{P}\left(Y_i=1\right)=F_z\left\{\displaystyle\log\left(\frac{r^M_{i}}{\overline{r}^M_{k_{50}}}\right)\right\}$, where $F_z$ is the cumulative distribution function of $\mathcal{N}(\mu_z,\tau_z^2)$. The details of the formula are shown in Web Appendix B. Let $\pi_2\left\{\left(\mu_z,\tau_z^2\right),r^M_{i}\right\}=F_z\left\{\displaystyle\log\left(\frac{r^M_{i}}{\overline{r}^M_{k_{50}}}\right)\right\}$.
Regarding the prior distributions, we consider $\mu_z \sim \mathcal{N}(0,\sigma_{\mu_z}^2) $ and $\tau_z \sim \text{half-Cauchy}(0,\sigma_{\tau_z}^2)$. Regarding the half-Cauchy distribution, we followed the recommendations by Gelman, as we assumed that $\tau_z$ could be near 0 \cite{gelman_2006}. Web Appendix F shows how this model can be implemented.
\subsection{Dose-regimen toxicity model}
The posterior toxicity probability of dose-regimen $\boldsymbol{S_k}$ is estimated by integrating the PD endpoint toxicity model over all possible values of the PD endpoint. As this integral usually cannot be solved analytically, the posterior toxicity probability of regimen $\boldsymbol{S_k}$ is estimated by simulation: over a hypothetical set of patients, we generate an $M$-vector $(p_T(\boldsymbol{S_k})^{(1)},...,p_T(\boldsymbol{S_k})^{(M)})$ of draws from the posterior distribution of the toxicity probability, and estimate the posterior toxicity probability of regimen $\boldsymbol{S_k}$ by the posterior mean $\widehat{p_T}(\boldsymbol{S_k})=\displaystyle\frac{1}{M}\sum_{m=1}^{M}p_T(\boldsymbol{S_k})^{(m)}$. Obtaining this sample requires the following three major steps (a schematic implementation is sketched after the enumeration):
\begin{enumerate}
\item \underline{Model fitting:}
\begin{enumerate}
\item First, the PK/PD models are fitted to obtain estimates of the population parameters comprising the fixed effects, $\boldsymbol{\widehat{\mu}}$, and the random effects variance-covariance matrix, $\boldsymbol{\widehat{\Omega}}$, under the Frequentist paradigm. The patients' individual PK/PD parameters, $(\boldsymbol{\widehat{\theta}_1},...,\boldsymbol{\widehat{\theta}_n})$, are also estimated.
\item Based on the estimated PK/PD parameters, the PD biomarkers are predicted for each patient:
\begin{itemize}
\item For the logistic-DRtox: the global biomarker peaks $(\widehat{r}^{M}_1,...,\widehat{r}^{M}_n)$ are predicted for each patient as $\widehat{r}^{M}_i= \max \left\{ r\left(\boldsymbol{\widehat{\theta}_i},\boldsymbol{s_{i,1}} \right),...,r\left(\boldsymbol{\widehat{\theta}_i},\boldsymbol{s_i}\right)\right\}$ for $i \in \{1,...,n\}$.
The vector of toxicity responses and biomarker responses, $((Y_{1},...,Y_{n}),(\widehat{r}^{M}_1,...,\widehat{r}^{M}_n))$, constitutes the data of the trial.
\item For the hierarchical-DRtox: the biomarker peaks vectors $(\boldsymbol{\widehat{R}_1},...,\boldsymbol{\widehat{R}_n})$ are predicted for each patient as $\boldsymbol{\widehat{R}_i}=\boldsymbol{R(\widehat{\theta}_i,s_{i})}$ for $i \in \{1,...,n\}$.
The vector of toxicity responses and biomarker responses, $((Y_{1,1},...,Y_{n,j_n}),(\boldsymbol{\widehat{R}_1},...,\boldsymbol{\widehat{R}_n}))$, constitutes the data of the trial.
\end{itemize}
\item A vector of the parameters of the PD endpoint toxicity model of size $m_\text{iter}$ is sampled from their posterior distribution:
\begin{itemize}
\item For the logistic-DRtox, $\left(\left(\beta_0^{(1)},\beta_1^{(1)}\right),...,\left(\beta_0^{(m_\text{iter})},\beta_1^{(m_\text{iter})}\right)\right)$ is sampled.
\item For the hierarchical-DRtox, $\left(\left(\mu_z^{(1)},\tau_z^{(1)}\right),...,\left(\mu_z^{(m_\text{iter})},\tau_z^{(m_\text{iter})}\right)\right)$ is sampled.
\end{itemize}
\end{enumerate}
\item \underline{Prediction of new patients for the sampling distribution of the PD endpoint:}
\begin{enumerate}
\item The individual PK/PD parameters of $m_\text{predict}$ simulated patients, $\left(\boldsymbol{\theta^{(1)}},...,\boldsymbol{\theta^{(m_\text{predict})}}\right)$, are sampled from $\boldsymbol{\widehat{\mu}}$ and $\boldsymbol{\widehat{\Omega}}$ as $\boldsymbol{\theta^{(m_p)}}=\boldsymbol{\widehat{\mu}} e^{\boldsymbol{\eta^{(m_p)}}}$, with $\boldsymbol{\eta^{(m_p)}} \sim \mathcal{N}(\boldsymbol{0},\boldsymbol{\widehat{\Omega}})$ for $m_p \in \{1,...,m_\text{predict}\}$
\item The maximum biomarker endpoint of each simulated patient receiving regimen $\boldsymbol{S_k}$ is obtained as $r^{M^{(m_p)}}=\max \left( r\left(\boldsymbol{\theta^{(m_p)}},\boldsymbol{S_{k,1}}\right),...,r\left(\boldsymbol{\theta^{(m_p)}},\boldsymbol{S_k}\right) \right)$ for $m_p \in \{1,...,m_\text{predict}\}$
\end{enumerate}
\item \underline{Estimation of the posterior distribution of the probability of toxicity:}
\begin{enumerate}
\item The $m^\text{th}$ iteration, $m=(m_i,m_p) \in \{1,...,M\}$, where $M=m_\text{iter}*m_\text{predict}$, of the posterior probability of toxicity of dose-regimen $\boldsymbol{S_k}$, $p_T(\boldsymbol{S_k})^{(m)}$, is obtained depending on the method chosen:
\begin{itemize}
\item For the logistic-DRtox, $p_T(\boldsymbol{S_k})^{(m)}=\pi_1\left\{\left(\beta_0^{(m_i)},\beta_1^{(m_i)}\right),r^{M^{(m_p)}}\right\}$
\item For the hierarchical-DRtox, $p_T(\boldsymbol{S_k})^{(m)}=\pi_2\left\{\left(\mu_z^{(m_i)},\tau_z^{(m_i)}\right),r^{M^{(m_p)}}\right\}$ \\
\end{itemize}
\end{enumerate}
\end{enumerate}
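A schematic Python implementation of steps 2--3 for the logistic-DRtox; step 1 requires a fitted nonlinear mixed-effects model and an MCMC sampler, so its outputs are stubbed out by placeholder draws here, and the PK/PD solve inside step 2 is replaced by a log-normal surrogate.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Step 1 outputs (placeholders): posterior draws of (beta0, beta1) and
# the reference endpoint rbar^M_{k_T}.
beta0_draws = rng.normal(-0.85, 0.3, size=200)  # m_iter draws
beta1_draws = rng.gamma(4.0, 0.5, size=200)
r_ref = 80.0

# Step 2: simulate m_predict new patients and their maximum biomarker
# endpoint under regimen S_k (in practice, solve the PK/PD model for
# each sampled individual parameter vector).
m_predict = 500
peaks = r_ref * np.exp(rng.normal(0.0, 0.4, size=m_predict))

# Step 3: combine every posterior draw with every simulated patient.
pi = 1 / (1 + np.exp(-(beta0_draws[:, None]
                       + beta1_draws[:, None] * np.log(peaks[None, :] / r_ref))))
print(pi.mean())  # estimate of the posterior mean toxicity probability
\end{verbatim}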
The DRtox approach allows us to estimate the toxicity probability of the panel of dose-regimens $\boldsymbol{\mathcal{S}}$ and predict the toxicity probability of each new regimen defined from the set of doses $\boldsymbol{\mathcal{D}}$.
\section{Simulation study}
\label{simu_study}
\subsection{Simulation settings}
The performance of the DRtox was evaluated through a simulation study. We assumed that toxicity was related to a PD endpoint (the peak of cytokine in the context of our motivating example). Therefore, to simulate toxicity, we first needed to simulate the PK/PD profiles and simulate toxicity from the PD profile.
Regarding the PK/PD models, we were inspired by published models of blinatumomab, another bispecific T-cell engager, which binds to CD3 on T-cells and to CD19 on tumor cells. For the PK model, we considered a 1-compartment infusion model whose parameters are the volume of distribution V and the elimination clearance Cl, and we assumed a 4-hour infusion \cite{zhu_2016}. The model is defined in Web Appendix A. For the PD model, the objective was to model cytokine mitigation in the case of intra-patient dose-escalation. We simplified the model developed by Chen et al., which assumes that cytokine production is stimulated by the drug concentration but inhibited by cytokine exposure through the AUC \cite{chen_2019}. We defined the PD model as follows:
\begin{equation}
\displaystyle\frac{\mathrm{d} E\left(t\right)}{\mathrm{d} t}=\displaystyle\frac{E_{max}C\left(t\right)^H}{{EC_{50}}^H+C\left(t\right)^H}\left \{ 1-\displaystyle\frac{I_{max}AUC_E\left(t\right)}{\displaystyle\frac{IC_{50}}{K^{J-1}}+AUC_E\left(t\right)} \right \}-k_{deg}E\left(t\right)
\end{equation}
where $E\left(t\right)$ and $C\left(t\right)$ are the cytokine and drug concentration at time t, respectively, $AUC_E\left(t\right)$ is the cumulative cytokine exposure, and the parameters are defined in Table \ref{table_PKPD_param}. Additional information concerning the PK/PD models is provided in Web Appendix A.
In both the PK and PD models, we considered a proportional error model with $b=0.1$. The values of the PK/PD parameters used for the simulations were inspired by the estimated parameters of blinatumomab and are displayed in Table \ref{table_PKPD_param} \cite{zhu_2016}\cite{chen_2019}.
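To illustrate how such profiles can be generated, here is a minimal ODE sketch of the PK/PD model with the fixed effects of Table \ref{table_PKPD_param}; the body weight, the unit handling, and our reading of the priming term $K^{J-1}$ as tracking the index of the most recent administration are all assumptions of the sketch.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Cl, V = 1.36, 3.4                                     # PK fixed effects
Emax, EC50, H = 3.59e5, 1e4, 0.92                     # PD fixed effects
Imax, IC50, kdeg, K = 0.995, 1.82e4, 0.18, 2.83

t_admin = np.array([0, 4, 8, 12, 16, 20, 24]) * 24.0  # days 1,5,...,25 in hours
doses = np.array([1, 5, 10, 25, 25, 25, 25]) * 70.0   # micrograms, 70 kg assumed
t_inf = 4.0                                           # 4-hour infusion

def infusion_rate(t):
    # total input rate from all infusions active at time t
    return sum(d / t_inf for tj, d in zip(t_admin, doses) if tj <= t < tj + t_inf)

def rhs(t, y):
    C, E, auc = max(y[0], 0.0), max(y[1], 0.0), max(y[2], 0.0)
    j = max(int(np.searchsorted(t_admin, t, side="right")), 1)  # admin index
    stim = Emax * C**H / (EC50**H + C**H)
    inhib = 1.0 - Imax * auc / (IC50 / K**(j - 1) + auc)
    return [infusion_rate(t) / V - (Cl / V) * C,  # drug concentration
            stim * inhib - kdeg * E,              # cytokine
            E]                                    # cumulative cytokine exposure

sol = solve_ivp(rhs, (0.0, t_admin[-1] + 96.0), [0.0, 0.0, 0.0],
                max_step=0.5, rtol=1e-6)
print(sol.y[1].max())  # highest cytokine peak r_i^M over the regimen
\end{verbatim}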
\begin{table}[h!]
\caption{Definition and values of the PK/PD parameters used for the simulation study. Parameter estimates represent the fixed effects, and the coefficients of variation (CV) are the square roots of the diagonal of the variance-covariance matrix. They are inspired by the parameters estimated for blinatumomab, with a modification of I$_\text{max}$ to observe cytokine mitigation after several administrations \cite{zhu_2016}\cite{chen_2019}.
\label{table_PKPD_param}}
\begin{center}
\begin{tabular}{c|cccc}
& \multirow{2}{*}{Parameter} & \multirow{2}{*}{\shortstack{Estimate\\\\ (\% CV)}} & \multirow{2}{*}{Unit} & \multirow{2}{*}{Description} \\ \\ \hline
\multirow{2}{*}{ \shortstack{PK\\\\model}} & Cl & $1.36$ $(41.9)$ & L/h & Clearance \\
& V & $3.4$ $(0)$ & L & Distribution volume \\ \hline
\multirow{7}{*}{\shortstack{PD\\\\model}} & E$_\text{max}$ & $3.59 \cdot 10^5$ $(14)$ & pg/mL/h & Max cytokine release rate \\
& EC$_\text{50}$ & $1 \cdot 10^4$ $(0)$ & ng/mL & Drug exposure for half-max release \\
& H & $0.92$ $(3)$ & &Hill coefficient for cytokine release \\
& I$_\text{max}$ & $0.995$ $(0)$ & & Max inhibition of cytokine release \\
& IC$_\text{50}$ & $1.82 \cdot 10^4$ $(12)$ & pg/mL$\cdot$h & Cytokine exposure for half-max inhibition \\
& k$_\text{deg}$ & $0.18$ $(13)$ & h$^{-1}$ & Degradation rate for cytokine \\
& K & $2.83$ $(36)$ & & Priming factor for cytokine release\\
\end{tabular}
\end{center}
\end{table}
To simplify and accelerate the PK/PD estimation during the simulations, we followed the traditional PK/PD modeling strategy for small-sample-size data by fixing some parameters. We fixed the parameters EC$_\text{50}$, I$_\text{max}$ and IC$_\text{50}$ and considered no random effects on V and H. Our modeling choices can be challenged in practice, but the objective of this work is not to propose a PK/PD model but rather to propose a global modeling approach including PK/PD estimation in a phase I toxicity model. In our case, we decided to use a previously validated PK/PD model that mimics the behavior we expect in our motivating trial to demonstrate the performance of the proposed framework.
We used as the PD endpoint $r_{i,j}$ the peak of cytokine observed for patient $i$ after the $j^\text{th}$ administration, and $r^{M}_{i}$ for the highest peak of cytokine observed for patient $i$. Using the PK/PD models presented above and the parameters shown in Table \ref{table_PKPD_param}, we were able to model the mitigation of cytokine release upon repeated dosing, reflected by the decrease in the cytokine peak with each subsequent administration. Hence, we were able to model that slowly increasing the dose reduces the cytokine peak compared to directly giving the steady-state dose.
For example, we compared the concentration and cytokine profiles of patients $i$ and $i'$ who received regimens $\boldsymbol{s_i}=(1,5,10,25,25,25,25)$ $\mu$g/kg and $\boldsymbol{s_{i'}}=(25,25,25,25,25,25,25)$ $\mu$g/kg administered on days 1, 5, 9, 13, 17, 21 and 25 (Figure \ref{plot_PKPD}). From the $4^\text{th}$ administration, the concentration profiles of patients $i$ and $i'$ are the same, but in the cytokine profile, the maximum peak of cytokine of patient $i'$ is much higher than that of patient $i$, $r^{M}_{i'}=r_{i',1}>r^{M}_{i}=r_{i,4}$.
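As an illustration of this mechanism, the following minimal sketch numerically integrates the PK model together with the PD model above for a single patient. It is a hypothetical reconstruction (not the Monolix/R code used in this work), assuming a 70-kg patient, 4-hour infusions every 4 days, the fixed effects of Table \ref{table_PKPD_param}, and no random effects or residual error; the function and variable names are introduced here for illustration only.
\begin{verbatim}
# Hypothetical sketch (not the authors' code): integrate the 1-compartment
# infusion PK model together with the cytokine PD model above, assuming a
# 70 kg patient, 4 h infusions every 4 days, the fixed effects of the
# parameter table, and no random effects or residual error.
import numpy as np
from scipy.integrate import solve_ivp

Cl, V = 1.36, 3.4                                # L/h, L
Emax, EC50, H = 3.59e5, 1.0e4, 0.92              # pg/mL/h, ng/mL, -
Imax, IC50, kdeg, K = 0.995, 1.82e4, 0.18, 2.83  # -, pg/mL.h, 1/h, -

def simulate(doses, weight=70.0, t_inf=4.0, tau=96.0):
    """Cytokine profile E(t) for one dose-regimen (doses in ug/kg)."""
    starts = np.arange(len(doses)) * tau         # dosing times [h]
    rates = np.asarray(doses) * weight / t_inf   # infusion rates [ug/h]

    def rhs(t, y):
        A, E, auc_E = y          # drug amount [ug], cytokine, cytokine AUC
        rate_in = rates[(t >= starts) & (t < starts + t_inf)].sum()
        C = A / V                # ug/L is numerically equal to ng/mL
        j = int(np.searchsorted(starts, t, side="right"))   # index J
        ic50_eff = IC50 / K ** max(j - 1, 0)   # priming strengthens inhibition
        stim = Emax * C**H / (EC50**H + C**H)
        inhib = 1.0 - Imax * auc_E / (ic50_eff + auc_E)
        return [rate_in - Cl / V * A, stim * inhib - kdeg * E, E]

    sol = solve_ivp(rhs, (0.0, starts[-1] + tau), [0.0, 0.0, 0.0],
                    max_step=0.5)
    return sol.t, sol.y[1]

# Peaks of the escalated vs. constant regimens of the example above:
t1, E1 = simulate([1, 5, 10, 25, 25, 25, 25])
t2, E2 = simulate([25, 25, 25, 25, 25, 25, 25])
print(E1.max(), E2.max())   # expect r_i^M well below r_{i'}^M
\end{verbatim}
The sketch is meant to reproduce the behavior qualitatively; the assumed body weight and the unit bookkeeping only affect the profiles quantitatively.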
\begin{figure}[h!]
\begin{center}
\centerline{\includegraphics[width=16cm]{figures/plot_PKPD.pdf}}
\end{center}
\caption{Concentration (top) and cytokine (bottom) profiles of two patients, one receiving a dose-regimen with intra-patient escalation (solid line) and the other receiving a dose-regimen without intra-patient escalation (dashed line), administered on days 1, 5, 9, 13, 17, 21 and 25. Horizontal lines represent the maximum peak of cytokine observed after each dose-regimen.
\label{plot_PKPD}}
\end{figure}
To simulate toxicity from the cytokine profile, we defined a threshold $\tau_{\scriptscriptstyle{T}}$ on the cytokine response and assumed that toxicity occurred if this threshold was exceeded \cite{ursino_2017}. To introduce between-subject variability, we defined a log-normally distributed measure of subject sensitivity, $\alpha_i$ for patient $i$, where $\alpha_i=\text{e}^{\eta_{\alpha_i}}$ and $\eta_{\alpha_i} \sim \mathcal{N}(0,\omega_\alpha^2)$.
We assumed that patient $i$ experienced toxicity at the $j^{\text{th}}$ administration, $Y_{i,j}=1$, if $\alpha_i r_{i,j} \geq \tau_{\scriptscriptstyle{T}}$.
To compute the toxicity probability of regimen $\boldsymbol{S_k}$, we used the Monte-Carlo method by simulating $N=10000$ cytokine profiles under $\boldsymbol{S_k}$ and computing
\begin{equation}
p_T(\boldsymbol{S_k}) = \displaystyle\frac{1}{N} \sum_{i=1}^{N}
\left[
1-\Phi\left\{
\frac{\text{log}\left(\tau_{\scriptscriptstyle{T}}\right)-\text{log}\left(r_i^M\right)}{\omega_\alpha}
\right\}
\right]
\end{equation}
where $\Phi$ is the cumulative distribution function of the standard normal distribution.
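The Monte-Carlo step above can be written compactly as follows; this is an illustrative snippet in which \texttt{peaks\_rM} is a hypothetical array holding the $N$ simulated maximal cytokine peaks $r_i^M$ under $\boldsymbol{S_k}$ (obtained, e.g., by drawing individual PK/PD parameters and running the simulator).
\begin{verbatim}
# Illustrative implementation of the Monte-Carlo estimator above;
# peaks_rM is a hypothetical array of the N simulated maximal peaks.
import numpy as np
from scipy.stats import norm

def toxicity_probability(peaks_rM, tau_T, omega_alpha=0.25):
    # P(alpha_i * r_i^M >= tau_T) = 1 - Phi((log tau_T - log r_i^M)/omega)
    z = (np.log(tau_T) - np.log(peaks_rM)) / omega_alpha
    return float(np.mean(1.0 - norm.cdf(z)))
\end{verbatim}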
We present the results of 3 toxicity scenarios by varying the dose-regimens and the value of the threshold $\tau_{\scriptscriptstyle{T}}$ to explore different positions of the MTD-regimen (with $\omega_\alpha=0.25$). Additional scenarios are presented in Web Appendix D. In each scenario, we considered 6 dose-regimens, and each dose-regimen included 7 dose-administrations on days 1, 5, 9, 13, 17, 21 and 25. The dose-regimens chosen for each scenario and the dose-regimen toxicity curves are displayed in Figure \ref{plot_scenarios_1_3}. In Scenarios $1$, $2$ and $3$, the MTD-regimen is situated at dose-regimens $\boldsymbol{S_4}$, $\boldsymbol{S_2}$ and $\boldsymbol{S_4}$, respectively. Scenarios $1$ and $2$ are inspired by the motivating trial, in which the dose-regimens reach the steady-state dose at approximately the same time and have increasing steady-state doses. In contrast, Scenario $3$ represents a case in which the objective is to reach the steady-state dose, 40 $\mu$g/kg, as fast as possible to increase potential efficacy under toxicity constraints. The dose-regimen toxicity relationship is similar to that in Scenario 1 but with less difference between the MTD-regimen and its neighbors.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=16cm]{figures/plot_scenarios_1_3.pdf}}
\end{center}
\caption{The first 3 subplots represent the panel of dose-regimens, from $\boldsymbol{S_1}$ (widely spaced dashed line) to $\boldsymbol{S_6}$ (solid line), for the 3 main scenarios, with a scenario-specific point marker. In the last subplot in the lower right corner, the dose-regimen toxicity relationship is represented for each scenario, where the MTD-regimen is the dose-regimen with the toxicity probability closest to the target $\delta_T$, plotted as a dashed line.
\label{plot_scenarios_1_3}}
\end{figure}
For each scenario, 1000 trials were simulated, and $\delta_{T}=0.3$ was considered the toxicity target. Because we applied our methods once all patients from the trial were included, we evaluated the impact of 2 traditional dose-escalation designs, i.e., the 3+3 design and a modified continual reassessment method (CRM) initially proposed by O'Quigley et al. \cite{oquigley_1990}. A flow diagram of the rules of the 3+3 design is provided in Web Appendix E. For the modified CRM, we considered a 2-parameter logistic regression model with cohorts of size 3 and a total sample size of 30 patients \cite{cheung_2011}. Dose-skipping was not allowed, and early stopping rules were not implemented. We based the skeleton of the CRM, i.e., the prior guesses of the toxicity probabilities, on Scenario 1, i.e., $(0.06, 0.12, 0.20, 0.30, 0.40, 0.50)$. This skeleton was used in all simulations and scenarios.
When defining the prior distributions for our proposed models, we calibrated the model prior distributions based on the initial guesses of the toxicity probabilities (we used the same initial guesses for the CRM). To quantify the information provided by the prior distribution, we computed the approximate effective sample size (ESS), which was defined as the equivalent sample size embedded in the prior distribution of the model parameters \cite{yuan_2017}. In practice, we approximated the ESS by matching the mean and variance of the toxicity probabilities computed from the prior distributions to those of a beta distribution. Then, the ESS was computed as the sum of the parameters of the beta distribution. More details of the ESS computation are shown in Web Appendix C. In our settings, for the logistic-DRtox, we considered $k_T=4$, $\sigma_{\beta_0}=2$ and $\alpha=5$, leading to an approximate mean ESS of $1.6$. For the hierarchical-DRtox, we considered $k_{50}=6$, $\sigma_{\mu}=1$ and $\sigma_{\tau}=1$, leading to an approximate mean ESS of 1.8.
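The moment-matching approximation of the ESS can be illustrated by the following minimal sketch, assuming that \texttt{p\_samples} holds prior draws of the toxicity probability of one regimen (e.g., obtained from a prior-predictive run of the Stan models); the name is introduced here for illustration only.
\begin{verbatim}
# Minimal sketch of the ESS approximation: match the mean m and the
# variance v of prior toxicity-probability draws to a Beta(a, b), for
# which m = a/(a+b) and v = m(1-m)/(a+b+1); the ESS is then a + b.
import numpy as np

def approx_ess(p_samples):
    m, v = np.mean(p_samples), np.var(p_samples)
    return m * (1.0 - m) / v - 1.0    # a + b
\end{verbatim}
Averaging this quantity over the regimens of the panel gives the approximate mean ESS values quoted above.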
All simulations were performed in the R environment, using Monolix software for the PK/PD estimation and Stan for the Bayesian analysis \cite{r_2018}\cite{monolix_2019}\cite{stan_2019}.
\subsection{Simulation results}
\subsubsection{Proportions of correct selection}
We first evaluated the performance of the DRtox according to the proportions of correct selection (PCS), defined as the proportion of trials in which each regimen is selected as the MTD-regimen. We evaluated the impact of the dose-regimens and the position of the MTD-regimen in 3 toxicity scenarios, and the impact of the dose-escalation design, i.e., either the 3+3 design or the CRM. The PCS results of the 3 main scenarios and the mean sample size of each dose-regimen across the trials due to the chosen dose-escalation design are displayed in Table \ref{table_PCS}. The PCS of additional scenarios are displayed in Web Appendix D. As a practical rule, we could only recommend as the MTD-regimen a dose-regimen that was administered during the dose-escalation phase of the trial.
\begin{table}[]
\caption{Proportion of the 1000 trials in which each dose-regimen is selected as the MTD-regimen, in the 3 toxicity scenarios and for the 2 dose-allocation designs, either the 3+3 design or the CRM. For each scenario, the PCS on the true MTD-regimen are represented in bold. For each dose-allocation design, the mean sample size of each dose-regimen is displayed.
\label{table_PCS}}
\centering
\begin{tabular}{llllllll}
\cline{3-8}
& & $\boldsymbol{S_1}$ & $\boldsymbol{S_2}$ & $\boldsymbol{S_3}$ & $\boldsymbol{S_4}$ & $\boldsymbol{S_5}$ & $\boldsymbol{S_6}$ \\ \hline
\multicolumn{2}{l}{\textbf{Scenario 1}} & \textbf{0.08} & \textbf{0.11} & \textbf{0.15} & \textbf{0.3} & \textbf{0.44} & \textbf{0.52} \\ \cline{1-8}
\multicolumn{1}{l}{\multirow{4}{*}{3+3}} & Mean sample size & 3.6 & 3.5 & 3.5 & \textbf{3} & 1.6 & 0.4 \\ \cline{2-8}
\multicolumn{1}{l}{} & Logistic-DRtox & 8.6 & 5.9 & 19 & \textbf{42.2} & 19.6 & 4.7 \\
\multicolumn{1}{l}{} & Hierarchical-DRtox & 7.5 & 7.6 & 19.1 & \textbf{43.8} & 18.6 & 3.4 \\
\multicolumn{1}{l}{} & 3+3 & 13.9 & 16.1 & 32.2 & \textbf{27.6} & 8.6 & 1.6 \\ \cline{1-8}
\multicolumn{1}{l}{\multirow{4}{*}{CRM}} & Mean sample size & 4.2 & 3.7 & 5.6 & \textbf{8.8} & 5.6 & 2.1 \\ \cline{2-8}
\multicolumn{1}{l}{} & Logistic-DRtox & 0 & 1.2 & 15.5 & \textbf{64.6} & 15.5 & 3.2 \\
\multicolumn{1}{l}{} & Hierarchical-DRtox & 0 & 0.8 & 12.8 & \textbf{64.3} & 19.4 & 2.7 \\
\multicolumn{1}{l}{} & Logistic CRM & 0 & 1.4 & 15.1 & \textbf{50.4} & 27.1 & 6 \\ \cline{1-8}
\multicolumn{2}{l}{\textbf{Scenario 2}} & \textbf{0.15} & \textbf{0.3} & \textbf{0.44} & \textbf{0.52} & \textbf{0.69} & \textbf{0.83} \\ \cline{1-8}
\multicolumn{1}{l}{\multirow{4}{*}{3+3}} & Mean sample size &4 & \textbf{3.6} & 1.8 & 0.5 & 0.1 & 0\\ \cline{2-8}
\multicolumn{1}{l}{} & Logistic-DRtox & 27.2 & \textbf{42.5} & 24.7 & 5.2 & 0.4 & 0 \\
\multicolumn{1}{l}{} & Hierarchical-DRtox & 29.3 & \textbf{41.2} & 24.3 & 4.8 & 0.4 & 0 \\
\multicolumn{1}{l}{} & 3+3 & 57.3 & \textbf{31} & 9.8 & 1.7 & 0.2 & 0 \\ \cline{1-8}
\multicolumn{1}{l}{\multirow{4}{*}{CRM}} & Mean sample size & 8.7 & \textbf{11.1} & 7.5 & 2.3 & 0.3 & 0 \\ \cline{2-8}
\multicolumn{1}{l}{} & Logistic-DRtox & 14.8 & \textbf{65.9} & 17.4 & 1.7 & 0.2 & 0 \\
\multicolumn{1}{l}{} & Hierarchical-DRtox & 12.3 & \textbf{66.2} & 18.9 & 2.6 & 0 & 0 \\
\multicolumn{1}{l}{} & Logistic CRM & 12.5 & \textbf{56} & 26.7 & 4.7 & 0.1 & 0 \\ \cline{1-8}
\multicolumn{2}{l}{\textbf{Scenario 3}} & \textbf{0.07} & \textbf{0.11} & \textbf{0.2} & \textbf{0.3} & \textbf{0.42} & \textbf{0.56} \\ \cline{1-8}
\multicolumn{1}{l}{\multirow{4}{*}{3+3}} & Mean sample size & 3.6 & 3.6 & 3.7 & \textbf{2.7} & 1.4 & 0.4 \\ \cline{2-8}
\multicolumn{1}{l}{} & Logistic-DRtox & 7.8 & 6.4 & 25.2 & \textbf{34.1} & 21.6 & 4.9 \\
\multicolumn{1}{l}{} & Hierarchical-DRtox & 5.9 & 7.9 & 27.3 & \textbf{35.8} & 20.6 & 2.5\\
\multicolumn{1}{l}{} & 3+3 & 13.1 & 24.4 & 29.5 & \textbf{24} & 7.7 & 1.3 \\ \cline{1-8}
\multicolumn{1}{l}{\multirow{4}{*}{CRM}} & Mean sample size & 4 & 4 & 6.4 & \textbf{8} & 5.2 & 2.3 \\ \cline{2-8}
\multicolumn{1}{l}{} & Logistic-DRtox & 0.1 & 1.4 & 19.6 & \textbf{52} & 25.1 & 1.8 \\
\multicolumn{1}{l}{} & Hierarchical-DRtox & 0.1 & 0.8 & 17.7 & \textbf{54.4} & 25.9 & 1.1 \\
\multicolumn{1}{l}{} & Logistic CRM & 0.1 & 2.3 & 20.3 & \textbf{44.5} & 26.4 & 6.4 \\ \cline{1-8}
\end{tabular}
\end{table}
In all scenarios, the PCS of the logistic-DRtox and the hierarchical-DRtox are very similar. Both methods outperform the implemented dose-escalation design in most scenarios. After implementing the 3+3 design, our methods correctly select the MTD-regimen in more than 10\% more trials than the dose-allocation design itself. After implementing the CRM design, both methods correctly select the MTD-regimen in approximately 10\% more trials than the CRM.
The results of Scenarios 1, 3 and 6 (presented in Web Appendix D) illustrate the effect of the variation in the dose-regimen scheme with a similar dose-regimen toxicity relationship. Compared to Scenario 1, the PCS of the logistic-DRtox and hierarchical-DRtox are decreased by approximately 10\% in Scenario 3, while there is not much difference in the results between Scenarios 1 and 6. Therefore, the loss of performance in Scenario 3 is caused not only by the variation in the dose-regimen scheme but also by the difference in the dose-regimen toxicity relationship, as in Scenario 3 there is less difference in the toxicity probabilities between the MTD-regimen and its neighbors.
However, the performance of the DRtox is heavily impacted by the dose-escalation design implemented; after implementing the CRM design, the DRtox correctly selects the MTD-regimen in more than $50 \%$ of the trials, but its PCS can decrease by $20 \%$ when applied after the 3+3 design. This loss of performance is due to the small sample size after implementing the 3+3 design and the higher proportion of patients allocated to suboptimal dose-regimens.
\subsubsection{Estimation of the toxicity probabilities}
We also evaluated the performance of the DRtox based on the precision of the estimation of the toxicity probabilities of all dose-regimens. We represented the distribution of the estimated toxicity probabilities, defined as the mean of the posterior distribution, over 1000 trials. The results of Scenario 3 obtained after implementing the CRM are presented in the lower part of Figure \ref{plot_pTox_predict}. The results of the other scenarios are displayed in Web Appendix D.
In all scenarios, the toxicity probability of the MTD-regimen is well estimated by the DRtox and the CRM. Both the hierarchical-DRtox and the logistic-DRtox seem to be better at estimating the toxicity probability of all dose-regimens, even those far from the MTD-regimen. This phenomenon could be due to the additional PK/PD information and the correct understanding of the toxicity mechanism. Using the CRM, the entire dose-regimen toxicity curve is well estimated when the skeleton is close to the truth, as shown in Scenario 1 (Web Appendix D). However, in most cases, the toxicity estimation is precise around the MTD-regimen, but not reliable for the other dose-regimens. Regarding the dose-regimens far from the MTD-regimen, the hierarchical-DRtox seems to estimate the toxicity probability with less bias but more variance than the logistic-DRtox. In Web Appendix D, the distribution of the root mean square error (RMSE) of all methods is plotted; the RMSE is computed on all dose-regimens or on the MTD-regimen and its neighbors. Near the MTD-regimen, the estimation of the logistic-DRtox is better than that of the hierarchical-DRtox; both models are better than the CRM. However, in the scenarios in which the MTD-regimen is at extreme positions of the dose-regimen panel (Scenarios 2 and 4 in Web Appendix D), the entire dose-regimen toxicity relationship is better estimated with the hierarchical-DRtox than with the logistic-DRtox.
\subsubsection{Recommendation of a more suitable untested dose-regimen}
Finally, one strength of the DRtox is that it models the entire relationship between the dose-regimen and toxicity and can predict the toxicity probability of any new dose-regimen. Notably, in this work, we assumed that the administration times were fixed to simplify the notations, but regimens with different times of drug administration can also be considered. Therefore, at the end of the dose-escalation stage of the trial, the DRtox can recommend dose-regimens that were not tested in the trial to be investigated in expansion studies. For example, let us imagine a scenario in which the panel of dose-regimens missed the true MTD-regimen, as illustrated in the upper plot of Figure \ref{plot_pTox_predict}, where regimen $\boldsymbol{S_3}=(5,10,25,50,50,50,50)$ $\mu$g/kg is underdosing and regimen $\boldsymbol{S_4}=(10,25,50,100,150,150,150)$ $\mu$g/kg is overdosing.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=15cm]{figures/plot_tox_prob_predict_crm.pdf}}
\end{center}
\caption{Violin plots of the estimated toxicity probabilities in an additional scenario in which the dose-regimen panel missed the true MTD-regimen and in Scenario 3, over 1000 trials implemented with the CRM including 30 patients. The predicted toxicity probability of a new regimen $\boldsymbol{S_\text{new}}$ is framed by a dotted line. Horizontal lines on the density estimates represent the median and the first and third quartiles of the distributions, and the plus sign represents the mean. The dashed line represents the toxicity target and the solid line represents the true toxicity probabilities.
\label{plot_pTox_predict}}
\end{figure}
The upper plot of Figure \ref{plot_pTox_predict} illustrates the gap between the estimated toxicity probabilities of regimens $\boldsymbol{S_3}$ and $\boldsymbol{S_4}$, suggesting that an alternative regimen could be found to have a toxicity probability closer to the target. At the end of the dose-escalation stage of the trial, the DRtox can predict the toxicity probability of any new regimen, such as regimen $\boldsymbol{S_\text{new}}=(10,25,50,100,100,100,100)$ $\mu$g/kg, whereas the CRM is unable to perform predictions as the model is built on a skeleton based on the panel of dose-regimens. In the upper plot of Figure \ref{plot_pTox_predict}, we can observe that both the hierarchical-DRtox and the logistic-DRtox predict that new regimen $\boldsymbol{S_\text{new}}$ has a toxicity probability closer to the target; therefore we can propose to evaluate the new regimen in expansion cohorts.
Another practical case is illustrated in Scenario 3 in which the objective was to administer the steady-state dose of $40$ $\mu$g/kg as soon as possible. As shown in the lower plot of Figure \ref{plot_pTox_predict}, the estimated MTD-regimen is $\boldsymbol{S_4}=(10, 20, 40, 40, 40, 40, 40)$ $\mu$g/kg, and the next regimen of the panel, $\boldsymbol{S_5}=(20, 40, 40, 40, 40, 40, 40)$, is estimated to be too toxic. Nevertheless, one might wonder whether another regimen with an acceptable toxicity could be found in which the steady-state dose is administered from the second administration. The DRtox predicts the toxicity probability of new regimen $\boldsymbol{S_\text{new}}=(10, 40, 40, 40, 40, 40, 40) $ to be approximately 0.3 as shown in the lower plot of Figure \ref{plot_pTox_predict}, and this new regimen can be compared in terms of efficacy to the estimated MTD-regimen $\boldsymbol{S_4}$ in subsequent stages of the trial. Therefore, at the end of the trial, the DRtox can evaluate alternative regimens that were not included in the panel for future studies.
\subsubsection{Sensitivity analysis}
We also evaluated the DRtox under different prior distributions and after increasing the variability in toxicity. The results of these analyses are shown in Web Appendix D.
To evaluate the impact of the prior distributions, we compared the main results with those obtained with a stronger prior distribution measured with an approximate ESS of 9, which is high for a trial including 30 patients. As the prior distributions are based on Scenario 1, stronger prior information increased the performance in the scenarios in which the dose-regimen toxicity relationship is similar to that in Scenario 1 (Scenarios 1, 3, 5 and 6), but the performance was decreased in Scenario 4. Therefore, defining prior distributions using reliable data from previous studies can increase the performance, but attention should be paid to the quality and quantity of the information used to avoid decreasing the performance.
We also observed that our methods were robust to an increase in the variability in toxicity by increasing $\omega_\alpha$ from $0.25$ to $0.5$ while maintaining the PK/PD variability unchanged.
\section{Discussion}
\label{s:discuss}
In this work, we developed a dose-regimen assessment method (DRtox) to model the relationship between the dose-regimen and toxicity by modeling a PD endpoint. We estimated the toxicity related to the PD endpoint in the context of an ongoing phase I trial in which the assumption of a monotonic increase in the dose-regimen toxicity relationship did not hold. We found that when the process generating toxicity was reasonably understood and approximated, adding PK/PD information increased the proportion of correct selection (PCS). This method allowed for a better estimation of the dose-regimen toxicity curves, as this type of modeling enabled the sharing of more information across regimens. Moreover, the DRtox was able to evaluate additional regimens for expansion cohorts that were not present in the dose-regimen panel but may have a predicted toxicity probability closer to the target. In practice, our methods should be applied at the end of the dose-escalation phase of the motivating trial once all PK/PD and toxicity data are collected. Our model can address missing data as follows: (1) Regarding missing doses in the dose-regimen and associated cytokine profiles, as we are using nonlinear PK/PD modeling, our method would take into account whether a patient misses one or more planned doses, as the model considers the actual regimen received and not the planned regimen. (2) Regarding missing cytokine data, which is expected to be rare in this trial as the cytokine is carefully monitored by frequent sampling to detect its peak, individual cytokine peaks could be predicted from the population PK/PD model. It would be more common for PK/PD data to be below the limit of quantification, but these data are considered by the PK/PD model as censored data rather than missing data. (3) Regarding missing toxicity data, patients with missing data should be replaced.
In the simulation study, we assumed that the dose-regimens were ordered, but the DRtox can be applied when only partial ordering is known. As the DRtox is applied at the end of the trial, the choice of the dose-escalation design may have a significant impact on the results. The performance achieved using a model-based design, such as the CRM with 30 patients, is better than that achieved using an algorithm-based design, such as the 3+3 design, which has the main disadvantage of treating most patients at subtherapeutic doses and having a small total sample size that cannot be fixed before the trial.
Regarding the logistic-DRtox, since drug administration is stopped in the case of toxicity, the performance can be impacted by incomplete observations of the PD endpoint, even though the impact on our simulation study appeared slight. In case toxicities occur at the beginning of the administrations, resulting in a high number of incomplete PD observations, we propose using the PD response predicted by the PK/PD model under the complete planned regimen.
The hierarchical-DRtox added a constraint, i.e., toxicity must occur at the maximum of the PD response. Errors in the PK/PD estimation may lead to an undefined hierarchical model. In our simulation study, we observed this latter issue in less than $2 \%$ of the trials. In the real world, this issue could indicate that the proposed PK/PD model is incorrect, and that another model should be considered. However, in our simulation study, we decided to run other simulated datasets to replace these affected trials (up to $2 \%$) for all methods. One way to relax this constraint is to allow the patients' toxicity threshold to vary among administrations by adding a second latent variable, which could lead to complex models that are challenging to estimate.
In this work, we assumed that all dose-regimens have the same repetition scheme and duration. However, the DRtox can address regimens with different schemes, administration modes, etc. The first part of the DRtox relies on PK/PD modeling, and an incorrect PK/PD model can have a negative impact on the full method. However, as usual in the PK/PD field, the aim of the model is to capture the outcome profiles well; therefore, an approximate model could still be applied without loss of DRtox performance.
In conclusion, we proposed a general approach for modeling toxicity through a PK/PD endpoint. In this work, we considered a specific PD endpoint in the context of an actual ongoing clinical trial, but various endpoints (such as the AUC or a combination of several toxicity biomarkers) could be used depending on the type of toxicity considered. Moreover, we developed the DRtox under the assumption that toxicity was linked to the maximum value of the PD biomarker, but other assumptions could be raised, such as assuming a cumulative effect. The usual dose-finding designs were developed to determine the MTD in the first cycle of treatment after a single administration. However, with the increase in the number of targeted molecules, immuno-oncology therapies and combinations with alternative dose-regimens, standard dose-allocation designs fail to identify the dose-regimen recommended for future studies. Incorporating PK/PD exposure data in early phase toxicity modeling through stronger collaboration between biostatisticians and pharmacometricians may lead to a better understanding of the entire dose-regimen toxicity relationship and provide alternative dosage recommendations for the next phases of the clinical development.
\section*{Acknowledgements}
This work was partially funded by a grant from the Association Nationale de la Recherche et de la Technologie, with Sanofi-Aventis R\&D, Convention industrielle de formation par la recherche number 2018/0530.
The authors would like to thank Raouf El-Cheikh, Laurent Nguyen and Christine Veyrat-Follet for their time and explanations of PK/PD modeling and Paula Fraenkel for her thoughtful review of the manuscript.
\section{Introduction}
In optical metrology of nanostructures rigorous (i.e., accurate) simulation of light propagation is an essential
component~\cite{Pang2012aot,Lai2012aot}.
A challenge consists in reducing computation times for simulation results matching predefined accuracy requirements.
This is especially important when real-world structures of complex geometry are considered.
\begin{figure}[b]
\begin{center}
\includegraphics[width=.7\textwidth]{epse/fields.eps}
\caption{Electric field intensity distribution in pseudo-color representation (red: high intensity, blue: low intensity).
Top: linear color scale, bottom: logarithmic color scale. Left: S-polarized light, right: P-polarized light.
}
\label{fields}
\end{center}
\end{figure}
We present a fast, finite-element based method to address such computation challenges.
In this contribution we especially focus on finite-element based computation of
derivatives of the propagating light fields (and of derived quantities like transmission or reflection intensities)
with respect to geometrical parameters of the scattering target.
As practical example we present a sensitivity analysis for patterns on a scatterometry
reference standard:
dependence of the scatterometric signal on geometry parameters
(CDs, sidewall-angles, corner-rounding) is evaluated in various parameter regimes.
This paper is structured as follows:
The background of our model is presented in Section~\ref{section_background},
the numerical method is described in Section~\ref{section_numerical},
convergence results are reported in Section~\ref{section_convergence},
and results of a sensitivity analysis of scatterometric signals from a pattern proposed as sample on a scatterometric
standard is reported in Section~\ref{section_sensitivity}.
\section{Background / Model}
\label{section_background}
Light scattering off nanoscopic structures on scatterometry samples is modeled by
the linear Maxwell's equations in frequency domain~\cite{Pomplun2007pssb,Burger2012springer}.
From these a single equation for the electric field $\Field{E}$ can be derived:
\begin{equation}
\mathbf{curl}\;\Tensor{\mu}^{-1}\mathbf{curl}\; \Field{E}-\omega^{2}\Tensor{\epsilon}\Field{E}=i\omega\Field{J},
\label{eq:mwE}
\end{equation}
where $\Tensor{\epsilon}$ and $\Tensor{\mu}$ are the permittivity and permeability tensor, $\omega$ is
the time-harmonic frequency of the electromagnetic field, and the
electric current $\Field{J}$ is source of an electromagnetic field.
The domain of interest is separated into an infinite
exterior $\Omega_{\mathrm{ext}}$ which hosts the given incident field and the scattered field,
and an interior $\Omega_{\mathrm{int}}$ where the total field is computed.
Electromagnetic waves incident from the exterior to the interior at the boundaries between both domains
are added to the right hand side of Eq.~\eqref{eq:mwE}.
For numerical simulations the infinite exterior is treated using transparent
boundary conditions (using the perfectly matched layer method, PML).
Transforming Eq.~\eqref{eq:mwE} into weak formulation and discretizing it using
finite elements yields a matrix equation:
\begin{equation}
A \Field{E}_h = f
\label{eq:matrixE}
\end{equation}
where $A$ is a sparse matrix, $f$ contains the source terms,
and $\Field{E}_h$ is the expansion of the electric field in a finite-dimensional FEM basis.
Inversion of $A$ and multiplication with the right hand side gives the solution $\Field{E}_h$:
\begin{equation}
\Field{E}_h = A^{-1}f
\label{eq:invmatrixE}
\end{equation}
Note that solutions corresponding to different sources incident on the same pattern can be obtained from the
same inverted system matrix, given that $A$ does not depend on the sources.
E.g., when $f_1$ and $f_2$ correspond to incident light of two different polarizations, the corresponding near fields
$\Field{E}_{h,1}$ and $\Field{E}_{h,2}$ can be obtained from the same inverted system matrix:
\begin{eqnarray}
\Field{E}_{h,1}& =& A^{-1}f_1\\
\Field{E}_{h,2}& = &A^{-1}f_2
\label{eq:invmatrixE2}
\end{eqnarray}
Inversion of the system matrix (i.e., computation of $A^{-1}$)
typically is the computationally most costly step, therefore {\it re-using} the same
inverted matrix $A^{-1}$ for $N$ sources reduces the computational costs approximately by a factor of $N$
in a simulation setting where $N$ independent source terms are present.
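As a toy illustration of this re-use, the following sketch factorizes a random sparse stand-in matrix once and back-substitutes for two right-hand sides; it is not an actual finite-element assembly, and all names are illustrative.
\begin{verbatim}
# Toy illustration (not an actual FEM assembly): factorize once,
# back-substitute once per source term.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = (sp.random(n, n, density=1e-3, format="csc")
     + sp.eye(n, format="csc"))     # random sparse stand-in for A
lu = spla.splu(A)                   # costly step, performed once

f1 = np.random.rand(n)              # e.g., S-polarized source
f2 = np.random.rand(n)              # e.g., P-polarized source
E1 = lu.solve(f1)                   # cheap back-substitution
E2 = lu.solve(f2)                   # re-uses the same factorization
\end{verbatim}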
In optimization problems, reconstruction problems and sensitivity studies, often an accurate measure of the
partial derivative of the near field with respect to project parameters $p_i$
(e.g., geometry parameters, source parameters, material parameters), $\partial_{p_i}\Field{E}_{h}$, is required.
As is well known, it is straight-forward in the finite-element context to compute these quantities
by again {\it re-using} the inverted system matrix:
\begin{equation}
\partial_{p_i}\Field{E}_{h} = A^{-1}\left[\partial_{p_i} f -(\partial_{p_i} A )\Field{E}_{h}\right]
\label{eq:partial1}
\end{equation}
Also higher-order derivatives $\partial^N_{p_i}\Field{E}_{h}$ can be computed, e.g.,
\begin{equation}
\partial^2_{p_i}\Field{E}_{h} =
A^{-1}\left[(\partial^2_{p_i} f) -(\partial^2_{p_i} A )\Field{E}_{h} - 2(\partial_{p_i} A)(\partial_{p_i} \Field{E}_{h})\right]
\label{eq:partial2}
\end{equation}
Here, $\partial^N_{p_i} A$ is the $N$th derivative of $A$ with respect to parameter $p_i$, and
$\partial^N_{p_i} f$ is the $N$th derivative of the source term $f$ with respect to parameter $p_i$.
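Continuing the toy sketch above, Eqs.~(\ref{eq:partial1}) and~(\ref{eq:partial2}) amount to one additional back-substitution each; here \texttt{dA}, \texttt{d2A}, \texttt{df} and \texttt{d2f} stand for the (assumed available) derivatives of the system matrix and the source term with respect to $p_i$.
\begin{verbatim}
# Parameter-derivative solves re-using the factorization lu from above;
# dA, d2A, df, d2f are assumed derivative assemblies w.r.t. p_i.
def solve_derivative(lu, dA, df, E_h):
    return lu.solve(df - dA @ E_h)

def solve_second_derivative(lu, dA, d2A, d2f, E_h, dE_h):
    return lu.solve(d2f - d2A @ E_h - 2.0 * (dA @ dE_h))
\end{verbatim}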
\begin{figure}[t]
\begin{center}
\psfrag{R}{\sffamily $R$}
\psfrag{alpha}{\sffamily $\alpha$}
\psfrag{ha}{\sffamily $h$}
\psfrag{line}{\sffamily line}
\psfrag{superspace}{\sffamily superspace}
\psfrag{substrate}{\sffamily substrate}
\psfrag{px}{\sffamily $p$}
\psfrag{cd}{\sffamily $CD$}
\includegraphics[width=.5\textwidth]{epse/schematics_emrp_1.eps}
\caption{
Schematic of the geometry of the investigated scatterometric target (unit cell of a 1D-periodic grating).
Free parameters of the model are the critical dimension (CD, width at $h/2$), the height $h$, pitch $p$, sidewall
angle $\alpha$, corner rounding radius $R$.
}
\label{schematics_emrp_1}
\end{center}
\end{figure}
\section{Numerical method}
\label{section_numerical}
For rigorous simulations of the scattered light field we use the
finite-element (FEM) Maxwell solver JCMsuite.
This solver incorporates higher-order edge-elements, self-adaptive meshing,
and fast solution algorithms for solving time-harmonic Maxwell's equations.
Also, automatic computation of first- and higher-order parameter derivatives is
implemented in the software.
Previously the solver has, e.g., been used in scatterometric investigations
of EUV line masks (1D-periodic patterns), contact hole masks
(2D-periodic patterns) and more complicated 3D patterns~\cite{Scholze2007a,Scholze2008a,Burger2011eom1,Burger2011pm1,Kato2012a}.
Convergence studies in these investigations demonstrate that highly accurate, rigorous
results can be attained even for the relatively large 3D computational domains which are typically present
in 3D EUV setups.
The workflow for the simulations is as follows:
a scripting language (Matlab) automatically iterates the input parameter sets
(physical parameters like geometrical dimensions and numerical parameters like mesh refinement).
For each set, a triangular 2D mesh is created automatically by the built-in mesh generator.
Then, the solver is
started for computing the electromagnetic near field and its parameter derivatives,
postprocessing is performed to extract, e.g., diffraction order efficiencies and their parameter derivatives,
and results are evaluated and saved.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|}
\hline
dimension & 1D \\ \hline
material & Si \\ \hline
pitch & 100\,nm\\ \hline
CD & 50\,nm\\\hline
$h$& 20\,nm \\ \hline
$\alpha$& 88\,deg \\ \hline
$R$ & 2\,nm\\ \hline
$\lambda$& 193\,nm \\ \hline
$\theta$& 30\,deg \\ \hline
$\phi$& 0\,deg \\ \hline
\end{tabular}
\caption{Parameter settings for the scatterometry standard simulations (compare Fig.~\ref{schematics_emrp_1}):
line height $h$, sidewall angle $\alpha$, corner rounding radius $R$, illumination vacuum wavelength $\lambda$,
and illumination inclination and rotation angles, $\theta$ and $\phi$.
}
\label{table_specs_2}
\end{center}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.7\textwidth]{epse/mesh.eps}
\caption{Finite-element mesh for spatial discretization of the geometry. Left: full geometry, right: detail at a rounded
corner.
}
\label{mesh}
\end{center}
\end{figure}
Numerical settings which yield highly accurate results for the setup of interest in the presented investigations
are identified in a convergence study
(Section~\ref{section_convergence}).
As numerical settings for the solver in the subsequent Section~\ref{section_sensitivity} on a sensitivity study,
finite elements of third-order polynomial degree, and adaptive, error-estimator controlled meshing of the
geometry in the computational domain and of transparent boundaries are chosen.
This setting yields discrete problems with a few tens of thousands of unknowns (e.g., 30,000 unknowns),
and a few seconds (e.g., 4\,sec) of computation time
per computation (covering reflectivities and their parameter derivatives, for two polarizations, and
for a specific physical setting).
The FEM software solves these problems by direct LU factorization on a standard desktop computer.
Figure~\ref{fields} shows a graphical representation of a typical near-field intensity distribution. Please note that
(as expected for this angle of incidence) the S-polarized incident wave leads to a smooth intensity distribution
while the P-polarized incident wave leads to a highly discontinuous intensity distribution.
\section{Model validation}
\label{section_convergence}
\begin{figure}[t]
\begin{center}
\begin{minipage}[c]{.45\textwidth}
\psfrag{Relative error}{\sffamily Relative error}
\psfrag{p}{\sffamily $p$}
\psfrag{Rs}{\sffamily $I_0$}
\psfrag{dw Rs}{\sffamily $\partial I_0/\partial_{CD}$}
\psfrag{dh Rs}{\sffamily $\partial I_0/\partial_{h}$}
\psfrag{dswa Rs}{\sffamily $\partial I_0/\partial_{\alpha}$}
\includegraphics[width=.97\textwidth]{epse/convergence_n_s.eps}
\end{minipage}
\begin{minipage}[c]{.45\textwidth}
\psfrag{Relative error}{\sffamily Relative error}
\psfrag{p}{\sffamily $p$}
\psfrag{Rp}{\sffamily $I_0$}
\psfrag{dw Rp}{\sffamily $\partial I_0/\partial_{CD}$}
\psfrag{dh Rp}{\sffamily $\partial I_0/\partial_{h}$}
\psfrag{dswa Rp}{\sffamily $\partial I_0/\partial_{\alpha}$}
\includegraphics[width=.97\textwidth]{epse/convergence_n_p.eps}
\end{minipage}
\caption{
Dependence of the relative error of the reflectivity and its derivatives with respect to geometry parameters
on numerical parameter $p$.
{\it Left:} S-polarized incident light,
{\it Right:} P-polarized incident light.
}
\label{figure_convergence}
\end{center}
\end{figure}
In order to validate our model we perform a convergence study where we investigate how the computed quantities
and their derivatives with respect to geometry parameters depend on the chosen numerical parameters.
We investigate a geometry which could be used as part of a scatterometric standard~\cite{Bodermann2012op}.
The investigated pattern is a 1D-periodic line grating
etched into silicon (Si), with a specific pitch (periodicity) and center line width (CD).
Figure~\ref{schematics_emrp_1}
shows a schematic of the 2D setup for this test case.
Table~\ref{table_specs_2}
lists the parameter values of the project setup.
Figure~\ref{mesh} shows a graphical representation of a 2D mesh.
The pattern is illuminated from the superspace at oblique incidence with S- and P-polarized, monochromatic
plane waves.
The quantity of interest in this case is the intensity of light in the zeroth reflected diffraction order,
$I_0$ ($I_0\sim |\Field{E}|^2$, cf., Eq.~\ref{eq:mwE}), and
its derivatives with respect to line width, height and sidewall angle,
$\partial I_0/\partial_{CD}$,
$\partial I_0/\partial_{h}$,
$\partial I_0/\partial_{\alpha}$,
as function of varied geometry parameters.
Please note that here, we normalize $I_0$ with the intensity of the incoming light field, i.e., $I_0$ is
a dimensionless quantity.
This numerical study is restricted to evaluation of intensities of the unpolarized light field, $I_0$,
however, as the derivatives of the vectorial electric field amplitudes are computed ($\partial_{p_i}\Field{E}_{h}$,
cf., Eq.~\ref{eq:partial1}), also other quantities (sensitivities of all entries in the M\"uller matrix)
are accessible without extra computational costs.
This numerical study is also restricted to 1D-periodic patterns (i.e., 2D computational domains),
however, the method can also be applied (and is implemented in the software) for 3D setups and/or isolated
computational domains (i.e., non-periodic setups).
Numerical errors as present in any numerical method for solving Maxwell's equations
depend on the actual numerical settings. The two main numerical degrees of freedom for the finite-element method are
the spatial discretization (mesh refinement) and the choice of ansatz functions which are used to approximate
the fields on the spatial discretization mesh. The ansatz functions are typically defined by their polynomial degree $p$
(when ansatz functions with a higher degree are chosen, this results in a larger basis for approximating the
solution, and -- more importantly -- in higher approximation quality~\cite{Pomplun2007pssb}).
Figure~\ref{figure_convergence} shows how the numerical error of the reflection intensity and of its derivatives
converges with finite element polynomial degree $p$. Relative errors are defined as normalized deviations from
so-called quasi-exact results (results obtained at higher numerical discretization)~\cite{Burger2005bacus,Burger2008bacus}.
As can be seen from this Figure, very high levels of accuracy are reached both for the reflected intensities and
for their derivatives with respect to geometry parameters. We have also checked that computing these derivatives
using numerical differentiation yields the same numerical values (however, with worse convergence properties, and at
significantly higher numerical cost).
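Such a finite-difference cross-check can be sketched as follows; \texttt{I0\_of\_p} is a hypothetical callable returning the reflectivity for a given parameter value, each call requiring a full FEM solve (which is why the algebraic route of Section~\ref{section_background} is cheaper).
\begin{verbatim}
# Hedged sketch of a finite-difference cross-check for dI0/dp; I0_of_p
# is a hypothetical callable wrapping one full FEM solve per call.
def fd_relative_deviation(I0_of_p, p0, dI0_analytic, h=1e-3):
    dI0_fd = (I0_of_p(p0 + h) - I0_of_p(p0 - h)) / (2.0 * h)
    return abs(dI0_fd - dI0_analytic) / abs(dI0_analytic)
\end{verbatim}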
\section{Sensitivity study}
\label{section_sensitivity}
In order to demonstrate the utility of the method, we have performed several exemplary sensitivity studies.
For given setups we investigate how the derivatives with respect to geometry parameters depend on specific
physical parameter settings. This can be used to identify regimes where a scatterometric setup should work
with higher sensitivity (yielding lower measurement uncertainties) than in other regimes.
Figure~\ref{figure_sensitivity_angle} (left) shows how the scatterometric signal (zero-order reflection intensity)
varies with the inclination angle of incidence $\theta$ of the illuminating
plane waves. As expected, S- and P-polarization show a different behavior.
The right part of this Figure shows how the sensitivity with respect to parameter variations depends on this angle.
From this Figure it can, e.g., be seen that in this case sensitivity is about an order of magnitude higher for incident P-polarized light,
and that absolute values of sensitivity are highest for small angles $\theta$ (i.e., close to perpendicular incidence).
\begin{figure}[t]
\begin{center}
\begin{minipage}[c]{.45\textwidth}
\psfrag{theta}{\sffamily $\theta$ [deg]}
\psfrag{R}{\sffamily $I_0$}
\psfrag{S-Pol}{\sffamily S-Pol}
\psfrag{P-Pol}{\sffamily P-Pol}
\includegraphics[width=.97\textwidth]{epse/angle_scan_r.eps}
\end{minipage}
\begin{minipage}[c]{.45\textwidth}
\psfrag{theta}{\sffamily $\theta$ [deg]}
\psfrag{dp R}{\sffamily $\partial I_0/\partial_{p}$ [1/deg]}
\psfrag{p, dw R p}{\footnotesize \sffamily P, $\partial I_0/\partial_{\tiny CD}$}
\psfrag{p, dh R p}{\footnotesize \sffamily P, $\partial I_0/\partial_{h}$}
\psfrag{p, dswa R p}{\footnotesize \sffamily P, $\partial I_0/\partial_{\alpha}$}
\psfrag{s, dw R s}{\footnotesize \sffamily S, $\partial I_0/\partial_{\tiny CD}$}
\psfrag{s, dh R s}{\footnotesize \sffamily S, $\partial I_0/\partial_{h}$}
\psfrag{s, dswa R s}{\footnotesize \sffamily S, $\partial I_0/\partial_{\alpha}$}
\includegraphics[width=.97\textwidth]{epse/angle_scan_dr.eps}
\end{minipage}
\caption{
{\it Left:} Dependence of the scatterometric signal $I_0$ on the inclination angle of incidence $\theta$ of the illuminating
plane waves, for S- and P-polarization.
{\it Right:} Dependence of the sensitivity with respect to parameter variations (CD, height, sidewall angle) on
the angle of incidence.
}
\label{figure_sensitivity_angle}
\end{center}
\end{figure}
Figure~\ref{figure_sensitivity_height} (left) shows how the scatterometric signal (zero-order reflection intensity)
varies with the height of the grating lines. As in the previous case, S- and P-polarization show a different behavior.
The right part of this Figure shows how the sensitivity with respect to parameter variations depends on the line height.
From this Figure it can, e.g., be seen that in this case again, sensitivity is about an order of magnitude higher for incident P-polarized light,
and that absolute values of sensitivity with respect to CD variations are highest for lines of height $h\approx 20\,$nm
(in the investigated parameter regime).
\begin{figure}[t]
\begin{center}
\begin{minipage}[c]{.45\textwidth}
\psfrag{h}{\sffamily $h$ [nm]}
\psfrag{R}{\sffamily $I_0$}
\psfrag{S-Pol}{\sffamily S-Pol}
\psfrag{P-Pol}{\sffamily P-Pol}
\includegraphics[width=.97\textwidth]{epse/height_scan_r.eps}
\end{minipage}
\begin{minipage}[c]{.45\textwidth}
\psfrag{h}{\sffamily $h$ [nm]}
\psfrag{dp R}{\sffamily $\partial I_0/\partial_{p}$ [1/nm]}
\psfrag{p, dw R p}{\footnotesize \sffamily P, $\partial I_0/\partial_{\tiny CD}$}
\psfrag{p, dh R p}{\footnotesize \sffamily P, $\partial I_0/\partial_{h}$}
\psfrag{p, dswa R p}{\footnotesize \sffamily P, $\partial I_0/\partial_{\alpha}$}
\psfrag{s, dw R s}{\footnotesize \sffamily S, $\partial I_0/\partial_{\tiny CD}$}
\psfrag{s, dh R s}{\footnotesize \sffamily S, $\partial I_0/\partial_{h}$}
\psfrag{s, dswa R s}{\footnotesize \sffamily S, $\partial I_0/\partial_{\alpha}$}
\includegraphics[width=.97\textwidth]{epse/height_scan_dr.eps}
\end{minipage}
\caption{
{\it Left:} Dependence of the scatterometric signal $I_0$ on the height of the grating lines $h$, for S- and P-polarization.
{\it Right:} Dependence of the sensitivity with respect to parameter variations (CD, height, sidewall angle) on $h$.
}
\label{figure_sensitivity_height}
\end{center}
\end{figure}
\section{Conclusion}
To summarize, a method for automatic and computationally cost-effective computation of parameter derivatives
of electromagnetic near fields and derived quantities has been demonstrated. This is useful for design optimization tasks,
parameter reconstruction, sensitivity analysis, and other applications.
A convergence study has been performed which demonstrated that very high levels of accuracy can be achieved.
The method has been applied to investigate sensitivity of a scatterometric setup in different parameter regimes.
\section*{Acknowledgments}
The work presented here is part of the EMRP Joint Research Project IND\,17 {\sc Scatterometry}.
The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union.
The authors would further like to acknowledge the support of
European Regional Development Fund (EFRE) / Investitionsbank Berlin (IBB) through contracts
ProFIT 10144\,554/5 and the support of DFG (Deutsche Forschungsgemeinschaft) through
the DFG Research Center {\sc Matheon}.
\section{Introduction}
Particle physics data are famously collated and summarised in the
Particle Data Book \cite{Beringer:1900zz}. However, it is interesting
to note that there are no entries on the deconfined phase of QCD -- a
symptom of the difficulty of studying (experimentally and
theoretically) this new phase. The {\sc fastsum} Collaboration has
studied QCD at non-zero temperature for a number of years using
dynamical quarks on {\em anisotropic} lattices where the temporal
lattice spacing, $a_\tau$, is less than the spatial one, $a_s$. Since
the temperature $T = 1/(a_\tau N_\tau)$, where $N_\tau$ is the number
of lattice points in the temporal direction, this gives the distinct
advantage that more points are sampled in a Euclidean correlator for a
given temperature compared to the isotropic case.
Our research programme began with two-flavour dynamical ``1st
generation'' ensembles from which we studied a number of
phenomenological quantities, such as spectral features in charmonium
and bottomonium at zero and non-zero momenta, and the inter-quark
potential in charmonium \cite{fastsum1}. We have now improved the
accuracy of our results by producing our ``2nd generation'' lattices
which have 2+1 dynamical flavours, a larger volume, improved discretisation
and more realistic dynamical quark masses, see Table
\ref{tab:params}.
In this talk, I give an overview of our 2nd generation ensembles
including our estimate of the deconfinement temperature, $T_c$, obtained
from the Polyakov loop. I discuss the partial restoration of chiral
symmetry in the light meson spectrum and briefly review four results
obtained from these lattices which are covered fully in other talks
\cite{Giudice:2013fza,Amato:2013oja,Evans:2013zca,Harris:2013vfa}.
\section{Lattice details}
Our 2nd generation ensembles use the Hadron Spectrum Collaboration's
(HSC) Symanzik-improved gauge action \cite{Lin:2008pr}, with
\begin{equation}
S_G = \frac{\beta}{\gamma_g} \left\{
\sum_{x,s\neq s^\prime} \left[
\frac{5}{6 u_s^4 }{\cal P}_{ss^\prime}(x)
- \frac{1}{12 u_s^6 }{\cal R}_{ss^\prime}(x)\right]
+ \sum_{x,s}\gamma_g^2 \left[
\frac{4}{ 3 u_s^2 u_\tau^2}{\cal P}_{s\tau}(x)
- \frac{1}{12 u_s^4 u_\tau^2}{\cal R}_{s\tau}(x)\right]
\right\},
\label{eq:sg}
\end{equation}
where ${\cal P}$ and ${\cal R}$ are the usual $1\times1$ plaquette and
$2\times1$ rectangular Wilson loops, $u_{s(\tau)}$ are the spatial
(temporal) tadpole factors of the bare links, $\gamma_{g(f)}$ are the
bare gauge (fermion) anisotropies and, as usual, $\beta=2 N_c/g^2$ and
$N_c=3$ is the number of colours. The means of the
stout-smeared links are $\tilde{u}_\mu$ (with $\tilde{u}_\tau=1$).
We use a tadpole-improved clover fermion action and stout-smeared
links \cite{Morningstar:2003gk} using the same parameters as the
Hadron Spectrum Collaboration \cite{Lin:2008pr},
\begin{eqnarray}
S_F
&=& \sum_{x} \overline{\psi }(x) \frac{1}{ \tilde{u}_\tau} \bigg\{
\tilde{u}_\tau m_0
+ \gamma_\tau \nabla_\tau + \nabla_\tau^2
+\frac{1}{\gamma_f} \sum_s[ \gamma_s \nabla_s + \nabla_s^2] \nonumber \\
&-&\frac{1}{2} c_\tau \sum_{s} \sigma_{\tau s} F_{\tau s}
- \frac{1}{2} c_s \sum_{s<s^\prime} \sigma_{ss^\prime} F_{ss^\prime}
\bigg\} \psi (x),
\label{eq:sf}
\end{eqnarray}
where
\begin{equation}
c_\tau = \left(\frac{\gamma_g}{\gamma_f}+\frac{1}{\xi}\right)\frac{1}{2\tilde{u}_s^2}\;,
\;\;\;\;\;\;\;\;\;\;
c_s = \frac{1}{\gamma_f \tilde{u}_s^3}.
\end{equation}
The first line is the usual Wilson action and the second line is the
clover piece with $\tau$ and $s$ referring to temporal and spatial
directions. The $\nabla_\mu$ are covariant finite differences and $\xi
= a_s/a_\tau$ is the renormalised anisotropy. $\gamma_{s(\tau)}$ are
the spatial (temporal) Dirac matrices and $\sigma_{\mu\nu} =
\frac{1}{2}[\gamma_\mu, \gamma_\nu]$.
We use the same parameters as the HSC employed in their studies
\cite{Edwards:2008ja} corresponding to an anisotropy, $\xi=3.5$. We
generate ensembles with two volumes, $24^3$ and $32^3$, enabling us to
study finite volume effects. We also make use of the $T=0$
(i.e. $N_\tau=128$) configurations kindly made available to us from
HSC. Table \ref{tab:params} gives a full list of our parameters. The
generation of the ensembles was performed using the Chroma software
suite \cite{Edwards:2004sx} with Bagel routines \cite{bagel}.
\begin{table}
\begin{center}
\begin{tabular}{l|ccccc|ccccc}
\hline
&&&&&&&&&&\\
& \mcl{\bf 1st Generation} & \mc{\bf 2nd Generation} \\
&&&&&&&&&&\\
\hline
&&&&&&&&&&\\
Flavours & \mcl{2} & \mc{2+1} \\
Volume(s) & \mcl{($\sim$2fm)$^3$} & \mc{($\sim$3fm)$^3$ \& ($\sim$4fm)$^3$} \\
$a_\tau$ [fm] & \mcl{0.0268(1)} & \mc{0.03506(23)} \\
$a_s$ [fm] & \mcl{0.162(4)} & \mc{0.1227(8)} \\
$\xi=a_s/a_\tau$ & \mcl{6} & \mc{3.5} \\
$M_\pi/M_\rho$ & \mcl{$\sim 0.54$} & \mc{$\sim 0.45$} \\
$N_\tau^{\text{crit}} = (a_\tau T_c)^{-1}$
& \mcl{33.5} & \mc{30.4(7)} \\
&&&&&&&&&&\\
Gauge Action & \mcl{Symanzik Improved} & \mc{Symanzik Improved} \\
Fermion Action & \mcl{Stout Link, Fine-Wilson,}
& \mc{Tadpole Improved Clover} \\
& \mcl{Coarse-Hamber-Wu} & \mc{\ } \\
&&&&&&&&&&\\
\hline
&&&&&&&&&&\\
&$N_s$ &$N_\tau$ &$T$ &$T/T_c$ &$N_\text{cfg}$ &$N_s$ &$N_\tau$ &$T$ &$T/T_c$ &$N_\text{cfg}$ \\
&&& (MeV) &&&&& (MeV) &&\\
&&&&&&&&&&\\
\hline
&&&&&&&&&&\\
& 12 & 16 & 459 & 2.09 & \crl{1000} & 24 & 16 & 352 & 1.90 & \crr{1000}\\
& 12 & 18 & 408 & 1.86 & \crl{1000} & 32 & 16 & 352 & 1.90 & \crr{1000}\\
& 12 & 20 & 368 & 1.68 & \crl{1000} & 24 & 20 & 281 & 1.52 & \crr{1000}\\
& 12 & 24 & 306 & 1.40 & \crl{ 500} & 24 & 24 & 235 & 1.27 & \crr{1000}\\
& 12 & 28 & 263 & 1.20 & \crl{1000} & 32 & 24 & 235 & 1.27 & \crr{ 500}\\
& 12 & 32 & 230 & 1.05 & \crl{1000} & 24 & 28 & 201 & 1.09 & \crr{1000}\\
& 12 & 80 & 92 & 0.42 & \crl{ 250} & 32 & 28 & 201 & 1.09 & \crr{ 500}\\
& & & & & & 24 & 32 & 176 & 0.95 & \crr{1000}\\
& & & & & & 32 & 32 & 176 & 0.95 & \crr{ 500}\\
& & & & & & 24 & 36 & 156 & 0.84 & \crr{ 500}\\
& & & & & & 24 & 40 & 141 & 0.76 & \crr{ 500}\\
& & & & & & 32 & 48 & 117 & 0.63 & \crr{ 250}\\
& & & & & & 16 &128 & 44 & 0.24 & \crr{ 500}\\
& & & & & & 24 &128 & 44 & 0.24 & \crr{ 550}\\
&&&&&&&&&&\\
\hline
\end{tabular}
\caption{A list of the lattice parameters used for our
1st and 2nd generation ensembles.
\label{tab:params}}
\end{center}
\end{table}
\section{Determination of the deconfining temperature}
The Polyakov loop, $L$, can be used to determine $T_c$ as follows
\cite{Borsanyi:2012xf}. We note that $L$ is related to the free energy,
$F$, of a static quark, via:
\begin{equation}
L(T) = e^{-F(T)/T}.
\end{equation}
However, $F$ is only defined up to an additive renormalisation
constant, $\Delta F=f(\beta,m_0)$.
We can impose a renormalisation condition at a renormalisation
temperature, $T_R$, by requiring
\begin{equation}
L_R(T_R) \equiv c,
\label{eq:lr}
\end{equation}
for some suitable choice of $T_R$ and $c$. This means a
multiplicative renormalisation constant, $Z_L$, can be fixed as
follows
\begin{equation}
L_R(T) = e^{-F_R(T)/T} = e^{-(F_0(T)+\Delta F)/T} =
L_0(T) e^{-\Delta F/T} = L_0(T) Z_L^{N_\tau}.
\end{equation}
In Fig. \ref{fig:poly}, we plot the Polyakov loop with three different
renormalisation schemes corresponding to different choices of $T_R$
and the constant in Eq.$\;$(\ref{eq:lr}), as listed in the figure
caption. By fitting the data to cubic splines we obtain the point of
inflection $a_\tau T_c = 0.0329(7)$ where the error reflects the
spread from the three renormalisation schemes. This statistical
uncertainty is given by the thickness of the three interpolating
curves and can be seen to be negligible in this context. The result is
then $N_\tau^\text{crit} = 30.4(7)$ or $T_c = 185(4)$ MeV.
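Schematically, the renormalisation and inflection-point extraction can be reconstructed as follows; this is an assumed illustration (not the collaboration's analysis scripts), taking arrays \texttt{L0} of bare loop averages and \texttt{Ntau} of the corresponding temporal extents.
\begin{verbatim}
# Assumed reconstruction (not the collaboration's scripts): renormalize
# the bare Polyakov loop, L_R = L0 * Z_L**Ntau with Z_L fixed by
# L_R(Ntau_R) = c, spline in a_tau*T = 1/Ntau, locate the inflection.
import numpy as np
from scipy.interpolate import CubicSpline

def renormalized_loop(L0, Ntau, Ntau_R, c=1.0):
    ZL = (c / L0[Ntau == Ntau_R][0]) ** (1.0 / Ntau_R)
    return L0 * ZL ** Ntau

def aT_critical(L0, Ntau, Ntau_R, c=1.0):
    aT = 1.0 / Ntau                    # a_tau * T for each ensemble
    LR = renormalized_loop(L0, Ntau, Ntau_R, c)
    order = np.argsort(aT)
    spline = CubicSpline(aT[order], LR[order])
    grid = np.linspace(aT.min(), aT.max(), 2001)
    return grid[np.argmax(spline(grid, 1))]   # max of dL_R/d(aT)
\end{verbatim}
Evaluating this for several choices of the renormalisation point $(N_\tau^R, c)$ gives the scheme spread quoted above.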
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{figures/renorm_poly.eps}
\caption{The renormalised Polyakov loop, $L_R$, depicted by solid
($32^3$) and open ($24^3$) symbols. The solid curves are obtained by
cubic splines and their temperature derivatives, $\chi$, are
depicted by dashed curves. Three renormalisation schemes are
considered, Scheme A: $L_R(N_\tau=16) = 1.0$, Scheme B:
$L_R(N_\tau=20) = 1.0$, Scheme C: $L_R(N_\tau=20) = 0.5$.}
\label{fig:poly}
\end{center}
\end{figure}
\section{First results}
The deconfinement transition is expected to occur in the same $T$
range as the chiral symmetry restoration. For this reason it is
interesting to study the chiral partners in the light meson sector to
find evidence of this effect. In Fig. \ref{fig:chiral} we show the
pseudoscalar and scalar meson correlators for $T/T_c = 0.63$ and
$1.90$. As can be seen, in the high temperature case, these two
channels are closer together than at low temperature illustrating the
partial restoration of chiral symmetry in these quantities.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{figures/PS+Sc.eps}
\caption{Correlation functions (normalised relative to $\tau=0$) for
the light scalar and pseudoscalar mesons at two different
temperatures on either side of the deconfinement transition, showing
partial restoration of chiral symmetry. The $N_\tau=16$ points have
been shifted horizontally for clarity.}
\label{fig:chiral}
\end{center}
\end{figure}
We have commenced studying several quantities on our 2nd generation
ensembles. Results on the following quantities have been reported
in this conference and elsewhere.
\begin{itemize}
\item {\bf Susceptibility} \cite{Giudice:2013fza}. We study the
electric charge susceptibility which is of interest experimentally
to quantify fluctuations in heavy-ion collision experiments and for
the determination of the electric charge diffusion coefficient.
\item {\bf Electrical conductivity}
\cite{Amato:2013oja,Amato:2013naa}. The temperature dependence of
the electrical conductivity has been calculated on our lattices,
using the exactly conserved lattice current. We find that the
conductivity divided by the temperature increases with temperature
across the deconfinement transition. This is the first time this
quantity has been computed as a function of temperature.
\item {\bf Inter-quark potential in charmonium} \cite{Evans:2013zca}.
This is the first time this quantity has been calculated at high
temperature with relativistic (rather than static) quarks. We find
that its behaviour at low temperature agrees with the (confining)
Cornell potential and that it becomes less confining as the
temperature increases.
\item {\bf Bottomonium spectrum} \cite{Harris:2013vfa}. We have
used the NRQCD formulation to study spectral functions in
bottomonium via the Maximum Entropy Method. We confirm our earlier
result \cite{fastsum1} that the S-wave ($\Upsilon$ and $\eta_b$)
ground states survive to $T \sim 2T_c$ whereas excited states are
suppressed, while the P-wave ($h_b, \chi_{b0,b1,b2}$) ground states
dissociate close to $T_c$.
\item {\bf Charmonium spectrum} \cite{Kelly:sewm2012}. A study of
charmonium spectral functions across the deconfining transition is
also in progress.
\end{itemize}
\newpage\section{Conclusions}
This talk summarises our {\sc fastsum} collaboration's latest
finite-temperature studies using anisotropic lattices. We have
improved upon our 1st generation 2-flavour ensembles by generating
ensembles which have 2+1 flavours, larger volume, improved
discretisation, and smaller dynamical quark masses. In this talk, the
deconfining temperature was presented and the (partial) chiral symmetry
restoration in the light meson sector was studied. Other work presented elsewhere in this
conference was summarised: the susceptibility, electrical
conductivity, interquark potential in charmonium and (NRQCD)
bottomonium spectral functions.
Our future plans are to improve our ensembles further -- we are
currently tuning our ``3rd generation'' ensembles which have a smaller
temporal lattice spacing and have plans for a ``4th generation'' run
with smaller spatial lattice spacing. We will thus be able to move
towards a continuum extrapolation of all our quantities, leading to
truly quantitative finite-temperature results for spectral quantities.
\section*{Acknowledgements}
This work is undertaken as part of the UKQCD collaboration
and the DiRAC Facility jointly funded by STFC, the Large Facilities Capital
Fund of BIS and Swansea University.
We acknowledge the PRACE Grants 2011040469 and Pra05\_1129,
European Union Grant Agreement No. 238353 (ITN STRONGnet),
HPC Wales,
the Irish Centre for High-End Computing,
the Irish Research Council,
the Leverhulme Trust,
the Royal Society,
the Science Foundation Ireland,
STFC,
and the Wolfson Foundation
for support.
The authors would like to thank Seyong Kim, Maria Paola Lombardo, Mike
Peardon and Don Sinclair for useful comments, discussions and collaboration.
\section{Introduction}
\subsection{Literature Review}
The dynamics of incompressible fluid flows are described by a set of nonlinear PDEs known as the Navier-Stokes equations. The properties of such flows are then characterized in terms of a dimensionless parameter $Re$, the Reynolds number. Experiments show that many wall-bounded shear flows have a critical Reynolds number $Re_C$ below which the flow is stable
with respect to disturbances of any amplitude. However, spectrum analysis of the linearized Navier-Stokes equations, considering only infinitesimal perturbations,
predicts a linear stability limit $Re_L$ which upper-bounds $Re_C$~(\cite{DR81}). On the other hand, the bounds obtained using energy methods, $Re_E$, the limiting value below which the energy of arbitrarily large perturbations decreases monotonically, lie well below $Re_C$~(\cite{J76}). For Couette flow, for instance, $Re_E =32.6$ was computed by~\cite{Se59} using the energy functional, $Re_L = \infty$ using spectrum analysis~(\cite{Romanov73}), and $Re_C \approx 350$ was estimated empirically by~\cite{TA92}. \\
Conventional hydrodynamic stability methods usually involve linearization of the Navier-Stokes equations around a base flow followed by spectrum analysis, revealing the Reynolds number estimate for when this solution becomes unstable. The discrepancy between $Re_L$ and $Re_C$ has long been attributed to this eigenvalue analysis of the linearized Navier-Stokes operator~(\cite{Trefethen30071993}). Other theoretical methods for studying stability of flows are often based on spectral truncation of the Navier-Stokes equations into an ODE system. This method is hampered by truncation errors and by the mismatch between the dynamics of the truncated model and the Navier-Stokes PDE. To alleviate this drawback, a method was recently proposed in~(\cite{GC12,CGHP14}) based on keeping a number of modes from the Galerkin expansion of the nonlinear Navier-Stokes equations and bounding the energy of the remaining modes. It was shown in~(\cite{Huang20150622}) that, in the case of rotating Couette flow, this method can find a global stability limit, which is better than the energy method but not as good as the linear stability limit\footnote{Recall that the linear stability and the global stability limits coincide for the Taylor-Couette flow (\cite{Taylor289}).}.\\
In fact, even in the seminal paper by~\cite{Rey83}, it was observed that external excitations and body forces play an important role in flow instabilities. Mechanisms such as energy amplification of external excitations and body forcings have been shown to be crucial in understanding transition to turbulence, as highlighted by~\cite{J76}. Therefore, instead of studying stability, researchers began to focus on growth and were able to uncover additional flow properties through the new paradigm of input-output analysis. A phenomenon called \textit{transient growth} is known as the culprit for flow instability; \textit{i.e.}, although the perturbations to the linearized Navier-Stokes equation are stable (and the eigenvalues have negative real parts), they undergo high-amplitude transient amplifications that steer the trajectories out of the region of linearization. The root cause of the transient growth phenomenon is the non-normality of the stable Navier-Stokes operator
that has been linearized about a base flow. This phenomenon has led to studying the resolvent operator or $\varepsilon$-pseudospectra to uncover when transition occurs, based on the general solution to the linearized Navier-Stokes equations (\cite{Sch07}). In particular, (\cite{mckeon_sharma_2010}) used resolvent analysis to study the amplification scalings from an input composed of nonlinear terms and periodic forcings for turbulent pipe flows. \\
The input-output properties can be characterized based on the class of forcings (noise vs square integrable signals) and the flow model (linear vs nonlinear or finite-dimensional vs infinite dimensional) one considers. For stochastic forcings (Gaussian noise), energy amplification to the linearized Navier-Stokes equations in wall-bounded shear flows was studied by~\cite{FI93}. In a similar vein,~(\cite{BD01}), using the stochastically forced linearized Navier-Stokes equation, showed analytically through the calculation of traces of operator Lyapunov equations, that the input-output $\mathcal{H}^2$-norm from streamwise constant excitations to perturbation velocities in channel flows is proportional to $Re^3$. The amplification scaling of the linearized Navier-Stokes equation was further characterized in~(\cite{JB05}) and (\cite{mjphd04}), where the authors studied the influence of each component of the body forces in terms of the input-output $\mathcal{H}^2$-norm. For square integrable forcings, \cite[Chapter 9]{mjphd04} and~(\cite{JB05}) provided worst-case amplification mechanisms for incompressible viscous channel flows based on the linearized Navier-Stokes equations. \\
\subsection{Contribution}
Our work extends the rich input-output analysis paradigm. We propose a method based on dissipation inequalities (\cite{W72}) to study input-output amplification in wall-bounded shear flows (described by the \emph{nonlinear} Navier-Stokes PDE, rather than finite-dimensional ODE approximations or linearizations) that are invariant in one of the spatial directions.
Here, a dissipation inequality establishes a relation between the rate of change of the
weighted kinetic energy of the flow perturbations (characterized by a storage functional), the energy supplied from the body forces, and the energy dissipated via viscosity (characterized by a supply rate). This approach exploits our previous work (\cite{AVPaut}) wherein dissipation inequalities for nonlinear PDEs were formulated. \\
Based on these dissipation inequalities, we study three flow properties. We start by studying energy growth from initial perturbations, which is tantamount to the notion of transient growth (\cite{Trefethen30071993}). Note that the definition of transient growth requires a linear approximation of the dynamics; whereas, the concept of energy growth used in this study is applied directly to the nonlinear dynamics. Additionally, we consider body forcings and external excitations that are square integrable and we study worst-case amplification mechanisms. In addition to square integrable forcings, we provide a mathematical framework to consider a new class of forcings, in particular those that are constrained only in terms of either their maximum or their absolute value for all time. To the best of our knowledge, this is the first time that the input-output response of wall-bounded shear flows under persistent forcings has been investigated. \\
Furthermore, for flows with streamwise constant perturbations described by the nonlinear Navier-Stokes equations, we find a weighted kinetic energy form as the storage functional that converts the dissipation inequalities into integral inequalities with quadratic integrands in perturbation velocities and their spatial derivatives. Then, using these functionals, we propose conditions based on matrix inequalities that can be checked via convex optimization using available MATLAB software. One strength of the method is that the results can be directly extended to more complex flow geometries as long as they can be described by semi-algebraic sets. A precise characterization of this condition is provided in Section~\ref{sec:convex}. \\
Our proposed methodology allows us to study multiple input-output aspects, such as energy growth, worst-case disturbance amplification, and stability to persistent disturbances of a broad class of shear flows within a single framework. We evaluate the performance of the proposed method by several examples from both channel and pipe flows, namely rotating Couette flow, plane Couette flow, plane Poiseuille flow, and Hagen-Poiseuille flow. We demonstrate that our results tally with the transient growth results in the literature. For channel flows, we show the results obtained using our method are consistent with the results in~(\cite{JB05}) and~\cite[Chapter 9]{mjphd04} in terms of worst-case disturbance amplification and we show that our framework can be used to study pipe flows, as well. Moreover, we observe an intriguing correspondence between the stability bounds to persistent forcings and the experimental Reynolds numbers for transition to turbulence, which provides a theoretical tool to predict transition.\\
Preliminary mathematical results on this work were presented in~(\cite{7403365}). The current paper is different from~(\cite{7403365}) in several aspects. From a theoretical standpoint, the current paper provides a method for energy growth analysis and extends the formulation to both flows between parallel plates and flows in pipes. In addition, it presents the mathematical proofs of the input-output analysis framework and the formulation based on convex optimization. From the examples standpoint, in addition to an extended study of the rotating Couette flow, we applied the framework to investigate the input-output properties of plane Couette flow, plane Poiseuille, and the Hagen-Poiseuille flow. Furthermore, the current version includes a comparison with previous results in the literature and an examination of flow structures corresponding to maximum input-output amplifications.
\subsection{Organization}
In the next section, we briefly describe the flow model studied in the paper. In Section~\ref{sec:dissiInq}, we propose the flow input-output analysis framework based on dissipation inequalities.
In Section~\ref{sec:convex}, we show how the input-output analysis can be computationally implemented as the solution to a convex optimization problem. In Section~\ref{sec:NR}, we demonstrate the effectiveness of the proposed framework by applying it to study input-output properties of rotating Couette flow, plane Couette flow, plane Poiseuille flow, and Hagen-Poiseuille flow. Finally, in Section~\ref{sec:conclusions}, we present some concluding remarks and provide directions for future research.
\section{The Flow Perturbation Model}
Let $I$ be an index set corresponding to the spatial coordinates.
The dynamics of forced incompressible shear flows are described by the Navier-Stokes equations, given~by
\begin{eqnarray} \label{eq:NS}
\partial_t \boldsymbol{\bar{u}} &=& \frac{1}{Re} \nabla^2 \boldsymbol{\bar{u}}- \boldsymbol{\bar{u}}\cdot \nabla \boldsymbol{\bar{u}}- \nabla {\bar{p}} + F \boldsymbol{\bar{u}}+\boldsymbol{d}, \nonumber \\
0 &=& \nabla \cdot \bar{\boldsymbol{u}},
\end{eqnarray}
\noindent where $t>0$, $F \in \mathbb{R}^{3\times3}$ represents terms coming from rotation, $\mathrm{x} \in \Omega = \Omega_{i} \times \Omega_j \subset \mathbb{R} \times \mathbb{R}$ with $i \neq j$, $i,j \in I$ are spatial coordinates, and $\partial_s(\cdot) = \frac{\partial (\cdot)}{\partial s}$.
The vector field $\boldsymbol{d}: \mathbb{R}_{\ge 0} \times \Omega \to \mathbb{R}^3$ is the input representing exogenous excitations or body forces, $\boldsymbol{\bar{u}} : \mathbb{R}_{\ge 0} \times \Omega \to \mathbb{R}^3$ is the velocity vector, and ${\bar{p}}:\mathbb{R}_{\ge 0} \times \Omega \to \mathbb{R}$ is the pressure. $\nabla^2$ is the Laplacian operator,
$\nabla$ denotes the gradient, and $\nabla \cdot \boldsymbol{u}$ denotes the divergence of $\boldsymbol{u}$.
We consider perturbations $(\boldsymbol{u},{p})$ to the steady solution $(\boldsymbol{U},{P})$, which are spatially invariant in one of the directions, say $x_m$, $m \in I$, \textit{i.e.,}~\mbox{$\partial_{x_m} =0$}.
Let $I_0= I-\{m\}$. The velocity field can be decomposed as
\begin{equation} \label{eq:perturbsub}
\boldsymbol{\bar{u}} = \boldsymbol{u} + \boldsymbol{U},~{\bar{p}} = {p} + {P},
\end{equation}
where $(\boldsymbol{U},P)$ are divergence free steady state solutions, \textit{i.e.},
\begin{eqnarray} \label{eq:eU}
0 &=& \frac{1}{Re} \nabla^2 \boldsymbol{U} -\boldsymbol{U} \cdot \nabla \boldsymbol{U} - \nabla P + F\boldsymbol{U}.
\end{eqnarray}
Substituting~\eqref{eq:perturbsub} in~\eqref{eq:NS} and using~\eqref{eq:eU}, we obtain the perturbation dynamics
\begin{eqnarray} \label{eq:mainNS}
\partial_t \boldsymbol{u} &=& \frac{1}{Re} \nabla^2 \boldsymbol{u} - \boldsymbol{u} \cdot \nabla \boldsymbol{u} - \boldsymbol{U} \cdot \nabla \boldsymbol{u} - \boldsymbol{u} \cdot \nabla \boldsymbol{U} -\nabla p + F \boldsymbol{u} + \boldsymbol{d}, \nonumber \\
0 &=& \nabla \cdot \boldsymbol{u}.
\end{eqnarray}
In the rest of this paper, we study the properties of~\eqref{eq:mainNS}. We concentrate on perturbations with no-slip boundary conditions $\boldsymbol{u}|_{\partial \Omega} \equiv 0$ (in the direction with solid boundaries) and periodic boundary conditions (in the spatially homogeneous direction). In a similar manner, we extend the results to pipe flows (cylindrical coordinates) as discussed in Appendix~\ref{app:cylindr}. Next, we introduce the input-output analysis method based on dissipativity theory.
\section{Dissipation Theory and Dissipation Inequalities}\label{sec:dissiInq}
In systems and control theory, dissipativity (\cite{W72,Willems2007134,hill1980dissipative})\footnote{Note that the notion of dissipativity used here should not be confused with dissipative operators in semigroup theory (\cite{lumer1961}). The latter is concerned with proving the existence of a contraction semigroups and used to prove well-posedness of solutions to PDEs, \cite{CZ95}; whereas, the dissipativity notion we use here is concerned with the input-output properties of a dynamical system.} establishes a relationship between the energy stored in the system represented by a continuous, non-negative functional $V(u)$, known as the storage functional, and the power supplied to the system $W(u,d,y)$, known as the supply rate, with $d$ and $y$ being the inputs and outputs of the system, respectively. This relationship is often given by a dissipation inequality (in differential form) as
\begin{equation} \label{eq:DisInq}
\frac{d V(u)}{dt} \le W(u,d,y).
\end{equation}
A system is called dissipative with respect to the supply rate $W(u,d,y)$, if there is a non-negative functional $V(u)$ that satisfies~\eqref{eq:DisInq}. Dissipativity theory has a close connection with Lyapunov stability theory~(\cite{khalil1996noninear}). In particular, dissipativity theory can be understood as a generalization of the Lyapunov stability theory to systems with inputs and outputs.
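As a toy illustration of these notions (a scalar example, unrelated to the flows studied below), consider $\dot{u} = -u + d$ with storage functional $V(u) = \frac{1}{2}u^2$. Then
$$
\frac{dV}{dt} = -u^2 + ud \le ud,
$$
so the system is dissipative with respect to the passivity supply rate $W(u,d)=ud$. Combining instead with Young's inequality, $ud \le \frac{1}{2}u^2 + \frac{1}{2}d^2$, gives $\frac{dV}{dt} \le -\frac{1}{2}u^2 + \frac{1}{2}d^2$; integrating from $u(0)=0$ yields the finite-gain bound $\|u\|_{\mathcal{L}^2} \le \|d\|_{\mathcal{L}^2}$, the scalar analogue of the worst-case amplification bounds developed below.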
Given the dissipation inequality~\eqref{eq:DisInq} with a fixed supply rate, the main challenge is to find a corresponding storage functional that satisfies the dissipation inequality along the solutions of the flow. In fact, the kinetic energy can be shown to be a candidate storage functional for some input-output properties. In the special case of a non-rotating flow ($F=0$ in~\eqref{eq:mainNS}) under no-slip, stress-free or periodic boundary conditions, if we set $V$ to be the kinetic energy of the perturbations, $V({\boldsymbol{u}}) = \frac{1}{2}\int_\Omega | {\boldsymbol{u}} |^2~{d}\Omega$, we can show~\cite[p. 31]{DG95} that the total kinetic energy of the perturbations satisfies the following equality
$$
\frac{d V(\boldsymbol{u})}{dt} = -\frac{1}{Re} \| \nabla \boldsymbol{u} \|_{\mathcal{L}^2_\Omega}^2 - \int_\Omega \boldsymbol{u} \cdot \nabla \boldsymbol{U} \cdot \boldsymbol{u}~d\Omega + \int_\Omega {\boldsymbol{u}} \cdot \boldsymbol{d}~d\Omega.
$$
The above equality implies that the kinetic energy of the perturbations in the flow changes according to three effects:
the energy dissipated by viscosity, the energy either injected or dissipated depending on the base flow, and the energy expended by the external force.
Since the viscosity term $\frac{1}{Re} \| \nabla \boldsymbol{u} \|_{\mathcal{L}^2_\Omega}^2$ is always non-negative, we can obtain the following inequality
$$
\frac{d V(\boldsymbol{u})}{dt} \le - \int_\Omega \boldsymbol{u} \cdot \nabla \boldsymbol{U} \cdot \boldsymbol{u}~d\Omega + \int_\Omega {\boldsymbol{u}} \cdot \boldsymbol{d}~d\Omega.
$$
If the base flow $\boldsymbol{U}$ is such that the term $\int_\Omega \boldsymbol{u} \cdot \nabla \boldsymbol{U} \cdot \boldsymbol{u}~d\Omega$ is non-negative, we can conclude that the following dissipation inequality holds
$$
\frac{d V(\boldsymbol{u})}{dt} \le \int_\Omega {\boldsymbol{u}} \cdot \boldsymbol{d}~d\Omega,
$$
where $W(\boldsymbol{u},\boldsymbol{d}) = \int_\Omega {\boldsymbol{u}} \cdot \boldsymbol{d}~d\Omega$ is the supply rate. This is a well-known dissipation inequality that corresponds to \textit{passivity}. Passivity has been used to study finite-dimensional linear discretizations of the Navier-Stokes equation with the nonlinearity being modeled as an input~(\cite{3662449,HEINS2016348}).
The general dissipation inequality framework allows us to consider more general energy inequalities rather than only the passivity inequality. In particular, our formulation considers weighted kinetic energy as the storage functional and three different supply rates. As will be shown in Section~\ref{sec:convex}, for the class of fluid flows studied in this paper, we present an algorithmic way to find the storage functionals based on convex optimization.
\subsection{Input-Output Properties}
We now define the three types of input-output properties that we study within the dissipativity framework and discuss their relation to common notions in the literature.
The first property that we can study is bounds on the maximum \textit{energy growth} due to initial perturbation velocities for the nonlinear Navier-Stokes equation~\eqref{eq:mainNS}. In the context of linear systems, this corresponds to maximum transient growth,~\cite{858386, reddy_henningson_1993,gustavsson_1991}.
\begin{defi} [Energy Growth]
Let \mbox{$\boldsymbol{d}\equiv 0$} in ~\eqref{eq:mainNS}. If there exists a constant $\gamma>0$ such that
\begin{equation} \label{eq:engr}
\| \boldsymbol{u} \|_{\mathcal{L}^2_{[0,\infty),\Omega}} \le \gamma \| \boldsymbol{u}(0,\cdot) \|_{\mathcal{L}^2_{\Omega}},
\end{equation}
where $\| u \|_{\mathcal{L}^2_{[0,T),\Omega}} = \left ( \int_0^T \int_\Omega u^2(\tau,\theta)~\mathrm{d}\tau \mathrm{d}\theta \right)^{\frac{1}{2}}$ and $\| u_0 \|_{\mathcal{L}^2_{\Omega} }= \left ( \int_\Omega u_0^2(\theta)~ \mathrm{d}\theta \right)^{\frac{1}{2}}$, then we say that the flow perturbations have bounded energy growth.
\end{defi}
The next property of interest is related to amplifications from square integrable body forces or disturbances (see \cite[Chapter 9]{mjphd04} for results pertaining to a linearized model of channel flows). The square integrable forcings are of special interest, because they can be interpreted as finite energy forcings.
We refer to this class of amplifications as \textit{worst-case disturbance amplification}.
\begin{defi} [Worst-Case Disturbance Amplification]
If there exists \mbox{$\eta_i >0$, $i\in I$}, such that
\begin{equation} \label{eq:L2}
\| \boldsymbol{u} \|_{\mathcal{L}^2_{[0,\infty),\Omega}}^2 \le \sum_{i\in I}\eta_i^2 \| d_{i} \|_{\mathcal{L}^2_{[0,\infty),\Omega}}^2,
\end{equation}
subject to zero initial perturbations $\boldsymbol{u}(0,\mathrm{x})\equiv0,~\forall \mathrm{x} \in \Omega$, then we say that the flow has bounded worst-case disturbance amplification.
\end{defi}
The above property is equivalent to the induced $\mathcal{L}^2$-norm in control theory (\cite{van2017l2}). In other words, each $\eta_i$ upper-bounds the peak amplification of perturbation velocities from the forcing in the direction $i$, $d_i$, when the forcings in the other directions are set to zero, i.e., $d_j = 0$,~$j \in I$, $i \neq j$. That is,
$$
\eta_i \le \sup_{\| {d}_i \|_{\mathcal{L}^2_{[0,\infty),\Omega}} \neq 0} \frac{\| \boldsymbol{u} \|_{\mathcal{L}^2_{[0,\infty),\Omega}}}{\| {d}_i \|_{\mathcal{L}^2_{[0,\infty),\Omega}}}.
$$
Due to nonlinear flow dynamics, the actual induced $\mathcal{L}^2$-norm of system~\eqref{eq:mainNS} is a nonlinear function of $\|\boldsymbol{d}\|_{\mathcal{L}^2_{[0,\infty),\Omega}}$ \cite[Example I]{AVPaut}. The quantities $\eta_i,~i\in I$ provide upper-bounds on the actual induced $\mathcal{L}^2$-norms. In this sense, minimizing $\eta_i >0$, $i\in I$, provides an upper bound to the worst-case disturbance amplification.
From a practical perspective, global
stability of a base flow is often not very meaningful, because small disturbances may cause unstable behavior. Hence, we require a notion of stability that relates disturbances to perturbation velocities. Moreover, the definition of the worst-case disturbance amplification requires the forcings to be square integrable. This automatically excludes persistent forcings, e.g., constant and sinusoidal forcings, that are defined for all time. To include these classes of forcings in a nonlinear context\footnote{In the fluids literature, the ensemble average energy density or the $\mathcal{H}^2$-norm has been used to study amplifications from Gaussian stochastic forcings to the linearized flow dynamics (\cite{FI93,JB05}). The $\mathcal{H}^2$-norm is equivalent to the root-mean-square (RMS) value of the linearized flow response to white noise forcings. However, extension of $\mathcal{H}^2$ analysis to the nonlinear Navier-Stokes equations is an open problem.}, we employ the concept of input-to-state stability (\cite{So08}) to study the class of upper-bounded forcings. We refer to this extended notion of stability as \textit{stability to persistent disturbances}.
Prominent among the features of this property
are that forcings that are bounded, eventually
small, integrally small, or convergent should
lead to perturbation velocities with the respective property. Furthermore, this property quantifies in what
manner initial perturbation velocities affect transient behavior. Flows with this property
do not have unstable behavior for persistent (nonvanishing) forcings.
To characterize this property, let us introduce two classes of comparison functions: $\mathcal{K}$ denotes the class of nonnegative functions that are strictly increasing and zero at zero, and $\mathcal{K}_\infty$ denotes the subclass of functions that, in addition, become unbounded as their argument goes to infinity. For example, $s \mapsto \arctan(s)$ belongs to $\mathcal{K}$ but not to $\mathcal{K}_\infty$, while $s \mapsto s^2$ belongs to $\mathcal{K}_\infty$.
\begin{defi} [Stability to persistent disturbances]
If there exist some scalar $\psi>0$, functions $\beta,\tilde{\beta},\chi \in \mathcal{K}_\infty$, and $\sigma \in \mathcal{K}$, such that
\begin{multline}\label{eq:iss}
\|\boldsymbol{u}(t,\cdot)\|_{\mathcal{L}^2_\Omega} \le \beta \left( e^{-\psi t} \chi \left(\|\boldsymbol{u}(0,\cdot)\|_{\mathcal{L}^2_\Omega} \right) \right) + \tilde{\beta} \left( \sup_{\tau \in [0,t)} \big( \int_\Omega \sigma \big(|\boldsymbol{d}(\tau,\mathrm{x})|\big) \,\, d\Omega \big) \right),
\end{multline}
\noindent for all $t>0$, then we call the flow stable to persistent disturbances.
\end{defi}
Property~\eqref{eq:iss} implies convergence to the base flow $(\boldsymbol{U},P)$ in the $\mathcal{L}^2_\Omega$-norm (the norm corresponding to the space of square integrable functions over the spatial domain) when the disturbances are not present ($\boldsymbol{d} \equiv 0$). Indeed, the $\beta \left( e^{-\psi t} \chi \left(\|\boldsymbol{u}(0,\cdot)\|_{\mathcal{L}^2_\Omega} \right) \right)$ term dominates for small $t$, and this serves
to quantify the magnitude of the transient growth as a function of the size of the
initial state $\|\boldsymbol{u}(0,\cdot)\|_{\mathcal{L}^2_\Omega}$.
Moreover, as $t \to \infty$, we obtain
\begin{multline}
\lim_{t \to \infty}\| \boldsymbol{u}(t,\cdot) \|_{\mathcal{L}^2_\Omega} \le \tilde{\beta} \left( \int_\Omega \|\sigma(|\boldsymbol{d}(\cdot,\mathrm{x})|) \|_{\mathcal{L}^\infty_{[0,\infty)} }\,\, d\Omega \right)
\le \tilde{\beta} \left( \int_\Omega \sigma(\|\boldsymbol{d}(\cdot,\mathrm{x})\|_{\mathcal{L}^\infty_{[0,\infty)} }) \,\, d\Omega \right),
\end{multline}
\noindent where $\sigma \in \mathcal{K}$, $\tilde{\beta} \in \mathcal{K}_\infty$, and $\|f\|_{\mathcal{L}^\infty_{[0,\infty)} }=\sup_{\tau \in [0,\infty)} |f(\tau)|$. Hence, as long as the external excitations or body forces $\boldsymbol{d}$ are upper-bounded, the perturbation velocities $\boldsymbol{u}$ are bounded in the $\mathcal{L}^2_\Omega$-norm, meaning that they remain square integrable over the flow geometry.
In fact, by the input-to-state superposition theorem~(\cite{Sontag2013}),
we can show that stability to persistent disturbances is the conjunction of two
properties, one of them concerned with asymptotic
bounds on the perturbation velocities, in the sense of $\|\boldsymbol{u}(t,\cdot)\|_{\mathcal{L}^2_\Omega}$,
as a function of the magnitude of
the forcings, and the other one providing a transient
term obtained when we ignore forcings (see Figure~\ref{figiss}).
\begin{figure}
\centering{
\includegraphics[width=9cm]{figures/ISSfluid.eps}}
\caption{ The stability to persistent disturbances property combines transient growth (overshoot)
and asymptotic behavior.}\label{figiss}
\end{figure}
We now demonstrate how the problem of verifying the properties in Definitions 3.1-3.3 can be cast as verifying a set of dissipation inequalities. This result, which can be derived from~\cite[Theorem~6]{AVPaut}, allows for the extension of well-known methods for stability, input/output, and optimal perturbation analysis of linear systems to the full nonlinear Navier-Stokes equation.
\begin{thm} \label{bigthmfluidfluid}
Consider the perturbation model~\eqref{eq:mainNS}. If there exist a positive semidefinite storage functional {$V(\boldsymbol{u})$}, positive scalars $\{\eta_i\}_{i\in I}$, $\psi$, $\gamma$, and functions $\beta_1,\beta_2 \in \mathcal{K}_\infty$, $\sigma \in \mathcal{K}$, such that\\
\mbox{I)} when $\boldsymbol{d}\equiv 0$,
\begin{equation} \label{fjffjhfjghfjfh}
V(\boldsymbol{u}) \le \gamma^2 { \| \boldsymbol{u}(t,\cdot) \|^2_{\mathcal{L}^2_\Omega}},
\end{equation}
\begin{equation} \label{con:engr}
\frac{d V(\boldsymbol{u}(t,\mathrm{x}))}{dt} \le - \int_\Omega \boldsymbol{u}^\prime (t,\mathrm{x})\boldsymbol{u}(t,\mathrm{x}) \,\, d\Omega,
\end{equation}
then it has bounded energy growth as given by~\eqref{eq:engr};\\
\mbox{II)}
\begin{eqnarray} \label{fluide6}
\frac{d V(\boldsymbol{u}(t,\mathrm{x}))}{dt} \le - \int_\Omega \boldsymbol{u}^\prime (t,\mathrm{x})\boldsymbol{u}(t,\mathrm{x}) \,\, d\Omega
+ \int_\Omega \sum_{i \in I} \eta_i^2 {d}_i^2(t,\mathrm{x}) \,\, d\Omega,
\end{eqnarray}
then the perturbation velocities of~\eqref{eq:mainNS} have worst-case disturbance amplification upper-bounds $\eta_i$, $i\in I$ as in~\eqref{eq:L2};\\
\mbox{III)}
\begin{equation} \label{fluidse11}
{ \beta_1(\|\boldsymbol{u}(t,\cdot)\|_{\mathcal{L}^2_\Omega}) \le V(\boldsymbol{u}) \le \beta_2(\|\boldsymbol{u}(t,\cdot)\|_{\mathcal{L}^2_\Omega})},
\end{equation}
\begin{equation} \label{fluidse12}
\frac{d V(\boldsymbol{u}(t,\mathrm{x}))}{dt} \le - \psi V(\boldsymbol{u}(t,\mathrm{x})) + \int_\Omega \sigma(|\boldsymbol{d}(t,\mathrm{x})|) \,\, d\Omega,
\end{equation}
then the perturbation velocities described by~\eqref{eq:mainNS} are stable to persistent disturbances as given by \eqref{eq:iss} with $\chi = \beta_2$, $\beta(s)=\beta_1^{-1}(2s)$ and $\tilde{\beta}(s)=\beta_1^{-1}\left(\tfrac{2}{\psi}s\right)$.
\end{thm}
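To see, for instance, how condition II yields~\eqref{eq:L2}, integrate~\eqref{fluide6} from $0$ to $T$ with $\boldsymbol{u}(0,\cdot)=0$: since $V$ is positive semidefinite and vanishes at zero,
$$
0 \le V(\boldsymbol{u}(T,\mathrm{x})) \le -\| \boldsymbol{u} \|_{\mathcal{L}^2_{[0,T),\Omega}}^2 + \sum_{i\in I}\eta_i^2 \| d_{i} \|_{\mathcal{L}^2_{[0,T),\Omega}}^2,
$$
and letting $T \to \infty$ recovers the bound (this is only a sketch; the complete proofs follow~\cite{AVPaut}).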
In the following, we derive classes of storage functionals $V(\boldsymbol{u})$ suitable for the analysis of perturbation dynamics~\eqref{eq:mainNS} invariant in one of the three spatial coordinates. We consider two classes of flows, namely, channel flows with perturbations that vary in two spatial dimensions and time discussed in Section~\ref{sec:channelwww} and pipe flows invariant in the axial direction discussed in Appendix~\ref{app:cylindr}.
\subsection{Flows Between Parallel Plates} \label{sec:channelwww}
In Cartesian coordinates, for a scalar function ${v}$, $\nabla {v} = \sum_i \partial_{x_i} v \overrightarrow{e}_{i}$ and $\nabla^2 {v} = \sum_i \partial_{x_i}^2 v$, {where $\overrightarrow{e}_{i}$ is the unit vector in the direction $x_i$.} For a vector valued function $\boldsymbol{w}=\sum_i w_i \overrightarrow{e}_{i}$, the divergence $\nabla \cdot \boldsymbol{w}$ is given by $\nabla \cdot \boldsymbol{w} = \sum_i \partial_{x_i} w_i$.
In the following, $\{x_1,x_2,x_3\}$ corresponds to $\{x,y,z\}$ (streamwise, wall-normal, and spanwise directions) and $I=\{1,2,3\}$. Additionally, we adopt Einstein's summation convention over the repeated index $j$, e.g., $v_j \partial_{x_j} u_i = \sum_j v_j\partial_{x_j} u_i$.
The perturbation model \eqref{eq:mainNS} can be re-written as
\begin{eqnarray} \label{eq:mainNSEin}
\partial_t u_i &=& \frac{1}{Re} \nabla^2 u_i - u_j \partial_{x_j} u_i - U_j \partial_{x_j} u_i - u_j \partial_{x_j} U_i - \partial_{x_i} p + F_{ij} u_j + d_i, \nonumber \\
0 &=& \partial_{x_j} u_j,
\end{eqnarray}
where $i,j \in I$ and $F_{ij}$ is the $(i,j)$ entry of $F$. To simplify the exposition, without loss of generality, we assume that the perturbations are invariant with respect to $x_1$. Since the $x_i$, $i=1,2,3$, are arbitrary, this does not affect the formulation.
The next proposition states that, by choosing a suitable storage functional structure (a weighted kinetic energy of the perturbation velocities), the time derivative of the storage functional turns out to be upper-bounded by a quadratic form in the velocity fields $\boldsymbol{u}$ and their spatial derivatives. This property paves the way for a convex optimization based method to check stability and input-output properties. Convex optimization is a subfield of optimization that studies the problem of minimizing convex functions over convex sets. The convexity makes the optimization easier than in the general case, since any local minimum must be a global minimum and first-order conditions are sufficient for optimality~(\cite{BV04}). Convex optimization problems can be solved efficiently by interior-point methods~(\cite{nesterov1994interior}). Convex optimization was used by~\cite{4876195} to obtain a low-order decomposition of the
Navier-Stokes equations based on resolvent modes.
\begin{prop} \label{fluidsprop1}
Consider the perturbation model~\eqref{eq:mainNSEin} subject to periodic or no-slip boundary conditions $\boldsymbol{u}|_{\partial\Omega} =0$. Assume the velocity perturbations in~\eqref{eq:mainNSEin} are invariant with respect to $x_1$. Let $I_0 = \{2,3\}$ and
\begin{equation} \label{eq:Lyap}
V(\boldsymbol{u}) = \frac{1}{2}\int_\Omega \boldsymbol{u}^\prime Q \boldsymbol{u} \,\, d\Omega,
\end{equation}
where $Q =\left[ \begin{smallmatrix} q_1 & 0 & 0 \\ 0 & q_i & 0 \\ 0 & 0 & q_j \end{smallmatrix} \right]>0$, $q_i = q_j$ for $i\neq j$, $i,j \in I_0$, be a candidate storage functional. Then, the time derivative of~\eqref{eq:Lyap} along the solutions to~\eqref{eq:mainNSEin} satisfies
\begin{multline} \label{eq:Lyapmaindt}
\frac{dV(\boldsymbol{u})}{dt} \le -\sum_{i\in I}q_i\int_\Omega \bigg( \frac{ C}{Re} u_i^2 + U_j u_i \partial_{x_j} u_i +u_j u_i \partial_{x_j} U_i - u_i F_{ij} u_j -u_i d_i \bigg) \,\, d\Omega,
\end{multline}
where $C$ is a positive constant that only depends on the domain $\Omega$.
\end{prop}
The proof of this proposition is given in Appendix~\ref{appfdsfdfdsfcccvr}.
Remark that a special case of~\eqref{eq:Lyap} was used in~(\cite{JH71}) to study the stability of viscous flows (subject to streamwise constant perturbations) in pipes and between rotating cylinders. The authors referred to this structure as \emph{the two energy function}. In the formulation presented in this paper, assuming invariant perturbations in the $x_1$-direction, we can represent the two energy function as
$$
V(\boldsymbol{u}) = \frac{1}{2}\int_\Omega \boldsymbol{u}^\prime \left[ \begin{smallmatrix} q & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{smallmatrix} \right] \boldsymbol{u} \,\, d\Omega,
$$
where $q>0$ is a constant. The ``optimal'' value for this constant was then calculated analytically for the pipe Poiseuille and the Taylor-Couette flow by~\cite{JH71}.
Note that in \eqref{eq:Lyapmaindt} the Poincar\'e constant, $C$, appears. There are several estimates for the optimal Poincar\'e constant. The optimal constant (\cite{PW60}) we use in this paper is
\begin{equation}
C(\Omega) = \frac{\pi^2}{D(\Omega)},
\end{equation}
where $D(\Omega)$ is the diameter of the domain $\Omega$.
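For instance, for the channel cross-section $\Omega = (-1,1)\times(0,L)$ used in the examples of Section~\ref{sec:NR}, the diameter is $D(\Omega)=\sqrt{L^2+2^2}$, so that $C = \pi^2/\sqrt{L^2+4}$; with $L=2\pi$ this evaluates to $C \approx 1.50$.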
Proposition~\ref{fluidsprop1} allows us to provide an algorithmic method for input-output analysis of fluid flows based on convex optimization. These convex optimization problems are in terms of linear matrix inequalities and polynomial matrix inequalities. This formulation is delineated in more detail in the next section.
\vspace{-.85cm}
\section{Matrix Inequality Formulation for Streamwise Constant Perturbations}
\label{sec:convex}
In this section, we show that the input-output analysis problem outlined in Section 3 for the class of streamwise constant perturbations can be converted into a set of matrix inequalities. These matrix inequalities can be solved by convex optimization, provided that the base flow is a polynomial in the spatial coordinates and the flow geometry is described by a semi-algebraic set\footnote{
Let $\mathcal{R}[x]$ denote the set of polynomials in the variable $x$ with real coefficients. A set is semi-algebraic if it can be described by a finite number of polynomial equalities and inequalities. That is, $\mathcal{S} \subset \mathbb{R}^n$ is defined as
$
\mathcal{S} = \left\{ x \in \mathbb{R}^n \mid p_i(x) \ge 0,~i=1,2,\ldots, n_p,~ q_i(x)=0,~i=1,2,\ldots,n_q \right\},
$
where $\{p_i \}_{i=1}^{n_p},\{q_i\}_{i=1}^{n_q} \in \mathcal{R}[x]$.
}. Examples are laminar base flows that are linear or parabolic, and turbulent flows that can be represented by polynomial fits (or by piecewise polynomial functions).
To present a convex method for checking the conditions in Theorem~\ref{bigthmfluidfluid} (also see Corollary~\ref{cor1} in Appendix B), we restrict our attention to streamwise constant perturbations in the $x_1$-direction with base flow $\boldsymbol{U} = U_m(x_2,x_3) \overrightarrow{e}_1$, where $\overrightarrow{e}_1$ denotes the unit vector in the $x_1$-direction.
In order to present the procedure, we first need to define the following notation. For a square matrix $M$, $M \succcurlyeq 0$ ($M \succ 0$) implies that the matrix is positive semidefinite (positive definite), i.e., all the eigenvalues of $M$ are non-negative (positive). Similarly, $M \preccurlyeq 0$ ($M \prec 0$) signifies that $-M \succcurlyeq 0$ ($-M \succ 0$). By $\mathrm{I}_{n \times n}$, we denote the identity matrix of dimension $n \times n$.
\begin{corollary} \label{LMIcor}
Consider the perturbation dynamics given by~\eqref{eq:mainNSEin}, that are constant in the streamwise direction $x_1$ and with base flow $\boldsymbol{U} = U_m(\mathrm{x}) \overrightarrow{e}_1$, where $\mathrm{x}=(x_2,x_3)$. Let $I_0=\{2,3\}$. If there exist positive constants $\{q_l\}_{l\in I}$ with $q_i=q_j$, $i,j\in I_0$, $\{\eta_l\}_{l\in I}$, $\{\psi_l\}_{l\in I}$, and functions $\{\sigma_l\}_{l\in I}$ such that \\
\begin{multline} \label{eq:Mmat}
M(\mathrm{x}) =\\ \begin{bmatrix}
\left(\frac{C}{Re}-F_{11} \right)q_1 & \frac{q_1(\partial_{x_j}U_m(\mathrm{x})-F_{1j})-q_jF_{j1}}{2} & \frac{q_1(\partial_{x_i}U_m(\mathrm{x})-F_{1i})-q_iF_{i1}}{2} \\ \frac{q_1(\partial_{x_j}U_m(\mathrm{x})-F_{1j})-q_jF_{j1}}{2} & \left(\frac{C}{Re}-F_{jj} \right)q_j & -\frac{q_j F_{j1}}{2} \\ \frac{q_1(\partial_{x_i}U_m(\mathrm{x})-F_{1i})-q_iF_{i1}}{2} & -\frac{q_j F_{j1}}{2} & \left(\frac{C}{Re}-F_{ii} \right)q_i
\end{bmatrix},\\~i,j \in I_0, i\neq j.
\end{multline}\\
\mbox{I)} when $\boldsymbol{d}\equiv 0$,
\begin{equation} \label{condengr}
M\left(\mathrm{x}\right) - \mathrm{I}_{3 \times 3} \succcurlyeq 0,~\mathrm{x}\in \Omega,
\end{equation}
\mbox{II)}
\begin{equation} \label{eq:Nmat}
N(\mathrm{x}) = \begin{pmat}[{..|}]
~ & ~ & ~ & -\frac{q_1}{2} & 0 & 0\cr
~ & M(\mathrm{x})-\mathrm{I}_{3\times 3} & ~ & 0 & -\frac{q_j}{2} & 0 \cr
~ & ~& ~ & 0 & 0 & -\frac{q_i}{2} \cr\-
-\frac{q_1}{2} & 0 & 0 & \eta_{1}^2 & 0 & 0 \cr
0 & -\frac{q_j}{2} & 0 & 0 & \eta_{j}^2 & 0 \cr
0 & 0 & -\frac{q_i}{2} & 0 & 0 & \eta_{i}^2 \cr
\end{pmat} \succcurlyeq 0,
\end{equation}
for $i,j \in I_0, i\neq j$ and $\mathrm{x}\in \Omega$,\\
\mbox{III)} $\sigma_l(\mathrm{x}) \ge 0,~\mathrm{x} \in \Omega$, $l\in I$ and
\begin{equation}\label{eq:Pmat}
Z(\mathrm{x}) =\begin{pmat} [{..|}] ~ & ~ & ~ & -\frac{q_1}{2} & 0 & 0 \cr
~ & M(\mathrm{x})-W & ~ & 0 & -\frac{q_j}{2} & 0 \cr
~ & ~ & ~ & 0 & 0 & -\frac{q_i}{2} \cr\-
-\frac{q_1}{2} & 0 & 0 & \sigma_{1}(\mathrm{x}) & 0 & 0 \cr
0 & -\frac{q_j}{2} & 0 & 0 & \sigma_{j}(\mathrm{x}) & 0 \cr
0 & 0 & -\frac{q_i}{2} & 0 & 0 & \sigma_{i}(\mathrm{x}) \cr \end{pmat} \succcurlyeq 0,
\end{equation}
for $i,j \in I_0, i\neq j$ and $\mathrm{x}\in \Omega$, where $W=\left[\begin{smallmatrix}\psi_1q_1 & 0 & 0\\0&\psi_jq_j&0\\0&0&\psi_i q_i\end{smallmatrix} \right]$. Then, it follows that \\
\mbox{I)} the flow energy growth is bounded by $\gamma^2 = \max_{i \in I} q_i$ as described by~\eqref{eq:engr},\\
\mbox{II)} the worst-case disturbance amplification (induced $\mathcal{L}^2$ norm from the disturbances to perturbation velocities) is bounded by $\eta_i$, \mbox{$i\in I$} as in~\eqref{eq:L2} when the initial perturbations have zero velocity,\\
\mbox{III)} the flow is stable to persistent disturbances in the sense of~\eqref{eq:iss} with $\sigma(|\boldsymbol{d}|) = \sum_{i\in I} \sigma_i(\mathrm{x}) d_i^2$.
\end{corollary}
The proof of the above Corollary is given in Appendix~B.
When $U_m(\mathrm{x})$ is a polynomial function, inequalities~\eqref{eq:Mmat}-\eqref{eq:Pmat} are polynomial matrix inequalities that should be checked for all $\mathrm{x} \in \Omega$. If the set $\Omega$ is a semi-algebraic set,~\textit{i.e.,}
\begin{equation*}
\Omega = \left\{ \mathrm{x} \in \mathbb{R}^2 \mid g_l(\mathrm{x})=0,~f_k(\mathrm{x})>0,~l = 1,2,\ldots,L,~k=1,2,\ldots,K \right\},
\end{equation*}
where $\{g_l\}_{l=1}^{L}$ and $\{f_k\}_{k=1}^{K}$ are polynomial functions,
then these inequalities can be cast as a sum-of-squares program by applying Corollary~\ref{cor:psatz}. We show in the next section that this assumption indeed holds for several well-known flows. For a brief introduction to sum-of-squares programming, refer to Appendix~\ref{app:sosp}. Note that once the input-output analysis problem is cast as a sum-of-squares program, it can be checked using available MATLAB toolboxes such as SOSTOOLS~(\cite{sostools}) and YALMIP~(\cite{YALMIP}).
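As a concrete instance of this reduction, consider certifying $M(y)-\mathrm{I}_{3\times 3} \succcurlyeq 0$ on the slab $-1<y<1$ (the situation encountered for plane Poiseuille flow in Section~\ref{sec:NR}). One sufficient certificate is to find polynomial matrices $S_0(y)$ and $S_1(y)$, each a sum-of-squares, such that
$$
M(y) - \mathrm{I}_{3\times 3} = S_0(y) + (1-y^2)\,S_1(y),
$$
a matrix version of the Positivstellensatz certificates of Corollary~\ref{cor:psatz}; this is precisely the form of problem that the above toolboxes parse into a semidefinite program.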
We can compute the bound on the maximum energy growth described in~\eqref{eq:engr} by solving an optimization problem. To this end, we solve
\begin{eqnarray} \label{opprobengr}
& \underset{\{q_i\}_{i \in I}}{\min} \left( \underset{{i \in I}}{\max}~q_i \right)& \nonumber \\
&\text{subject~to}& \nonumber \\
&M(\mathrm{x}) - \mathrm{I}_{3 \times 3} \succcurlyeq 0,~\mathrm{x} \in \Omega,& \nonumber \\
&q_i>0,~i \in I.&
\end{eqnarray}
In order to find upper-bounds on the worst-case disturbance amplification (the induced $\mathcal{L}^2$-norm) from the body forces or disturbances~$\boldsymbol{d}$ to the perturbation velocities $\boldsymbol{u}$ as described in~\eqref{eq:L2}, we solve the following optimization problem
\begin{eqnarray}
&\underset{\{q_i,\eta_i\}_{i \in I}}{\min}~\sum_{i\in I} \eta_i^2 & \nonumber \\
&\text{subject~to}& \nonumber \\
&N(\mathrm{x})\succcurlyeq 0,~\mathrm{x} \in \Omega,& \nonumber \\
&q_i>0,~i \in I.&
\end{eqnarray}
\section{Numerical Results} \label{sec:NR}
In this section, we illustrate the proposed method by analyzing four benchmark flows, namely, plane Couette flow, plane Poiseuille flow, rotating Couette flow (a simplified Taylor-Couette flow model), and Hagen-Poiseuille flow. For worst-case disturbance amplification, we carry out a comparative analysis of the influence of each of the disturbance components. For stability to persistent disturbances, we find the maximum Reynolds number for which stability to persistent disturbances holds.
\subsection{Plane Couette Flow} \label{example:planecouette}
\begin{figure}
\centering{
\includegraphics[width=10cm]{figures/picCouette.eps}}
\caption{Schematic of the plane Couette flow geometry.}\label{fig1}
\end{figure}
We consider the flow of viscous fluid between two parallel plates, where the gap between the plates is much smaller than the length of the plates as illustrated in Figure~\ref{fig1}.
We consider no-slip boundary conditions \mbox{$\boldsymbol{u}|_{y=\pm 1} = 0$} in the wall-normal direction and periodic boundary conditions $\boldsymbol{u}(t,y,z)=\boldsymbol{u}(t,y,z+L)$ in the spanwise direction. The Poincar\'e constant is then given by $C=\frac{\pi^2}{\sqrt{L^2+2^2}}$.
We are interested in studying bounds on energy growth, worst-case amplification, and stability to persistent forcings. To this end, we consider the following storage functional
\begin{equation} \label{ExampleCouette}
V(u) = \int_0^{L} \int_{-1}^1 \left[\begin{smallmatrix} u_x \\ u_y \\ u_z \end{smallmatrix}\right]^\prime \left[\begin{smallmatrix} q_x & 0 & 0 \\ 0 & q_y & 0 \\ 0 & 0 & q_z \end{smallmatrix} \right]\left[\begin{smallmatrix} u_x \\ u_y \\ u_z \end{smallmatrix} \right] \,\, dydz,
\end{equation}
with $q_y=q_z$, which is the same as storage functional~\eqref{eq:Lyap} considering invariance with respect to $x$.
For this flow ($m=x,j=y,i=z$), the matrix $M$ in~\eqref{eq:Mmat} reduces to
\begin{equation} \label{sddfsdf}
M = \begin{bmatrix} \frac{q_xC}{Re} & \frac{ q_x}{2} & 0 \\ \frac{ q_x}{2} & \frac{q_yC}{Re} & 0 \\
0 & 0 & \frac{q_yC}{Re}
\end{bmatrix}.
\end{equation}
Let $L = \pi$. For energy growth analysis, we solve optimization problem~\eqref{opprobengr} with $M$ given by~\eqref{sddfsdf}. The results are depicted in Figure~\ref{figeng}. For small Reynolds numbers $\gamma^2$ scales like $Re$, whereas for larger Reynolds numbers it scales like $Re^3$. Therefore, it can be inferred that $\gamma^2 = c_0 Re + c_1 Re^3$ with $c_0,c_1>0$. This is consistent with the results by~\cite{BBD02}, where the maximum energy growth of streamwise constant (nonlinear) plane Couette flow was calculated analytically.
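Since $M$ in~\eqref{sddfsdf} is constant in $\mathrm{x}$, problem~\eqref{opprobengr} is here a plain linear matrix inequality in $(q_x,q_y)$. The following sketch implements this calculation using the Python package cvxpy with an SDP-capable solver, as an assumed stand-in for the SOSTOOLS/YALMIP route used for our computations:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def energy_growth_bound(Re, L=np.pi):
    C = np.pi**2 / np.sqrt(L**2 + 4.0)   # Poincare constant of the domain
    qx, qy = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
    M = cp.bmat([[qx*C/Re, qx/2,    0],
                 [qx/2,    qy*C/Re, 0],
                 [0,       0,       qy*C/Re]])
    prob = cp.Problem(cp.Minimize(cp.maximum(qx, qy)),
                      [M - np.eye(3) >> 0])
    prob.solve()
    return prob.value                    # upper bound on gamma^2

for Re in [1.0, 10.0, 100.0]:
    print(Re, energy_growth_bound(Re))
\end{verbatim}
Sweeping $Re$ in this way is how curves such as the one in Figure~\ref{figeng} can be generated.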
\begin{figure}
\centerline{
\includegraphics[scale=.35]{figures/amplificationCouette.eps}
}
\caption{Upper bounds on the maximum energy growth for plane Couette flow in terms of Reynolds numbers.}\label{figeng}
\end{figure}
For worst-case amplification analysis, we apply inequality~\eqref{eq:Nmat} which for this particular flow is given by the following linear matrix inequality
\begin{eqnarray}
N = \begin{pmat}[{..|}]
~ & ~ & ~ & -\frac{q_x}{2} & 0 & 0\cr
~ & M-\mathrm{I}_{3\times 3} & ~ & 0 & -\frac{q_y}{2} & 0 \cr
~ & ~& ~ & 0 & 0 & -\frac{q_y}{2} \cr\-
-\frac{q_x}{2} & 0 & 0 & \eta_{x}^2 & 0 & 0 \cr
0 & -\frac{q_y}{2} & 0 & 0 & \eta_{y}^2 & 0 \cr
0 & 0 & -\frac{q_y}{2} & 0 & 0 & \eta_{z}^2 \cr
\end{pmat} \succcurlyeq 0 \nonumber
\end{eqnarray}
with $M$ as in~\eqref{sddfsdf}.
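This is again a linear matrix inequality, now in $(q_x,q_y,\eta_x^2,\eta_y^2,\eta_z^2)$. A corresponding sketch (cvxpy assumed, as before) is:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def worst_case_bounds(Re, L=np.pi):
    C = np.pi**2 / np.sqrt(L**2 + 4.0)
    qx, qy = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
    ex2, ey2, ez2 = [cp.Variable(nonneg=True) for _ in range(3)]
    M = cp.bmat([[qx*C/Re, qx/2,    0],
                 [qx/2,    qy*C/Re, 0],
                 [0,       0,       qy*C/Re]])
    G = cp.bmat([[-qx/2, 0, 0], [0, -qy/2, 0], [0, 0, -qy/2]])
    E = cp.bmat([[ex2, 0, 0], [0, ey2, 0], [0, 0, ez2]])
    N = cp.bmat([[M - np.eye(3), G], [G, E]])   # the 6x6 block matrix
    prob = cp.Problem(cp.Minimize(ex2 + ey2 + ez2), [N >> 0])
    prob.solve()
    return ex2.value, ey2.value, ez2.value
\end{verbatim}
Sweeping $Re$ and plotting the returned bounds is how curves such as those in Figure~\ref{fig4} can be generated.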
\begin{figure}
\centerline{
\includegraphics[scale=.35]{figures/CouetteL2GainsFinal.eps}
}
\caption{Upper bounds on the worst-case amplification for perturbation velocities of plane Couette flow for different Reynolds numbers.}\label{fig4}
\end{figure}
The obtained upper-bounds on the worst-case amplification for Couette flow are given in Figure~\ref{fig4}. Since the flow is stable for all Reynolds numbers, the worst-case amplifications increase monotonically with the Reynolds number. The upper-bounds depicted in Figure~\ref{fig4} imply $\eta_x^2 = a_0 Re^{2}+a_1Re^{3}$, $\eta_y^2 = b_0Re^2+b_1 Re^4$ and $\eta_z^2 = c_0 Re^2 +c_1 Re^4$ with \mbox{$a_0,a_1,b_0,b_1,c_0,c_1>0$}. This implies that the worst-case amplification in all three components of the disturbances grows like $Re^2$ for low Reynolds numbers. For Reynolds numbers greater than approximately $1$, the streamwise disturbances are amplified in proportion to $Re^3$, whereas the wall-normal and spanwise disturbance components are amplified in proportion to $Re^4$. Therefore, for high Reynolds numbers, the worst-case amplification from wall-normal and spanwise disturbance components is approximately $Re$ times larger than the worst-case amplification from streamwise forcings.
The obtained upper-bounds depicted in Figure~\ref{fig4} can be compared with Corollary~\ref{app:corollary} (see Appendix~\ref{app:calculation}), wherein it was demonstrated that $\eta_x^2 = f_0 Re^2$, $\eta_{y}^2=g_0 Re^2 + g_1Re^4$ and $\eta_{z}^2 = h_0 Re^2 + h_1Re^4$ for the linearized plane Couette flow with constants $ f_0,g_0,g_1,h_0,h_1>0$.
Lastly, in order to check the stability to persistent forcings property, we check inequality~\eqref{eq:Pmat} from Corollary~\ref{LMIcor} for the Couette flow under study,~\textit{i.e.,}
\begin{equation}
Z =\begin{pmat} [{..|}] ~ & ~ & ~ & -\frac{q_x}{2} & 0 & 0 \cr
~ & M-W & ~ & 0 & -\frac{q_y}{2} & 0 \cr
~ & ~ & ~ & 0 & 0 & -\frac{q_y}{2} \cr\-
-\frac{q_x}{2} & 0 & 0 & \sigma_{x} & 0 & 0 \cr
0 & -\frac{q_y}{2} & 0 & 0 & \sigma_{y} & 0 \cr
0 & 0 & -\frac{q_y}{2} & 0 & 0 & \sigma_{z} \cr \end{pmat} \succcurlyeq 0 \nonumber
\end{equation}
with $M$ given in~\eqref{sddfsdf} and $W=\left[\begin{smallmatrix} q_x \psi_x & 0 & 0\\ 0 & q_y \psi_y & 0\\0& 0 & q_y \psi_z \end{smallmatrix}\right]$. We fix $\psi_i = 10^{-4},~i=x,y,z$ and $L=2\pi$.
In this case, we obtain $Re_{ISS} = 316$. This is close to the empirical Reynolds number $Re \approx 350$ obtained by~\cite{TA92}, above which transition to turbulence is observed. In this sense, $Re_{ISS}$ gives a lower bound on the Reynolds number above which transition occurs.
\begin{figure}
\centering{
\includegraphics[scale=.35]{./figures/CouFlowStructU.eps}
\includegraphics[scale=.35]{./figures/CouFlowStructv.eps}\\
\includegraphics[scale=.35]{./figures/CouFlowStructW.eps}
}
\caption{The perturbation flow structures with maximum amplification from persistent forcings at $Re=316$ for plane Couette flow.}\label{figISSflowstructCou}
\end{figure}
In order to understand the above result on stability to persistent disturbances, we carried out numerical experiments to obtain the flow structures that receive maximum amplification from persistent disturbances. The experiments were undertaken for the linearized Navier-Stokes equation through the Orr-Sommerfeld equations. Appendix~\ref{app:NEFS} discusses the details of these numerical experiments. Notice that these results are based on solving linear matrix inequalities that ensure stability to persistent forcings for the ODE space-discretizations of the Orr-Sommerfeld equations. This is carried out by making a $50 \times 50$ grid on the wave number space $k_x-k_z$ ($k_x,k_z \in [0,150]$) and running the linear matrix inequalities for each point in the grid. Then, the wave numbers corresponding to the maximum amplification are selected (in particular, we are interested in finding the $k_x$ corresponding to maximum amplification, as this is the streamwise direction) and the corresponding flow structure is simulated. It turns out that the maximum amplification corresponds to the streamwise constant case $k_x=0$. Figure~\ref{figISSflowstructCou} illustrates the flow structures that receive maximum amplification at $Re=316$.
It is also worth mentioning that certificates for stability to persistent disturbances of the linearized Navier-Stokes equation, as discussed in Appendix~\ref{app:NEFS}, could be constructed for all Reynolds numbers, which is in contrast to the nonlinear case. This illustrates that stability to persistent disturbances is a fundamentally nonlinear phenomenon.
\begin{figure}
\centering{
\includegraphics[width=15cm]{figures/planePois.eps}}
\caption{Schematic of the plane Poiseuille flow geometry.}\label{figccjldfdsds2}
\end{figure}
\subsection{Plane Poiseuille Flow}
Similar to the plane Couette flow, we consider the flow of viscous fluid between two parallel plates, where the gap between the plates is much smaller than the length of the plates. Unlike the plane Couette flow, the plates are stationary and the flow is induced by a pressure gradient in the flow direction, flowing from the region of higher pressure to one of lower pressure. The flow geometry is depicted in Figure~\ref{figccjldfdsds2}.
The domain $\Omega$ is defined as $\Omega=\{ (y,z) \mid -1<y<1,~0< z <L\}$. The flow perturbations are assumed invariant in the streamwise direction $x$. The base flow is given by $\boldsymbol{U}= U_m(y) \overrightarrow{e}_x= (1-y^2) \overrightarrow{e}_x$ and $P = 1 - \frac{4x}{Re}$. We consider no-slip boundary conditions \mbox{$\boldsymbol{u}|_{y=\pm 1} = 0$} and periodic boundary conditions $\boldsymbol{u}(t,y,z)=\boldsymbol{u}(t,y,z+L)$. The Poincar\'e constant is then given by $C=\frac{\pi^2}{\sqrt{L^2+2^2}}$. We study the input-output properties of the flow using the storage functional~\eqref{ExampleCouette}.
For this flow ($m=x,j=y,i=z$), we have
\begin{equation} \label{sddfsdf2}
M(y) = \begin{bmatrix} \frac{q_xC}{Re} & -yq_x & 0 \\ -yq_x & \frac{q_yC}{Re} & 0 \\
0 & 0 & \frac{q_yC}{Re}
\end{bmatrix}.
\end{equation}
\begin{figure}
\centerline{
\includegraphics[scale=.37]{figures/transientgrowthPlanePois.eps}
}
\caption{Upper bounds on the maximum energy growth for plane Poiseuille flow in terms of Reynolds numbers.}\label{figfjdfjkdoo1}
\end{figure}
To find upper bounds on the maximum energy growth for the plane Poiseuille flow, we solve the optimization problem~\eqref{opprobengr} with $M$ as given in~\eqref{sddfsdf2}. The results are illustrated in Figure~\ref{figfjdfjkdoo1}. They imply that the maximum energy amplification is described by $\gamma^2 = b_0 Re + b_1 Re^2$, with $b_0,b_1>0$. This result tallies with the transient growth calculations of~\cite{reddy_henningson_1993}, in which the authors showed that the transient growth of the linearized plane Poiseuille flow model behaves like $O(Re^2)$ for large Reynolds numbers.
For worst-case amplification analysis, we use inequality~\eqref{eq:Nmat} which for this flow is given by the following matrix inequality
\begin{figure}
\centerline{
\includegraphics[scale=.4]{figures/L2gainsPlanePois.eps}
}
\caption{Upper bounds on the worst-case amplification of plane Poiseuille flow for different Reynolds numbers.}\label{figx1}
\end{figure}
\begin{eqnarray}
N = \begin{pmat}[{..|}]
~ & ~ & ~ & -\frac{q_x}{2} & 0 & 0\cr
~ & M(y)-\mathrm{I}_{3\times 3} & ~ & 0 & -\frac{q_y}{2} & 0 \cr
~ & ~& ~ & 0 & 0 & -\frac{q_y}{2} \cr\-
-\frac{q_x}{2} & 0 & 0 & \eta_{x}^2 & 0 & 0 \cr
0 & -\frac{q_y}{2} & 0 & 0 & \eta_{y}^2 & 0 \cr
0 & 0 & -\frac{q_y}{2} & 0 & 0 & \eta_{z}^2 \cr
\end{pmat} \succcurlyeq 0,\quad y \in (-1,1), \nonumber
\end{eqnarray}
with $M$ as in~\eqref{sddfsdf2}. The obtained upper-bounds on the worst-case amplification for the plane Poiseuille flow are given in Figure~\ref{figx1}.
From Figure~\ref{figx1}, it can be inferred that $\eta_x^2 = a_0 Re^{2}+a_1Re^{3}$, $\eta_y^2 = b_0Re^{2.2}+b_1 Re^4$ and $\eta_z^2 = c_0 Re^2 +c_1 Re^4$ with $a_0,a_1,b_0,b_1,c_0,c_1>0$. From this result, we can infer that the worst-case amplification in all three components of the disturbances grows like $Re^2$ for low Reynolds numbers. For Reynolds numbers greater than approximately $5$, the streamwise disturbances are amplified in proportion to $Re^3$, whereas the wall-normal and spanwise disturbance components are amplified in proportion to $Re^4$. Therefore, for high Reynolds numbers, the worst-case amplification from wall-normal and spanwise forcings is approximately $Re$ times larger than the worst-case amplification from streamwise forcings.
For stability to persistent disturbances, we check inequality~\eqref{eq:Pmat} from Corollary~\ref{LMIcor} for plane Poiseuille flow,~\textit{i.e.,}
\begin{equation}
Z =\begin{pmat} [{..|}] ~ & ~ & ~ & -\frac{q_x}{2} & 0 & 0 \cr
~ & M(y)-W & ~ & 0 & -\frac{q_y}{2} & 0 \cr
~ & ~ & ~ & 0 & 0 & -\frac{q_y}{2} \cr\-
-\frac{q_x}{2} & 0 & 0 & \sigma_{x}(y) & 0 & 0 \cr
0 & -\frac{q_y}{2} & 0 & 0 & \sigma_{y}(y) & 0 \cr
0 & 0 & -\frac{q_y}{2} & 0 & 0 & \sigma_{z}(y) \cr \end{pmat} \succcurlyeq 0, \quad y \in (-1,1), \nonumber
\end{equation}
with $M$ given in~\eqref{sddfsdf2} and $W=\left[\begin{smallmatrix} q_x \psi_x & 0 & 0\\ 0 & q_y \psi_y & 0\\0& 0 & q_y \psi_z \end{smallmatrix}\right]$. We fix $\psi_i = 10^{-4},~i=x,y,z$ and $L=2\pi$.
In this case, we obtain $Re_{ISS} = 1855$. This can be compared with the empirical Reynolds number at the onset of turbulence, $Re \approx 2000$, as discussed by~\cite{RevModPhys.72.603}. Once again, we infer that $Re_{ISS}$ provides a lower bound on the Reynolds number at which transition to turbulence occurs.
\begin{figure}
\centering{
\includegraphics[scale=.35]{./figures/PoisFlowStructU.eps}
\includegraphics[scale=.35]{./figures/PoisFlowStructv.eps} \\
\includegraphics[scale=.35]{./figures/PoisFlowStructW.eps}
}
\caption{The perturbation flow structures with maximum amplification to persistent disturbances $Re=1855$ for plane Poiseuille flow.}\label{figISSflowstructPois}
\end{figure}
Analogous to the plane Couette flow, we undertook numerical experiments to find the flow structures subject to maximum amplification from persistent forcings. Again, we found that the maximum amplification corresponds to the streamwise constant case $k_x=0$. Figure~\ref{figISSflowstructPois} illustrates the flow structures that receive maximum amplification from persistent forcings at $Re=1855$.
\begin{figure}
\centering{
\includegraphics[width=8cm]{figures/cylinders.png}\\
\includegraphics[width=8cm]{figures/RCFillus.png}}
\caption{ The Taylor-Couette flow, where the gap between the cylinders is much smaller than their radii (top). Schematic of the rotating Couette flow geometry with rotation about the $x_3$-axis (bottom).}\label{fig1xxx}
\end{figure}
\subsection{Rotating Couette Flow}
We consider the flow between two co-axial cylinders, where the gap between the cylinders is much smaller than their radii. In this setting, the flow can be represented by Couette flow subject to rotation (\cite{Lasagna2016176}), as illustrated in Figure~\ref{fig1xxx}. The axis of rotation is parallel to the $x_3$-axis and the circumferential direction corresponds to the $x_1$-axis. The dynamics of the perturbation velocities are then described by~\eqref{eq:mainNS}. The perturbations are assumed to be invariant with respect to $x_1$ \mbox{($\partial_{x_1}=0$)} and periodic in $x_3$ with period $L$. The domain is, therefore, defined as $$\Omega = \left\{ (x_2,x_3) \mid (x_2,x_3) \in (-1,1)\times (0,L) \right\}.$$ Note that $\Omega$ is indeed a semialgebraic set, as given by
$$
\Omega = \left\{ (x_2,x_3) \mid ~~ (1-x_2)(1+x_2)>0 ~~ \text{and} ~~ x_3(L-x_3)>0\right\}.
$$
The base flow is given by $\boldsymbol{U}=(x_2,0,0)'=x_2\overrightarrow{e}_1$ and $P=P_0$. In~addition,
~$
F =\left[ \begin{smallmatrix} 0 & Ro & 0 \\ -Ro & 0 & 0 \\ 0 & 0 & 0 \end{smallmatrix} \right],
$~
where $Ro \in [0,1]$ is a parameter representing the Coriolis force. That is, $Ro=0$ corresponds to the case where the outer and inner cylinders rotate with the same speed but in opposite directions, and $Ro=1$ is the case where both cylinders rotate with the same velocity in the same direction. Notice that the case corresponding to plane Couette flow was discussed in detail in Section~\ref{example:planecouette}. The case $Ro=1$ is globally stable for all Reynolds numbers by the Rayleigh criterion (\cite{PhysRevE.95.021102}). In this example, we focus on $Ro \in (0,1)$.
For comparison purposes, we consider periodic boundary conditions \mbox{$\boldsymbol{u}(t,-1,x_3)=\boldsymbol{u}(t,1,x_3)$} and $\boldsymbol{u}(t,x_2,x_3)=\boldsymbol{u}(t,x_2,x_3+L)$. The Poincar\'e constant is then given by $C=\frac{\pi^2}{\sqrt{L^2+2^2}}$. The linear stability limit of the flow can be computed by studying the spectrum of the linearized model~(\cite{Lasagna2016176}). That is,
$$
Re_L = \frac{2\sqrt{2}}{\sqrt{1-Ro}\sqrt{Ro}},
$$
with a minimum at $Ro= 0.5$ (where $Ro(1-Ro)$ attains its maximum $1/4$), corresponding to $Re_L = 4 \sqrt{2}$. Linear stability analysis suggests that the flow is stable for all Reynolds numbers for $Ro=0,1$. Moreover, the energy stability limit of the flow is found to be $Re_E = 4 \sqrt{2}$~(\cite{huang2015sum}).
We consider the following storage functional
$$
V(u) = \int_0^{L} \int_{-1}^1 \left[\begin{smallmatrix} u_1 \\ u_2 \\ u_3 \end{smallmatrix}\right]^\prime \left[\begin{smallmatrix} q_1 & 0 & 0 \\ 0 & q_2 & 0 \\ 0 & 0 & q_2 \end{smallmatrix} \right]\left[\begin{smallmatrix} u_1 \\ u_2 \\ u_3 \end{smallmatrix} \right] \,\, dx_2dx_3,
$$
which is the same as storage functional~\eqref{eq:Lyap} assuming invariance with respect to $x_1$.
Although our main focus is on input-output analysis, for this particular flow, we also study global stability for the sake of comparison with the nonlinear stability analysis method in~(\cite{huang2015sum}). Note that for the rotating Couette flow the global stability bound and the linear stability bounds should coincide (\cite{Taylor289,huang2015sum}). To study stability, we simply check the following inequality
$$
\frac{dV(u)}{dt} \le -\psi V(u),
$$
for some positive constant $\psi$. Setting $\psi = 10^{-2}$, we check the following matrix inequality
$$
M - \psi \begin{bmatrix} q_1 & 0 & 0\\ 0 & q_2 & 0\\0& 0 & q_2 \end{bmatrix} \succcurlyeq 0.
$$
Note that for this flow ($m=1,j=2,i=3$), we have
\begin{equation} \label{sddfsddsdsdsszzxxf}
M = \begin{bmatrix} \frac{q_1C}{Re} & \frac{q_2Ro - q_1(Ro-1)}{2} & 0 \\ \frac{q_2Ro - q_1(Ro-1)}{2} & \frac{q_2C}{Re} & 0 \\
0 & 0 & \frac{q_2C}{Re}
\end{bmatrix} .
\end{equation}
The stability results are depicted in Figure~\ref{fig23e442342}. Interestingly, the stability bounds obtained using the proposed method effectively recover the linear stability limit for all $Ro \in (0,1)$; for this flow, the global and linear stability limits indeed coincide. This result can be compared with the stability method in (\cite{huang2015sum,GC12}), where the global stability bounds converge to the linear stability bound only for $Ro \in [0.2529, 0.7471]$. This improved accuracy illustrates the significance of considering the full nonlinear PDE model of the flow rather than finite-dimensional truncations of the flow dynamics.
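For readers who wish to reproduce this computation, the feasibility check underlying these bounds can be prototyped with an off-the-shelf semidefinite-programming solver. The following is a minimal sketch, assuming the Python package CVXPY is available; it fixes $\psi=10^{-2}$, uses the Poincar\'e constant given above, removes the scaling freedom in $(q_1,q_2)$ by the normalization $q_1+q_2=1$, and bisects for the largest feasible Reynolds number (our actual computations may differ in solver and tolerances).
\begin{verbatim}
import numpy as np
import cvxpy as cp

def lmi_feasible(Re, Ro, L=2 * np.pi, psi=1e-2):
    """Feasibility of M - psi*diag(q1, q2, q2) >= 0 for rotating Couette."""
    C = np.pi**2 / np.sqrt(L**2 + 4.0)  # Poincare constant from the text
    q1, q2 = cp.Variable(), cp.Variable()
    off = (q2 * Ro - q1 * (Ro - 1.0)) / 2.0
    M = cp.bmat([[q1 * C / Re, off, 0.0],
                 [off, q2 * C / Re, 0.0],
                 [0.0, 0.0, q2 * C / Re]])
    Q = cp.diag(cp.hstack([q1, q2, q2]))
    prob = cp.Problem(cp.Minimize(0),
                      [M - psi * Q >> 0,
                       q1 + q2 == 1,  # removes the scaling freedom in (q1, q2)
                       q1 >= 1e-6, q2 >= 1e-6])
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

def stability_bound(Ro, lo=1.0, hi=100.0, iters=40):
    """Bisection for the largest Re with a feasible stability certificate."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lmi_feasible(mid, Ro) else (lo, mid)
    return lo

print(stability_bound(0.5))  # expected to approach Re_L = 4*sqrt(2) ~ 5.66
\end{verbatim}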
\begin{figure}
\centerline{
\includegraphics[scale=.45]{figures/TCReVsRo.eps}
}
\caption{Stability bounds $Re_{E}$ (using energy method), $Re_L$ (linear stability limit), and $Re^*$ (using the proposed method) in terms of $Ro$ for rotating Couette flow.}\label{fig23e442342}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=.45]{figures/TG_RCF.eps}
}
\caption{Energy growth for rotating Couette flow with respect to the parameter~$Ro$.}\label{fig23e44234asdsdas2}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=.45]{figures/TGRefixedRCF.eps}
}
\caption{Energy growth for rotating Couette flow with respect to the Reynolds number $Re$ for fixed~$Ro=0.5$.}\label{fig23e442dsdsdssd342}
\end{figure}
We next demonstrate how the proposed framework can be used to determine energy growth. We solve optimization problem~\eqref{opprobengr} with matrix $M$ given in~\eqref{sddfsddsdsdsszzxxf}. Figure~\ref{fig23e44234asdsdas2} illustrates the maximum energy growth curves of the flow with respect to $Ro$.
The figure demonstrates that as the Reynolds number approaches the global stability bound $Re_G = 4 \sqrt{2}$, the energy growth from initial perturbation velocities increases. Furthermore, this growth is most significant for $Ro=0.5$, i.e., the least stable rotation configuration. To compare the energy growth results here with the ones available in the literature, we fix $Ro=0.5$ and observe how the energy growth evolves as the Reynolds number approaches the global stability bound. These results are depicted in Figure~\ref{fig23e442dsdsdssd342}, which shows that, for stable Reynolds numbers, the energy growth scales as $O(Re^{\frac{2}{3}})$. This is consistent with analytical transient growth computations in~(\cite{maretzke_hof_avila_2014}) based on Wentzel--Kramers--Brillouin theory and with the calculations and empirical results of~(\cite{2004AA425385Y}), which furthermore showed that the maximum transient growth corresponds to perturbations that are ``uniform along the direction of the rotation axis'' (streamwise constant perturbations in our model). Note that both of these studies were carried out based on the linearized (linearly stable) model of the flow. Figure~\ref{fig23e442dsdsdssd342} also shows that for Reynolds numbers closer to the global stability bound $Re_G = 4 \sqrt{2}$, the relationship between the energy growth and the Reynolds number becomes significantly nonlinear, as the flow is becoming unstable.
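The scaling exponents quoted here can be extracted from such parameter sweeps by a log-log fit. The sketch below illustrates the procedure; the data array is a synthetic placeholder consistent with the $O(Re^{2/3})$ claim and is to be replaced by the $(Re,\gamma^2)$ pairs returned by the optimization.
\begin{verbatim}
import numpy as np

# Placeholder data: substitute the (Re, energy growth) pairs from the sweep.
Re = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
G = 0.7 * Re**(2.0 / 3.0)  # synthetic values following the claimed scaling

slope, intercept = np.polyfit(np.log(Re), np.log(G), 1)
print("fitted scaling exponent:", slope)  # ~2/3 away from the stability bound
\end{verbatim}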
\begin{figure*}
\centering{
\includegraphics[scale=.25]{figures/L2Re50RCF.eps}
\includegraphics[scale=.25]{figures/L2Re53RCF.eps}\\
\includegraphics[scale=.3]{figures/L2Re56RCF.eps}
}
\caption{Upper bounds on worst-case amplification from $\boldsymbol{d}$ to perturbation velocities $\boldsymbol{u}$ of rotating Couette flow for different Reynolds numbers: $Re=5$ (top left), $Re=5.3$ (top right), and $Re=5.6$ (bottom).}\label{fig3assaasdxx}
\end{figure*}
Finally, we use inequality~\eqref{eq:Nmat} to evaluate worst-case disturbance amplification (induced $\mathcal{L}^2$-norm), which for this particular flow is given by the following linear matrix inequality
\begin{eqnarray}
N = \begin{pmat}[{..|}]
~ & ~ & ~ & -\frac{q_1}{2} & 0 & 0\cr
~ & M-\mathrm{I}_{3\times 3} & ~ & 0 & -\frac{q_2}{2} & 0 \cr
~ & ~& ~ & 0 & 0 & -\frac{q_2}{2} \cr\-
-\frac{q_1}{2} & 0 & 0 & \eta_{1}^2 & 0 & 0 \cr
0 & -\frac{q_2}{2} & 0 & 0 & \eta_{2}^2 & 0 \cr
0 & 0 & -\frac{q_2}{2} & 0 & 0 & \eta_{3}^2 \cr
\end{pmat} \succcurlyeq 0, \nonumber
\end{eqnarray}
with $M$ as in~\eqref{sddfsddsdsdsszzxxf}.
Figure~\ref{fig3assaasdxx} depicts the obtained results for three different Reynolds numbers. As the Reynolds number approaches $Re_G=4\sqrt{2}$ for $Ro=0.5$, the upper bounds on the worst-case disturbance amplification from the body forces $\boldsymbol{d}$ to perturbation velocities $\boldsymbol{u}$ increase dramatically. Furthermore, the worst-case amplification from streamwise and wall-normal disturbances is significantly larger than the amplification from spanwise disturbances. For example, for $Re=5.6$, the worst-case amplification from streamwise and wall-normal disturbances is $10000$ times larger than that from spanwise disturbances.
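A corresponding prototype for the worst-case amplification bound minimizes $\eta_i^2$ subject to the linear matrix inequality above. The sketch below, again assuming CVXPY, minimizes the sum $\eta_1^2+\eta_2^2+\eta_3^2$ for brevity (the figures treat each component separately); it is only expected to be feasible below the stability bound.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def worst_case_gain(Re, Ro, L=2 * np.pi):
    """Upper bounds eta_i^2 on the induced L2-gains for rotating Couette
    flow, via the 6x6 block LMI N >= 0 of the text."""
    C = np.pi**2 / np.sqrt(L**2 + 4.0)
    q1, q2 = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
    e = cp.Variable(3, nonneg=True)  # e[i] plays the role of eta_i^2
    off = (q2 * Ro - q1 * (Ro - 1.0)) / 2.0
    M = cp.bmat([[q1 * C / Re, off, 0.0],
                 [off, q2 * C / Re, 0.0],
                 [0.0, 0.0, q2 * C / Re]])
    Q = cp.diag(cp.hstack([q1, q2, q2]))
    N = cp.bmat([[M - np.eye(3), -Q / 2.0],
                 [-Q / 2.0, cp.diag(e)]])
    prob = cp.Problem(cp.Minimize(cp.sum(e)), [N >> 0])
    prob.solve(solver=cp.SCS)
    return e.value  # (eta_1^2, eta_2^2, eta_3^2)

print(worst_case_gain(5.0, 0.5))
\end{verbatim}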
\subsection{Hagen-Poiseuille Flow}
In Appendix~\ref{app:cylindr}, we extend the proposed input-output analysis framework to pipe flows. In this example, we show the applicability of the proposed method for pipe flows by studying the input-output properties of the Hagen-Poiseuille flow.
\begin{figure}
\centering{
\includegraphics[width=8cm]{figures/pipeflow.eps}}
\caption{Schematic of the Hagen-Poiseuille flow geometry.}\label{figccjldf}
\end{figure}
We consider the flow of viscous fluid driven by the pressure gradient in a pipe as illustrated in Figure~\ref{figccjldf}. The domain $\Omega$ is defined as $\Omega=\{ (r,\theta) \mid 0<r<1,~0< \theta <2\pi\}$. The flow is invariant in the streamwise direction $z$. It was shown by \cite{SH94} that axially constant perturbations are subject to maximum energy amplification in pipe flow. The base flow is given by $\boldsymbol{U}= U_m(r) \overrightarrow{e}_z = (1-r^2) \overrightarrow{e}_z$ and $P = 1 - \frac{4z}{Re}$. Then, the perturbation dynamics is given by~\eqref{eq:NScyl} in Appendix~\ref{app:cylindr} with $F\equiv 0$ and $U_m(r)=1-r^2$. Moreover, we assume no-slip boundary conditions $\boldsymbol{u}|_{r=1}=0$.
We consider the storage functional given in~\eqref{LyapCyr}. Then, substituting $U_m$ and $F$, we have
\begin{equation} \label{rwerw}
M_c({r}) = \begin{bmatrix}
\frac{ q_zC}{Re} & -rq_z & 0 \\ -rq_z & \frac{ q_rC}{Re} & 0 \\ 0 & 0 & \frac{ q_\theta C}{Re}
\end{bmatrix}.
\end{equation}
In order to find upper bounds on maximum energy growth for Hagen-Poiseuille flow, we solve optimization problem~\eqref{opprobengr} with $M=M_c(r)$ as~\eqref{rwerw}. The results are illustrated in Figure~\ref{figfjdfjkdoo}. The results imply that the maximum energy growth is described by $\gamma^2 = b_0 Re + b_1 Re^2$, with $b_0,b_1>0$. This is consistent with the calculations and numerical experiments of (\cite{SH94}) on the transient growth based on the linearized Navier-Stokes equations for the pipe flow.
\begin{figure}
\centerline{
\includegraphics[scale=.35]{figures/amplificationPipe.eps}
}
\caption{Upper bounds on the maximum energy growth for Hagen-Poiseuille flow in terms of Reynolds numbers.}\label{figfjdfjkdoo}
\end{figure}
Considering $M_c(r)$ as in \eqref{rwerw}, inequality~\eqref{eq:Nmat2} becomes
\begin{equation} \label{eq:Nmat22}
N_c(r) = \begin{pmat}[{..|}]
~ & ~ & ~ & -\frac{q_z}{2} & 0 & 0\cr
~ & M_c(r)-\mathrm{I}_{3\times 3} & ~ & 0 & -\frac{q_r}{2} & 0 \cr
~ & ~& ~ & 0 & 0 & -\frac{q_\theta}{2} \cr\-
-\frac{q_z}{2} & 0 & 0 & \eta_{z}^2 & 0 & 0 \cr
0 & -\frac{q_r}{2} & 0 & 0 & \eta_{r}^2 & 0 \cr
0 & 0 & -\frac{q_\theta}{2} & 0 & 0 & \eta_{\theta}^2 \cr
\end{pmat} \succcurlyeq 0,\quad r \in (0,1).
\end{equation}
Minimizing $\eta_{z}^2$, $\eta_{r}^2$ and $\eta_{\theta}^2$ subject to the above inequality provides upper bounds on the worst-case disturbance amplification for the Hagen-Poiseuille flow. The results are depicted in Figure~\ref{fig6}. The interesting conclusion from the figure is that the perturbations are amplified as $\eta_z^2=a_0Re^2+a_1Re^3$, $\eta_\theta^2=b_0Re^2+b_1Re^4$, and $\eta_r^2=c_0Re^2+c_1Re^4$ with $a_0,a_1,b_0,b_1,c_0,c_1>0$. Thus, similar to channel flows, for low Reynolds numbers, the worst-case amplification from all three disturbance components scales as $Re^2$. For Reynolds numbers greater than $\approx 8$, the amplification from axial disturbances (the direction of the base flow) grows proportionally to $Re^3$, whereas the worst-case amplification from azimuthal and radial disturbances increases as $Re^4$. This implies that for sufficiently large Reynolds numbers, the worst-case amplification from azimuthal and radial external forcings is $Re$ times larger than the amplification from axial forcings.
Note that \cite{JB05} considered only channel flows, which do not include the Hagen-Poiseuille flow.
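The $r$-dependence makes \eqref{eq:Nmat22} a polynomial matrix inequality, which we certify for all $r\in(0,1)$ by sum-of-squares techniques. A quick, non-rigorous prototype can instead sample $r$ on a grid, as in the sketch below (CVXPY assumed; the constant \texttt{C} is a placeholder for the Poincar\'e-type constant of the pipe cross-section, which is not reproduced here).
\begin{verbatim}
import numpy as np
import cvxpy as cp

def pipe_gain(Re, C=1.0, n_grid=30):
    """Grid-sampled relaxation of the r-dependent LMI: SOS certifies it
    for all r in (0,1); sampling only enforces it at the grid points."""
    qz, qr, qt = (cp.Variable(nonneg=True) for _ in range(3))
    e = cp.Variable(3, nonneg=True)  # (eta_z^2, eta_r^2, eta_theta^2)
    Q = cp.diag(cp.hstack([qz, qr, qt]))
    cons = []
    for r in np.linspace(0.01, 0.99, n_grid):
        Mc = cp.bmat([[qz * C / Re, -r * qz, 0.0],
                      [-r * qz, qr * C / Re, 0.0],
                      [0.0, 0.0, qt * C / Re]])
        cons.append(cp.bmat([[Mc - np.eye(3), -Q / 2.0],
                             [-Q / 2.0, cp.diag(e)]]) >> 0)
    prob = cp.Problem(cp.Minimize(cp.sum(e)), cons)
    prob.solve(solver=cp.SCS)
    return e.value
\end{verbatim}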
In order to check stability with respect to persistent forcings, we check the following polynomial matrix inequality
\begin{equation}
Z_c(r) =\begin{pmat} [{..|}] ~ & ~ & ~ & -\frac{q_z}{2} & 0 & 0 \cr
~ & M_c(r)-W_c & ~ & 0 & -\frac{q_r}{2} & 0 \cr
~ & ~ & ~ & 0 & 0 & -\frac{q_\theta}{2} \cr\-
-\frac{q_z}{2} & 0 & 0 & \sigma_{z}(r) & 0 & 0 \cr
0 & -\frac{q_r}{2} & 0 & 0 & \sigma_{r}(r) & 0 \cr
0 & 0 & -\frac{q_\theta}{2} & 0 & 0 & \sigma_{\theta}(r) \cr \end{pmat} \succcurlyeq 0,\quad r \in (0,1),
\end{equation}
where $W_c=\left[\begin{smallmatrix}\psi_zq_z & 0 & 0\\0&\psi_r q_r &0\\0&0&\psi_\theta q_\theta\end{smallmatrix} \right]$. The maximum Reynolds number for which certificates of ISS could be found was $Re_{ISS} = 1614$, using degree 10 polynomials in $\sigma_{z}(r)$, $\sigma_{\theta}(r)$ and $\sigma_{r}(r)$. Remarkably, this is a lower bound on the Reynolds number at which transition to turbulence was observed empirically by~\cite{PM06}, \textit{i.e.}, $Re \approx 1800$. Therefore, even in the case of the Hagen-Poiseuille flow, the analysis of stability to persistent disturbances can be used to predict transition.
\begin{figure}
\centerline{
\includegraphics[scale=.4]{figures/gainPipeL2final.eps}
}
\caption{Upper bounds on the worst-case amplification of the Hagen-Poiseuille flow in terms of different Reynolds numbers.}\label{fig6}
\end{figure}
\begin{table}
\begin{center}
\begin{minipage}{12cm}
\begin{tabular}{@{}c|c|c|cl@{}}
{Flow} & {Energy Growth} & {Worst-Case Amplification}
& \multicolumn{1}{c@{}}{Transition}\\[3pt]
\hline
Plane Couette & $\boldsymbol{O({Re}^3)}$, $O(Re^3)$\footnote{\cite{BBD02}}
& $\left(\begin{matrix} \boldsymbol{O({Re}^3)} \\ \boldsymbol{O({Re}^4)} \\ \boldsymbol{O({Re}^4)} \end{matrix} \right)$, $\left(\begin{matrix} O({Re}^3) \\ O({Re}^4) \\ O({Re}^4) \end{matrix} \right)$\footnote{\cite{JB05}} & $\boldsymbol{316}$, $350$\footnote{\cite{TA92}}\\
\hline
Plane Poiseuille & $\boldsymbol{O({Re}^2)}$, $O(Re^2)$\footnote{\cite{reddy_henningson_1993}} & $\left(\begin{matrix} \boldsymbol{O({Re^3})} \\ \boldsymbol{O({Re^4})} \\ \boldsymbol{O({Re^4})} \end{matrix} \right)$, $\left(\begin{matrix} O({Re^3}) \\ O({Re^4}) \\ O({Re^4}) \end{matrix} \right)$\footnote{\cite{JB05}} & $\boldsymbol{1855}$, $2000$\footnote{\cite{RevModPhys.72.603}}\\
\hline
Hagen-Poiseuille & $\boldsymbol{O(Re^2)}$, $O(Re^2)$\footnote{\cite{SH94}} & $\left(\begin{matrix} \boldsymbol{O({Re^3})} \\ \boldsymbol{O({Re^4})} \\ \boldsymbol{O({Re^4})} \end{matrix} \right)$, -- & $\boldsymbol{1614}$, $1800$\footnote{\cite{PM06}}
\end{tabular}
\end{minipage}
\end{center}
\caption{Summary of the numerical results using the proposed framework (boldfaced), and results obtained in the literature.} \label{table1}
\end{table}
\section{Discussions}\label{sec:conclusions}
We studied stability and input-output properties of fluid flows with spatially invariant perturbations in one of the directions using dissipation inequalities. Our framework generalizes certain types of input-output analysis techniques to the nonlinear Navier-Stokes equations, thereby matching more closely with experimental results. The proposed input-output analysis method introduces a unified framework for addressing a broad range of questions related to transition (transient growth and input-output analysis) that can be adapted to a large class of flow conditions. Whenever the base flow is given by a polynomial of spatial coordinates and the flow geometry is described by a semi-algebraic set, we showed how the input-output framework can be computationally implemented based on convex optimization. For illustration purposes, we applied the proposed method to study several examples of flows between parallel plates and a pipe flow. A toolbox is under development which can be used to apply the proposed framework to investigate more flows and input-output properties.
Table~\ref{table1} lists the numerical results based on the proposed framework for plane Couette flow, plane Poiseuille flow, and the Hagen-Poiseuille flow. For energy growth and worst-case amplification, the table outlines the amplification scalings at high Reynolds numbers. Energy growth results for all three flows tally with the theoretical and experimental amplification scalings in the literature. Our worst-case amplification scalings for plane Couette flow and plane Poiseuille flow were consistent with the scalings calculated by~\cite{JB05}. In addition to comparing the scalings we obtained using our framework for channel flows, we carried out numerical experiments to study the worst-case amplification scalings in Hagen-Poiseuille flow. These experiments indicate that, similar to channel flows, perturbations in the direction of the base flow are the least amplified in Hagen-Poiseuille flow. For transition analysis, we compare the maximum Reynolds numbers for which stability to persistent disturbances could be certified to the Reynolds numbers for which transition to turbulence was observed experimentally. We inferred from the results that $Re_{ISS}$ can be used as an acceptable theoretical estimate to predict transition to turbulence.
In addition to the aforementioned three flows, we undertook global stability analysis, energy growth analysis, and worst-case amplification analysis for the rotating Couette flow. Global stability analysis results could replicate the actual global stability bounds calculated by \cite{Taylor289}. Our results for energy growth implied a scaling of $O(Re^{\frac{2}{3}})$, which is consistent with the transient growth calculations in~(\cite{maretzke_hof_avila_2014}) and the calculations and empirical results of ~(\cite{2004AA425385Y}).
Future research will focus on applying the framework obtained here to turbulent flows~(\cite{annurev-fluid-010814-014637}). In particular, we study \textit{time-averaged mechanical energy dissipation}. For a channel flow of channel length $h$, the mechanical energy dissipation per unit mass is given by
$$
\varepsilon := \frac{\nu^3}{h^4} \| \nabla \boldsymbol{u} \|^2_{\mathcal{L}^2_\Omega},
$$
where $\nu$ is the kinematic viscosity. \cite{PhysRevE.49.4087} proposed a variational method for bounding this quantity based on the \textit{background flow} decomposition. The method has been remarkably successful in finding the scaling of the time-averaged mechanical energy dissipation with respect to the root-mean-square velocity $U$ and the longest length scale $\ell$; i.e., it was shown that
$$
\varepsilon \le c_1 \nu \frac{U^2}{\ell^2}+c_2 \frac{U^3}{\ell},
$$
and bounds on $c_1$ and $c_2$ were obtained for different flows~(\cite{doering_foias_2002,CHILDRESS2001105,ALEXAKIS2006652,rollin_dubief_doering_2011,tang_caulfield_young_2004}). In order to find bounds on the time-averaged mechanical energy dissipation, we can consider the following dissipation inequality
\begin{equation} \label{dsse3dss}
\frac{d V(\boldsymbol{u})}{dt} \le -\frac{\nu^3}{h^4} \| \nabla \boldsymbol{u} \|^2_{\mathcal{L}^2_\Omega} + C,
\end{equation}
where $C>0$ is a constant. Minimizing $C$ while searching over the storage functional $V(\boldsymbol{u})$ gives upper bounds on the time-averaged mechanical energy dissipation.
Another interesting problem for future research is identifying the regions of attraction for different flow configurations. For example, in the case of Taylor-Couette flow, after decomposing the Navier-Stokes equation about different flow regimes, one can search for estimates of the region of attraction inside which each flow regime is stable.
In addition, the input-output amplification mechanisms of turbulent flows are an intriguing prospective research direction. In this regard, \cite{FLM454650,PGCD09} consider a non-polynomial model for turbulent mean velocity profiles and turbulent eddy viscosities. Polynomial approximations (of high degree) of such nonlinear models fit the formulation given in this paper.
Lastly, more general storage functional structures can be considered. More specifically, given the nonlinear dynamics of the Navier-Stokes equations, one can consider the following class of storage functionals
$$
V(\boldsymbol{u}) = \int_\Omega \begin{bmatrix} \boldsymbol{u} \\ \boldsymbol{u}^2 \end{bmatrix}^\prime Q \begin{bmatrix} \boldsymbol{u} \\ \boldsymbol{u}^2 \end{bmatrix} \,\, {d}\Omega.
$$
However, a convex formulation using the above structure is not clear at the moment.
Kitaev's famous proposal~\cite{kitaev.01} for the realization of Majorana quasiparticles opened a period of intense study into these topological bound states~\cite{aguado.17,lutchyn.bakkers.18,pawlak.hoffman.19}.
Currently, realizations of Majorana bound states are expected in a few low-dimensional platforms.
First are semiconducting-superconducting hybrid nanostructures~\cite{deng.yu.12,mourik.zuo.12,das.ronen.12,finck.vanharlingen.13,
nichele.drachmann.17,gul.zhang.18,deng.vaitiekenas.16,deng.vaitiekenas.18}, where the interplay between intrinsic spin--orbit coupling, induced superconductivity, and external magnetic field lead to the realization of zero-energy bound states~\cite{lutchyn.bakkers.18}.
Second are one-dimensional (1D) chains of magnetic atoms deposited on a superconducting surface~\cite{nadjperge.drozdov.14,pawlak.kisiel.16,feldman.randeria.16,ruby.heinrich.17,jeon.xie.17,kim.palaciomorales.18}, where Majorana bound states are expected to form as a consequence of the magnetic moments in mono-atomic chains~\cite{klinovaja.stano.13,braunecker.simon.15,kaladzhyan.simon.17,andolina.simon.17}.
More recently, a realization of these zero-energy bound states in two-dimensional topological superconducting domains~\cite{drozdov.alexandradinata.14,menard.guissart.17,palaciomorales.mascot.18} and nanostructures with spin textures~\cite{yang.stano.16,mascot.cocklin.18,garnier.mesaros.19} have also been illustrated as well.
An essential property of Majorana zero modes (MZMs) is their non-Abelian statistics~\cite{nayak.simon.08}.
Such peculiarity makes them a very promising platform for the realization of fault-tolerant quantum computing~\cite{akhmerov.10,liu.wong.14,sarma.freedman.15,aasen.hell.16,hoffman.schrade.16,alicea.oreg.11}.
Quantum computations can be realized using {\it braiding protocols}~\cite{alicea.oreg.11,kim.tewari.15,beenakker.19,wieckowski.mierzejewski.19,trif.simon.19}, which can be practically implemented in wire-type systems~\cite{fatin.matosabiague.16}.
Here quantum qubit registers are stored in spatially separated MZMs, which are topologically protected from noise and decoherence~\cite{ivanov.01,cheng.lutchyn.12}.
The localized Majorana modes can also be manipulated by solely acting on the quantum dots~\cite{ruiztijerina.vernek.15,deng.vaitiekenas.16,ptok.kobialka.17,liu.sau.17,
prada.aguado.17,chevallier.szumniak.18,deng.vaitiekenas.18,reeg.dmytruk.18}.
For practical applications, it is crucial to identify the sources of decoherence in the system, since any decoherence can lead to additional errors in the encoding of states~\cite{li.coish.18}.
Sources of decoherence, which can be induced, e.g., by fluctuations~\cite{schmidt.rainis.12,lai.yang.18}, should therefore be suppressed as far as possible~\cite{nag.sau.18,zhang.mei.19}.
In the context of the practical implementation of quantum computers based on MZMs, the interaction introduced in the system plays an important role in the computation process.
The stabilization of MZM can be achieved by introducing limited interaction strengths~\cite{stoudenmire.alicea.11,hassler.schuricht.12,gergs.niklas.16,dominguez.cayao.17}.
On-site repulsive interactions in the half-spin fermion chain were discussed earlier within the Hartree--Fock approximation~\cite{stoudenmire.alicea.11,manolescu.marinescu.14}.
This type of interaction can lead to a decrease of the minimal Zeeman energy needed for the emergence of MZMs.
Additionally, on-site interactions can stabilize the MZM~\cite{peng.pientka.15,dominguez.cayao.17}.
Long-range interactions, however, can reduce the decoherence rate~\cite{ng.15}.
In the context of spinless fermions, interactions between nearest sites have been discussed using density-matrix renormalization-group (DMRG) methods~\cite{thomale.rachel.13,gergs.niklas.16}.
Moreover, in this case, moderate repulsive interactions stabilize the topological order.
In the present work, we study the influence of long-range interactions on the MZM's lifetime and spatial structures using exact diagonalization (ED) for the Kitaev chain.
The paper is organized as follows:
In Sec.~\ref{sec.model}, we introduce the microscopic model and present computational details.
In Sec.~\ref{sec.num_res}, we describe the numerical results.
Finally, we summarize the results in Sec.~\ref{sec.sum}.
\section{Model and methods}
\label{sec.model}
We consider a spinless fermion chain with $L$ sites, described by the Kitaev model~\cite{kitaev.01} extended by many-body interactions.
The system can be represented by the following Hamiltonian:
\begin{eqnarray}
\label{eq.ham}
\nonumber \mathcal{H} &=& \sum_{i=1}^{L-1} \left( - t a_i^{\dagger} a_{i+1}^{\phantom{\dagger}} + \Delta a_i^{\dagger} a_{i+1}^{\dagger} + \textrm{h.c.} \right) \\
&& - \mu\sum_{i=1}^L \widetilde n_i + \sum_{r=1}^{L-1}V_r \sum_{i=1}^{L-r} \widetilde n_i \widetilde n_{i+r} ,
\end{eqnarray}
where $a_i^\dagger$ ($a_i^{\phantom{\dagger}}$) is the creation (annihilation) operator of a spinless fermion at site $i$, while $\widetilde n_i = a_i^\dagger a_i^{\phantom{\dagger}}-1/2$.
Here $t$ is the hopping integral, $\Delta$ is the superconducting gap, $\mu$ is the chemical potential, and $V_r$ is the $r$-nearest neighbor interaction strength.
In the absence of interactions and for $|\Delta|>0$, the Kitaev model exhibits two distinct phases: a topological one and a trivial one~\cite{kitaev.01}.
In the thermodynamic limit, it can be shown that the topological phase is present for $|\mu|\le 2t$ and the trivial phase for $|\mu|> 2t$.
MZMs can emerge only in the topological phase.
Here, one should notice that for a system with many-body interactions, the expression for the phase boundary can be more complicated~\cite{wieckowski.maska.18,thomale.rachel.13,katsura.schuricht.15}.
There are several methods for studying the presence of MZM in the system with many-body interactions.
Additionally, there exist a few indicators that can be used to check whether the system is in the topological phase~\cite{gergs.niklas.16}.
From a theoretical point of view, studying quantum systems with many-body interactions is a relatively difficult task.
For example, DMRG methods allow for studying systems with thousands of sites, but are limited to short-range interactions.
Here, we use ED for solving the chain with $L$ sites.
Unfortunately, only small systems (with $L\sim20$) can be solved exactly~\cite{kozarzewski.mierzejewski.19}.
On the other hand, this method allows us to study all possible interaction ranges $r$ for a given system size.
For simplicity and without loss of generality, we take $r$ up to 4.
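To make the ED setup concrete, the following minimal sketch constructs the Hamiltonian~\eqref{eq.ham} for a small chain via the Jordan--Wigner representation and diagonalizes it (Python with NumPy assumed; all helper names are ours, and the splitting of the two lowest levels is used only as a rough proxy for the ground-state degeneracy discussed below).
\begin{verbatim}
import numpy as np
from functools import reduce

def site_op(op, i, L):
    """Jordan-Wigner embedding: Z string on sites < i, `op` on site i."""
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    return reduce(np.kron, [Z] * i + [op] + [I] * (L - i - 1))

def build_hamiltonian(L, t, Delta, mu, V):
    """Hamiltonian of Eq. (1); V is a dict {r: V_r}."""
    lower = np.array([[0.0, 1.0], [0.0, 0.0]])  # on-site annihilation
    a = [site_op(lower, i, L) for i in range(L)]
    n = [ai.conj().T @ ai - 0.5 * np.eye(2**L) for ai in a]  # n-tilde
    H = np.zeros((2**L, 2**L))
    for i in range(L - 1):
        bond = -t * a[i].conj().T @ a[i + 1] \
             + Delta * a[i].conj().T @ a[i + 1].conj().T
        H += bond + bond.conj().T
    H -= mu * sum(n)
    for r, Vr in V.items():
        for i in range(L - r):
            H += Vr * n[i] @ n[i + r]
    return H

H = build_hamiltonian(L=8, t=1.0, Delta=0.8, mu=0.0, V={2: 0.5})
E = np.linalg.eigvalsh(H)
print("near-degenerate ground-state splitting:", E[1] - E[0])
\end{verbatim}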
MZMs, as states which are indistinguishable from their \textit{anti}states, should fulfill a few conditions.
Each MZM is equivalent to a zero mode $\Gamma$, which is a fermionic operator satisfying the following relations~\cite{sarma.freedman.15}:
\begin{equation}
\Gamma^2 = 1,\qquad [\Gamma,\,\mathcal H] = 0 . \label{eq.majoranadef}
\end{equation}
In actual physical systems of finite length, in which one can realize an MZM, the second condition of~(\ref{eq.majoranadef}) is associated with an exponential suppression of the energy splitting, i.e., $[\Gamma,\mathcal H] \propto e^{-L/\xi}$~\cite{sarma.freedman.15}, where $L$ is the system size and $\xi$ is the correlation length.
An MZM in a finite system has nearly zero energy and, in consequence, a finite lifetime~\cite{kells.15,wieckowski.maska.18}, which is observed experimentally~\cite{albrecht.higginbotham.16}.
To analyze the influence of long-range interactions on the spatial structure of the Majorana modes, we can analyze the $\Gamma$ modes in the Majorana basis representation.
The MZMs in the basis of the Majorana operators $\gamma_i^+=a_i+a_i^\dagger$, $\gamma_i^-=i(a_i-a_i^\dagger)$~\cite{kitaev.01} can then be expressed as:
\begin{equation}
\Gamma =\sum_{i=1}^{2L} \alpha_i\gamma_i =\sum_{i=1}^L \left(\alpha_i^+ \gamma_i^+ + \alpha_i^- \gamma_i^-\right),
\end{equation}
where $\alpha^\pm_i$ are real coefficients.
We assume a normalization condition $\sum_i \alpha_i^2 = 1$ when $\Gamma^2=1$~\cite{wieckowski.maska.18}.
Here, $\gamma^{+}$ and $\gamma^{-}$ can be understood as a new orthogonal operator basis.
Moreover, this basis is a natural representation for the system which hosts MZM.
To check if MZMs can exist in our system, we used the same method which was introduced for studying integrals of motion in the Heisenberg model~\cite{mierzejewski.prosen.15,mierzejewski.prelovsek.15} and then was adapted for Kitaev model~\cite{wieckowski.maska.18}.
As the second condition for an MZM is satisfied in the thermodynamic limit only (except for fine-tuned parameter settings), we generate \textit{almost} conserved MZMs by solving the optimization problem~\cite{wieckowski.maska.18}:
\begin{equation}
\lambda = \max_{\{\alpha_i\}} \langle \thickbar \Gamma \thickbar \Gamma \rangle = \max_{\{\alpha_i\}} \sum_{ij}\alpha_i \langle \thickbar\gamma_i \thickbar\gamma_j\rangle \alpha_j,\label{eq.lambda}
\end{equation}
which becomes an eigenvalue problem for the matrix $\langle \thickbar\gamma_i \thickbar \gamma_j\rangle$.
In this way, we find the operator $\thickbar\Gamma$, averaged over time $\tau$, that is as \textit{close} as possible to the operator $\Gamma$.
To measure the distance between operators $\thickbar \Gamma$ and $\Gamma$ we used the Hilbert--Schmidt operator norm defined in the following way: $\langle (\Gamma-\thickbar\Gamma)^2\rangle = \text{Tr}[(\Gamma-\thickbar\Gamma)^2]/\text{Tr}(\mathbb{1})$.
We solve the eigenvalue problem for the matrix $\langle \thickbar \gamma_i \thickbar \gamma_j\rangle$ and, as a result, obtain eigenvalues $\lambda$ with corresponding eigenvectors $[\alpha_i]$.
The averaging is performed by cutting off highly oscillating terms in the operator energy basis:
\begin{equation}
\thickbar \Gamma = \sum_{nm} \theta\left( \frac1\tau - |E_n-E_m|\right)\langle n|\Gamma|m\rangle \,\,\, |n\rangle \langle m|,
\end{equation}
where $|n\rangle$ and $E_n$ are, respectively, an eigenstate and the corresponding eigenenergy of the Hamiltonian $\mathcal H$, and $\theta$ is the Heaviside step function.
Such averaging, in the limit $\tau\to\infty$, is equivalent to calculating the following: $\lim_{\tau\to\infty} \frac1\tau \int_0^{\tau}\text d\tau' \,\Gamma(\tau')$.
The value of $\lambda$ carries information about the distance between operators $\Gamma$ and $\thickbar \Gamma$.
In the limit $\tau\to\infty$, we can distinguish three different scenarios: if $\lambda=1$, then $\Gamma$ is a strict integral of motion, $\Gamma=\thickbar \Gamma$, the distance between them is zero, and the MZM condition $[\mathcal H,\,\Gamma]=0$ is satisfied exactly.
For $0<\lambda<1$, part of the information stored in $\Gamma$ is conserved.
Finally, if $\lambda=0$, the stored information is lost.
In this work, we then look for the largest $\lambda$, and the corresponding eigenvectors $[\alpha_i]$, which carry information of the possible MZM realizations in the system.
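The procedure can be condensed into a short numerical routine. The sketch below is a minimal, self-contained illustration in Python/NumPy (helper names are ours): it builds the Majorana operators, implements the energy-basis cutoff, assembles the matrix $\langle \thickbar\gamma_i \thickbar\gamma_j\rangle$ of Eq.~\eqref{eq.lambda}, and returns its largest eigenvalue $\lambda$ together with the coefficients $[\alpha_i]$.
\begin{verbatim}
import numpy as np
from functools import reduce

def site_op(op, i, L):
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    return reduce(np.kron, [Z] * i + [op] + [I] * (L - i - 1))

def majorana_lambda(H, L, tau):
    """Largest eigenvalue of <bar-gamma_i bar-gamma_j> and its eigenvector."""
    E, P = np.linalg.eigh(H)
    lower = np.array([[0.0, 1.0], [0.0, 0.0]])
    gammas = []
    for i in range(L):
        a = site_op(lower, i, L)
        gammas.append(a + a.conj().T)         # gamma_i^+
        gammas.append(1j * (a - a.conj().T))  # gamma_i^-
    keep = np.abs(E[:, None] - E[None, :]) < 1.0 / tau  # theta cutoff
    Gbar = [P @ (keep * (P.conj().T @ g @ P)) @ P.conj().T for g in gammas]
    dim = 2**L
    # Hilbert-Schmidt inner products Tr(gi gj)/Tr(1); Tr(gi gj)=sum(gi*gj.T)
    K = np.array([[(gi * gj.T).sum().real / dim for gj in Gbar]
                  for gi in Gbar])
    lam, vec = np.linalg.eigh(K)
    return lam[-1], vec[:, -1]  # ordering: gamma_1^+, gamma_1^-, gamma_2^+, ...
\end{verbatim}
Combined with the Hamiltonian builder sketched above, one expects $\lambda$ close to 1 deep in the topological phase and close to 0 in the trivial phase, in line with the results reported below.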
\section{Numerical results}
\label{sec.num_res}
In this Section we present the numerical results.
First, we describe the influence of the long-range interaction on the Majorana mode lifetimes (Sec.~\ref{sec.lifetime}).
Next, we describe the spatial structure of the Majorana modes in the presence of the long-range interaction (Sec.~\ref{sec.profile}).
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{plot-crop.pdf}
\caption{
MZM correlation function $\lambda$ as a function of chemical potential $\mu$ and $r$-nearest neighbor interaction strength $V_r$ for
$\Delta/t=0.8$, $\tau=50$ and $L=10$.
Panels correspond to different long-range interaction $V_{r}$, as labeled.
Black and white contour marks $\lambda=0.9$ and 0.1, respectively.
}
\label{fig.plot1}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{plot2-crop.pdf}
\caption{
The same as in Fig.~\ref{fig.plot1}, but as a function of $\Delta$ and $V_r$.
Black and red contour marks $\lambda=0.9$ and 0.5, respectively.
($\mu=0$)
\label{fig.plot2}
}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{plot3-crop.pdf}
\caption{
Finite time $\tau$ scaling.
The same as in Fig.~\ref{fig.plot1}(c), but for different times $\tau=1,10,100,1000$.
Black and white contour marks $\lambda=0.9$ and $0.1$, respectively.
\label{fig.plot3}
}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{plot4-crop.pdf}
\caption{
Finite size $L$ scaling.
The same as in Fig.~\ref{fig.plot2}(d), but for different system sizes $L=6,8,10,12$ and time $\tau=100$.
Black contour marks $\lambda=0.9$.
\label{fig.plot4}
}
\end{figure}
\subsection{Majorana modes lifetimes}
\label{sec.lifetime}
Results in Figs.~\ref{fig.plot1}--\ref{fig.plot4} show the most stable $\lambda$, which we found by solving Eq.~\eqref{eq.lambda}. Note that the largest $\lambda$ is doubly degenerate -- one for each MZM, $\Gamma^+$ and $\Gamma^-$, which are defined later.
To study the influence of the long-range interaction, we treat each $V_r$ independently, considering only one non-zero $r$-nearest neighbor interaction $V_r$ for a given $r$.
In Fig.~\ref{fig.plot1} we compare the influence of $V_r$ and $\mu$ on $\lambda$.
It is known that a moderate interaction $V_1$ can lead to a broadening of the topological regime~\cite{stoudenmire.alicea.11}.
The same feature can be seen in our results; note the contour $\lambda=0.1$ in Fig.~\ref{fig.plot1}(a).
However, this topological phase broadening is much smaller, when longer range interactions $V_2$, $V_3$ and $V_4$ are present in the system [Fig.~\ref{fig.plot1}(b)--Fig.~\ref{fig.plot1}(d)].
Moreover, increasing the interaction range $r$ decreases the area of strong MZM [see yellow area under the $\lambda=0.9$ contour in Fig~\ref{fig.plot1}(a)--Fig~\ref{fig.plot1}(d)].
Here we can notice that the transition from trivial to topological regime does not occur exactly at $|\mu|=2t$ in the case without interactions ($V_r=0$), due to a finite-size effect~\cite{kitaev.01}.
In Fig.~\ref{fig.plot2} we present the same as in Fig.~\ref{fig.plot1}, but as a function of $\Delta$, instead of $\mu$.
Again one can see the topological phase decreasing as the interaction range $r$ grows.
One can see here a characteristic line along $\Delta/t=1$.
This fading line is related to the fact that the Kitaev model in the non-interacting case $\Delta=|t|$ and $\mu=0$ (a fine-tuned point in parameter space) hosts MZMs which are exact integrals of motion even for finite system size~\cite{kitaev.01}.
It seems that for large $\tau$ and $\Delta/t\gg1$ MZMs are absent in the system.
However, this is only a finite-size effect, which we explain in detail in Fig.~\ref{fig.plot4}.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{plot5-crop.pdf}
\caption{
Ground-state degeneracy $\delta E$ as a function of interaction $V_r$ and chemical potential $\mu$.
Panels correspond to the different long-range interaction $V_{r}$, as labeled.
System parameters the same as in Fig.~\ref{fig.plot1}.
}\label{fig.plot5}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{plot6-crop.pdf}
\caption{
Energy gap $\Delta E$ as a function of interaction $V_r$ and chemical potential $\mu$.
Panels correspond to different long-range interaction $V_{r}$, as labeled.
System parameters same as in Fig.~\ref{fig.plot1}.
}
\label{fig.plot6}
\end{figure}
To study the topological phase in the thermodynamic limit $L\to\infty$ and $\tau\to\infty$, one should be extremely careful when performing the size/time scaling.
Except for fine-tuned parameter settings, the following limit always tends to zero: $\lim_{L\to\infty}\lim_{\tau\to\infty} \lambda = 0$.
In Fig.~\ref{fig.plot3} we show how $\lambda$ vanishes over time $\tau$ for the selected case.
However, there is a non-zero topological regime even for a large $\tau=1000$ and for a relatively small system with $L=10$.
In contrast, the limit with the opposite order, $\Lambda=\lim_{\tau\to\infty}\lim_{L\to\infty} \lambda$, can in general be different from 0.
The order of these limits is essential; for an almost strong MZM, the value of $\Lambda\simeq1$~\cite{wieckowski.maska.18}.
In Fig.~\ref{fig.plot4}, a finite-size scaling is presented.
The procedure of extrapolation of $\Lambda$ can be found in Ref.~\cite{wieckowski.maska.18}.
However, in this work, to compare the influence of interaction range $r$, finite time results are sufficient for the discussion.
Next, we check the necessary conditions for a soft MZM, i.e., the degeneracy of the ground-state energies $\delta E=\left|E_0^{\mathrm o}-E_0^{\mathrm e}\right|$ and the spectral gap $\Delta E = \min\{E_1^{\mathrm e}-E_0^{\mathrm e},\,E_1^{\mathrm o}-E_0^{\mathrm o}\}$, where $E_n^{\mathrm e}$ ($E_n^{\mathrm o}$) is the $n$-th eigenenergy in the even (odd) parity sector~\cite{ng.15}.
Ground-state degeneracy $\delta E$ and energy gap $\Delta E$ results for different interaction $V_r$ range $r$ can be found in Figs.~\ref{fig.plot5} and \ref{fig.plot6}, respectively.
Surprisingly, one might conclude from the results presented in Fig.~\ref{fig.plot5} that the topological phase grows with increasing interaction range $r$, as the yellow regime, where $\delta E$ is small, expands.
Simultaneously, in Fig.~\ref{fig.plot6}, the area with a bigger energy gap $\Delta E$ grows with the interaction range $r$.
It should be stressed that the $\delta E$ condition is necessary, but it is not sufficient.
In Fig.~\ref{fig.plot5}(a) one can identify a few yellow stripes.
These lines separate regions where the ground-state average particle number $\langle N \rangle = \langle \sum_i a_i^\dagger a_i^{\phantom{\dagger}}\rangle $ is close to an integer value: $0,1,\dots,L$~(see Supplementary Material for Ref.~\cite{wieckowski.maska.18}).
These lines are the consequence of energy level crossings and are not related to MZM presence in the system.
\subsection{Spatial structure of Majorana modes}
\label{sec.profile}
To study the spatial profile of the MZM, we express the $\Gamma$ state in the Majorana basis introduced earlier (cf.~Sec.~\ref{sec.model}).
Then, we can find a pair of orthogonal operators $\Gamma^+$ and $\Gamma^-$:
\begin{equation}
\Gamma^\pm = \sum_{i=1}^L \alpha_i^\pm \gamma_i^\pm ,
\end{equation}
which describe a projection of the $\Gamma$ states into the pure--Majorana $\gamma$ states.
Because of this, every $\Gamma^\pm$ state contains only $\alpha_{i}^{\pm} \neq 0$ (describing the contribution of the $\gamma^{\pm}$ states), while at the same time $\alpha_{i}^{\mp} =0$.
Examples of numerical results are shown in Fig.~\ref{fig.plot8a}, where $|\alpha_{i}^{+}|^{2} + |\alpha_{i}^{-}|^{2}$ is presented.
This quantity corresponds to the local density of states~\cite{matsui.sato.03} or differential conductance~\cite{chevallier.klinovaja.16}.
Additionally, in the case of a uniform chain, due to symmetry in the chain midpoint, the coefficients for $\Gamma^+$ and $\Gamma^-$ must be swapped in space, i.e., $\alpha_i^+=\alpha_{L+1-i}^-$.
Using such constraint, one can generate coefficients only for one of $\Gamma^\pm$ to study the spatial structures.
As we can see, increasing the interaction range $r$ leads to a decrease of the MZM localization, i.e., when $r$ grows, the sum $|\alpha_{i}^{+}|^{2} + |\alpha_{i}^{-}|^{2}$ at the center of the chain increases [cf.~Fig.~\ref{fig.plot8a}(b)--Fig.~\ref{fig.plot8a}(d)].
At the same time, the value of this expression decreases at the ends of the chain.
Such behavior can be explained by a change of the overlap between the MZMs located at the left and right ends of the chain.
In contrast, the interaction between nearest-neighbor sites leads to a stabilization of the MZM~\cite{dominguez.cayao.17,wieckowski.maska.18}.
Moreover, this emphasizes the importance of many-body interactions for the MZM lifetime.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{plot7.pdf}
\caption{
Spatial structure of the Majorana states $\Gamma^+$ and $\Gamma^-$. Results for $L=10$, $\Delta/t=0.4$, $V_r/t=1$, and $\mu/t=0.7$.
\label{fig.plot8a}
}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{plot8.pdf}
\caption{
Local overlap $|\alpha_i^+\alpha_i^-|$ between MZM $\Gamma^+$ and $\Gamma^-$.
System parameters are the same as in Fig.~\ref{fig.plot8a}.
}
\label{fig.plot8b}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{plot10-crop.pdf}
\caption{
Overlap $\Omega$ between left, and right Majorana states,
as a function of interaction $V_r$ and chemical potential $\mu$.
Panels correspond to different long-range interaction $V_{r}$, as labeled.
System parameters are the same as in Fig.~\ref{fig.plot1}.
}
\label{fig.plot01}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=\linewidth,keepaspectratio]{plot_ap-crop.pdf}
\caption{
\label{fig.plot03}
The site number $i$ for which $|\alpha_i^+ \alpha_i^-|$ is maximal as a function of interaction $V_r$ and chemical potential $\mu$.
Panels correspond to different long-range interaction $V_{r}$, as labeled.
System parameters are the same as in Fig.~\ref{fig.plot1}.
}
\end{figure}
As a measure of the non-locality of the two Majorana states, we define their overlap~\cite{prada.aguado.17,deng.vaitiekenas.18}:
\begin{equation}
\Omega = \|\Gamma^+ \widetilde \Gamma^- \| = \sum_{i=1}^{L} | \alpha_{i}^{+} \alpha_{i}^{-} | ,
\end{equation}
where $\widetilde \Gamma^- = \mathcal U \Gamma^-\mathcal U^\dagger$ is the reflection in space of $\Gamma^+$ and $\mathcal U$ is the unitary operator described by the transformation of basis matrices $\mathcal U \gamma_i^\pm\mathcal U^\dagger = \gamma_i^\mp$.
By definition, $\Omega$ ranges from 0 (no overlap) to 1 (perfect overlap).
Here, we note that the $\Omega$ strongly depends on $L$~\cite{dumitrescu.roberts.15}.
Moreover, this quantity can be associated with the resilience of the Majorana qubit to local environmental noise, with complete non-locality $\Omega = 0$ denotes topological qubit protection~\cite{prada.aguado.17}.
In general, the overlap $\Omega$ can be controlled by some parameter modification, like electrostatic potential~\cite{ptok.cichy.18,penaranda.aguado.18,kobialka.ptok.19,rainis.trifunovic.13} or inter-site interactions~\cite{dominguez.cayao.17,wieckowski.maska.18}.
In our case, we control $\Omega$ by modification of the long-range interaction $V_r$ and the chemical potential $\mu$ in the whole system, for which the result is presented in Fig.~\ref{fig.plot01}.
For weak $V_{r}$ and doping $\mu$, the overlap is exponentially small.
When the interaction $V_r$ increases, $\Omega$ decreases -- this effect seems to be independent of the interaction range $r$.
As one can see, the MZM overlap $\Omega$ is more sensitive to tuning the chemical potential $\mu$ than to modifications of the interaction $V_r$.
Similar behavior can be observed in Fig.~\ref{fig.plot03}, where we show the site index $i$ for which the ``local'' overlap $| \alpha_{i}^{+} \alpha_{i}^{-} |$ reaches its maximal value.
As one can see, increasing the chemical potential $\mu$ leads to a stronger overlap between the Majorana states (at the center of the chain, i.e., $i=5$).
In contrast, increasing the long-range interaction $V_{r}$ leads to an increase of the overlap near the edge of the chain -- the maximal value of the overlap is more visible outside than in the center of the chain.
Note that the fast changes of the site index $i$ for $\mu/t \sim 1$ are associated with numerical accuracy, i.e., the local overlap $|\alpha_i^+\alpha_i^-|$ is relatively small and comparable for all $i$ (small variance).
\section{Summary}
\label{sec.sum}
We have studied the influence of interaction range on the Majorana zero mode lifetime and spatial structure in the Kitaev chain.
The Majorana zero mode's lifetime is an important quantity from a practical point of view and can be related to the topological qubit decoherence time.
For the practical application of Majorana zero modes, one needs to extend the decoherence time, which will aid the realization of quantum computers based on their non-Abelian properties.
From previous theoretical calculations based on DMRG methods, moderate repulsive interactions between the nearest sites can lead to the stabilization of the topological order~\cite{thomale.rachel.13,gergs.niklas.16}.
It should be emphasized that the dissipation and dephasing of the Majorana zero modes have also been studied in the presence of nearest neighbor interactions~\cite{ng.15}.
In this case, the dissipation and dephasing noises can induce parity- and non-parity preserving transitions.
Moreover, the dissipation and dephasing rates can be reduced by increasing the interaction strength at sufficiently low temperature, which can lead to extended coherence times for the Majorana mode~\cite{ng.15}.
In this paper, we have shown that long-range interactions strongly modify the lifetime of the Majorana zero modes.
These interactions decrease the lifetime of the MZM.
Moreover, we have found that the interaction between particles located at distant sites is more significant than the interaction between nearest neighbors.
This behavior can play a crucial role from the practical point of view in real materials, where the interaction decays with distance.
This destructive character can be crucial for the practical implementation of Majorana zero modes as topological qubits.
This type of interaction leads to the overlap between two Majorana bound states localized at the opposite end of the chain.
Naturally, it can be a source of decoherence of these states.
In summary, to guarantee the efficiency of quantum computers based on Majorana zero modes, the suppression of the long-range interaction is required.
\begin{acknowledgments}
The authors are thankful to David J. Alspaugh, Szczepan G\l{}odzik, Jan \L{}a\.{z}ewski, Marcin Mierzejewski, Pascal Simon, and Olga Sikora for very fruitful discussions and comments.
This work was supported by the National Science Centre (NCN, Poland) under Grant No. 2016/23/B/ST3/00647.
\end{acknowledgments}
The paper is devoted to the study of the asymptotic behavior in time
of solutions to the Cauchy problem of the following two systems of
cubic nonlinear Schr\"odinger (NLS) equations in one space dimension:
The first one is
\begin{equation}\label{E:sysnew1}
\left\{
\begin{aligned}
&i\partial_t u_1 + \frac12\partial_x^2 u_1
= 3\lambda_1 |u_1|^2u_1, &&t\in\mathbb{R},\ x\in\mathbb{R},\\
&i\partial_t u_2 + \frac12\partial_x^2 u_2
= \lambda_6 (2|u_1|^2u_2+u_1^2\overline{u_2}), &&t\in\mathbb{R},\ x\in\mathbb{R},\\
& u_1(0,x)=u_{1,0}(x),\qquad u_2(0,x)=u_{2,0}(x), &&x\in\mathbb{R}
\end{aligned}
\right.
\end{equation}
and the second one is
\begin{equation}\label{E:d21}
\left\{
\begin{aligned}
&i\partial_t u_1 + \frac12\partial_x^2 u_1
= 0, &&t\in\mathbb{R},\ x\in\mathbb{R},\\
&i\partial_t u_2 + \frac12\partial_x^2 u_2
= 3|u_1|^2u_1, &&t\in\mathbb{R},\ x\in\mathbb{R},\\
& u_1(0,x)=u_{1,0}(x),\quad u_2(0,x)=u_{2,0}(x), &&x\in\mathbb{R}
\end{aligned}
\right.
\end{equation}
where $u_j:\mathbb{R}\times\mathbb{R}\to\mathbb{C}$ ($j=1,2$) are unknown functions,
$u_{j,0}:\mathbb{R}\to\mathbb{C}$ ($j=1,2$) are given functions, and $\lambda_1$ and $\lambda_6$
are real constants satisfying $(\lambda_1,\lambda_6)\neq(0,0)$ and
$ (\lambda_6 - \lambda_1)(\lambda_6 - 3\lambda_1) \geqslant 0$.
The systems \eqref{E:sysnew1} and \eqref{E:d21} are particular cases of
\begin{equation}\label{eq:NLS1}
\left\{
\begin{aligned}
i\partial_t u_1 + \frac12 \partial_x^2 u_1
&= 3\lambda_1|u_1|^2u_1 + \lambda_2(2|u_1|^2u_2+u_1^2\overline{u_2})
+ \lambda_3(2u_1|u_2|^2+\overline{u_1}u_2^2) +3 \lambda_4|u_2|^2u_2,\\
i\partial_t u_2 + \frac12 \partial_x^2 u_2
&= 3\lambda_5|u_1|^2u_1 + \lambda_6(2|u_1|^2u_2+u_1^2\overline{u_2})
+ \lambda_7(2u_1|u_2|^2+\overline{u_1}u_2^2) + 3\lambda_8|u_2|^2u_2,\\
\end{aligned}
\right.
\end{equation}
where $t\in \mathbb{R}$, $x\in \mathbb{R}$ and
$ \lambda_j\ (j = 1, \cdots, 8)$ are real constants.
The system \eqref{eq:NLS1} includes several important physical models such as
Manakov system \cite{M} or a system describing spinor Bose-Einstein condensate
\cite{IMW}.
Due to a classification result in our previous study \cite{MSU},
systems of the form \eqref{eq:NLS1} are classified according to the number of mass-like conserved quantities,
which is connected to the complexity of the behavior of solutions.
In view of the classification theory, the study of the peculiar systems \eqref{E:sysnew1} and \eqref{E:d21} is of particular importance.
We discuss this in detail below.
It is well known that the cubic nonlinearity is critical in one dimension from the view point of the
asymptotic behavior of solutions to the NLS equations and systems.
As for the single
equation
\begin{equation}\label{eq:sNLS}
i\partial_t u + \frac{1}{2}\partial_x^2 u = \lambda |u|^{p-1}u,
\qquad t \in \mathbb{R},\ x \in \mathbb{R}^d,
\end{equation}
where $\lambda\in\mathbb{R}$, $p=1+2/d$ is known to be the critical exponent
(see \cite{B,St,TY}).
In the critical case $p=1+2/d$, the long-range scattering occurs,
namely, a class of solutions to \eqref{eq:sNLS} satisfies
\begin{equation}
u(t) \to t^{-\frac{d}{2}}W\left(\frac{\cdot}{t}\right)e^{\frac{i|\cdot|^2}{2t}
-i\lambda |W(\frac{\cdot}{t})|^{\frac{2}{d}}\log t-i\frac{\pi}{4}d}
\quad \text{as}\ t \to \infty,
\end{equation}
for some function $W \in L^\infty$ in a suitable topology. This kind of asymptotic behavior is also called
the modified scattering because it involves a phase correction
(see Ozawa~\cite{O} and Ginibre-Ozawa~\cite{GO} for the final value problem
and Hayashi-Naumkin~\cite{HN} for the initial value problem).
The system \eqref{eq:NLS1} is cubic and the asymptotic behavior of solutions depends on the coefficients of the nonlinearities.
Our underlying motivation of the study in the present paper is
to find the all possible behavior to the system of the form \eqref{eq:NLS1} and
more general system
\begin{equation}
\label{eq:NLS}
\left\{
\begin{aligned}
&i\partial_t u_1 + \frac12\partial_x^2 u_1 =
c_1|u_1|^2u_1+c_2|u_1|^2u_2+c_3u_1^2\overline{u_2}\\
&\qquad\qquad\qquad\qquad\qquad+c_4u_1|u_2|^2+c_5\overline{u_1}u_2^2+c_6|u_2|^2u_2, \\
&i\partial_t u_2 + \frac12\partial_x^2 u_2 =
c_7|u_1|^2u_1+c_8|u_1|^2u_2+c_9u_1^2\overline{u_2}\\
&\qquad\qquad\qquad\qquad\qquad+c_{10}u_1|u_2|^2
+c_{11}\overline{u_1}u_2^2+c_{12}|u_2|^2u_2,
\end{aligned}
\right.
\end{equation}
where
$c_j\ (j=1,\cdots,12)$ are real constants.
Even though the systems of the form \eqref{eq:NLS1} or \eqref{eq:NLS}
are somewhat restricted in the sense that the nonlinearities do not contain
derivatives of unknowns and that the coefficients are real,
the variety of behavior of solutions to these systems is still richer than that of
single equations.
\smallbreak
Now,
let us recall previous results on the large time behavior of solutions to systems
of
the NLS equations.
As for the cubic system in one dimension with nonlinearities with/without derivatives,
the \emph{null condition}, a sufficient condition on nonlinearity
for the existence of a non-trivial solution which asymptotically behaves like a free solution, is obtained
in \cite{KSa} (see \cite{Tsutsumi} for the single equation).
In \cite{KN}, the long-range scattering is obtained for a matrix-valued equation.
As mentioned above, a quadratic nonlinearity is critical in two dimensions and
the asymptotic behavior of solutions to the two dimensional quadratic systems
is also extensively studied.
In this case, the ratio of the masses of two components matters.
This phenomenon is called mass-resonance
(see \cite{HLN,HLN1,HLN2,KLS} for systems with non-derivative nonlinearities and \cite{KS,SaSu} for those with derivative nonlinearities).
The phenomenon is studied also for the one dimensional cubic systems (see \cite{NST,U}).
In this paper, we restrict ourselves to the case where the coefficients of the nonlinearities are real numbers, as mentioned above.
It is known that NLS equations/systems with imaginary coefficients admit solutions with different kinds of behavior.
A typical example is the single NLS equation \eqref{eq:sNLS}. If the coefficient $\lambda$ is an imaginary number,
a nonlinear amplification/dissipation phenomenon takes place.
More precisely, if $\lambda \in \mathbb{C} \setminus \mathbb{R}$ then \eqref{eq:sNLS} is dissipative in one time direction and amplifying in the other direction.
The sign of $\Im \lambda$ decides which direction is the dissipative direction.
In the dissipative direction, it has been shown that small solutions decay faster than
the free Schr\"odinger evolution (see \cite{HLN3,Ho,K2,OgSato,Sato,S}).
On the other hand, Kita \cite{K} showed that, in the amplifying direction, there exists an arbitrarily small data which gives a blowing-up solution.
Systems with the dissipative structure is also intensively studied.
See \cite{Kim} for systems with non-derivative nonlinearities and \cite{LSu} for those with derivative nonlinearities.
Recently,
a new type of behavior of solutions was found in a certain system of cubic NLS equations
in one dimension in \cite{LNSS1,LNSS2}.
It turns out that there exists a system of the form \eqref{eq:NLS} such that the amplification/dissipation phenomenon takes place,
although the coefficients of the nonlinearities are real.
Further, the system admits the following three types of solutions;
(i) blowup forward in time and dissipative decay backward in time;
(ii) blowup backward in time and dissipative decay forward in time;
(iii) blowup for both time directions.
This system is an evidence for the richness of the variety of behaviors for systems.
We discuss this system in Appendix B.
In \cite{MSU}, the authors considered the system of
cubic nonlinear Klein-Gordon equations in one space dimension:
\begin{equation}\label{E:sys}
\left\{
\begin{aligned}
&(\square + 1)u_1
= \lambda_1u_1^3 + \lambda_2u_1^2u_2 + \lambda_3u_1u_2^2 + \lambda_4u_2^3, &&\quad t\in\mathbb{R},\ x\in\mathbb{R},\\
&(\square + 1)u_2
= \lambda_5u_1^3 + \lambda_6u_1^2u_2 + \lambda_7u_1u_2^2 + \lambda_8u_2^3, &&\quad t\in\mathbb{R},\ x\in\mathbb{R},
\end{aligned}
\right.
\end{equation}
where $u_j: \mathbb{R}\times \mathbb{R} \to \mathbb{R}$ ($j=1,2$) are real-valued unknowns, $\square = \partial_t^2 - \partial_x^2$ is
the d'Alembertian, and $\lambda_1,\dots, \lambda_8$ are real constants.
This system is closed under the linear transformation of unknowns.
An equivalence relation between two systems is then naturally introduced by the linear transformation of unknowns.
They give a classification result by considering a quotient set of the systems with respect to the equivalence relation.
The classification result is applicable to \eqref{eq:NLS1}.
This is because the change of coefficients caused by the linear transformation of unknown is identical to that for \eqref{E:sys}.
This agrees with the fact that the asymptotic profile for a solution of \eqref{E:sys}
and that for a solution of \eqref{eq:NLS1} are, at least formally, described by the same ODE system
\begin{equation}\label{eq:limitODE}
\left\{
\begin{aligned}
i\partial_t \alpha_1
&= 3\lambda_1|\alpha_1|^2\alpha_1 + \lambda_2(2|\alpha_1|^2\alpha_2+\alpha_1^2\overline{\alpha_2})
+ \lambda_3(2\alpha_1|\alpha_2|^2+\overline{\alpha_1}\alpha_2^2) +3 \lambda_4|\alpha_2|^2\alpha_2,\\
i\partial_t \alpha_2
&= 3\lambda_5|\alpha_1|^2\alpha_1 + \lambda_6(2|\alpha_1|^2\alpha_2+\alpha_1^2\overline{\alpha_2})
+ \lambda_7(2\alpha_1|\alpha_2|^2+\overline{\alpha_1}\alpha_2^2) + 3\lambda_8|\alpha_2|^2\alpha_2.
\end{aligned}
\right.
\end{equation}
Hereinafter, we refer to the system as \emph{limit ODE system}.
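Although our analysis is analytic, the limit ODE system is easy to explore numerically. The following is a minimal sketch (Python with SciPy assumed; the coefficients and initial data are chosen arbitrarily for illustration):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, lam):
    """RHS of the limit ODE system; y packs (Re a1, Im a1, Re a2, Im a2)."""
    l1, l2, l3, l4, l5, l6, l7, l8 = lam
    a1, a2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    f1 = (3*l1*abs(a1)**2*a1 + l2*(2*abs(a1)**2*a2 + a1**2*np.conj(a2))
          + l3*(2*a1*abs(a2)**2 + np.conj(a1)*a2**2) + 3*l4*abs(a2)**2*a2)
    f2 = (3*l5*abs(a1)**2*a1 + l6*(2*abs(a1)**2*a2 + a1**2*np.conj(a2))
          + l7*(2*a1*abs(a2)**2 + np.conj(a1)*a2**2) + 3*l8*abs(a2)**2*a2)
    d1, d2 = -1j * f1, -1j * f2   # i alpha' = f  =>  alpha' = -i f
    return [d1.real, d1.imag, d2.real, d2.imag]

# Coefficients of the first system: only lambda_1 and lambda_6 non-zero.
lam = (1.0, 0.0, 0.0, 0.0, 0.0, -1.0, 0.0, 0.0)
sol = solve_ivp(rhs, (0.0, 50.0), [0.3, 0.0, 0.2, 0.1],
                args=(lam,), rtol=1e-10, atol=1e-12)
# |alpha_1| stays constant for this choice, a useful sanity check.
print(np.hypot(sol.y[0, -1], sol.y[1, -1]),
      np.hypot(sol.y[2, -1], sol.y[3, -1]))
\end{verbatim}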
The classification result is also applicable to the system \eqref{eq:limitODE}.
Let us quickly review the classification result in \cite{MSU} by taking \eqref{eq:NLS1} as an example.
The key ingredient is the introduction of a matrix representation of a system:
A system \eqref{eq:NLS1} is identified with a matrix
\begin{equation}\label{eq:defA}
A = \begin{pmatrix}
\lambda_2 & -3\lambda_1 + \lambda_6& -3\lambda_5 \\
\lambda_3 & -\lambda_2 + \lambda_7 & -\lambda_6 \\
3\lambda_4 & 3\lambda_8 - \lambda_3 &-\lambda_7
\end{pmatrix}.
\end{equation}
It then turns out that the change caused by the linear transformation of unknowns is clearly formulated as a matrix manipulation
and, moreover, the characteristic properties such as conservation laws are well described by the matrix (see Section A.4).
In particular, $\rank A$ is an invariant quantity which
indicates the number of mass-like conserved quantities.
Roughly speaking, the behavior of solution becomes complicated as $\rank A$ increases.
In \cite{MSU}, the authors classify two subsets of systems. One is the set of systems such that $\rank A = 1$ and the other is the set of
systems such that $B=O$, where $B$ is the matrix defined from the coefficients of the system as
\begin{equation}\label{eq:defB}
B:= \begin{pmatrix}
-12 \lambda_5 & 3 (\lambda_1 -\lambda_6 ) & 2(\lambda_2-\lambda_7) \\
3 (\lambda_1 -\lambda_6 ) & 2(\lambda_2-\lambda_7) & 3(\lambda_3 -\lambda_8 ) \\
2(\lambda_2-\lambda_7) & 3(\lambda_3 -\lambda_8 ) & 12 \lambda_4
\end{pmatrix}
\end{equation}
and $O \in M_3(\mathbb{R})$ is the zero matrix.
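These invariants are straightforward to evaluate numerically. As a small illustration (Python assumed), the coefficients of \eqref{E:sysnew1} with, e.g., $\lambda_1=1$ and $\lambda_6=-1$, so that $(\lambda_6-\lambda_1)(\lambda_6-3\lambda_1)>0$, give $\rank A=2$ and $B\neq O$:
\begin{verbatim}
import numpy as np

# Coefficients of the first system: only lambda_1, lambda_6 are non-zero.
l1, l2, l3, l4 = 1.0, 0.0, 0.0, 0.0
l5, l6, l7, l8 = 0.0, -1.0, 0.0, 0.0

A = np.array([[l2, -3 * l1 + l6, -3 * l5],
              [l3, -l2 + l7, -l6],
              [3 * l4, 3 * l8 - l3, -l7]])
B = np.array([[-12 * l5, 3 * (l1 - l6), 2 * (l2 - l7)],
              [3 * (l1 - l6), 2 * (l2 - l7), 3 * (l3 - l8)],
              [2 * (l2 - l7), 3 * (l3 - l8), 12 * l4]])
print(np.linalg.matrix_rank(A), not np.allclose(B, 0))  # prints: 2 True
\end{verbatim}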
The quotient set of the first subset contains 9 equivalence classes,
and that of the second contains 5 equivalence classes.
In Appendix A, we review the classification result in more detail and extend it to the set of systems of the form \eqref{eq:NLS}.
Note that a system of the form \eqref{eq:NLS} is identified with a vector $(c_j)_{1 \leqslant j \leqslant 12} \in \mathbb{R}^{12}$.
Thus, it cannot directly be identified with a $3\times3$ matrix as in \eqref{eq:defA}.
Hence, we introduce a new way to represent a system. This is an extension of the matrix representation of \eqref{eq:NLS1}.
This enables us to formulate the equivalence relation in a clear way and describe the validity of conservation laws for the generalized system \eqref{eq:NLS}.
As an application, a global existence result for \eqref{eq:NLS} is shown in Proposition \ref{P:massconservation}.
Note that the
local well-posedness of the Cauchy problem of the system \eqref{eq:NLS} is obtained by a standard theory in several function spaces such as the Lebesgue space $ L^2(\mathbb{R}) $, the Sobolev space $ H^1(\mathbb{R}) $ and so on, see \cite{Caz} for instance.
Let us now discuss systems \eqref{E:sysnew1} and \eqref{E:d21} in view of the classification result.
As for the systems \eqref{E:sys} and \eqref{eq:NLS1}, the behavior of solutions was previously studied in the case that
$\rank A \leqslant 1$ or $B=O$.
The conditions are connected to the existence of conserved quantities for these systems and also for the corresponding limit ODE system \eqref{eq:limitODE}.
Notice that the system \eqref{E:d21} satisfies $\rank A=1$.
The end-point case $\lambda_6=3\lambda_1$ of \eqref{E:sysnew1} also corresponds to the case $\rank A = 1$.
In the other end-point case $\lambda_6=\lambda_1$, one has $B=O$.
We here remark that these cases were previously studied for the corresponding Klein-Gordon system \eqref{E:d21} in \cite{Su,MSU}.
When
$(\lambda_6-\lambda_1)(\lambda_6-3\lambda_1) > 0$, we have $\rank A = 2 $ and $B \neq O$.
As far as the authors know, this case is new.
It is revealed that the asymptotic profile for the second component involves two parts which oscillate in different ways.
This is the main result of this paper.
\subsection{Main results}
To state the main results,
we define the weighted $L^{2}$ space $ H^{0,1}(\mathbb{R}) $ by
\begin{equation}\notag
H^{0,1}(\mathbb{R}) = \left\{ f \in L^2(\mathbb{R}) \mid \|f\|_{H^{0,1}} = \|\langle x \rangle f\|_{L^2} < \infty\right\},
\end{equation}
where $ \langle x \rangle = \sqrt{1+|x|^2}$.
Let us first consider the case $ (\lambda_6-\lambda_1)(\lambda_6-3\lambda_1) > 0$ of \eqref{E:sysnew1}.
In this case, one has $\rank A = 2$ and $B \neq O$, where $A$ and $B$ are defined in \eqref{eq:defA} and \eqref{eq:defB}, respectively.
We have
the following result on asymptotic behavior of solution to (\ref{E:sysnew1}).
\begin{theorem}\label{T:main_add}
Suppose $ (\lambda_6-\lambda_1)(\lambda_6-3\lambda_1) > 0$.
Let $0<\gamma<\delta<1/100$. Then there exists $\varepsilon_0>0$ such
that for any $u_{j,0}\in H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R})$ satisfying
$\varepsilon:=\sum_{j=1}^2(\|u_{j,0}\|_{H^{1}}+\|u_{j,0}\|_{H^{0,1}})\leqslant\varepsilon_0$,
there exists a unique global solution $u_j\in C(\mathbb{R},H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R}))$
of (\ref{E:sysnew1})
satisfying
\begin{align*}
\|u_1(t)\|_{H_x^{0,1}}&\lesssim \varepsilon\langle t\rangle^{\gamma},\quad
\|u_1(t)\|_{L_x^{\infty}}\lesssim \varepsilon\langle t\rangle^{-\frac12},\\
\|u_2(t)\|_{H_x^{0,1}}&\lesssim \varepsilon\langle t\rangle^{\delta},\quad
\|u_2(t)\|_{L_x^{\infty}}\lesssim \varepsilon\langle t\rangle^{-\frac12}
\end{align*}
for any $t\in\mathbb{R}$.
Furthermore, there exist two functions $W_1, W_2 \in L^{\infty}$ such that
\begin{align}
u_1(t)
&=t^{-\frac12}W_1\left(\frac{x}{t}\right)e^{\frac{ix^2}{2t}
-i3\lambda_1\left|W_1\left(\frac{x}{t}\right)\right|^2\log t-i\frac{\pi}{4}}
+O(t^{-\frac34+\gamma}),\label{iasym1}\\
u_2(t) &= t^{-\frac12}{\bf 1}_{\{W_1 \neq 0\}}\left(\frac{x}{t}\right)\bigg[\lambda_6\left(\frac{W_1^2}{|W_{1}|^{2}}W_2\right)\left( \frac{x}{t} \right)
e^{i(-3\lambda_1-\lambda_c)|W_1(\frac{x}{t})|^2\log t} \label{iasym2}\\
&\qquad \qquad \qquad \qquad + (3\lambda_{1}-2\lambda_{6}+\lambda_c)
\overline{W_{2}}\left( \frac{x}{t} \right)
e^{i(-3\lambda_1+\lambda_c)|W_1(\frac{x}{t})|^2\log t}\bigg]e^{\frac{ix^2}{2t}-i\frac{\pi}{4}}
\nonumber\\
&\quad+ O(t^{-\frac34+\delta}),\nonumber
\end{align}
in $L^{\infty}(\mathbb{R})$ as $t\to\infty$, where $\lambda_c = \sqrt{3(\lambda_6-\lambda_1)(\lambda_6-3\lambda_1)}$.
Similar asymptotic formulas for $u_j$ hold
for $t\to-\infty$.
\end{theorem}
\begin{remark}
If we assume the opposite inequality $ (\lambda_6-\lambda_1)(\lambda_6-3\lambda_1) < 0 $,
we formally obtain the same asymptotic profile of the solution.
However, since $\lambda_c$ is an imaginary number in this case,
we see that one part of the asymptotic profile grows
and the other part decays, which is problematic in obtaining rigorous asymptotics.
\end{remark}
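As a concrete illustration of the hypothesis of Theorem \ref{T:main_add}, note that $(\lambda_6-\lambda_1)(\lambda_6-3\lambda_1)>0$ holds precisely when $\lambda_6$ lies outside the closed interval with endpoints $\lambda_1$ and $3\lambda_1$; for instance,
\begin{equation}\notag
\lambda_1=1,\quad \lambda_6=4:\qquad \lambda_c=\sqrt{3(4-1)(4-3)}=3.
\end{equation}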
Let us move to the limiting case, $ \lambda_6 = \lambda_1$ or $ \lambda_6 = 3\lambda_1 $, of \eqref{E:sysnew1}.
Remark that $\rank A =1$ if $\lambda_6 = 3 \lambda_1$ and $B=O$ if $\lambda_6=\lambda_1$.
In this case, we have the following result.
\begin{theorem}\label{T:main4}
Suppose $\lambda_6=3\lambda_1 \neq 0$ or $\lambda_6=\lambda_1 \neq 0$.
Let $0<\gamma<\delta<1/100$. Then there exists $\varepsilon_0>0$ such
that for any $u_{j,0}\in H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R})$ satisfying
$\varepsilon:=\sum_{j=1}^2(\|u_{j,0}\|_{H^{1}}+\|u_{j,0}\|_{H^{0,1}})\leqslant\varepsilon_0$,
there exists a unique global solution $u_j\in C(\mathbb{R},H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R}))$
of (\ref{E:sysnew1})
satisfying
\begin{align*}
\|u_1(t)\|_{H_x^{0,1}}&\lesssim \varepsilon\langle t\rangle^{\gamma},\quad
\|u_1(t)\|_{L_x^{\infty}}\lesssim \varepsilon\langle t\rangle^{-\frac12},\\
\|u_2(t)\|_{H_x^{0,1}}&\lesssim \varepsilon\langle t\rangle^{\delta},\quad
\|u_2(t)\|_{L_x^{\infty}}\lesssim \varepsilon\langle t\rangle^{-\frac12}\log\langle t\rangle
\end{align*}
for any $t\in\mathbb{R}$.
Furthermore, there exist two functions $W_1, W_2\in L^{\infty}$ such that
\begin{align}
u_1(t)
&=t^{-\frac12}W_1\left(\frac{x}{t}\right)e^{\frac{ix^2}{2t}
-i3\lambda_1\left|W_1\left(\frac{x}{t}\right)\right|^2\log t-i\frac{\pi}{4}}
+O(t^{-\frac34+\gamma}),\label{asym1}\\
u_2(t)&=
t^{-\frac12}\left\{W\left(\frac{x}{t}\right)\log t+W_2\left(\frac{x}{t}\right)\right\}
e^{\frac{ix^2}{2t}-i3\lambda_1\left|W_1\left(\frac{x}{t}\right)\right|^2\log t-i\frac{\pi}{4}} +O(t^{-\frac34+\delta})
\label{asym2}
\end{align}
in $L^{\infty}(\mathbb{R})$ as $t\to\infty$, where $W$ is given by
\begin{align*}
W(y)=
\left\{
\begin{aligned}
&-6i\lambda_1W_1(y)\Re\left[W_1(y)\overline{W_2(y)}\right] &\ \text{if}\ \lambda_6=3\lambda_1,\\
&2\lambda_1W_1(y)\Im\left[W_1(y)\overline{W_2(y)}\right] &\ \text{if}\ \lambda_6=\lambda_1.
\end{aligned}
\right.
\end{align*}
Similar asymptotic formulas for $u_j$ hold
for $t\to-\infty$.
\end{theorem}
\begin{remark}
An intuitive summary of Theorems \ref{T:main_add} and \ref{T:main4} is as follows.
In the non-limiting case $ (\lambda_6-\lambda_1)(\lambda_6-3\lambda_1) > 0$, we have $\lambda_c>0$ and hence the asymptotic profile of the second component $u_2$ has two parts which have different phase modifications, i.e., which oscillate in different ways.
However, in the limiting case $ (\lambda_6-\lambda_1)(\lambda_6-3\lambda_1) = 0$, we have $\lambda_c=0$ and the oscillations of the two parts become the same.
This coincidence, together with the fact that the coefficients of the two parts become the same when
$\lambda_6=\lambda_1$ and have the same modulus with opposite signs when $\lambda_6=3\lambda_1$, causes a logarithmic amplitude correction.
\end{remark}
\begin{remark}
In the same way as in the proof of Theorem \ref{T:main4},
it is not hard to see that
\begin{eqnarray*}
\|u_2(t)\|_{L_x^2}\lesssim \varepsilon\log t
\end{eqnarray*}
for any $t\ge2$. Furthermore, we see that $W_{j}\in L^{2}\cap L^{{\infty}}$
and
\begin{align*}
u_2(t)=
t^{-\frac12}\left\{W\left(\frac{x}{t}\right)\log t+W_2\left(\frac{x}{t}\right)\right\}
e^{\frac{ix^2}{2t}-i3\lambda_1\left|W_1\left(\frac{x}{t}\right)\right|^2\log t-i\frac{\pi}{4}} +O(t^{-\frac14+\delta})
\end{align*}
in $L^2(\mathbb{R})$ as $t\to\infty$, where $W$ is given in
Theorem \ref{T:main4}. From this asymptotic formula, we have
the lower bound of $L^{2}$ norm of $u_{2}$:
\begin{eqnarray*}
\|u_2(t)\|_{L_{x}^2}\geqslant\|W\|_{L_{x}^{2}}\log t-C\varepsilon
\end{eqnarray*}
for $t\ge2$.
\end{remark}
We turn to \eqref{E:d21}.
The asymptotic behavior of solutions to the corresponding
system of nonlinear Klein-Gordon equations is studied in Sunagawa~\cite{Su}.
As for \eqref{E:d21},
only the logarithmic growth of the $ L^2 $-norm
of the solution was previously known (see \cite{KSa}). Here, we obtain the explicit asymptotic formula of the solution.
\begin{theorem}\label{T:main51}
Let $0<\gamma<1/100$.
Then there exists $\varepsilon_0>0$ such
that for any $u_{j,0}\in H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R})$ satisfying
$\varepsilon:=\sum_{j=1}^2(\|u_{j,0}\|_{H^{1}}+\|u_{j,0}\|_{H^{0,1}})\leqslant\varepsilon_0$,
there exists a unique global solution $u_j\in C(\mathbb{R},H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R}))$
of (\ref{E:d21}) satisfying
\begin{align*}
\|u_1(t)\|_{H_x^{0,1}}&\lesssim \varepsilon,\quad
\|u_1(t)\|_{L_x^{\infty}}\lesssim \varepsilon\langle t\rangle^{-\frac12},\\
\|u_2(t)\|_{H_x^{0,1}}&\lesssim \varepsilon\langle t\rangle^{\gamma},\quad
\|u_2(t)\|_{L_x^{\infty}}\lesssim \varepsilon\langle t\rangle^{-\frac12}\log\langle t\rangle
\end{align*}
for any $t\in\mathbb{R}$.
Furthermore, let $W_{1}:=\hat{u}_{1,0}$. Then there exists a function $W_2\in L^{\infty}$ such that
\begin{align}
u_2(t)=-i
t^{-\frac12}\left\{\left|W_1\left(\frac{x}{t}\right)\right|^2W_1\left(\frac{x}{t}\right)
\log t+W_2\left(\frac{x}{t}\right)\right\}
e^{\frac{ix^2}{2t}-i\frac{\pi}{4}}+O(t^{-\frac34+\gamma})\label{asym3}
\end{align}
in $L^{\infty}(\mathbb{R})$ as $t\to\infty$.
A similar asymptotic formula for $u_2$ holds for $t\to-\infty$.
\end{theorem}
\begin{remark}
Our classification argument suggests that the system \eqref{E:d21}
can be regarded as a limiting case of \eqref{E:sysnew1}.
One sees that the behavior of solutions is similar in these two cases. In particular,
it is common that the second component has a \emph{logarithmic amplitude correction}.
The difference is as follows:
The behavior of solutions to \eqref{E:sysnew1} involves a logarithmic phase correction term.
Further, the logarithmic amplitude correction depends not only on $W_1$ but also on $W_2$.
This reflects the difference of the mechanism of appearance of logarithmic amplitude correction,
which is, at least formally, easily verified by the analysis of the corresponding limit ODE systems.
\end{remark}
\begin{remark}
Remark that the above theorems do not follow from the argument of Katayama-Sakoda~\cite{KS} since
\eqref{E:sysnew1} and \eqref{E:d21} do not satisfy their assumption.
\end{remark}
The rest of the paper is organized as follows.
In Section 2, we first prove our main result (Theorem \ref{T:main_add}).
Then, we turn to the proofs of Theorems \ref{T:main4} and \ref{T:main51} in Section 3.
Appendix A is devoted to the classification result of \eqref{eq:NLS}.
A global well-posedness result for \eqref{eq:NLS} is given as an application in Proposition \ref{P:massconservation}.
Finally, we exhibit an interesting example of system in Appendix B.
\section{Proof of Theorem \ref{T:main_add}.}
In this section, we prove Theorem \ref{T:main_add}.
Let us recall that $(u_{1},u_{2})$ satisfies
\begin{equation}\label{E:sysnewa11}
\left\{
\begin{aligned}
&i\partial_t u_1 + \frac12\partial_x^2 u_1
= 3\lambda_1 |u_1|^2u_1, &&t\in\mathbb{R},\ x\in\mathbb{R},\\
&i\partial_t u_2 + \frac12\partial_x^2 u_2
= \lambda_6 (2|u_1|^2u_2+u_1^2\overline{u_2}),&&t\in\mathbb{R},\ x\in\mathbb{R},\\
&u_1(0,x)=u_{1,0}(x),\qquad u_2(0,x)=u_{2,0}(x),
&&x \in \mathbb{R},
\end{aligned}
\right.
\end{equation}
where $\lambda_{1}$ and $\lambda_{6}$ satisfy $(\lambda_{6}-\lambda_{1})(\lambda_{6}-3\lambda_{1})>0$.
To analyze the solution to (\ref{E:sysnewa11}), we introduce several linear operators.
Let $\{U(t)\}_{t\in\mathbb{R}}$ be the unitary group generated by $i\partial_x^2/2$, i.e.,
\begin{align*}
U(t):={{\mathcal F}}^{-1}e^{-\frac{it\xi^2}{2}}{{\mathcal F}},
\end{align*}
where ${{\mathcal F}}$ and ${{\mathcal F}}^{-1}$ are usual Fourier transform
and its inverse transform.
We define the multiplication operator $M(t)$ and
the dilation operator $D(t)$ by
\begin{align*}
(M(t)f)(x)=e^{\frac{ix^2}{2t}}f(x),
\quad (D(t)f)(x)=t^{-\frac12}f\left(\frac{x}{t}\right)e^{-\frac{i\pi}{4}},
\quad t\in\mathbb{R}\backslash\{0\}.
\end{align*}
Then we have the well-known Dollard decomposition
for the free Schr\"odinger group:
\begin{align}
U(t)=M(t)D(t){{\mathcal F}}M(t).\label{Do}
\end{align}
Let $w_j:={{\mathcal F}}U(-t)u_j$, $j=1,2$.
Then, applying ${{\mathcal F}}U(-t)$ to (\ref{E:sysnewa11}), we obtain
\begin{align}
i\partial_tw_1=3\lambda_1{{\mathcal F}}U(-t)|U(t){{\mathcal F}}^{-1}w_1|^2U(t){{\mathcal F}}^{-1}w_1.
\label{a11}
\end{align}
By using the Dollard decomposition (\ref{Do}), we easily see that
\begin{align}
{{\mathcal F}}U(-t)&=U(1/t)D^{-1}(t)M^{-1}(t),\label{1.3}\\
U(t){{\mathcal F}}^{-1}&=M(t)D(t)U(-1/t).\label{1.4}
\end{align}
We summarize several estimates for the operator $U(\pm1/t)$.
\begin{lemma}\label{Lem1}
(i) There exists a positive constant $C$ such that
for any $0<\alpha<1/4$ and $\varphi\in H_{\xi}^1(\mathbb{R})$, we have
\begin{align}
\|U(\pm1/t)\varphi-\varphi\|_{L_{\xi}^{\infty}}\lesssim t^{-\alpha}\|\varphi\|_{H_{\xi}^1}.
\label{c1}
\end{align}
(ii) There exists a positive constant $C$ such that
for any $\varphi\in H_{\xi}^1(\mathbb{R})$, we have
\begin{align}
\|U(\pm1/t)\varphi\|_{H_{\xi}^1}\lesssim\|\varphi\|_{H_{\xi}^1}.
\label{c3}
\end{align}
\end{lemma}
\noindent
{\bf Proof of Lemma \ref{Lem1}.} The proof easily follows from the explicit representation (\ref{Do})
of the unitary group $U(t)$. $\qed$
\vskip4mm
By (\ref{1.3}) and (\ref{1.4}), eq. (\ref{a11}) can be rewritten as
\begin{align}
i\partial_tw_1
&=
3\lambda_1 U(1/t)D^{-1}(t)M^{-1}(t)
|M(t)D(t)U(-1/t)w_1|^2M(t)D(t)U(-1/t)w_1\label{a31}\\
&=3\lambda_1 t^{-1}U(1/t)|U(-1/t)w_1|^2U(-1/t)w_1.\nonumber
\end{align}
In a similar way, we have
\begin{align}
i\partial_tw_2=
\lambda_6 t^{-1}U(1/t)
\left\{2|U(-1/t)w_1|^2U(-1/t)w_2
+(U(-1/t)w_1)^2\overline{U(-1/t)w_2}\right\}.\label{a41}
\end{align}
We first obtain short time bounds of $w_j$.
\begin{lemma}[Short time bounds]\label{Lem:Short1}
There exists $\varepsilon_0>0$ such
that for any $u_{j,0}\in H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R})$ satisfying
$\varepsilon:=\sum_{j=1}^2(\|u_{j,0}\|_{H^{1}}+\|u_{j,0}\|_{H^{0,1}})\leqslant\varepsilon_0$,
there exists a unique solution $u_j\in C([0,1],H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R}))$
of (\ref{E:sysnewa11}) satisfying
\begin{align*}
\sup_{t\in[0,1]}\sum_{j=1}^{2}\|w_j(t)\|_{H_{\xi}^1}\lesssim\varepsilon.
\end{align*}
\end{lemma}
\noindent
{\bf Proof of Lemma \ref{Lem:Short1}.} The proof follows from
a standard well-posedness theory, see \cite{Caz} for instance.
Hence we omit the proof. $\qed$
\vskip2mm
Next we derive long time bounds of $w_j$.
We fix $0<\gamma<\delta<1/100$ and introduce
\begin{align*}
\|(w_1,w_2)\|_{X_T}&:=
\sup_{t\in[1,T]}
\left\{\|w_1(t)\|_{L_{\xi}^{\infty}}+\langle t\rangle^{-\gamma}\|w_1(t)\|_{H_{\xi}^1}\right.\\
& \qquad\quad+
\left.(\log\langle t\rangle)^{-1}\|w_2(t)\|_{L_{\xi}^{\infty}}
+\langle t\rangle^{-\delta}\|w_2(t)\|_{H_{\xi}^1}\right\}.
\end{align*}
\begin{lemma}[Long time bounds]\label{Lem:Long1}
There exists $\varepsilon_0>0$ such
that for any $u_{j,0}\in H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R})$ satisfying
$\varepsilon:=\sum_{j=1}^2(\|u_{j,0}\|_{H^{1}}+\|u_{j,0}\|_{H^{0,1}})\leqslant\varepsilon_0$,
there exists a unique global solution $u_j\in C([0,\infty),H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R}))$
of (\ref{E:sysnewa11}) satisfying
\begin{align}
\|(w_1,w_2)\|_{X_{\infty}}\lesssim\varepsilon.\label{longbound1}
\end{align}
\end{lemma}
\begin{remark}\label{Rem:Long} If $(w_1,w_2)$ satisfies (\ref{longbound1}), then
we obtain $L^{\infty}$ decay estimates for the solution $u_{1}$
of (\ref{E:sysnewa11}). Indeed, by Lemma \ref{Lem1} and (\ref{longbound1}),
we see
\begin{align*}
\|u_1(t)\|_{L_x^{\infty}}
&=\|U(t){{\mathcal F}}^{-1}w_1(t)\|_{L_x^{\infty}}\\
&=\|M(t)D(t)U(-1/t)w_1(t)\|_{L_{\xi}^{\infty}}\\
&\lesssim t^{-\frac12}\|U(-1/t)w_1(t)\|_{L_{\xi}^{\infty}}\\
&\lesssim t^{-\frac12}(\|w_1(t)\|_{L_{\xi}^{\infty}}+t^{-\alpha}\|w_1(t)\|_{H_{\xi}^1})\\
&\lesssim \varepsilon(t^{-\frac12}+t^{-\frac12-\alpha+\gamma}),
\end{align*}
for any $t\ge1$, where $0<\alpha<1/4$. Hence choosing $\alpha$ so that
$0<\gamma<\alpha$, we obtain
\begin{align*}
\|u_1(t)\|_{L_x^{\infty}}\lesssim\varepsilon t^{-\frac12}.
\end{align*}
In a similar way, we have (non-optimal) decay estimate for $u_{2}$:
\begin{align*}
\|u_2(t)\|_{L_x^{\infty}}\lesssim\varepsilon t^{-\frac12}\log t
\end{align*}
for any $t\ge1$. We will show later that $u_{2}(t)=O(t^{-1/2})$ by analyzing
the large time asymptotics of $u_{2}$.
\end{remark}
\begin{proof}
[{\bf Proof of Lemma \ref{Lem:Long1}.}]
We first evaluate $H^1$ norm of $w_1$. By (\ref{a31}), we have
\begin{align*}
w_1(t)
=w_1(1)-3i\lambda_1\int_1^t
\tau^{-1}U(1/\tau)\left[|U(-1/\tau)w_1|^2U(-1/\tau)w_1\right](\tau)d\tau.
\end{align*}
Then, we see
\begin{align*}
\|w_1(t)\|_{H_{\xi}^1}
\lesssim \|w_1(1)\|_{H_{\xi}^1}+\int_1^t
\tau^{-1}\|U(1/\tau)\left[|U(-1/\tau)w_1|^2U(-1/\tau)w_1\right]\|_{H_{\xi}^1}d\tau.
\end{align*}
By Lemma \ref{Lem1},
\begin{align*}
\lefteqn{\|U(1/\tau)\left[|U(-1/\tau)w_1|^2U(-1/\tau)w_1\right]\|_{H_{\xi}^1}}\\
&\lesssim \||U(-1/\tau)w_1|^2U(-1/\tau)w_1\|_{H_{\xi}^1}\\
&\lesssim \|U(-1/\tau)w_1\|_{L_{\xi}^{\infty}}^2\|U(-1/\tau)w_1\|_{H_{\xi}^1}\\
&\lesssim (\|w_1\|_{L_{\xi}^{\infty}}+\tau^{-\alpha}\|w_1\|_{H_{\xi}^1})^2\|w_1\|_{H_{\xi}^1}\\
&\lesssim (1+\tau^{-\alpha+\gamma})^2\tau^{\gamma}\|(w_1,w_2)\|_{X_T}^3,
\end{align*}
where $0<\alpha<1/4$. Choosing $\alpha$ so that $\gamma<\alpha<1/4$
and using Lemma \ref{Lem:Short1}, we find
\begin{align}
\langle t\rangle^{-\gamma}\|w_1(t)\|_{H_{\xi}^1}
&\lesssim \varepsilon+\langle t\rangle^{-\gamma}\|(w_1,w_2)\|_{X_T}^3
\int_1^t\tau^{-1+\gamma}d\tau\label{n11}\\
&\lesssim
\varepsilon+\frac{1}{\gamma}\|(w_1,w_2)\|_{X_T}^3.\nonumber
\end{align}
Next we evaluate $H^1$ norm of $w_2$. By (\ref{a41}), we have
\begin{align*}
\lefteqn{w_2(t)=w_2(1)}\\
&\quad -i\lambda_6\int_1^t
\tau^{-1}U(1/\tau)\left[2|U(-1/\tau)w_1|^2U(-1/\tau)w_2
+(U(-1/\tau)w_1)^2\overline{U(-1/\tau)w_2}\right](\tau)d\tau.
\end{align*}
Then, we see
\begin{align*}
\lefteqn{\|w_2(t)\|_{H_{\xi}^1}\lesssim\|w_2(1)\|_{H_{\xi}^1}}\\
&\quad +\int_1^t
\tau^{-1}
\left\|U(1/\tau)\left[2|U(-1/\tau)w_1|^2U(-1/\tau)w_2
+(U(-1/\tau)w_1)^2\overline{U(-1/\tau)w_2}\right]\right\|_{H_{\xi}^1}d\tau.
\end{align*}
By Lemma \ref{Lem1},
\begin{align*}
\lefteqn{\left\|U(1/\tau)\left[2|U(-1/\tau)w_1|^2U(-1/\tau)w_2
+(U(-1/\tau)w_1)^2\overline{U(-1/\tau)w_2}\right]\right\|_{H_{\xi}^1}}\\
&\lesssim \||U(-1/\tau)w_1|^2U(-1/\tau)w_2\|_{H_{\xi}^1}\\
&\lesssim \|U(-1/\tau)w_1\|_{L_{\xi}^{\infty}}\|U(-1/\tau)w_1\|_{H_{\xi}^1}
\|U(-1/\tau)w_2\|_{L_{\xi}^{\infty}}\\
&\quad +\|U(-1/\tau)w_1\|_{L_{\xi}^{\infty}}^2\|U(-1/\tau)w_2\|_{H_{\xi}^1}\\
&\lesssim (\|w_1\|_{L_{\xi}^{\infty}}+\tau^{-\alpha}\|w_1\|_{H_{\xi}^1})\|w_1\|_{H_{\xi}^1}
(\|w_2\|_{L_{\xi}^{\infty}}+\tau^{-\alpha}\|w_2\|_{H_{\xi}^1})\\
&\quad +(\|w_1\|_{L_{\xi}^{\infty}}+\tau^{-\alpha}\|w_1\|_{H_{\xi}^1})^2\|w_2\|_{H_{\xi}^1}\\
&\lesssim
(1+\tau^{-\alpha+\gamma})\tau^{\gamma}(\log \tau+\tau^{-\alpha+\delta})\|(w_1,w_2)\|_{X_T}^3\\
&\quad+(1+\tau^{-\alpha+\gamma})^{2}\tau^{\delta}\|(w_1,w_2)\|_{X_T}^3,
\end{align*}
where $0<\alpha<1/4$. Choosing $\alpha$ so that $\delta<\alpha<1/4$
and using Lemma \ref{Lem:Short1}, we find
\begin{align}
\langle t\rangle^{-\delta}\|w_2(t)\|_{H_{\xi}^1}
&\lesssim \varepsilon+\langle t\rangle^{-\delta}\|(w_1,w_2)\|_{X_T}^3
\int_1^t\tau^{-1+\delta}d\tau\label{n21}\\
&\lesssim
\varepsilon+\frac{1}{\delta}\|(w_1,w_2)\|_{X_T}^3.\nonumber
\end{align}
Next we derive $L^{\infty}$ estimates for $w_j$.
From the viewpoint of the asymptotic formulas for $U(\pm1/t)$
(Lemma \ref{Lem1}), we decompose the nonlinear term as follows:
\begin{align}
i\partial_tw_1&=3\lambda_1 t^{-1}|w_1|^2w_1+R_1,\label{3.91}\\
i\partial_tw_2&=\lambda_6 t^{-1}(2|w_1|^2w_2+w_1^2\overline{w_2})+R_2,\label{3.101}
\end{align}
where $R_1$ and $R_2$ are given by
\begin{align*}
R_1&=3\lambda_1 t^{-1}\big[U(1/t)|U(-1/t)w_1|^2U(-1/t)w_1-|w_1|^2w_1\big],\\%\label{b1}\\
R_2&=\lambda_6 t^{-1}\big[U(1/t)\{2|U(-1/t)w_1|^2U(-1/t)w_2
+(U(-1/t)w_1)^2\overline{U(-1/t)w_2}\}\nonumber\\
&\qquad\ \ -(2|w_1|^2w_2+w_1^2\overline{w_2})\big].
\end{align*}
Since
\begin{align*}
R_1&=3\lambda_1 t^{-1}\big[|U(-1/t)w_1|^2U(-1/t)w_1-|w_1|^2w_1\big]\\
&\quad +3\lambda_1 t^{-1}(U(1/t)-1)|U(-1/t)w_1|^2U(-1/t)w_1,
\end{align*}
by Lemma \ref{Lem1}, we have
\begin{align}
\|R_1\|_{L_{\xi}^{\infty}}&\lesssim
t^{-1}(\|U(-1/t)w_1\|_{L_{\xi}^{\infty}}+\|w_1\|_{L_{\xi}^{\infty}})^2\|U(-1/t)w_1-w_1\|_{L_{\xi}^{\infty}}
\label{3.111}\\
&\quad +t^{-1-\alpha}\||U(-1/t)w_1|^2U(-1/t)w_1\|_{H_{\xi}^1}
\nonumber\\
&\lesssim
t^{-1-\alpha}(\|w_1\|_{L_{\xi}^{\infty}}+t^{-\alpha}\|w_1\|_{H_{\xi}^1})^2\|w_1\|_{H_{\xi}^1}
\nonumber\\
&\quad +t^{-1-\alpha}\|U(-1/t)w_1\|_{L_{\xi}^{\infty}}^2\|U(-1/t)w_1\|_{H_{\xi}^1}
\nonumber\\
&\lesssim
t^{-1-\alpha}(\|w_1\|_{L_{\xi}^{\infty}}+t^{-\alpha}\|w_1\|_{H_{\xi}^1})^2\|w_1\|_{H_{\xi}^1}
\nonumber\\
&\lesssim
t^{-1-\alpha}(1+t^{-\alpha+\gamma})^2t^{\gamma}\|(w_1,w_2)\|_{X_T}^3\nonumber\\
&\lesssim
t^{-1-\alpha+\gamma}\|(w_1,w_2)\|_{X_T}^3.\nonumber
\end{align}
Since
\begin{align*}
R_2&=\lambda_6 t^{-1}\big[\{2|U(-1/t)w_1|^2U(-1/t)w_2
+(U(-1/t)w_1)^2\overline{U(-1/t)w_2}\}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
-(2|w_1|^2w_2+w_1^2\overline{w_2})\big]\\
&\quad +\lambda_6 t^{-1}(U(1/t)-1)\{2|U(-1/t)w_1|^2U(-1/t)w_2
+(U(-1/t)w_1)^2\overline{U(-1/t)w_2}\},
\end{align*}
by Lemma \ref{Lem1}, we have
\begin{align}
\lefteqn{\|R_2\|_{L_{\xi}^{\infty}}}\label{3.121}\\
&\lesssim
t^{-1}(\|U(-1/t)w_1\|_{L_{\xi}^{\infty}}+\|w_1\|_{L_{\xi}^{\infty}})\|U(-1/t)w_1-w_1\|_{L_{\xi}^{\infty}}
\|U(-1/t)w_2\|_{L_{\xi}^{\infty}}
\nonumber\\
&\quad +t^{-1}\|w_1\|_{L_{\xi}^{\infty}}^2\|U(-1/t)w_2-w_2\|_{L_{\xi}^{\infty}}
\nonumber\\
&\quad +t^{-1-\alpha}\||U(-1/t)w_1|^2U(-1/t)w_2\|_{H_{\xi}^1}
\nonumber\\
&\lesssim
t^{-1-\alpha}(\|w_1\|_{L_{\xi}^{\infty}}+t^{-\alpha}\|w_1\|_{H_{\xi}^1})\|w_1\|_{H_{\xi}^1}
(\|w_2\|_{L_{\xi}^{\infty}}+t^{-\alpha}\|w_2\|_{H_{\xi}^1})
\nonumber\\
&\quad +t^{-1-\alpha}\|w_1\|_{L_{\xi}^{\infty}}^2\|w_2\|_{H_{\xi}^1}
\nonumber\\
&\quad +t^{-1-\alpha}\|U(-1/t)w_1\|_{L_{\xi}^{\infty}}^2\|U(-1/t)w_2\|_{H_{\xi}^1}
\nonumber\\
&\lesssim
t^{-1-\alpha}(\|w_1\|_{L_{\xi}^{\infty}}+t^{-\alpha}\|w_1\|_{H_{\xi}^1})\|w_1\|_{H_{\xi}^1}
(\|w_2\|_{L_{\xi}^{\infty}}+t^{-\alpha}\|w_2\|_{H_{\xi}^1})
\nonumber\\
&\quad +t^{-1-\alpha}(\|w_1\|_{L_{\xi}^{\infty}}+t^{-\alpha}\|w_1\|_{H_{\xi}^1})^2\|w_2\|_{H_{\xi}^1}
\nonumber\\
&\lesssim
t^{-1-\alpha+\gamma}(1+t^{-\alpha+\gamma})(\log t+t^{-\alpha+\delta})\|(w_1,w_2)\|_{X_T}^3
\nonumber\\
&\quad+t^{-1-\alpha+\delta}(1+t^{-\alpha+\gamma})^{2}\|(w_1,w_2)\|_{X_T}^3
\nonumber\\
&\lesssim t^{-1-\alpha+\delta}\|(w_1,w_2)\|_{X_T}^3.\nonumber
\end{align}
By (\ref{3.91}),
\begin{align*}
\partial_t|w_1|^2=2\Im(R_1\overline{w}_1)\lesssim\|R_1(t)\|_{L_{\xi}^{\infty}}|w_1|.
\end{align*}
Hence (\ref{3.111}) yields
\begin{align}
\partial_t|w_1|\lesssim\|R_1(t)\|_{L_{\xi}^{\infty}}
\lesssim t^{-1-\alpha+\gamma}\|(w_1,w_2)\|_{X_T}^3.
\label{3.1311}
\end{align}
Therefore
\begin{align}
|w_1(t,\xi)|\lesssim\varepsilon+\|(w_1,w_2)\|_{X_T}^3.
\label{3.13110}
\end{align}
By (\ref{3.91}) and (\ref{3.101}),
\begin{align}
i\partial_{t}w_{1}\overline{w_{2}}
&=3\lambda_{1}t^{-1}|w_{1}|^{2}w_{1}\overline{w_{2}}+R_{1}\overline{w_{2}},\label{d11}\\
i\overline{w_{1}}\partial_{t}w_{2}
&=\lambda_{6}t^{-1}(2|w_{1}|^{2}\overline{w_{1}}w_{2}+|w_{1}|^{2}w_{1}\overline{w_{2}})
+R_{2}\overline{w_{1}}.\label{d21}
\end{align}
From (\ref{d11}) and (\ref{d21}), we find
\begin{align}
\lefteqn{\partial_{t}\left[\{w_{1}\overline{w_{2}}+\tilde{\lambda}_{\pm}\overline{w_{1}}w_{2}\}
e^{\pm i\lambda_c|w_{1}|^{2}\log t}\right]}\\
&=\left(-iR_{1}\overline{w_{2}}+i\overline{R_{2}}w_{1}\pm 2i\lambda_{c}\log tw_{1}\overline{w_{2}}
\Im(R_{1}\overline{w_{1}})\right)e^{\pm i\lambda_c|w_{1}|^{2}\log t}\nonumber\\
& \qquad +\tilde{\lambda}_{\pm}\left(i\overline{R_{1}}w_{2}-iR_{2}\overline{w_{1}}
\pm 2i\lambda_{c}\log t\overline{w_{1}}w_{2}
\Im(R_{1}\overline{w_{1}})\right)e^{\pm i\lambda_c|w_{1}|^{2}\log t}\nonumber\\
&\lesssim|R_{1}|(1+|w_{1}|^{2}\log t)|w_{2}|+|R_{2}||w_{1}|,\nonumber
\end{align}
where
\begin{eqnarray}
\tilde{\lambda}_{\pm}=\frac{-3\lambda_{1}+2\lambda_{6}\pm\lambda_c}{\lambda_{6}},
\quad\lambda_c=\sqrt{3(\lambda_{6}-\lambda_{1})(\lambda_{6}-3\lambda_{1})}.\label{lam}
\end{eqnarray}
Hence
\begin{eqnarray*}
|w_1\overline{w_2}|\lesssim\varepsilon^{2}+t^{-\alpha+\delta}
(\|(w_1,w_2)\|_{X_T}^4+\|(w_1,w_2)\|_{X_T}^6).
\end{eqnarray*}
Therefore (\ref{3.101}) and (\ref{3.121}) imply
\begin{align*}
\partial_t|w_2|
&\lesssim t^{-1}|w_1\overline{w_2}||w_1|+|R_2|\\
&\lesssim
\varepsilon^2t^{-1}\|(w_1,w_2)\|_{X_T}
+t^{-1}\|(w_1,w_2)\|_{X_T}^5+t^{-1}\|(w_1,w_2)\|_{X_T}^7\\
&\quad+t^{-1-\alpha+\delta}\|(w_1,w_2)\|_{X_T}^3.
\end{align*}
Integrating this in $t$, we have
\begin{align}
|w_2(t,\xi)|
&\lesssim \varepsilon+\varepsilon^2\log t\|(w_1,w_2)\|_{X_T}+\log t\|(w_1,w_2)\|_{X_T}^5\label{3.151}\\
&\quad +\log t\|(w_1,w_2)\|_{X_T}^7+\|(w_1,w_2)\|_{X_T}^3.\nonumber
\end{align}
Collecting (\ref{n11}), (\ref{n21}), (\ref{3.13110}) and (\ref{3.151}), we
obtain
\begin{align*}
\|(w_1,w_2)\|_{X_T}
\lesssim \varepsilon+\varepsilon^2\|(w_1,w_2)\|_{X_T}+\|(w_1,w_2)\|_{X_T}^3
+\|(w_1,w_2)\|_{X_T}^7.
\end{align*}
Then the standard continuity argument yields that
if $\varepsilon=\varepsilon(\gamma,\delta)$ is sufficiently small,
then we have
\begin{align*}
\|(w_1,w_2)\|_{X_T}\lesssim\varepsilon
\end{align*}
for any $T\ge1$. This completes the proof of Lemma \ref{Lem:Long1}.
\end{proof}
\begin{remark}
One sees from \eqref{3.91} and \eqref{3.101} that $(w_1,w_2)$ solves \eqref{eq:limitODE} with an error (by regarding $\log t $ as a ``time'' variable).
\end{remark}
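This observation can be checked numerically: the following minimal Python sketch (our illustration; the coefficients, step size and initial data are chosen only for demonstration) integrates the limit ODE system corresponding to \eqref{E:sysnewa11} in the variable $s=\log t$ by a classical Runge--Kutta scheme and recovers the two oscillation frequencies $\pm\lambda_c|\alpha_1(0)|^2$ of $\beta=\alpha_2e^{3i\lambda_1|\alpha_1(0)|^2 s}$, in accordance with Theorem \ref{T:main_add}:
\begin{verbatim}
import numpy as np

lam1, lam6 = 1.0, 4.0                       # (lam6-lam1)(lam6-3lam1) > 0
lam_c = np.sqrt(3*(lam6-lam1)*(lam6-3*lam1))    # = 3 here

def rhs(a):
    a1, a2 = a
    return np.array([-1j*3*lam1*abs(a1)**2*a1,
                     -1j*lam6*(2*abs(a1)**2*a2 + a1**2*np.conj(a2))])

a = np.array([1.0+0.0j, 0.3+0.1j])          # |alpha_1(0)| = 1
ds, n = 0.002, 100000                       # s ranges over [0, 200]
beta = np.empty(n, dtype=complex)
for k in range(n):
    # remove the common phase carried by the first component
    beta[k] = a[1]*np.exp(1j*3*lam1*abs(a[0])**2*k*ds)
    k1 = rhs(a); k2 = rhs(a + 0.5*ds*k1)
    k3 = rhs(a + 0.5*ds*k2); k4 = rhs(a + ds*k3)
    a = a + (ds/6)*(k1 + 2*k2 + 2*k3 + k4)

freq = 2*np.pi*np.fft.fftfreq(n, d=ds)
spec = np.abs(np.fft.fft(beta*np.hanning(n)))
for side in (freq > 0.5, freq < -0.5):      # one spectral peak on each side
    print(freq[side][np.argmax(spec[side])])    # approx +lam_c and -lam_c
\end{verbatim}
The two printed frequencies are close to $\pm\lambda_c=\pm3$, reflecting the two differently oscillating parts in \eqref{iasym2}.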
\vskip2mm
\begin{proof}[{\bf Proof of Theorem \ref{T:main_add}.} ]
The global existence of solution to
(\ref{E:sysnewa11}) and decay estimate for $u_{1}$
follow from Lemma \ref{Lem:Long1} and Remark \ref{Rem:Long}.
We now derive the asymptotic formulas (\ref{iasym1}) and (\ref{iasym2})
for solution $(u_1,u_2)$ to (\ref{E:sysnewa11}) as $t\to\infty$.
As a byproduct of the asymptotic formula (\ref{iasym2}), we obtain
the decay estimate for $u_{2}$.
By (\ref{3.1311}), we see that there exists a real-valued positive function
$\widetilde{W}_1\in L^{\infty}$ such that
\begin{align}
\||w_1|-\widetilde{W}_1\|_{L_{\xi}^{\infty}}\lesssim\varepsilon^3 t^{-\alpha+\gamma},
\label{e11}
\end{align}
where $\gamma<\alpha<1/4$. Substituting this into (\ref{3.91}), we have
\begin{align}
i\partial_tw_1=3\lambda_1 t^{-1}\widetilde{W}_1^2w_1+R_3,\label{e21}
\end{align}
where $R_3$ is given by
\begin{align*}
R_3=3\lambda_1 t^{-1}(|w_1|^2-\widetilde{W}_1^2)w_1+R_1.
\end{align*}
Hence by (\ref{3.111}) and (\ref{e11}),
\begin{align*}
\|R_3(t)\|_{L_{\xi}^{\infty}}\lesssim\varepsilon^3 t^{-1-\alpha+\gamma}.
\end{align*}
Multiplying (\ref{e21}) by $\exp(3i\lambda_1\widetilde{W}_1^2\log t)$, we have
\begin{align*}
i\partial_t(w_1e^{3i\lambda_1\widetilde{W}_1^2\log t})=R_3e^{3i\lambda_1\widetilde{W}_1^2\log t}.
\end{align*}
This implies that there exists $W_1\in L^{\infty}$ such that
\begin{align*}
\|w_1e^{3i\lambda_1\widetilde{W}_1^2\log t}-W_1\|_{L_{\xi}^{\infty}}\lesssim\varepsilon^3 t^{-\alpha+\gamma}.
\end{align*}
We easily see that $|W_1|=\widetilde{W}_1$ and obtain
\begin{align}
\|w_1-W_1e^{-3i\lambda_1|W_1|^2\log t}\|_{L_{\xi}^{\infty}}\lesssim\varepsilon^3 t^{-\alpha+\gamma}.
\label{e61}
\end{align}
To derive the asymptotic behavior of $w_{2}$, we introduce a new unknown function
\begin{equation}\notag
\beta(t,\xi) = w_2(t,\xi)e^{3i\lambda_1|W_1(\xi)|^2\log t}.
\end{equation}
Then by (\ref{3.101}), we see
\begin{equation}\label{eq:beta}
\begin{aligned}
i\partial_t \beta &=
\left\{\frac{\lambda_6}{t}(2|w_1|^2w_2+w_1^2\overline{w_2})
+R_{2}\right\}e^{3i\lambda_1|W_1|^2\log t}
- \frac{3\lambda_1}{t}|W_1|^2w_2e^{3i\lambda_1|W_1|^2\log t}\\
&= \frac{1}{t}\left( (-3\lambda_1+2\lambda_6)|W_1|^2\beta
+\lambda_6W_1^2\overline{\beta} \right) + R_4,
\end{aligned}
\end{equation}
where
\begin{eqnarray*}\notag
R_4 &=& \frac{2\lambda_{6}}{t}(|w_1|^2-|W_1|^2)\beta
+ \frac{\lambda_6}{t}\left\{\left( w_{1}e^{3i\lambda_1|W_1|^2\log t} \right)^2 - W_1^2\right\}\overline{\beta}\\
& &+R_2e^{3i\lambda_1|W_1(\xi)|^2\log t}.
\end{eqnarray*}
It holds that
\begin{equation}\notag
\begin{aligned}
\|R_4\|_{L^\infty}
&\lesssim t^{-1}(\|w_1\|_{L^\infty}+\|W_1\|_{L^\infty})
\left\|w_1-W_1e^{-3i\lambda_1|W_1(\xi)|^2\log t}\right\|_{L^\infty}\|w_2\|_{L^\infty}\\
&\quad+\|R_2\|_{L^\infty}\\
&\lesssim \varepsilon^3 t^{-1-\alpha+\delta}.
\end{aligned}
\end{equation}
From \eqref{eq:beta},
we have
\begin{equation}\label{eq:mbeta}
i\partial_t
\begin{pmatrix}
\beta \\[1mm]
\overline{\beta}
\end{pmatrix}
= \frac{1}{t}\begin{pmatrix}
(-3\lambda_1+2\lambda_6)|W_1|^2 & \lambda_6 W_1^2 \\[1mm]
-\lambda_6\overline{W_1}^2 & (3\lambda_1-2\lambda_6)|W_1|^2
\end{pmatrix}
\begin{pmatrix}
\beta \\[1mm]
\overline{\beta}
\end{pmatrix}
+
\begin{pmatrix}
R_4 \\[1mm]
- \overline{R_4}
\end{pmatrix}.
\end{equation}
Let $N:=\{\xi\in\mathbb{R}\ ;\ W_{1}(\xi)=0\}$. Since $i\partial_{t}\beta=R_{4}$ on $N$,
we easily see that $\beta=O(t^{-\alpha+\gamma})$. Therefore we concentrate
our attention on the case $\xi\in N^{c}$. If $\xi\in N^{c}$, then we see
that the matrix
\begin{equation}\notag
A
=
\begin{pmatrix}
(-3\lambda_1+2\lambda_6)|W_1|^2 & \lambda_6 W_1^2 \\[1mm]
-\lambda_6\overline{W_1}^2 & (3\lambda_1-2\lambda_6)|W_1|^2
\end{pmatrix}
\end{equation}
can be diagonalized by the matrix
\begin{equation}\notag
P=
\begin{pmatrix}
\lambda_{6}\frac{W_{1}^{2}}{|W_{1}|^{2}} & 3\lambda_{1}-2\lambda_{6}+\lambda_c \\[1mm]
3\lambda_{1}-2\lambda_{6}+\lambda_c & \lambda_{6}\frac{\overline{W}_{1}^{2}}{|W_{1}|^{2}}
\end{pmatrix},
\end{equation}
where $\lambda_c$ is given by (\ref{lam}).
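Indeed, a direct computation shows that the eigenvalues of the above matrix $A$ are
\begin{equation}\notag
\mu_{\pm}=\pm|W_1|^2\sqrt{(2\lambda_6-3\lambda_1)^2-\lambda_6^2}
=\pm|W_1|^2\sqrt{3(\lambda_6-\lambda_1)(\lambda_6-3\lambda_1)}
=\pm\lambda_c|W_1|^2,
\end{equation}
which are real and distinct since $(\lambda_{6}-\lambda_{1})(\lambda_{6}-3\lambda_{1})>0$; this explains the diagonal matrix appearing below.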
Diagonalizing the equation \eqref{eq:mbeta} by the matrix $ P $, we have
\begin{equation}\notag
i\partial_t
\begin{pmatrix}
\gamma \\[1mm]
\overline{\gamma}
\end{pmatrix}
=
\frac{1}{t}
\begin{pmatrix}
\lambda_c|W_{1}|^{2} & 0 \\[1mm]
0 & -\lambda_c|W_{1}|^{2}
\end{pmatrix}
\begin{pmatrix}
\gamma \\[1mm]
\overline{\gamma}
\end{pmatrix}
+
P^{-1}
\begin{pmatrix}
R_4 \\[1mm]
- \overline{R_4}
\end{pmatrix},
\end{equation}
where $\gamma$ is given by
\begin{equation}\notag
\begin{pmatrix}
\gamma \\[1mm]
\overline{\gamma}
\end{pmatrix}
= P^{-1}
\begin{pmatrix}
\beta \\[1mm]
\overline{\beta}
\end{pmatrix}.
\end{equation}
For the first component, we have
\begin{equation}\notag
i\partial_t\gamma =
\frac{1}{t}\lambda_c|W_{1}|^{2}\gamma
-\frac{\lambda_c\overline{W}_{1}^{2}R_4
+\tilde{\lambda}_{-}|W_{1}|^{2}\overline{R_4}}{2\lambda_c\tilde{\lambda}|W_{1}|^{2}},
\end{equation}
where $\tilde{\lambda}_{-}$ is given by (\ref{lam}). This implies that
\begin{equation}\notag
i\partial_t\left(
e^{i\lambda_c|W_{1}|^{2} \log t}\gamma
\right) = -e^{i\lambda_c|W_{1}|^{2} \log t}
\frac{\lambda_c\overline{W}_{1}^{2}R_4
+\tilde{\lambda}_{-}|W_{1}|^{2}\overline{R_4}}{2\lambda_c\tilde{\lambda}|W_{1}|^{2}}.
\end{equation}
By using the estimates for $ R_4 $, we see that
there exists a function $ W_2 \in L^\infty$ such that
\begin{equation}\notag
\left\| e^{i\lambda_c|W_{1}|^{2} \log t}\gamma - W_2\right\|_{L^\infty} \lesssim \varepsilon t^{-\alpha+\gamma}.
\end{equation}
Since
\begin{equation}\notag
\begin{pmatrix}
\beta \\[1mm]
\overline{\beta}
\end{pmatrix}
=
P\begin{pmatrix}
\gamma \\[1mm]
\overline{\gamma}
\end{pmatrix}
= \begin{pmatrix}
\lambda_{6}\frac{W_{1}^{2}}{|W_{1}|^{2}}\gamma+(3\lambda_{1}-2\lambda_{6}+\lambda_c)\overline{\gamma}\\[1mm]
(3\lambda_{1}-2\lambda_{6}+\lambda_c)\gamma+\lambda_{6}\frac{\overline{W}_{1}^{2}}{|W_{1}|^{2}}\overline{\gamma}
\end{pmatrix},
\end{equation}
it follows that
\begin{equation}\notag
\begin{aligned}
\beta(t)
=& \lambda_{6}\frac{W_{1}^{2}}{|W_{1}|^{2}}W_2e^{-i\lambda_c|W_{1}|^{2}\log t}
+(3\lambda_{1}-2\lambda_{6}+\lambda_c)\overline{W}_{2}e^{i\lambda_c|W_{1}|^{2}\log t} + O(t^{-\alpha+\gamma}).
\end{aligned}
\end{equation}
Hence we see that
\begin{equation}\notag
\begin{aligned}
w_2(t, \xi)
=& \lambda_{6}\frac{W_{1}^{2}}{|W_{1}|^{2}}W_2e^{i(-3\lambda_{1}-\lambda_c)|W_{1}|^{2}\log t}\\
&+(3\lambda_{1}-2\lambda_{6}+\lambda_c)\overline{W}_{2}e^{i(-3\lambda_{1}+\lambda_c)|W_{1}|^{2}\log t} + O(t^{-\alpha+\gamma}).
\end{aligned}
\end{equation}
By Lemma \ref{Lem1} and the asymptotic formula (\ref{e61}) for $w_1$,
we have
\begin{align}
u_1(t)
&=U(t){{\mathcal F}}^{-1}w_1\label{qq1}\\
&=M(t)D(t)U(-1/t)w_1\nonumber\\
=M(t)D(t)w_1+O(t^{-\frac12-\alpha+\gamma})\nonumber\\
&=t^{-\frac12}W_1\left(\frac{x}{t}\right)
e^{\frac{ix^2}{2t}-3i\lambda_1\left|W_1\left(\frac{x}{t}\right)\right|^2\log t-i\frac{\pi}{4}}
+O(t^{-\frac12-\alpha+\gamma}),\nonumber
\end{align}
in $L_x^{\infty}$ as $t\to\infty$. Hence we have (\ref{iasym1}). In a similar way,
we obtain (\ref{iasym2}).
This completes the proof.
\end{proof}
\section{Proofs of Theorems \ref{T:main4} and \ref{T:main51}.}
In this section, we prove Theorems \ref{T:main4} and \ref{T:main51}.
We give the proof of Theorem \ref{T:main4} only since
the proof of Theorem \ref{T:main51} is similar and simpler.
We first consider the case $\lambda_6=3\lambda_1$, i.e.,
\begin{equation}\label{E:sysnewa1}
\left\{
\begin{aligned}
&i\partial_t u_1 + \frac12\partial_x^2 u_1
= 3\lambda_1 |u_1|^2u_1, &&t\in\mathbb{R},\ x\in\mathbb{R},\\
&i\partial_t u_2 + \frac12\partial_x^2 u_2
= 3\lambda_1 (2|u_1|^2u_2+u_1^2\overline{u_2}),&&t\in\mathbb{R},\ x\in\mathbb{R},\\
&u_1(0,x)=u_{1,0}(x),\qquad u_2(0,x)=u_{2,0}(x),
&&x \in \mathbb{R},
\end{aligned}
\right.
\end{equation}
Let $w_j:={{\mathcal F}}U(-t)u_j$, $j=1,2$.
As in the proof of Theorem \ref{T:main_add}, by applying ${{\mathcal F}}U(-t)$ to (\ref{E:sysnewa1}), we obtain
\begin{align}
i\partial_tw_1&=3\lambda_1 t^{-1}U(1/t)|U(-1/t)w_1|^2U(-1/t)w_1,\label{a3}\\
i\partial_tw_2&=
3\lambda_1 t^{-1}U(1/t)
\left\{2|U(-1/t)w_1|^2U(-1/t)w_2
+(U(-1/t)w_1)^2\overline{U(-1/t)w_2}\right\}.\label{a4}
\end{align}
We first obtain short time bounds of $w_j$.
\begin{lemma}[Short time bounds]\label{Lem:Short}
There exists $\varepsilon_0>0$ such
that for any $u_{j,0}\in H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R})$ satisfying
$\varepsilon:=\sum_{j=1}^2(\|u_{j,0}\|_{H^{1}}+\|u_{j,0}\|_{H^{0,1}})\leqslant\varepsilon_0$,
there exists a unique solution $u_j\in C([0,1],H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R}))$
of (\ref{E:sysnewa1}) satisfying
\begin{align*}
\sup_{t\in[0,1]}\sum_{j=1}^{2}\|w_j(t)\|_{H_{\xi}^1}\lesssim\varepsilon.
\end{align*}
\end{lemma}
\noindent
{\bf Proof of Lemma \ref{Lem:Short}.} The proof follows from
a standard well-posedness theory, see \cite{Caz} for instance.
Hence we omit the proof. $\qed$
\vskip2mm
Next we derive long time bounds of $w_j$.
We fix $0<\gamma<\delta<1/100$ and introduce
\begin{align*}
\|(w_1,w_2)\|_{X_T}&:=
\sup_{t\in[1,T]}
\left\{\|w_1(t)\|_{L_{\xi}^{\infty}}+\langle t\rangle^{-\gamma}\|w_1(t)\|_{H_{\xi}^1}\right.\\
& \qquad\quad+
\left.(\log\langle t\rangle)^{-1}\|w_2(t)\|_{L_{\xi}^{\infty}}
+\langle t\rangle^{-\delta}\|w_2(t)\|_{H_{\xi}^1}\right\}.
\end{align*}
\begin{lemma}[Long time bounds]\label{Lem:Long}
There exists $\varepsilon_0>0$ such
that for any $u_{j,0}\in H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R})$ satisfying
$\varepsilon:=\sum_{j=1}^2(\|u_{j,0}\|_{H^{1}}+\|u_{j,0}\|_{H^{0,1}})\leqslant\varepsilon_0$,
there exists a unique global solution $u_j\in C([0,\infty),H^{1}(\mathbb{R})\cap H^{0,1}(\mathbb{R}))$
of (\ref{E:sysnewa1}) satisfying
\begin{align}
\|(w_1,w_2)\|_{X_{\infty}}\lesssim\varepsilon.\label{longbound}
\end{align}
\end{lemma}
\begin{remark}\label{Rem:Long11} As in Remark \ref{Rem:Long}, we
see that if $(w_1,w_2)$ satisfies (\ref{longbound}), then
we obtain
\begin{align*}
\|u_1(t)\|_{L_x^{\infty}}\lesssim\varepsilon t^{-\frac12},
\quad
\|u_2(t)\|_{L_x^{\infty}}\lesssim\varepsilon t^{-\frac12}\log t
\end{align*}
for any $t\ge1$.
\end{remark}
\noindent
{\bf Proof of Lemma \ref{Lem:Long}.}
As in the proof of Lemma \ref{Lem:Long1}, we have
\begin{align}
\langle t\rangle^{-\gamma}\|w_1(t)\|_{H_{\xi}^1}
&\lesssim
\varepsilon+\frac{1}{\gamma}\|(w_1,w_2)\|_{X_T}^3,\label{n1}\\
\langle t\rangle^{-\delta}\|w_2(t)\|_{H_{\xi}^1}
&\lesssim
\varepsilon+\frac{1}{\delta}\|(w_1,w_2)\|_{X_T}^3.\label{n2}
\end{align}
Next we derive $L^{\infty}$ estimates for $w_j$.
From the viewpoint of the asymptotic formulas for $U(\pm1/t)$
(Lemma \ref{Lem1}), we decompose the nonlinear term as follows:
\begin{align}
i\partial_tw_1&=3\lambda_1 t^{-1}|w_1|^2w_1+R_1,\label{3.9}\\
i\partial_tw_2&=3\lambda_1 t^{-1}(2|w_1|^2w_2+w_1^2\overline{w_2})+R_2,\label{3.10}
\end{align}
where $R_1$ and $R_2$ are given by
\begin{align*}
R_1&=3\lambda_1 t^{-1}\big[U(1/t)|U(-1/t)w_1|^2U(-1/t)w_1-|w_1|^2w_1\big],\\%\label{b1}\\
R_2&=3\lambda_1 t^{-1}\big[U(1/t)\{2|U(-1/t)w_1|^2U(-1/t)w_2
+(U(-1/t)w_1)^2\overline{U(-1/t)w_2}\}\nonumber\\
&\qquad\ \ -(2|w_1|^2w_2+w_1^2\overline{w_2})\big].
\end{align*}
As in the proof of Theorem \ref{T:main_add}, Lemma \ref{Lem1} yields
\begin{align}
\|R_1\|_{L_{\xi}^{\infty}}
\lesssim
t^{-1-\alpha+\gamma}\|(w_1,w_2)\|_{X_T}^3,\label{3.11}
\end{align}
\begin{align}
\|R_2\|_{L_{\xi}^{\infty}}
\lesssim t^{-1-\alpha+\delta}\|(w_1,w_2)\|_{X_T}^3.\label{3.12}
\end{align}
By (\ref{3.9}),
\begin{align*}
\partial_t|w_1|^2=2\Im(R_1\overline{w}_1)\lesssim\|R_1(t)\|_{L_{\xi}^{\infty}}|w_1|.
\end{align*}
Hence (\ref{3.11}) yields
\begin{align}
\partial_t|w_1|\lesssim\|R_1(t)\|_{L_{\xi}^{\infty}}
\lesssim t^{-1-\alpha+\gamma}\|(w_1,w_2)\|_{X_T}^3.
\label{3.131}
\end{align}
Therefore
\begin{align}
|w_1(t,\xi)|\lesssim\varepsilon+\|(w_1,w_2)\|_{X_T}^3.
\label{3.13}
\end{align}
By (\ref{3.9}), (\ref{3.10}), (\ref{3.11}) and (\ref{3.12}),
\begin{align}
\partial_t\Re(w_1\overline{w}_2)
&=
\Im(R_1\overline{w}_2)+\Im(R_2\overline{w}_1)
\label{3.141}\\
&\lesssim
\|R_1\|_{L_{\xi}^{\infty}}\|w_2\|_{L_{\xi}^{\infty}}
+\|R_2\|_{L_{\xi}^{\infty}}\|w_1\|_{L_{\xi}^{\infty}}
\nonumber\\
&\lesssim
(t^{-1-\alpha+\gamma}\log t+t^{-1-\alpha+\delta})\|(w_1,w_2)\|_{X_T}^4
\nonumber\\
&\lesssim
t^{-1-\alpha+\delta}\|(w_1,w_2)\|_{X_T}^4.
\nonumber
\end{align}
Integrating this in $t$, we obtain
\begin{align}
|\Re(w_1\overline{w}_2)|
\lesssim\varepsilon^2+\|(w_1,w_2)\|_{X_T}^4.\label{3.14}
\end{align}
On the other hand, by (\ref{3.10}),
\begin{align*}
i\partial_tw_2=3\lambda_1 t^{-1}|w_1|^2w_2+6\lambda_1 t^{-1}w_1\Re(w_1\overline{w_2})+R_2.
\end{align*}
Hence,
\begin{align*}
\partial_t|w_2|^2
&=12\lambda_1 t^{-1}\Re(w_1\overline{w_2})\Im(w_1\overline{w_2})+2\Im(R_2\overline{w_2})\\
&\lesssim t^{-1}|\Re(w_1\overline{w_2})||w_1||w_2|+|R_2||w_2|.
\end{align*}
Therefore (\ref{3.12}) and (\ref{3.14}) imply
\begin{align*}
\partial_t|w_2|
&\lesssim t^{-1}|\Re(w_1\overline{w_2})||w_1|+|R_2|\\
&\lesssim
\varepsilon^2t^{-1}\|(w_1,w_2)\|_{X_T}
+t^{-1}\|(w_1,w_2)\|_{X_T}^5+t^{-1-\alpha+\delta}\|(w_1,w_2)\|_{X_T}^3.
\end{align*}
Integrating this in $t$, we have
\begin{align}
|w_2(t,\xi)|
&\lesssim \varepsilon+\varepsilon^2\log t\|(w_1,w_2)\|_{X_T}\label{3.15}\\
&\quad +\log t\|(w_1,w_2)\|_{X_T}^5+\|(w_1,w_2)\|_{X_T}^3.\nonumber
\end{align}
Collecting (\ref{n1}), (\ref{n2}), (\ref{3.13}) and (\ref{3.15}), we
obtain
\begin{align*}
\|(w_1,w_2)\|_{X_T}
\lesssim \varepsilon+\varepsilon^2\|(w_1,w_2)\|_{X_T}+\|(w_1,w_2)\|_{X_T}^3
+\|(w_1,w_2)\|_{X_T}^5.
\end{align*}
Then the standard continuity argument yields that
if $\varepsilon=\varepsilon(\gamma,\delta)$ is sufficiently small,
then we have
\begin{align*}
\|(w_1,w_2)\|_{X_T}\lesssim\varepsilon
\end{align*}
for any $T\ge1$. This completes the proof of Lemma \ref{Lem:Long}. $\qed$
\vskip2mm
\noindent
{\bf Proof of Theorem \ref{T:main4}.}
The global existence and decay estimates for solution to
(\ref{E:sysnewa1}) follow from Lemma \ref{Lem:Long} and
Remark \ref{Rem:Long}. We now derive the asymptotic formulas
(\ref{asym1}) and (\ref{asym2}) for solution $(u_1,u_2)$ to
(\ref{E:sysnewa1}) as $t\to\infty$.
As in the proof of Theorem \ref{T:main_add},
we find that there exists $W_1\in L^{\infty}$ such that
\begin{align}
\|w_1e^{3i\lambda_1|{W}_1|^2\log t}-W_1\|_{L_{\xi}^{\infty}}\lesssim\varepsilon^3 t^{-\alpha+\gamma}.
\label{e6}
\end{align}
Furthermore, by (\ref{3.141}),
there exists a real-valued function $\widetilde{W}\in L^{\infty}$ such that
\begin{align}
\|\Re(w_1\overline{w}_2)-\widetilde{W}\|_{L_{\xi}^{\infty}}
\lesssim\varepsilon^4 t^{-\alpha+\delta}.
\label{e3}
\end{align}
Substituting this into (\ref{3.10}), we have
\begin{align}
i\partial_tw_2=3\lambda_1 t^{-1}|W_1|^2w_2+6\lambda_1 t^{-1}W_1\widetilde{W}e^{-3i\lambda_1 |W_1|^2\log t}+R_4,
\label{e4}
\end{align}
where
\begin{align*}
R_4&=
3\lambda_1 t^{-1}(|w_1|^2-|W_1|^2)w_2
+6\lambda_1 t^{-1}(w_1\Re(w_1\overline{w}_2)-W_1\widetilde{W}e^{-3i\lambda_1 |W_1|^2\log t})\\
&\quad +R_2.
\end{align*}
Hence by (\ref{3.12}), (\ref{e6}) and (\ref{e3}),
\begin{align}
\|R_4(t)\|_{L_{\xi}^{\infty}}\lesssim\varepsilon^5 t^{-1-\alpha+\delta}.
\label{e7}
\end{align}
Multiplying (\ref{e4}) by $\exp(3i\lambda_1 |W_1|^2\log t)$, we find
\begin{align*}
i\partial_t\left\{w_2e^{3i\lambda_1 |W_1|^2\log t}\right\}=6\lambda_1 t^{-1}W_1\widetilde{W}+R_4e^{3i\lambda_1 |W_1|^2\log t}.
\end{align*}
Since $W_1$ and $\widetilde{W}$ are independent of $t$, we see
\begin{align*}
i\partial_t\left\{w_2e^{3i\lambda_1 |W_1|^2\log t}+6i\lambda_1\log tW_1\widetilde{W}\right\}
=R_4e^{3i\lambda_1 |W_1|^2\log t}.
\end{align*}
Hence, by (\ref{e7}) we find that
there exists $W_2\in L^{\infty}$ such that
\begin{align}
\|w_2e^{3i\lambda_1 |W_1|^2\log t}+6i\lambda_1\log tW_1\widetilde{W}-W_2\|_{L_{\xi}^{\infty}}
\lesssim\varepsilon^5 t^{-\alpha+\delta}.\label{e9}
\end{align}
In particular, we have $\widetilde{W}=\Re(W_1\overline{W}_2)$.
In the same way as in the proof of (\ref{qq1}),
from the asymptotic formulas (\ref{e6}) for $w_1$ and (\ref{e9})
for $w_{2}$, we obtain (\ref{asym1}) and (\ref{asym2}).
For the case $\lambda_6=\lambda_1$, it suffices to replace $\Re(w_1\overline{w}_2)$
by $\Im(w_1\overline{w}_2)$ in the proof for the case $\lambda_6=3\lambda_1$.
This completes the proof of Theorem \ref{T:main4}. $\qed$
\section{Introduction}
\label{sec1}
According to Moore’s law \cite{ref31}, traditional computer architectures will reach their physical limits in the near future. Quantum computers \cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11, ref12, ref13, ref14, ref15, ref16, ref17, ref18, ref19, ref20, ref21, ref22,ref23} provide a tool to solve problems more efficiently than ever would be possible with traditional computers \cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11}. The power of quantum computing is based on the fundamentals of quantum mechanics. In a quantum computer, information is represented by quantum information, and information processing is achieved by quantum gates that realize quantum operations \cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11,p1,p2,p3}. These quantum operations are performed on the quantum states, which are then outputted and measured in a measurement phase. The measurement process is applied to each quantum state where the quantum information conveyed by the quantum states is converted into classical bits. Quantum computers have been demonstrated in practice \cite{ref1, ref2, ref3, ref4, ref5, ref6, ref8, ref9}, and several implementations are currently in progress \cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11, ref16, ref17, ref18, ref19}.
In the physical layer of a gate-model quantum computer, the device contains quantum gates, quantum ports (of quantum gates), and quantum wires for the quantum circuit\footnote{The term ``quantum circuit'', in general, refers to software, not hardware; it is a description or prescription for what quantum operations should be applied when and does not refer to a physically implemented circuit analogous to a printed electronic circuit. In our setting, it refers to the hardware layer.}. In contrast to traditional automated circuit design \cite{ref24, ref25, ref26, ref27, ref28, ref29, ref30}, a quantum system cannot participate in more than one quantum gate simultaneously. As a corollary, the quantum gates of a quantum circuit are applied in several rounds in the physical layer of the quantum circuit \cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11, ref16, ref17, ref18, ref19}.
The physical layout design and optimization of quantum circuits have different requirements with several open questions and currently represent an active area of study \cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11, ref16, ref17, ref18, ref19}. Assuming that the goal is to construct a reduced quantum circuit that can simulate the original system, the reduction should act on the number of input quantum states, the gate operations of the quantum circuit, and the number of output measurements. Another important question is the maximization of the objective function associated with an arbitrary computational problem that is fed into the quantum computer. These parallel requirements must be satisfied simultaneously, which makes the optimization procedure difficult and is an emerging issue in present and future quantum computer developments.
In the proposed QTAM method, the goal is to determine a topology for the quantum circuits of quantum computer architectures that can solve arbitrary computational problems such that the quantum circuit is minimized in the physical layer, and the objective function of an arbitrarily selected computational problem is maximized. The physical layer minimization covers the simultaneous minimization of the quantum circuit area (quantum circuit height and depth of the quantum gate structure, where the depth refers to the number of time steps required for the quantum operations making up the circuit to be run on quantum hardware), the total area of the quantum wires of the quantum circuit, the maximization of the objective function, and the minimization of the required number of input quantum systems and output measurements. An important aim of the physical layout minimization is that the resulting quantum circuit should be identical to a high-complexity reference quantum circuit (i.e., the reduced quantum circuit should be able to simulate a nonreduced quantum circuit).
The minimization of the total quantum wire length in the physical layout is also an objective in QTAM. It serves to improve the processing in the topology of the quantum circuit. However, besides the minimization of the physical layout of the quantum circuit, the quantum computer also has to solve difficult computational problems very efficiently (such as the maximization of an arbitrary combinatorial optimization objective function \cite{ref16, ref17, ref18, ref19}). To achieve this goal in the QTAM method, we also defined an objective function that provides the maximization of the objective functions of arbitrary computational problems. The optimization method can be further tuned by specific input constraints on the topology of the quantum circuit (paths in the quantum circuit, organization of quantum gates, required number of rounds of quantum gates, required number of measurement operators, Hamiltonian minimization, entanglement between quantum states, etc.) or other hardware restrictions of quantum computers, such as the well-known \textit{no-cloning theorem} \cite{ref22}. The various restrictions on quantum hardware, such as the number of rounds required to be integrated into the quantum gate structure or entanglement generation between the quantum states, are included in the scheme. These constraints and design attributes can be handled in the scheme through the definition of arbitrary constraints on the topology of the quantum circuit, or by constraints on the computational paths.
The combinatorial objective function is measured in the computational basis, and an objective function value is determined from the measurement result to quantify the current state of the quantum computer. Quantum computers can be used for combinatorial optimization problems. These procedures aim to use the quantum computer to produce a quantum system that is dominated by computational basis states such that a particular objective function is maximized.
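As a simple illustration of this measurement-based evaluation (the instance, function names and samples below are hypothetical and serve only as an example), the objective function value can be estimated classically from the measured computational basis strings:
\begin{verbatim}
import numpy as np

def estimate_objective(samples, C):
    # mean of the classical objective C over measured basis strings
    return np.mean([C(z) for z in samples])

# toy MaxCut-like objective on 3 quantum states (hypothetical instance)
edges = [(0, 1), (1, 2)]
C = lambda z: sum(z[i] != z[j] for i, j in edges)
samples = [(0, 1, 0), (1, 0, 1), (0, 1, 1)]   # measurement outcomes
print(estimate_objective(samples, C))          # -> 1.666...
\end{verbatim}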
Recent experimental realizations of quantum computers are qubit architectures \cite{ref1, ref2, ref3, ref4, ref5, ref6, ref7,ref8, ref9, ref10, ref11, ref12, ref13, ref14, ref15, ref16, ref17, ref18, ref19}, and the current quantum hardware approaches focus on qubit systems (i.e., the dimension $d$ of the quantum system is two, $d=2$). However, while the qubit layout is straightforwardly inspired by ongoing experiments, the method is developed for arbitrary dimensions to make it applicable to future implementations. Motivated by these assumptions, we avoid the term `qubit' in our scheme when addressing the quantum states and instead use the generalized term `quantum states' throughout, which refers to arbitrary dimensional quantum systems. We also illustrate the results through superconducting quantum circuits \cite{ref1, ref2, ref3, ref4, ref5}; however, the framework is general and flexible, allowing a realization for near-term gate-model quantum computer implementations.
The novel contributions of this paper are as follows:
\begin{itemize}
\item \textit{We define a method for designing quantum circuits for gate-model quantum computers.}
\item \textit{We conceive the QTAM algorithm, which provides a quantum circuit minimization on the physical layout (circuit depth and area), quantum wire length minimization, objective function maximization, input size and measurement size minimization for quantum circuits.}
\item \textit{We define a multilayer structure for quantum computations using the hardware restrictions on the topology of gate-model quantum computers.}
\end{itemize}
This paper is organized as follows. In \sref{relw} the related works are summarized. \sref{sec2} proposes the system model. In \sref{sec4} the details of the optimization method are discussed, while \sref{sec5} studies the performance of the model. Finally, \sref{sec6} concludes the paper. Supplemental information is included in the Appendix.
\section{Related Works}
\label{relw}
The related works are summarized as follows.
A strong theoretical background on the logical model of gate-model quantum computers can be found in \cite{ref17,ref16,ref18}. In \cite{ref7}, a gate-model quantum neural network model is defined.
In \cite{refa1}, the authors defined a hierarchical approach to computer-aided design of quantum circuits. The proposed model was designed for the synthesis of the permutation class of quantum logic circuits. The method integrates evolutionary and genetic approaches to evolve an arbitrary quantum circuit specified by a target unitary matrix. Instead of circuit optimization, the work focuses on circuit synthesis.
In \cite{refa2}, the authors propose a simulation of quantum circuits by low-rank stabilizer decompositions. The work focuses on the problem of simulation of quantum circuits containing a few non-Clifford gates. The framework focuses on the theoretical description of the stabilizer rank. The authors also derived the simulation cost.
A method for designing a T-count-optimized quantum circuit for integer multiplication with $4n+1$ qubits was defined in \cite{int}. The T-count \cite{tc} measures the number of T-gates and is relevant because the implementation cost of a T gate is high. The aim of T-count optimization is to reduce the number of T-gates without substantially increasing the number of qubits. The method was also applied to quantum circuit designs for integer division \cite{int2}. The optimization takes into consideration both the T-count and the T-depth, since the T-depth is also an important performance measure for reducing the implementation costs. Another method for designing reversible floating-point divider units was proposed in \cite{div}.
In \cite{logic}, a methodology for quantum logic gate construction was defined. The main purpose of the scheme was to construct fault-tolerant quantum logic gates with a simple technique. The method is based on the quantum teleportation method \cite{tel}.
A method for the synthesis of depth-optimal quantum circuits was defined in \cite{depth}. The aim of the proposed algorithm is to compute depth-optimal decompositions of logical operations via an application of the so-called meet-in-the-middle technique. The authors also applied their scheme to the factorization of some quantum logical operations into elementary gates in the Clifford+T set.
A framework for the study of the compilation and description of fault-tolerant, high-level quantum circuits is proposed in \cite{ft}. The authors defined a method to convert high-level quantum circuits consisting of commonly used gates into a form employing all decompositions and ancillary protocols needed for fault-tolerant error correction. The method also represents a useful tool for quantum hardware architectures with topological quantum codes.
The Quantum Approximate Optimization Algorithm (QAOA) is defined in \cite{ref16}. The QAOA has been defined to evaluate approximate solutions for combinatorial optimization problems fed into the quantum computer.
Relevant attributes of the QAOA algorithm are studied in \cite{refa3}.
In \cite{refa4}, the authors analyzed the performance of the QAOA algorithm on near-term gate-model quantum devices.
The implementation of QAOA with parallelizable gates is studied in \cite{refa5}.
In \cite{refa6} the performance of QAOA is studied on different problems. The analysis covers the MaxCut combinatorial optimization problem, and the problem of quantum circuit optimizations on a classical computer using automatic differentiation and stochastic gradient descent. The work also revealed that QAOA can exceed the performance of a classical polynomial time algorithm (Goemans-Williamson algorithm \cite{refgw}) with modest circuit depth. The work also concluded that the performance of QAOA with fixed circuit depth is insensitive to problem size.
In \cite{refa7}, the authors studied the problem of ultrafast state preparation via the QAOA with long-range interactions. The work provides an application of the QAOA in near-term gate-model quantum devices. As the authors concluded, the QAOA-based approach leads to an extremely efficient state preparation; for example, the method allows us to prepare Greenberger-Horne-Zeilinger (GHZ) states with $\mathcal{O}\left( 1 \right)$ circuit depth. The results were also demonstrated by several other examples.
Another experimental approach for the implementation of qubit entanglement and parallel logic operations with a superconducting circuit was presented in \cite{song}. In this work, the authors generated entangled GHZ states with up to 10 qubits connected to a bus resonator in a superconducting circuit. In the proposed implementation, the resonator-mediated qubit-qubit interactions are used to control the entanglement between the qubits and to operate on different pairs in parallel.
A review on the noisy intermediate-scale quantum (NISQ) era can be found in \cite{refpr}.
The subject of quantum computational supremacy is discussed in \cite{refha, aar}.
For a survey on the attributes of quantum channels, see \cite{ref11}; a survey on quantum computing technology is included in \cite{refsur}.
\section{System Model}
\label{sec2}
The simultaneous physical-layer minimization and the maximization of the objective function are achieved by the Quantum Triple Annealing Minimization (QTAM) algorithm. The QTAM algorithm utilizes the framework of simulated annealing (SA) \cite{ref24, ref25, ref26, ref27, ref28, ref29, ref30}, which is a stochastic point-to-point search method.
The procedure of the QTAM algorithm with the objective functions is depicted in \fref{fig1}. The detailed descriptions of the methods and procedures are included in the next sections.
\begin{center}
\begin{figure}[h!]
\begin{center}
\includegraphics[angle = 0,width=1\linewidth]{fig1.pdf}
\caption{The QTAM method for quantum computers. The quantum gate ($QG$) circuit computation model consists of an input array of $n$ quantum states (depicted by the green box), layers of quantum gates integrated into a quantum circuit (depicted by the purple box), and a measurement phase (depicted by the orange box). The quantum gates that act on the quantum states form a quantum circuit with a given circuit height and depth. The area of the quantum circuit is minimized by objective function $F_{{\rm 1}} $, while the total quantum wire area of the quantum circuit is minimized by $F_{{\rm 2}} $ ($F_{{\rm 1}} \wedge F_{{\rm 2}} $ is referred to as the quantum circuit minimization). The result of the minimization is a quantum circuit of quantum gates with minimized quantum circuit area, minimized total quantum wire length, and a minimized total Hamiltonian operator. The maximization of a corresponding objective function of arbitrarily selected computational problems for the quantum computer is achieved by $F_{{\rm 3}} $ (referred to as the objective function maximization). Objective functions $F_{{\rm 4}} $ and $F_{{\rm 5}} $ are defined for the minimization of the number of quantum states (minimization of input size) and the total number of measurements (minimization of measurements).}
\label{fig1}
\end{center}
\end{figure}
\end{center}
\subsection{Computational Model}
In an SA-based procedure, a current solution $s_{A} $ is moved to a neighbor $s_{B} $ with an acceptance probability \cite{ref24, ref25, ref26, ref27, ref28, ref29, ref30}
\begin{equation} \label{eq1}
{\Pr }\left(f\left(s_{A} \right),f\left(s_{B} \right)\right)=\frac{{\rm 1}}{{\rm 1}+e^{\left(\frac{f\left(s_{A} \right)-f\left(s_{B} \right)}{Tf\left(s_{A} \right)} \right)} } ,
\end{equation}
where $f\left(s_{A} \right)$ and $f\left(s_{B} \right)$ represent the relative performances of the current and neighbor solutions, while $T$ is a control parameter, $T\left(t\right)=T_{\max } {\rm exp}\left(-R\left(t/k\right)\right)$, where $R$ is the temperature decreasing rate, $t$ is the iteration counter, $k$ is a scaling factor, while $T_{\max } $ is an initial temperature.
Since SA is a probabilistic procedure, it is important to minimize the acceptance probability of unfavorable solutions and to avoid getting stuck in a local minimum.
Without loss of generality, if $T$ is low, \eqref{eq1} can be rewritten as a function of $f\left(s_{A} \right)$ and $f\left(s_{B} \right)$ as
\begin{equation} \label{eq2}
{\Pr }\left(f\left(s_{A} \right),f\left(s_{B} \right)\right)=\left\{\begin{split} {{\rm 1,if\text{ }}f\left(s_{A} \right)<f\left(s_{B} \right)} \\ {{\rm 0,if\text{ }}f\left(s_{A} \right)\ge f\left(s_{B} \right)} \end{split}\right. .
\end{equation}
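As a brief illustration, the following minimal Python sketch evaluates the acceptance rule \eqref{eq1} together with the cooling schedule $T\left(t\right)$ and demonstrates its low-temperature limit \eqref{eq2}; the parameter values used here are illustrative assumptions only.
\begin{verbatim}
import math

def sa_acceptance(f_a, f_b, T):
    # Acceptance probability of moving from s_A to s_B
    # at temperature T, following Eq. (1).
    return 1.0 / (1.0 + math.exp((f_a - f_b) / (T * f_a)))

def temperature(t, T_max=100.0, R=0.05, k=1.0):
    # Exponential cooling schedule T(t) = T_max * exp(-R * t / k).
    return T_max * math.exp(-R * (t / k))

# At low temperature, the rule degenerates to the step function of Eq. (2):
print(sa_acceptance(2.0, 5.0, temperature(200)))  # ~1: better neighbor accepted
print(sa_acceptance(5.0, 2.0, temperature(200)))  # ~0: worse neighbor rejected
\end{verbatim}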
In the QTAM algorithm, we take into consideration that the objectives, constraints, and other functions of the method are, in general, characterized by different magnitude ranges \cite{ref24, ref25, ref26, ref27, ref28, ref29, ref30}. To avoid issues from these differences, in the QTAM algorithm we define three annealing temperatures: $T_{f} \left(t\right)$ for the objectives, $T_{g} \left(t\right)$ for the constraints, and $T_{c} \left(t\right)$ for the probability distribution closeness (the distance between the output distributions of the reference quantum circuit and the reduced quantum circuit).
In the QTAM algorithm, the acceptance probability of a new solution $s_{B} $ at a current solution $s_{A} $ is given as
\begin{equation} \label{eq3}
{\Pr }\left(s_{A} ,s_{B} \right)=\frac{{\rm 1}}{{\rm 1}+e^{\tilde{d}\left(f\right)T_{f} \left(t\right)} e^{\tilde{d}\left(g\right)T_{g} \left(t\right)} e^{\tilde{d}\left(c\right)T_{c} \left(t\right)} } ,
\end{equation}
where $\tilde{d}\left(f\right)$, $\tilde{d}\left(g\right)$, and $\tilde{d}\left(c\right)$ are the average values of the objective, constraint, and distribution closeness dominance (see Algorithm 1).
The aim of the QTAM algorithm is to minimize the cost function
\begin{equation} \label{eq4}
\min f\left({\rm x}\right)=\alpha _{{\rm 1}} F_{{\rm 1}} \left({\rm x}\right)+\ldots +\alpha _{N_{obj} } F_{N_{obj} } \left({\rm x}\right)+F_{s} ,
\end{equation}
where ${\rm x}$ is the vector of design variables, $\alpha $ is the vector of weights, and $N_{obj} $ is the number of primary objectives. The $i$ secondary objectives (aspect ratio of the quantum circuit, overlaps, total net length, etc.) are minimized simultaneously via the single-objective function $F_{s} $ in \eqref{eq4}, as
\begin{equation} \label{eq5}
F_{s} =\sum _{i} \alpha _{i} F_{i} \left(x\right).
\end{equation}
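A hedged Python sketch of the triple-exponential acceptance rule \eqref{eq3} and the weighted cost of \eqref{eq4}-\eqref{eq5} is given below; the objective functions and weights used here are hypothetical placeholders, not part of the QTAM specification.
\begin{verbatim}
import math

def qtam_acceptance(d_f, d_g, d_c, T_f, T_g, T_c):
    # QTAM acceptance probability of Eq. (3), combining the average
    # objective (d_f), constraint (d_g) and distribution-closeness (d_c)
    # dominance values under three separate annealing temperatures.
    return 1.0 / (1.0 + math.exp(d_f * T_f)
                      * math.exp(d_g * T_g)
                      * math.exp(d_c * T_c))

def qtam_cost(x, objectives, weights, secondary=()):
    # Weighted cost of Eqs. (4)-(5): primary objectives plus the
    # aggregated secondary single-objective term F_s.
    F_s = sum(a_i * F_i(x) for a_i, F_i in secondary)
    return sum(a * F(x) for a, F in zip(weights, objectives)) + F_s

# toy usage with two hypothetical objectives
F1 = lambda x: x[0] ** 2
F2 = lambda x: abs(x[1])
print(qtam_cost((1.5, -2.0), [F1, F2], [0.7, 0.3]))
\end{verbatim}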
\subsection{Objective Functions}
We defined $N_{obj} =5$ objective functions for the QTAM algorithm. Objective functions $F_{{\rm 1}} $ and $F_{{\rm 2}} $ are defined for the minimization of the $QG$ quantum circuit in the physical layer. The aim of objective function $F_{{\rm 1}} $ is the minimization of the $A_{QG} $ quantum circuit area of the $QG$ quantum gate structure,
\begin{equation} \label{eq6}
F_{{\rm 1}} :\min \left(A_{QG} \right)=\min \left(H'_{QG} \cdot D'_{QG} \right),
\end{equation}
where $H'_{QG} $ is the optimal circuit height of $QG$, while $D'_{QG} $ is the optimal depth of $QG$.
Focusing on superconducting quantum circuits \cite{ref1, ref2, ref3, ref4, ref5}, the aim of $F_{{\rm 2}} $ is the physical layout minimization of the $w_{QG} $ total quantum wire area of $QG$, as
\begin{equation} \label{eq7}
F_{{\rm 2}} :w_{QG} =\min \sum _{k=1}^{h} \left(\sum _{i=1}^{p} \sum _{j=1}^{q} \ell _{ij} \cdot \delta _{ij} \left(\psi _{ij} \right)\right),
\end{equation}
where $h$ is the number of nets of the $QG$ circuit, $p$ is the number of quantum ports of the $QG$ quantum circuit considered as sources of a condensate wave function amplitude \cite{ref1, ref2, ref3, ref4, ref5}, and $q$ the number of quantum ports considered as sinks of a condensate wave function amplitude, $\ell _{ij} $ is the length of the quantum wire $ij$, $\delta _{ij} $ is the effective width of the quantum wire $ij$, while $\psi _{ij} $ is the (root mean square) condensate wave function amplitude \cite{ref1, ref2, ref3, ref4, ref5} associated to the quantum wire $ij$.
Objective function $F_{{\rm 3}} $ is defined for the maximization of the expected value of an objective function $C_{L} (\vec{\Phi })$ as
\begin{equation} \label{eq8}
F_{3} :\max {C_{L} (\vec{\Phi })}=\max {\langle \vec{\Phi }|C|\vec{\Phi }\rangle },
\end{equation}
where $C$ is an objective function, $\vec{\Phi }$ is a collection of $L$ parameters
\begin{equation} \label{eq9}
\vec{\Phi }=\left(\Phi _{{\rm 1}} ,\ldots ,\Phi _{L} \right)
\end{equation}
such that with $L$ unitary operations, state $|\vec{\Phi }\rangle $ is evaluated as
\begin{equation} \label{eq10}
|\vec{\Phi }\rangle =U_{L} \left(\Phi _{L} \right)\cdots U_{{\rm 1}} \left(\Phi _{{\rm 1}} \right)\left| \varphi \right\rangle ,
\end{equation}
where $U_{i} $ is an $i$-th unitary that depends on a set of parameters $\Phi _{i} $, while $\left| \varphi \right\rangle $ is an initial state. Thus the goal of $F_{{\rm 3}} $ is to determine the $L$ parameters of $\vec{\Phi }$ (see \eqref{eq9}) such that $\langle \vec{\Phi }|C|\vec{\Phi }\rangle $ is maximized.
Objective functions $F_{{\rm 4}} $ and $F_{{\rm 5}} $ are defined for the minimization of the number of input quantum states and the number of required measurements. The aim of objective function $F_{{\rm 4}} $ is the minimization of the number of quantum systems on the input of the $QG$ circuit,
\begin{equation} \label{eq11}
F_{{\rm 4}} :\min \left(n\right).
\end{equation}
The aim of objective function $F_{{\rm 5}} $ is the minimization of the total number of measurements in the $M$ measurement block,
\begin{equation} \label{eq12}
F_{{\rm 5}} :\min \left(m\right)=\min {\left(N_{M} \left|M\right|\right)},
\end{equation}
where $m=N_{M} \left|M\right|$, $N_{M} $ is the number of measurement rounds, and $\left|M\right|$ is the number of measurement gates in the $M$ measurement block.
\subsection{Constraint Violations}
The optimization at several different objective functions results in different Pareto fronts \cite{ref24, ref25, ref26, ref27} of placements of quantum gates in the physical layout. These Pareto fronts allow us to find feasible tradeoffs between the optimization objectives of the QTAM method. The optimization process includes diverse objective functions, constraints, and optimization criteria to improve the performance of the quantum circuit and to take into consideration the hardware restrictions of quantum computers. In the proposed QTAM algorithm, the constraints are enforced by modifying the Pareto dominance \cite{ref24, ref25, ref26, ref27} values by the different sums of constraint violation values. We defined three different constraint violation values.
\subsubsection{Distribution Closeness Dominance}
In the QTAM algorithm, the Pareto dominance is first modified with the sum of distribution closeness violation values, denoted by $c_{s} \left(\cdot \right)$. The aim of this iteration is to support the closeness of output distributions of the reduced quantum circuit $QG$ to the output distribution of the reference quantum circuit $QG_{R} $.
Let $P_{QG_{R} } $ be the output distribution after the $M$ measurement phase of the reference (original) quantum circuit $QG_{R} $ to be simulated by $QG$, and let $Q_{QG} $ be the output distribution of the actual, reduced quantum circuit $QG$. The distance between the quantum circuit output distributions $P_{QG_{R} } $ and $Q_{QG} $ (distribution closeness) is straightforwardly yielded by the relative entropy function as
\begin{equation} \label{eq13}
D\left(\left. P_{QG_{R} } \right\| Q_{QG} \right)=\sum _{i} P_{QG_{R} } \left(i\right)\log _{2} \frac{P_{QG_{R} } \left(i\right)}{Q_{QG} \left(i\right)} .
\end{equation}
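As a numerical illustration, the relative entropy of \eqref{eq13} can be computed as in the following sketch; the two distributions below are illustrative values, not measured circuit outputs.
\begin{verbatim}
import math

def distribution_closeness(P_ref, Q):
    # Relative entropy D(P_ref || Q) of Eq. (13) between the output
    # distribution of the reference circuit QG_R and that of the
    # reduced circuit QG. Assumes Q(i) > 0 wherever P_ref(i) > 0.
    return sum(p * math.log2(p / q) for p, q in zip(P_ref, Q) if p > 0)

# reference vs. reduced-circuit output distributions (illustrative)
print(distribution_closeness([0.5, 0.25, 0.25], [0.4, 0.3, 0.3]))
\end{verbatim}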
For two solutions $x$ and $y$, the $d_{x,y} \left(c\right)$ distribution closeness dominance function is defined as
\begin{equation} \label{eq14}
d_{x,y} \left(c\right)=c_{s} \left(x\right)-c_{s} \left(y\right),
\end{equation}
where $c_{s} \left(\cdot \right)$ is evaluated for a given solution $z$ as
\begin{equation} \label{eq15}
c_{s} \left(z\right)=\sum _{i=1}^{N_{v} } v_{i}^{c} ,
\end{equation}
where $v_{i}^{c} $ is an $i$-th distribution closeness violation value, $N_{v} $ is the number of distribution closeness violation values for a solution $z$.
In terms of distribution closeness dominance, $x$ dominates $y$ if the following relation holds:
\begin{equation} \label{eq16}
\begin{split} {\left(\left(c_{s} \left(x\right)<0\right)\wedge \left(c_{s} \left(y\right)<0\right)\wedge \left(c_{s} \left(x\right)>c_{s} \left(y\right)\right)\right)} \\ \vee{\left(\left(c_{s} \left(x\right)=0\right)\wedge \left(c_{s} \left(y\right)<0\right)\right),} \end{split}
\end{equation}
thus \eqref{eq16} states that $x$ dominates $y$ if both $x$ and $y$ are infeasible and $x$ is closer to feasibility than $y$, or if $x$ is feasible and $y$ is infeasible.
By similar assumptions, $y$ dominates $x$ if
\begin{equation} \label{eq17}
\begin{split} {\left(\left(c_{s} \left(x\right)<0\right)\wedge \left(c_{s} \left(y\right)<0\right)\wedge \left(c_{s} \left(x\right)<c_{s} \left(y\right)\right)\right)} \\ \vee{\left(\left(c_{s} \left(x\right)<0\right)\wedge \left(c_{s} \left(y\right)=0\right)\right).} \end{split}
\end{equation}
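A minimal sketch of the dominance test of \eqref{eq16}-\eqref{eq17} is given below, assuming, as in the relations above, that negative violation sums indicate infeasibility.
\begin{verbatim}
def closeness_dominates(c_x, c_y):
    # Distribution-closeness dominance of Eqs. (16)-(17): returns True
    # if solution x dominates solution y, given the violation sums
    # c_s(x) and c_s(y) (negative values indicate infeasibility).
    both_infeasible = c_x < 0 and c_y < 0
    return (both_infeasible and c_x > c_y) or (c_x == 0 and c_y < 0)

print(closeness_dominates(-0.1, -0.5))  # True: x is closer to feasibility
print(closeness_dominates(0.0, -0.5))   # True: x feasible, y infeasible
print(closeness_dominates(-0.5, 0.0))   # False
\end{verbatim}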
\subsubsection{Constraint Dominance}
The second modification of the Pareto dominance is by the sum of constraint violation values,
\begin{equation} \label{eq18}
d_{x,y} \left(g\right)=g_{s} \left(x\right)-g_{s} \left(y\right),
\end{equation}
where $g_{s} \left(\cdot \right)$ is the sum of all constraint violation values, evaluated for a given solution $z$ as
\begin{equation} \label{eq19}
g_{s} \left(z\right)=\sum _{i=1}^{N_{g} } v_{i}^{g} ,
\end{equation}
where $v_{i}^{g} $ is an $i$-th constraint violation value, $N_{g} $ is the number of constraint violation values for a solution $z$.
Similar to \eqref{eq16} and \eqref{eq17}, in terms of constraint dominance, $x$ dominates $y$ if the following relation holds:
\begin{equation} \label{eq20}
\begin{split} {\left(\left(g_{s} \left(x\right)<0\right)\wedge \left(g_{s} \left(y\right)<0\right)\wedge \left(g_{s} \left(x\right)>g_{s} \left(y\right)\right)\right)} \\ \vee{\left(\left(g_{s} \left(x\right)=0\right)\wedge \left(g_{s} \left(y\right)<0\right)\right),} \end{split}
\end{equation}
thus \eqref{eq20} states that $x$ dominates $y$ if both $x$ and $y$ are infeasible and $x$ is closer to feasibility than $y$, or if $x$ is feasible and $y$ is infeasible.
By similar assumptions, $y$ dominates $x$ with respect to $g_{s} \left(\cdot \right)$ if
\begin{equation} \label{eq21}
\begin{split} {\left(\left(g_{s} \left(x\right)<0\right)\wedge \left(g_{s} \left(y\right)<0\right)\wedge \left(g_{s} \left(x\right)<g_{s} \left(y\right)\right)\right)} \\ \vee{\left(\left(g_{s} \left(x\right)<0\right)\wedge \left(g_{s} \left(y\right)=0\right)\right).} \end{split}
\end{equation}
\subsubsection{Objective Dominance}
Let $x$ and $y$ refer to two solutions, then, by theory, the $d_{x,y} \left(f\right)$ objective dominance function is defined as
\begin{equation} \label{eq22}
d_{x,y} \left(f\right)=\prod _{i=1,f_{i} \left(x\right)\ne f_{i} \left(y\right)}^{N_{obj} } \frac{\left|f_{i} \left(x\right)-f_{i} \left(y\right)\right|}{R_{i} } ,
\end{equation}
where $N_{obj} $ is the number of objectives (in our setting $N_{obj} =5$), and $R_{i} $ is the range of objective $i$; $x$ dominates $y$ if $f_{i} \left(x\right)\le f_{i} \left(y\right)$ for all $i=1,\ldots ,N_{obj} $, and for at least one $i$ the relation $f_{i} \left(x\right)<f_{i} \left(y\right)$ holds.
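For completeness, the objective dominance of \eqref{eq22} can be sketched as follows; the objective vectors and ranges below are hypothetical.
\begin{verbatim}
def objective_dominance(f_x, f_y, ranges):
    # Objective dominance d_{x,y}(f) of Eq. (22): product of the
    # normalized objective differences over the objectives on which
    # the two solutions x and y differ.
    d = 1.0
    for fx, fy, R in zip(f_x, f_y, ranges):
        if fx != fy:
            d *= abs(fx - fy) / R
    return d

print(objective_dominance([1.0, 2.0, 3.0],
                          [1.0, 4.0, 2.5],
                          [10.0, 10.0, 10.0]))  # 0.2 * 0.05 = 0.01
\end{verbatim}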
\subsection{Objective Function Maximization}
\label{A1}
The quantum circuit $QG$ executes operations in the ${\rm {\mathcal{H}}}$ Hilbert space. The dimension of the ${\rm {\mathcal{H}}}$ space is
\begin{equation} \label{eq64}
{\rm dim}\left({\rm {\mathcal{H}}}\right)=d^{n} ,
\end{equation}
where $d$ is the dimension of the quantum system ($d=2$ for a qubit system), while $n$ is the number of quantum states.
Using the formalism of \cite{ref16, ref17, ref18}, let us assume that the computational problem fed into the quantum circuit $QG$ is specified by $n$ bits and $m$ constraints. Then, the objective function is defined as
\begin{equation} \label{eq65}
C\left(z\right)=\sum _{\alpha =1}^{m} C_{\alpha } \left(z\right),
\end{equation}
where
\begin{equation} \label{eq66}
z=z_{{\rm 1}} \ldots z_{n}
\end{equation}
is an $n$-length bitstring, and $C_{\alpha } \left(z\right)=1$ if $z$ satisfies constraint $\alpha $, and $C_{\alpha } \left(z\right)=0$ otherwise \cite{ref16, ref17, ref18}.
Assuming a Hilbert space of $n$ qubits, ${\rm dim}\left({\rm {\mathcal{H}}}\right)={\rm 2}^{n} $, using the computational basis vectors $\left| z\right\rangle $, operator $C\left(z\right)$ in \eqref{eq65} is a diagonal operator in the computational basis \cite{ref16, ref17, ref18}. Then, at a particular angle $\gamma $, $\gamma \in \left[0, \pi \right]$, unitary $U\left(C,\gamma \right)$ is evaluated as
\begin{equation} \label{eq67}
U\left(C,\gamma \right)=e^{-i\gamma C} =\prod _{\alpha =1}^{m} e^{-i\gamma C_{\alpha } } ,
\end{equation}
such that all terms in the product are diagonal in the computational basis.
Then, for the $\mu $ dependent product of commuting operators, $\mu \in \left[0,\pi \right]$ \cite{ref16, ref17, ref18}, a unitary $U\left(B,\mu \right)$ is defined as
\begin{equation} \label{eq68}
U\left(B,\mu \right)=e^{-i\mu B} =\prod _{j=1}^{n} e^{-i\mu \sigma _{x}^{j} } ,
\end{equation}
where $B=\sum _{i} X_{i} $, $X_{i} =\sigma _{x}^{i} $, $\sigma _{x} $ is the Pauli $X$-operator, and $\mu \in \left[0,\pi \right]$ is a control parameter \cite{ref16, ref17, ref18, ref19}. For a qubit setting, the $\left| s\right\rangle $ initial state of the quantum computer is the uniform superposition over the computational basis states,
\begin{equation} \label{eq69}
\left| s\right\rangle =\frac{{\rm 1}}{\sqrt{{\rm 2}^{n} } } \sum _{z} \left| z\right\rangle .
\end{equation}
Let us assume that the $G_{QG}^{k,r} $ multilayer structure of the $QG$ quantum circuit contains $n$ quantum ports of several quantum gates, and an edge set
\begin{equation} \label{eq70}
{\rm {\mathcal{S}}}_{E} =\left\{\left\langle jk\right\rangle \right\}
\end{equation}
of size $m$. Then, the aim of the optimization is to identify a string $z$ \eqref{eq66} that maximizes the objective function
\begin{equation} \label{eq71}
C=\sum _{\left\langle jk\right\rangle } C_{\left\langle jk\right\rangle } ,
\end{equation}
where
\begin{equation} \label{eq72}
C_{\left\langle jk\right\rangle } =\frac{{\rm 1}}{{\rm 2}} \left({\rm 1}-z_{j} z_{k} \right),
\end{equation}
where $z_{j} =\pm 1$.
In $G_{QG}^{k,r} $ different unitary operations can be defined for the single quantum ports (qubits) and the connected quantum ports, as follows.
Let $U_{q_{s} } \left(\mu _{j} \right)$ be a unitary operator acting on a single quantum port (qubit) $q_{s} $ in $G_{QG}^{k,r} $, defined such that a parameter $\mu _{j} $ is associated with each quantum port, as
\begin{equation} \label{eq73}
U_{q_{s} } \left(\mu _{j} \right)=e^{-i\mu _{j} X_{j} } .
\end{equation}
For the collection
\begin{equation} \label{eq74}
\vec{\mu }=\left(\mu _{{\rm 1}} ,\ldots ,\mu _{n} \right),
\end{equation}
the resulting unitary is
\begin{equation} \label{eq75}
U_{q_{s} } \left(\vec{\mu }\right)=\prod _{j} U_{q_{s} } \left(\mu _{j} \right).
\end{equation}
The unitary $U_{q_{s} } \left(\vec{\mu }\right)$ therefore represents the application of the unitary operations at once on the quantum ports of the $QG$ quantum circuit.
Then, let unitary $U_{q_{jk} } \left(\gamma _{jk} \right)$ be defined for connected quantum ports $q_{jk} $ in $G_{QG}^{k,r} $, as
\begin{equation} \label{eq76}
U_{q_{jk} } \left(\gamma _{jk} \right)=e^{i\gamma _{jk} Z_{j} Z_{k} } ,
\end{equation}
where $Z_{i} =\sigma _{z}^{i} $, and $\sigma _{z} $ is the Pauli $Z$-operator. Since the eigenvalues of $X_{i} $ and $Z_{j} Z_{k} $ are $\pm 1$, the values \cite{ref16, ref17, ref18} of the parameters $\gamma $ and $\mu $ can be restricted to the range $\left[0,\pi \right]$.
Then, defining collection
\begin{equation} \label{eq77}
\vec{\gamma }=(\gamma _{jk}^{{\rm 1}} ,\ldots ,\gamma _{jk}^{h} ),
\end{equation}
where $h$ is the number of individual $\gamma _{jk} $ parameters, the unitary $U_{q_{jk} } \left(\vec{\gamma }\right)$ is yielded as
\begin{equation} \label{eq78}
U_{q_{jk} } \left(\vec{\gamma }\right)=\prod _{\left\langle jk\right\rangle \in G_{QG}^{k,r} } U_{q_{jk} } \left(\gamma _{jk} \right).
\end{equation}
Assuming that there exists a set ${\rm {\mathcal{S}}}_{\vec{\mu }}^{u} $ of $u$ collections of $\vec{\mu }$'s
\begin{equation} \label{eq79}
{\rm {\mathcal{S}}}_{\vec{\mu }}^{u} :\vec{\mu }^{\left({\rm 1}\right)} ,\ldots ,\vec{\mu }^{\left(u\right)}
\end{equation}
and a set ${\rm {\mathcal{S}}}_{\vec{\gamma }}^{u} $ of $u$ collections of $\vec{\gamma }$'s,
\begin{equation} \label{eq80}
{\rm {\mathcal{S}}}_{\vec{\gamma }}^{u} :\vec{\gamma }^{\left({\rm 1}\right)} ,\ldots ,\vec{\gamma }^{\left(u\right)} ,
\end{equation}
a $\left| \phi \right\rangle $ system state of the $QG$ quantum circuit is evaluated as
\begin{equation} \label{eq81}
\begin{split}
\left| \phi \right\rangle &=\left| \mathcal{S}_{{\vec{\mu }}}^{u},\mathcal{S}_{{\vec{\gamma }}}^{u},C \right\rangle \\
& ={{U}_{{{q}_{s}}}}\left( {{{\vec{\mu }}}^{\left( u \right)}} \right){{U}_{{{q}_{jk}}}}\left( {{{\vec{\gamma }}}^{\left( u \right)}} \right)\ldots {{U}_{{{q}_{s}}}}\left( {{{\vec{\mu }}}^{\left( 1 \right)}} \right){{U}_{{{q}_{jk}}}}\left( {{{\vec{\gamma }}}^{\left( 1 \right)}} \right)\left| s \right\rangle ,
\end{split}
\end{equation}
where $\left| s\right\rangle $ is given in \eqref{eq69}.
The maximization of objective function \eqref{eq65} in the multilayer $G_{QG}^{k,r} $ structure is therefore analogous to the problem of finding the parameters of sets ${\rm {\mathcal{S}}}_{\vec{\mu }}^{u} $ \eqref{eq79} and ${\rm {\mathcal{S}}}_{\vec{\gamma }}^{u} $ \eqref{eq80} in the system state $\left| \phi \right\rangle $ \eqref{eq81} of the $QG$ quantum circuit.
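To make this concrete, the following numpy sketch evaluates $\langle \phi |C|\phi \rangle $ for a toy three-qubit instance; the ring-graph edge set, circuit size, and parameter values are illustrative assumptions, and the sketch applies the global diagonal unitary of \eqref{eq67} (the per-edge unitaries of \eqref{eq76}-\eqref{eq78} agree with this form up to a global phase and a rescaling of $\gamma $).
\begin{verbatim}
import numpy as np
from itertools import product

# Toy 3-qubit instance: assumed ring-graph edge set of G_QG^{k,r}
n, edges = 3, [(0, 1), (1, 2), (0, 2)]

def C_diag():
    # Diagonal objective operator C of Eqs. (71)-(72)
    # in the computational basis.
    diag = np.zeros(2 ** n)
    for idx, bits in enumerate(product([1, -1], repeat=n)):
        diag[idx] = sum(0.5 * (1 - bits[j] * bits[k]) for j, k in edges)
    return diag

X = np.array([[0.0, 1.0], [1.0, 0.0]])

def U_single(mu):
    # Product of e^{-i mu X_j} over all ports, Eqs. (73)-(75).
    u = np.array([[1.0]])
    for _ in range(n):
        u = np.kron(u, np.cos(mu) * np.eye(2) - 1j * np.sin(mu) * X)
    return u

def expectation(gammas, mus):
    # <phi|C|phi> for the layered state of Eq. (81).
    C = C_diag()
    phi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)  # |s>, Eq. (69)
    for gamma, mu in zip(gammas, mus):
        phi = np.exp(-1j * gamma * C) * phi   # diagonal unitary, Eq. (67)
        phi = U_single(mu) @ phi              # mixing unitary, Eq. (75)
    return float(np.real(np.vdot(phi, C * phi)))

print(expectation([0.4], [0.3]))  # single-layer (u = 1) expectation value
\end{verbatim}
The parameters in ${\rm {\mathcal{S}}}_{\vec{\mu }}^{u} $ and ${\rm {\mathcal{S}}}_{\vec{\gamma }}^{u} $ would then be tuned, e.g., by the annealing procedure of the next subsection, to maximize this expectation value.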
\subsection{The QTAM Algorithm}
\label{sec3}
\begin{theorem} The QTAM algorithm utilizes annealing temperatures $T_{f} \left(t\right)$, $T_{g} \left(t\right)$ and $T_{c} \left(t\right)$ to evaluate the acceptance probabilities, where $T_{f} \left(t\right)$ is the annealing temperature for the objectives, $T_{g} \left(t\right)$ is the annealing temperature for the constraints and $T_{c} \left(t\right)$ is the annealing temperature for the distribution closeness.
\end{theorem}
\begin{proof}
The detailed description of the QTAM procedure is given in Algorithm 1.
\setcounter{algocf}{0}
\begin{algo}
\DontPrintSemicolon
\caption{\textit{Quantum Triple Annealing Minimization (QTAM)}}
\textbf{Step 1}. Define an archive ${\rm {\mathcal{A}}}$ with random solutions, and select a $\xi $ random solution from ${\rm {\mathcal{A}}}$.
\textbf{Step 2}. Define $\nu $ as $\nu =\Xi \left(\xi \right)$, where $\Xi \left(\cdot \right)$ is a moving operator. Determine the dominance relation between $\xi $ and $\nu $ via ${\rm {\mathcal{D}}}_{P} \left(\xi ,\nu \right)$, where function ${\rm {\mathcal{D}}}_{P} \left(\cdot \right)$ is the constrained Pareto dominance checking function.
\textbf{Step 3}. Evaluate acceptance probabilities based on ${\rm {\mathcal{D}}}_{P} \left(\xi ,\nu \right)$.
\textbf{(a)}: If ${\rm {\mathcal{D}}}_{P} \left(\xi ,\nu \right)=\nu \angle \xi $ ($\xi $ dominates $\nu $, where $\angle $ is the Pareto dominance operator), then $\xi =\nu $, with probability
\begin{equation} \label{eq23}
{\Pr }\left(\left. \xi =\nu \right|\nu \angle \xi \right)=\frac{{\rm 1}}{{\rm 1}+e^{\tilde{d}\left(f\right)T_{f} \left(t\right)} e^{\tilde{d}\left(g\right)T_{g} \left(t\right)} e^{\tilde{d}\left(c\right)T_{c} \left(t\right)} } ,
\end{equation}
where $\tilde{d}\left(f\right)$ is the average objective dominance, evaluated as
\begin{equation} \label{eq24}
\tilde{d}\left(f\right)=\frac{\left(\sum _{i=1}^{k} d_{i,\nu } \left(f\right)\right)+d_{\xi ,\nu } \left(f\right)}{k+{\rm 1}} ,
\end{equation}
where $d_{x,y} \left(f\right)$ is the objective dominance function as given in \eqref{eq22}, while $\tilde{d}\left(g\right)$ is the average constraint dominance,
\begin{equation} \label{eq25}
\tilde{d}\left(g\right)=\frac{-\left(\sum _{i=1}^{k} d_{\nu ,i} \left(g\right)\right)-d_{\nu ,\xi } \left(g\right)}{k+{\rm 1}} ,
\end{equation}
where $d_{x,y} \left(g\right)$ is the constraint dominance function as given in \eqref{eq18}, and $\tilde{d}\left(c\right)$ is the average distribution closeness dominance,
\begin{equation} \label{eq26}
\tilde{d}\left(c\right)=\frac{-\left(\sum _{i=1}^{k} d_{\nu ,i} \left(c\right)\right)-d_{\nu ,\xi } \left(c\right)}{k+{\rm 1}} ,
\end{equation}
where $d_{x,y} \left(c\right)$ is the distribution closeness dominance function as given in \eqref{eq14}, while $T_{f} \left(t\right)$ is the annealing temperature for the objectives
\begin{equation} \label{eq27}
T_{f} \left(t\right)=T_{f_{\max } } e^{-R\left(\frac{t}{k} \right)} ,
\end{equation}
where $R$ is the temperature decreasing rate, $T_{f_{\max } } $ is a maximum (initial) value for annealing the objectives factor, $T_{g} \left(t\right)$ is the annealing temperature for the constraints
\begin{equation} \label{eq28}
T_{g} \left(t\right)=T_{g_{\max } } e^{-R\left(\frac{t}{k} \right)} ,
\end{equation}
where $T_{g_{\max } } $ is a maximum (initial) value for annealing the constraint factor, and $T_{c} \left(t\right)$ is the annealing temperature for the distribution closeness
\begin{equation} \label{eq29}
T_{c} \left(t\right)=T_{c_{\max } } e^{-R\left(\frac{t}{k} \right)} ,
\end{equation}
where $T_{c_{\max } } $ is a maximum (initial) value for annealing the distribution closeness factor, respectively.
\end{algo}
\setcounter{algocf}{0}
\begin{algo}
\DontPrintSemicolon
\caption{\textit{Quantum Triple Annealing Minimization (QTAM), cont.}}
\textbf{(b)}: If ${\rm {\mathcal{D}}}_{P} \left(\xi ,\nu \right)=\left(\nu \neg \angle \xi \right)\wedge \left(\xi \neg \angle \nu \right)$ ($\nu $ and $\xi $ are non-dominating to each other) such that $\nu $ is dominated by $k\ge {\rm 1}$ points in ${\rm {\mathcal{A}}}$, ${\rm {\mathcal{D}}}_{P} \left(\xi ,\nu \right)=\nu \angle \left({\rm {\mathcal{A}}}\right)_{k} $, then $\xi =\nu $, with probability
\begin{equation} \label{eq30}
\begin{split} {{\Pr}\left(\left. \xi =\nu \right|\left(\nu \neg \angle \xi \right)\wedge \left(\xi \neg \angle \nu \right),\nu \angle \left({\rm {\mathcal{A}}}\right)_{k} \right)} \\ {=\frac{{\rm 1}}{{\rm 1}+e^{\tilde{d}\left(f\right)_{k} T_{f} \left(t\right)} e^{\tilde{d}\left(g\right)_{k} T_{g} \left(t\right)} e^{\tilde{d}\left(c\right)_{k} T_{c} \left(t\right)} } ,} \end{split}
\end{equation}
where
\begin{equation} \label{eq31}
\tilde{d}\left(f\right)_{k} =\tilde{d}\left(f\right)-d_{\xi ,\nu } \left(f\right),
\end{equation}
where $\tilde{d}\left(f\right)$ is as in \eqref{eq24},
\begin{equation} \label{eq32}
\tilde{d}\left(g\right)_{k} =\tilde{d}\left(g\right)+d_{\nu ,\xi } \left(g\right),
\end{equation}
where $\tilde{d}\left(g\right)$ is as in \eqref{eq25}, while
\begin{equation} \label{eq33}
\tilde{d}\left(c\right)_{k} =\tilde{d}\left(c\right)+d_{\nu ,\xi } \left(c\right),
\end{equation}
where $\tilde{d}\left(c\right)$ is as in \eqref{eq26}.
\textbf{(c)}: If ${\rm {\mathcal{D}}}_{P} \left(\xi ,\nu \right)=\left(\nu \neg \angle \xi \right)\wedge \left(\xi \neg \angle \nu \right)$, and ${\rm {\mathcal{D}}}_{P} \left(\nu ,{\rm {\mathcal{A}}}\right)=\left({\rm {\mathcal{A}}}\neg \angle \nu \right)$, thus $\nu $ is non-dominating with respect to ${\rm {\mathcal{A}}}$, then apply Sub-procedure 1.
\textbf{(d)}: If ${\rm {\mathcal{D}}}_{P} \left(\xi ,\nu \right)=\left(\nu \neg \angle \xi \right)\wedge \left(\xi \neg \angle \nu \right)$, and ${\rm {\mathcal{D}}}_{P} \left(\nu ,{\rm {\mathcal{A}}}\right)=\left(\left({\rm {\mathcal{A}}}\right)_{k} \angle \nu \right)$, thus $\nu $ dominates $k\ge {\rm 1}$ points in ${\rm {\mathcal{A}}}$, then apply Sub-procedure 2.
\textbf{(e)}: If ${\rm {\mathcal{D}}}_{P} \left(\xi ,\nu \right)=\xi \angle \nu $ such that ${\rm {\mathcal{D}}}_{P} \left(\nu ,{\rm {\mathcal{A}}}\right)=\left(\nu \angle \left({\rm {\mathcal{A}}}\right)_{k} \right)$, thus $\nu $ is dominated by $k\ge {\rm 1}$ points in ${\rm {\mathcal{A}}}$, then set $\xi =\nu $, with probability
\begin{equation} \label{eq34}
{\Pr }\left(\left. \xi =\nu \right|\xi \angle \nu ,\nu \angle \left({\rm {\mathcal{A}}}\right)_{k} \right)=\frac{{\rm 1}}{{\rm 1}+e^{-\tilde{d}\left(\min \right)} } ,
\end{equation}
where $\tilde{d}\left(\min \right)$ is evaluated as
\begin{equation} \label{eq35}
\tilde{d}\left(\min \right)=\mathop{\min }\limits_{\forall k} {\left(d_{\nu ,k} \left(f\right)-\left(d_{k,\nu } \left(g\right)+d_{k,\nu } \left(c\right)\right)\right)}.
\end{equation}
Using \eqref{eq35}, apply Sub-procedure 3. To evaluate the ${\rm {\mathcal{D}}}_{P} \left(\nu ,{\rm {\mathcal{A}}}\right)$ relations between $\nu $ and the elements of ${\rm {\mathcal{A}}}$ at ${\rm {\mathcal{D}}}_{P} \left(\xi ,\nu \right)=\xi \angle \nu $, apply Sub-procedure 4.
\textbf{Step 4}. Repeat Steps 2-3 while $i<N_{it} $, where $i$ is the actual iteration and $N_{it} $ is the total number of iterations.
\end{algo}
The related steps are detailed in Sub-procedures 1-4. In Step 3 of Sub-procedure 1, the best $A_{s} $ solutions refer to those solutions from ${\rm {\mathcal{A}}}$ that have the largest values of the crowding distance \cite{ref27}. In particular, in this step, the solutions are also sorted and compared by a crowded comparison operator to find the best solution.
\setcounter{algocf}{0}
\begin{subproc}
\DontPrintSemicolon
\caption{\textit{}}
\textbf{Step 1}. Set $\xi =\nu $, and add $\nu $ to ${\rm {\mathcal{A}}}$.
\textbf{Step 2}. If $\left|{\rm {\mathcal{A}}}\right|>A_{s} $, where $\left|{\rm {\mathcal{A}}}\right|$ is the number of elements in ${\rm {\mathcal{A}}}$, $A_{s} $ is the maximal archive size, then assign $\Delta _{cr} \left({\rm {\mathcal{A}}}\right)$ to ${\rm {\mathcal{A}}}$, where $\Delta _{cr} \left(\cdot \right)$ is the crowding distance.
\textbf{Step 3}. Select the best $A_{s} $ elements.
\end{subproc}
\begin{subproc}
\DontPrintSemicolon
\caption{\textit{}}
\textbf{Step 1}. Set $\xi =\nu $, and add $\nu $ to ${\rm {\mathcal{A}}}$.
\textbf{Step 2}. Remove all the $k$ dominated points from ${\rm {\mathcal{A}}}$.
\end{subproc}
\begin{subproc}
\DontPrintSemicolon
\caption{\textit{}}
\textbf{Step 1}. Set $\xi =k_{\tilde{d}\left(\min \right)} $, where $k_{\tilde{d}\left(\min \right)} $ is a point of ${\rm {\mathcal{A}}}$ that corresponds to $\tilde{d}\left(\min \right)$ (see \eqref{eq35}) with probability ${\Pr }\left(\left. \xi =\nu \right|\xi \angle \nu ,\nu \angle \left({\rm {\mathcal{A}}}\right)_{k} \right)$ (see \eqref{eq34}).
\textbf{Step 2}. Otherwise set $\xi =\nu $.
\end{subproc}
\begin{subproc}
\DontPrintSemicolon
\caption{\textit{}}
\textbf{Step 1}. If ${\rm {\mathcal{D}}}_{P} \left(\nu ,{\rm {\mathcal{A}}}\right)={\rm {\mathcal{A}}}\neg \angle \nu $, i.e., $\nu $ is non-dominating with respect to ${\rm {\mathcal{A}}}$, then set $\xi =\nu $, and add $\nu $ to ${\rm {\mathcal{A}}}$. If $\left|{\rm {\mathcal{A}}}\right|>A_{s} $, then assign $\Delta _{cr} \left({\rm {\mathcal{A}}}\right)$ to ${\rm {\mathcal{A}}}$, and select the best $A_{s} $ elements.
\textbf{Step 2}. If ${\rm {\mathcal{D}}}_{P} \left(\nu ,\left({\rm {\mathcal{A}}}\right)_{k} \right)=\left({\rm {\mathcal{A}}}\right)_{k} \angle \nu $, i.e., $\nu $ dominates $k$ points in ${\rm {\mathcal{A}}}$, then set $\xi =\nu $, and add $\nu $ to ${\rm {\mathcal{A}}}$. Remove the $k$ points from ${\rm {\mathcal{A}}}$.
\end{subproc}
\end{proof}
\subsubsection{Computational Complexity of QTAM}
Following the complexity analysis of \cite{ref24, ref25, ref26, ref27}, the computational complexity of QTAM is evaluated as
\begin{equation} \label{eq36}
{\rm \mathcal{O}}\left(N_{d} N_{it} \left|{\rm {\mathcal{P}}}\right|\left(N_{obj} +\log _{2} \left(\left|{\rm {\mathcal{P}}}\right|\right)\right)\right),
\end{equation}
where $N_{d} $ is the number of dominance measures, $N_{it} $ is the number of total iterations, $\left|{\rm {\mathcal{P}}}\right|$ is the population size, while $N_{obj} $ is the number of objectives.
\section{Wiring Optimization and Objective Function Maximization}
\label{sec4}
\subsection{Multilayer Quantum Circuit Grid}
An $i$-th quantum gate of $QG$ is denoted by $g_{i} $, and a $k$-th port of the quantum gate $g_{i} $ is referred to as $g_{i,k} $. Due to the hardware restrictions of gate-model quantum computer implementations \cite{ref16, ref17, ref18, ref19}, the quantum gates are applied in several rounds. Thus, a multilayer, $k$-dimensional (for simplicity we assume $k=2$), $n$-sized finite square-lattice grid $G_{QG}^{k,r} $ can be constructed for $QG$, where $r$ is the number of layers $l_{z} $, $z=1,\ldots ,r$. A quantum gate $g_{i} $ in the $z$-th layer $l_{z} $ is referred to as $g_{i}^{l_{z} } $, while a $k$-th port of $g_{i}^{l_{z} } $ is referred to as $g_{i,k}^{l_{z} } $.
\subsection{Method}
\begin{theorem}
There exists a method for the parallel optimization of the quantum wiring in the physical layout of the quantum circuit and for the maximization of an objective function $C_{\alpha } \left(z\right)$.
\end{theorem}
\begin{proof}
The aim of this procedure (Method 1) is to provide a simultaneous physical-layer optimization and Hamiltonian minimization via the minimization of the wiring lengths in the multilayer structure of $QG$ and the maximization of the objective function (see also \sref{A1}). Formally, the aim of Method 1 is the $F_{{\rm 2}} \wedge F_{{\rm 3}} $ simultaneous realization of the objective functions $F_{{\rm 2}} $ and $F_{{\rm 3}} $.
Using the $G_{QG}^{k,r} $ multilayer grid of the $QG$ quantum circuit determined via $F_{{\rm 1}} $ and $F_{{\rm 2}} $, the aim of $F_{{\rm 3}} $ is the maximization of the objective function $C\left(z\right)$, where $z=z_{{\rm 1}} \ldots z_{n} $ is an $n$-length input string and each $z_{i} $ is associated with an edge of $G_{QG}^{k,r} $ connecting two quantum ports. The objective function $C\left(z\right)$ associated with an arbitrary computational problem is defined as
\begin{equation} \label{eq38}
C\left(z\right)=\sum _{\left\langle i,j\right\rangle \in G_{QG}^{k,r} } C_{\left\langle i,j\right\rangle } \left(z\right),
\end{equation}
where $C_{\left\langle i,j\right\rangle } $ is the objective function for an edge of $G_{QG}^{k,r} $ that connects quantum ports $i$ and $j$.
The $C^{{\rm *}} \left(z\right)$ maximization of objective function \eqref{eq38} yields a system state $\Psi $ for the quantum computer \cite{ref16, ref17, ref18, ref19} as
\begin{equation} \label{eq39}
\Psi =\left\langle \left. \gamma ,\mu ,C^{{\rm *}} \left(z\right)\right|\right. C^{{\rm *}} \left(z\right)\left| \gamma ,\mu ,C^{{\rm *}} \left(z\right)\right\rangle ,
\end{equation}
where
\begin{equation} \label{eq40}
{\left| \gamma ,\mu ,C^{*} \left(z\right) \right\rangle} =U\left(B,\mu \right)U\left(C^{*} \left(z\right),\gamma \right){\left| s \right\rangle} ,
\end{equation}
while
\begin{equation} \label{eq43}
U\left(C^{{\rm *}} \left(z\right),\gamma \right)\left| z\right\rangle =e^{-i\gamma C^{{\rm *}} \left(z\right)} \left| z\right\rangle ,
\end{equation}
where $\gamma $ is a single parameter \cite{ref16, ref17, ref18, ref19}.
The objective function \eqref{eq38} can, without loss of generality, be rewritten as
\begin{equation} \label{eq44}
C\left(z\right)=\sum _{\alpha } C_{\alpha } \left(z\right),
\end{equation}
where each $C_{\alpha } $ acts on a subset of bits, such that $C_{\alpha } \in \left\{{\rm 0,1}\right\}$. Therefore, there exists a selection of the parameters of $\vec{\Phi }$ in \eqref{eq9} such that \eqref{eq44} attains a maximized value $C^{{\rm *}} \left(z\right)$, which yields the system state $\Upsilon $ as
\begin{equation} \label{eq45}
\Upsilon =\langle \vec{\Phi }|C^{{\rm *}} (z)|\vec{\Phi }\rangle .
\end{equation}
Therefore, the resulting Hamiltonian $H$ associated with the system state \eqref{eq45} is minimized via $F_{{\rm 2}} $ (see \eqref{eq57}) as
\begin{equation} \label{eq46}
E_{L} (\vec{\Phi })=\min {\langle \vec{\Phi }|H|\vec{\Phi }\rangle },
\end{equation}
since the physical-layer optimization minimizes the $\ell _{ij} $ physical distance between the quantum ports, the energy $E_{L} (\vec{\Phi })$ of the Hamiltonian associated with $\vec{\Phi }$ is reduced to a minimum.
The steps of the method $F_{{\rm 2}} \wedge F_{{\rm 3}} $ are given in Method 1. The method minimizes the number of quantum wires in the physical-layout of $QG$, and also achieves the desired system state $\Psi $ of \eqref{eq39}.
\setcounter{algocf}{0}
\begin{proced}
\DontPrintSemicolon
\caption{\textit{Quantum Wiring Optimization and Objective Function Maximization}}
\textbf{Step 1}. Construct the $G_{QG}^{k,r} $ multilayer grid of the $QG$ quantum circuit, with $r$ layers $l_{{\rm 1}} ,\ldots ,l_{r} $. Determine the
\[C\left(z\right)=\sum _{\left\langle i,j\right\rangle \in G_{QG}^{k,r} } C_{\left\langle i,j\right\rangle } \left(z\right)\]
objective function, where each $C_{\left\langle i,j\right\rangle } $ refers to the objective function for an edge in $G_{QG}^{k,r} $ connecting quantum ports $i$ and $j$, defined as
\[C_{\left\langle i,j\right\rangle } \left(z\right)=\frac{{\rm 1}}{{\rm 2}} \left({\rm 1}-z_{i} z_{j} \right),\]
where $z_{i} =\pm 1$.
\textbf{Step 2}. Find the optimal assignment of separation point $\Delta $ in $G_{QG}^{k,r} =\left(V,E,f\right)$ at a physical-layer blockage $\beta $ via a minimum-cost tree in $G_{QG}^{k,r} $ containing at least one port from each quantum gate $g_{i} $, $i=1,\ldots ,\left|V\right|$. For all pairs of quantum gates $g_{i} $, $g_{j} $, minimize the $f_{p,c} $ path cost (${\rm L1}$ distance) between a source quantum gate $g_{i} $ and destination quantum gate $g_{j} $ and then maximize the overlapped ${\rm L1}$ distance between $g_{i} $ and $\Delta $.
\textbf{Step 3}. For the $s$ found assignments of $\Delta $ in Step 2, evaluate the objective functions $C_{\alpha _{k} } $, $k=1,\ldots ,s$, where $C_{\alpha _{0} } $ is the initial value. Let the two paths ${\rm {\mathcal{P}}}_{{\rm 1}} $ and ${\rm {\mathcal{P}}}_{{\rm 2}} $ between quantum ports $g_{i{\rm ,1}} $, $g_{j{\rm ,1}} $, $g_{j{\rm ,2}} $ be given as ${\rm {\mathcal{P}}}_{{\rm 1}} :g_{i{\rm ,1}} \to \Delta \to g_{j{\rm ,1}} $, and ${\rm {\mathcal{P}}}_{{\rm 2}} :g_{i{\rm ,1}} \to \Delta \to g_{j{\rm ,2}} $. Evaluate objective functions $C_{\left\langle g_{i{\rm ,1}} ,\Delta \right\rangle } \left(z\right)$, $C_{\left\langle \Delta ,g_{j{\rm ,1}} \right\rangle } \left(z\right)$ and $C_{\left\langle \Delta ,g_{j{\rm ,2}} \right\rangle } \left(z\right)$.
\textbf{Step 4}. Select that $k$-th solution, for which
\[{{C}_{{{\alpha }_{k}}}}\left( z \right)=C_{\left\langle {{g}_{i,1}},\Delta \right\rangle }^{\left( k \right)}\left( z \right)+C_{\left\langle \Delta ,{{g}_{j,1}} \right\rangle }^{\left( k \right)}\left( z \right)+C_{\left\langle \Delta ,{{g}_{j,2}} \right\rangle }^{\left( k \right)}\left( z \right)\]
is maximal, where $C_{\left\langle i,j\right\rangle }^{\left(k\right)} $ is the objective function associated to a $k$-th solution between quantum ports $g_{i{\rm ,1}} $, $g_{j{\rm ,1}} $, and $g_{i{\rm ,1}} $, $g_{j{\rm ,2}} $ in $G_{QG}^{k,r} $. The resulting $C_{\alpha }^{{\rm *}} \left(z\right)$ for ${\rm {\mathcal{P}}}_{{\rm 1}} $ and ${\rm {\mathcal{P}}}_{{\rm 2}} $ is as
\[C_{\alpha }^{*} \left(z\right)=\mathop{\mathop{\max }}\limits_{k} {\kern 1pt} \left(C_{\alpha _{k} } \left(z\right)\right).\]
\textbf{Step 5}. Repeat steps 2-4 for all paths of $G_{QG}^{k,r} $.
\end{proced}
\end{proof}
The steps of Method 1 are illustrated in \fref{fig2}, using the $G_{QG}^{k,r} $ multilayer topology of the $QG$ quantum gate structure, $l_{i} $ refers to the $i$-th layer of $G_{QG}^{k,r} $.
\begin{center}
\begin{figure}[h!]
\begin{center}
\includegraphics[angle = 0,width=0.8\linewidth]{fig2.pdf}
\caption{The aim is to find the optimal wiring in $G_{QG}^{k,r} $ for the $QG$ quantum circuit (minimal path length with maximal overlapped path between $g_{i{\rm ,1}} $ and $g_{j{\rm ,1}} $,$g_{j{\rm ,2}} $) such that the $C_{\alpha } $ objective function associated with the paths ${\rm {\mathcal{P}}}_{{\rm 1}} :g_{i{\rm ,1}} \to g_{j{\rm ,1}} $, and ${\rm {\mathcal{P}}}_{{\rm 2}} :g_{i{\rm ,1}} \to g_{j{\rm ,2}} $ is maximal. (a): The initial objective function value is $C_{\alpha _{0} } $. A physical-layer blockage $\beta $ in the quantum circuit does not allow the use of paths ${\rm {\mathcal{P}}}_{{\rm 1}} $ and ${\rm {\mathcal{P}}}_{{\rm 2}} $. (b): The wire length is optimized via the selection point $\Delta $. The path cost is $f_{p,c} =11+3f_{l} $, where $f_{l} $ is the cost function of the path between the layers $l_{{\rm 1}} $ and $l_{{\rm 2}} $ (depicted by the blue vertical line); the path overlap from $g_{i{\rm ,1}} $ to $\Delta $ is $\tau _{o} =5+f_{l} $. The objective function value is $C_{\alpha _{{\rm 1}} } $. (c): The path cost is $f_{p,c} =10$, and the path overlap from $g_{i{\rm ,1}} $ to $\Delta $ is $\tau _{o} =4$. The objective function value is $C_{\alpha _{{\rm 2}} } $. (d): The path cost is $f_{p,c} =12$, and the path overlap from $g_{i{\rm ,1}} $ to $\Delta $ is $\tau _{o} =6$. The objective function value is $C_{\alpha _{{\rm 3}} } $. The selected connection topology from (b), (c), and (d) is the one that yields the maximized objective function $C_{\alpha }^{{\rm *}} $.}
\label{fig2}
\end{center}
\end{figure}
\end{center}
\subsection{Quantum Circuit Minimization}
For objective function $F_{{\rm 1}} $, the area minimization of the $QG$ quantum circuit requires the following constraints. Let $S_{v} \left(P_{i} \right)$ be the vertical symmetry axis of a proximity group $P_{i} $ \cite{ref24, ref25, ref26} on $QG$, and let $x_{S_{v} \left(P_{i} \right)} $ refer to the $x$-coordinate of $S_{v} \left(P_{i} \right)$. Then, by some symmetry considerations for $x_{S_{v} \left(P_{i} \right)} $,
\begin{equation} \label{eq47}
x_{S_{v} \left(P_{i} \right)} =\frac{{\rm 1}}{{\rm 2}} \left(x_{i}^{{\rm 1}} +x_{i}^{{\rm 2}} +\kappa _{i} \right),
\end{equation}
where $x_{i} $ is the bottom-left $x$ coordinate of a cell $\sigma _{i} $, $\kappa _{i} $ is the width of $\sigma _{i} $, and
\begin{equation} \label{eq48}
y_{i}^{{\rm 1}} +\frac{h_{i} }{{\rm 2}} =y_{i}^{{\rm 2}} +\frac{h_{i} }{{\rm 2}} ,
\end{equation}
where $y_{i} $ is the bottom-left $y$ coordinate of a cell $\sigma _{i} $, $h_{i} $ is the height of $\sigma _{i} $.
Let $\left(\sigma ^{{\rm 1}} ,\sigma ^{{\rm 2}} \right)$ be a symmetry pair \cite{ref24, ref25, ref26} that refers to two matched cells placed symmetrically in relation to $S_{v} \left(P_{i} \right)$, with bottom-left coordinates $\left(\sigma ^{{\rm 1}} ,\sigma ^{{\rm 2}} \right)=\left(\left(x_{i}^{{\rm 1}} ,y_{i}^{{\rm 1}} \right),\left(x_{i}^{{\rm 2}} ,y_{i}^{{\rm 2}} \right)\right)$. Then, $x_{S_{v} \left(P_{i} \right)} $ can be rewritten as
\begin{equation} \label{eq49}
x_{S_{v} \left(P_{i} \right)} =x_{i}^{{\rm 1}} -x_{i} =x_{i}^{{\rm 2}} +x_{i} +\kappa _{i} ,
\end{equation}
with the relation $y_{i}^{{\rm 1}} =y_{i}^{{\rm 2}} =y_{i} $.
Let $\sigma ^{S} =\left(x_{i}^{S} ,y_{i}^{S} \right)$ be a cell which is placed centered \cite{ref24, ref25, ref26} with respect to $S_{v} \left(P_{i} \right)$. Then, $x_{S_{v} \left(P_{i} \right)} $ can be evaluated as
\begin{equation} \label{eq50}
x_{S_{v} \left(P_{i} \right)} =x_{i}^{S} +\frac{\kappa _{i} }{{\rm 2}} ,
\end{equation}
along with $y_{i}^{S} =y_{i} $. Note that it is also possible that for some cells in $QG$ there are no symmetry requirements; these cells are denoted by $\sigma ^{0} $.
As can be concluded, using objective function $F_{{\rm 1}} $ for the physical-layer minimization of $QG$, a $d$-dimensional constraint vector ${\rm \mathbf{x}}_{F_{{\rm 1}} }^{d} $ can be formulated with the symmetry considerations as follows:
\begin{equation} \label{eq51}
{\rm \mathbf{x}}_{F_{{\rm 1}} }^{d} =\sum _{N_{\left(\sigma ^{{\rm 1}} ,\sigma ^{{\rm 2}} \right)} } \left(x_{i} ,y_{i} ,r_{i} \right)+\sum _{N_{\sigma ^{S} } } \left(y_{i} ,r_{i} \right)+\sum _{N_{\sigma ^{0} } } \left(x_{i} ,y_{i} ,r_{i} \right),
\end{equation}
where $N_{\left(\sigma ^{{\rm 1}} ,\sigma ^{{\rm 2}} \right)} $ is the number of $\left(\sigma ^{{\rm 1}} ,\sigma ^{{\rm 2}} \right)$ symmetry pairs, $N_{\sigma ^{S} } $ is the number of $\sigma ^{S} $-type cells, $N_{\sigma ^{0} } $ is the number of $\sigma ^{0} $-type cells, and $r_{i} $ is the rotation angle of an $i$-th cell $\sigma _{i} $.
\subsubsection{Quantum Wire Area Minimization}
Objective function $F_{{\rm 2}} $ provides a minimization of the total quantum wire length of the $QG$ circuit. To achieve this, we define a procedure that yields the minimized total quantum wire area $w_{QG} $ of $QG$ as given by \eqref{eq7}. Let $\delta _{ij} $ be the effective width of the quantum wire $ij$ in the $QG$ circuit, defined as
\begin{equation} \label{eq52}
\delta _{ij} =\frac{\psi _{ij} }{J_{\max } \left(T_{ref} \right)h_{nom} } ,
\end{equation}
where $\psi _{ij} $ is the (root mean square) condensate wave function amplitude, $J_{\max } \left(T_{ref} \right)$ is the maximum allowed current density at a given reference temperature $T_{ref} $, and $h_{nom} $ is the nominal layer height. Since drops in the condensate wave function phase $\varphi _{ij} $ can also be present in the $QG$ circuit environment, the $\delta '_{ij} $ effective width of the quantum wire $ij$ can be rewritten as
\begin{equation} \label{eq53}
\delta '_{ij} =\frac{\psi _{ij} \ell _{eff} r_{0} \left(T_{ref} \right)}{\chi _{\varphi _{ij} } } ,
\end{equation}
where $\chi _{\varphi _{ij} } $ is a maximally allowed value for the phase drops, $\ell _{eff} $ is the effective length of the quantum wire, $\ell _{eff} \le \left(\chi _{\varphi _{ij} } \delta _{ij} \right)/\psi _{ij} r_{0} \left(T_{ref} \right),$ while $r_{0} \left(T_{ref} \right)$ is a conductor sheet resistance \cite{ref1, ref2, ref3, ref4, ref5}.
In a $G_{QG}^{k,r} $ multilayer topological representation of $QG$, the $\ell _{ij} $ distance between the quantum ports is given as
\begin{equation} \label{eq54}
\ell _{ij} =\left|x_{i} -x_{j} \right|+\left|y_{i} -y_{j} \right|+\left|z_{i} -z_{j} \right|f_{l} ,
\end{equation}
where $f_{l} $ is a cost function between the layers of the multilayer structure of $QG$.
During the evaluation, let $w_{QG} \left(k\right)$ be the total quantum wire area of a particular net $k$ of the $QG$ circuit,
\begin{equation} \label{eq55}
w_{QG} \left(k\right)=\sum _{i=1}^{p} \sum _{j=1}^{q} \ell _{ij} \cdot \delta _{ij} \left(\psi _{ij} \right),
\end{equation}
where $p$ quantum ports are considered as sources of condensate wave function amplitudes, while $q$ quantum ports of $QG$ are sinks; thus, \eqref{eq7} can be rewritten as
\begin{equation} \label{eq56}
F_{{\rm 2}} :w_{QG} =\min {\sum _{k=1}^{h} w_{QG} \left(k\right)}.
\end{equation}
Since $\psi _{ij} $ is proportional to $\delta _{ij} \left(\psi _{ij} \right)$, \eqref{eq56} can be simplified as
\begin{equation} \label{eq57}
F_{{\rm 2}} :w'_{QG} =\min {\sum _{k=1}^{h} w'_{QG} \left(k\right)},
\end{equation}
where
\begin{equation} \label{eq58}
w'_{QG} \left(k\right)=\sum _{i=1}^{p} \sum _{j=1}^{q} \ell _{ij} \cdot \psi _{ij} ,
\end{equation}
where $\ell _{ij} $ is given in \eqref{eq54}.
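A minimal sketch of the simplified wire-area evaluation of \eqref{eq54} and \eqref{eq58} is given below; the port coordinates, the amplitudes, and the inter-layer cost $f_{l} $ are illustrative assumptions.
\begin{verbatim}
def wire_length(p_i, p_j, f_l=2.0):
    # Multilayer L1 distance of Eq. (54) between ports p = (x, y, z),
    # with an assumed inter-layer cost f_l.
    return (abs(p_i[0] - p_j[0]) + abs(p_i[1] - p_j[1])
            + abs(p_i[2] - p_j[2]) * f_l)

def net_wire_area(sources, sinks, psi):
    # Simplified per-net wire area w'_QG(k) of Eq. (58): the sum of
    # length * amplitude over all source-sink port pairs of the net.
    return sum(wire_length(s, t) * psi[(i, j)]
               for i, s in enumerate(sources)
               for j, t in enumerate(sinks))

sources = [(0, 0, 0)]                      # hypothetical source port
sinks = [(3, 1, 1), (2, 2, 0)]             # hypothetical sink ports
psi = {(0, 0): 0.8, (0, 1): 0.5}           # assumed amplitudes psi_ij
print(net_wire_area(sources, sinks, psi))  # 6*0.8 + 4*0.5 = 6.8
\end{verbatim}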
In all quantum ports of a particular net $k$ of $QG$, the source quantum ports are denoted by a positive sign \cite{ref24, ref25, ref26} in the condensate wave function amplitude $\psi _{ij} $ assigned to the quantum wire $ij$ between quantum ports $i$ and $j$, while the sink ports are denoted by a negative sign in the condensate wave function amplitude, $-\psi _{ij} $, with respect to a quantum wire $ij$ between quantum ports $i$ and $j$.
Thus, the aim of $w_{QG} \left(k\right)$ in \eqref{eq55} is to determine a set of port-to-port connections in the $QG$ quantum circuit such that the number of long connections in a particular net $k$ of $QG$ is reduced as much as possible. The result in \eqref{eq56} therefore extends these requirements to all nets of $QG$.
\paragraph{Wave Function Amplitudes}
With respect to a particular quantum wire $ij$ between quantum ports $i$ and $j$ of $QG$, let $\psi _{i\to j} $ refer to the condensate wave function amplitude in direction $i\to j$, and let $\psi _{j\to i} $ refer to the condensate wave function amplitude in direction $j\to i$ in the quantum circuit. Then, let $\phi _{ij} $ be defined for the condensate wave function amplitudes of quantum wire $ij$ as
\begin{equation} \label{eq59}
\phi _{ij} =\min {\left(\left|\psi _{i\to j} \right|,\left|\psi _{j\to i} \right|\right)},
\end{equation}
with a residual condensate wave function amplitude
\begin{equation} \label{eq60}
\xi _{i\to j} =\phi _{ij} -\psi _{i\to j} ,
\end{equation}
where $\psi _{i\to j} $ is an actual amplitude in the forward direction $i\to j$. Thus, the maximum amount of condensate wave function amplitude injectable into quantum wire $ij$ in the forward direction $i\to j$ in the presence of $\psi _{i\to j} $ is $\xi _{i\to j} $ (see \eqref{eq60}). The following relation holds for the backward direction, $j\to i$, for the decrement of a current wave function amplitude $\psi _{i\to j} $:
\begin{equation} \label{eq61}
\bar{\xi }_{j\to i} =-\psi _{i\to j} ,
\end{equation}
with residual quantum wire length
\begin{equation} \label{eq62}
\Gamma _{j\to i} =-\delta _{ij} ,
\end{equation}
where $\delta _{ij} $ is given in \eqref{eq52}.
The ${\rm {\mathcal{N}}}_{R} $ residual network of $QG$ is therefore a network of the quantum circuit with forward edges for the increment of the wave function amplitude $\psi $, and backward edges for the decrement of $\psi $. To avoid the problem of negative wire lengths, the Bellman-Ford algorithm \cite{ref24, ref25, ref26} can be utilized in an iterative manner on the residual directed graph of the $QG$ topology.
To find a path between all pairs of quantum gates in the directed graph of the $QG$ quantum circuit, the directed graph has to be strongly connected. The strong-connectivity of the $h$ nets with the parallel minimization of the connections of the $QG$ topology can be achieved by a minimum spanning tree method such as Kruskal's algorithm \cite{ref24, ref25, ref26}.
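As an illustration of the residual-network step, a minimal Bellman-Ford negative-cycle check is sketched below; the port count and the signed wire lengths are hypothetical inputs.
\begin{verbatim}
def has_negative_cycle(num_ports, wires):
    # Bellman-Ford negative-cycle check on the residual network N_R of
    # QG; wires is a list of (i, j, length) edges, where lengths may be
    # negative for backward residual edges (cf. Eq. (62)).
    dist = [0.0] * num_ports            # virtual source at distance 0
    for _ in range(num_ports - 1):
        for i, j, w in wires:
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
    # one more relaxation round: any improvement implies a negative cycle
    return any(dist[i] + w < dist[j] for i, j, w in wires)

print(has_negative_cycle(3, [(0, 1, 1.0), (1, 2, -2.0), (2, 0, 0.5)]))  # True
print(has_negative_cycle(3, [(0, 1, 1.0), (1, 2, -2.0), (2, 0, 2.0)]))  # False
\end{verbatim}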
\begin{lemma}
The objective function $F_{{\rm 2}} $ is feasible in a multilayer $QG$ quantum circuit structure.
\end{lemma}
\begin{proof}
The procedure defined for the realization of objective function $F_{{\rm 2}} $ on a $QG$ quantum circuit is summarized in Method 2. The proof assumes a superconducting architecture.
\setcounter{algocf}{1}
\begin{proced}
\DontPrintSemicolon
\caption{\textit{Implementation of Objective Function $F_2$}}
\textbf{Step 1}. Assign the $\psi _{ij} $ condensate wave function amplitudes for all $ij$ quantum wires of $QG$ via Sub-method 2.1.
\textbf{Step 2}. Determine the residual network of $QG$ via Sub-method 2.2.
\textbf{Step 3}. Achieve the strong connectivity of $QG$ via Sub-method 2.3.
\textbf{Step 4}. Output the $QG$ quantum circuit topology such that $w_{QG} $ \eqref{eq7} is minimized.
\end{proced}
The sub-procedures of Method 2 are detailed in Sub-methods 2.1, 2.2 and 2.3.
\setcounter{algocf}{0}
\begin{subproc2}
\DontPrintSemicolon
\caption{}
\textbf{Step 1}. Create a ${\rm M}_{QG} $ multilayer topological map of the network ${\rm {\mathcal{N}}}$ of $QG$ with the quantum gates and ports.
\textbf{Step 2}. From ${\rm M}_{QG} $ determine the $L_{c} $ connection list of ${\rm {\mathcal{N}}}$ in $QG$.
\textbf{Step 3}. Determine $\delta _{ij} $, the effective width of the quantum wire $ij$, via \eqref{eq52}, for $\forall ij$ wires.
\textbf{Step 4}. Determine $\phi _{ij} $ via \eqref{eq59} for all quantum wires $ij$ of the $QG$ circuit.
\textbf{Step 5}. For a $k$-th net of $QG$, assign the wave function amplitude values $\psi _{ij} $ to $\forall ij$ quantum wires such that $w_{QG} \left(k\right)$ in \eqref{eq55} is minimized, with quantum wire length $\ell _{ij} $ \eqref{eq54}.
\end{subproc2}
\begin{subproc2}
\DontPrintSemicolon
\caption{}
\textbf{Step 1}. Create a $\bar{{\rm M}}_{QG} $ multilayer topological map of the ${\rm {\mathcal{N}}}_{R} $ residual network of $QG$.
\textbf{Step 2}. From $\bar{{\rm M}}_{QG} $ determine the $\bar{L}_{c} $ connection list of the ${\rm {\mathcal{N}}}_{R} $ residual network of $QG$.
\textbf{Step 3}. For $\forall i\to j$ forward edges of $\bar{{\rm M}}_{QG} $ of ${\rm {\mathcal{N}}}_{R} $, compute the $\xi _{i\to j} $ residual condensate wave function amplitude \eqref{eq60}, and for $\forall j\to i$ backward edges of $\bar{{\rm M}}_{QG} $, compute the quantity $\bar{\xi }_{j\to i} $ via \eqref{eq61}.
\textbf{Step 4}. Compute the residual negative quantum wire length $\Gamma _{j\to i} $ via \eqref{eq62}, using $\delta _{ij} $ from \eqref{eq52}.
\textbf{Step 5}. Determine the $\bar{C}$ negative cycles in the $\bar{{\rm M}}_{QG} $ map of the ${\rm {\mathcal{N}}}_{R} $ residual network of $QG$ via the ${\rm {\mathcal{A}}}_{BF} $ Bellman-Ford algorithm \cite{ref24, ref25, ref26}.
\textbf{Step 6}. If $N_{\bar{C}} >0$, where $N_{\bar{C}} $ is the number of $\bar{C}$ negative cycles in $\bar{{\rm M}}_{QG} $, then update the $\psi _{ij} $ wave function amplitudes of the quantum wires $ij$ to cancel out the negative cycles.
\textbf{Step 7}. Re-calculate the values of \eqref{eq60}, \eqref{eq61} and \eqref{eq62} for the residual edges of ${\rm {\mathcal{N}}}_{R} $.
\textbf{Step 8}. Repeat Steps 5-7 while $N_{\bar{C}} >0$.
\end{subproc2}
\begin{subproc2}
\DontPrintSemicolon
\caption{}
\textbf{Step 1}. For an $i$-th $sn_{k,i} $ subnet of a net $k$ of the $QG$ quantum circuit, set the quantum wire width to zero, $\delta _{ij} =0$, between quantum ports $i$ and $j$, for $\forall i$.
\textbf{Step 2}. Determine the $L{\rm 2}$ (Euclidean) distance between the quantum ports of the subnets $sn_{k,i} $ (from each quantum port of a subnet to each other quantum port of all remaining subnets \cite{ref24}).
\textbf{Step 3}. Weight the $\delta _{ij} >0$ non-zero quantum wire lengths by the calculated $L{\rm 2}$ distance between the connections of the subnets of the $QG$ quantum circuit \cite{ref1, ref2, ref3, ref4, ref5}, \cite{ref24, ref25, ref26}.
\textbf{Step 4}. Determine the minimum spanning tree ${\rm {\mathcal{T}}}_{QG} $ via the ${\rm {\mathcal{A}}}_{K} $ Kruskal algorithm \cite{ref24}.
\textbf{Step 5}. Determine the set $S_{{\rm {\mathcal{T}}}_{QG} } $ of quantum wires with $\delta _{ij} >0$ from ${\rm {\mathcal{T}}}_{QG} $. Calculate $\delta _{S_{{\rm {\mathcal{T}}}_{QG} } } =\max \left(\delta _{ij} ,\delta '_{ij} ,\delta _{0} \right)$, where $\delta _{0} $ is the minimum width that can be manufactured, while $\delta _{ij} $ and $\delta '_{ij} $ are given in \eqref{eq52} and \eqref{eq53}.
\textbf{Step 6}. Add the quantum wires of $S_{{\rm {\mathcal{T}}}_{QG} } $ to the ${\rm M}_{QG} $ multilayer topological map of the network ${\rm {\mathcal{N}}}$ of $QG$.
\textbf{Step 7}. Repeat Steps 4-6 for $\forall k$ nets of the $QG$ quantum circuit, until ${\rm M}_{QG} $ is strongly connected.
\end{subproc2}
This concludes the proof.
\end{proof}
\subsubsection{Processing in the Multilayer Structure}
The $G_{QG}^{k,r} $ grid consists of all $g_{i} $ quantum gates of $QG$ in a multilayer structure, such that the $g_{i,k}^{l_{z} } $ appropriate ports of the quantum gates are associated via a directed graph ${\rm {\rm G}}=\left(V,E,f_{c} \right)$, where $V$ is the set of ports, $g_{i,k}^{l_{z} } \subseteq V$, $E$ is the set of edges, and $f_{c} $ is a cost function, to achieve the gate-to-gate connectivity.
As a hardware restriction, we use a constraint on the quantum gate structure: it is assumed in the model that a given quantum system cannot participate in more than one quantum gate at a particular time.
The distance in the rectilinear grid $G_{QG}^{k,r} $ of $QG$ is measured by the $d_{{\rm L1}} \left(\cdot \right)$ ${\rm L1}$-distance function. Between two network ports $x,y\in V$, $x=\left(j,k\right)$, $y=\left(m,o\right)$, $d_{{\rm L1}} \left(\cdot \right)$ is given as
\begin{equation} \label{eq63}
d_{{\rm L1}} \left(x,y\right)=d_{{\rm L1}} \left(\left(j,k\right),\left(m,o\right)\right)=\left|m-j\right|+\left|o-k\right|.
\end{equation}
The quantum port selection in the $G_{QG}^{k,r} $ multilayer structure of $QG$, with $r$ layers $l_{z} $, $z=1,\ldots ,r$, and $k=2$ dimensions in each layer, is illustrated in \fref{figA1}.
\begin{center}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[angle = 0,width=1\linewidth]{figA1.pdf}
\caption{The method of port allocation of the quantum gates in the $G_{QG}^{k,r} $ multilayer structure, with $r$ layers $l_{z} $, $z=1,\ldots ,r$, and $k=2$ dimensions in each layer. The aim of the multiport selection is to find the shortest path between ports of quantum gates $g_{i} $ (blue rectangle) and $g_{j} $ (green rectangle) in the $G_{QG}^{{\rm 2,}r} $ multilayer structure. (a): The quantum ports to be connected in $QG$ are port $g_{i{\rm ,1}} $ of quantum gate $g_{i} $ in layer $l_{{\rm 1}} $, and ports $g_{j{\rm ,1}} $ and $g_{j{\rm ,2}} $ of quantum gate $g_{j} $ in layer $l_{{\rm 3}} $. (b): Due to a hardware restriction on quantum computers, the quantum gates are applied in several rounds in the different layers of the quantum circuit $QG$. Quantum gate $g_{j} $ is applied in two rounds in two different layers, depicted as $g_{j}^{l_{{\rm 3}} } $ and $g_{j}^{l_{{\rm 2}} } $. For the layer-$l_{{\rm 3}} $ quantum gate $g_{j}^{l_{{\rm 3}} } $, the active port is $g_{j{\rm ,1}}^{l_{{\rm 3}} } $ (red), while the other port is not accessible (gray) in $l_{{\rm 3}} $. Due to a physical-layer blockage $\beta $ in the quantum circuit of the layer above, $l_{{\rm 2}} $, the path cost between ports $g_{i{\rm ,1}} $ and $g_{j{\rm ,1}}^{l_{{\rm 3}} } $ cannot be minimized. The target port $g_{j{\rm ,1}}^{l_{{\rm 3}} } $ is therefore referred to as a blocked port (depicted by pink), and a new port of $g_{j}^{l_{{\rm 3}} } $ is selected for $g_{j{\rm ,1}}^{l_{{\rm 3}} } $ (new port depicted by red). (c): For the layer-$l_{{\rm 2}} $ quantum gate $g_{j}^{l_{{\rm 2}} } $, the active port is $g_{j{\rm ,2}}^{l_{{\rm 2}} } $ (red), while the remaining port is not available (gray) in $l_{{\rm 2}} $. The white dots (vertices) represent auxiliary ports in the grid structure of the quantum circuit. In $G_{QG}^{{\rm 2,}r} $, each vertex can have a maximum of 8 neighbors; thus, for a given port $g_{j,k} $ of a quantum gate $g_{j} $, ${\rm deg}\left(g_{j,k} \right)\le {\rm 8}$.}
\label{figA1}
\end{center}
\end{figure*}
\end{center}
\paragraph{Algorithm}
\begin{theorem}
The Quantum Shortest Path Algorithm finds shortest paths in a multilayer $QG$ quantum circuit structure.
\end{theorem}
\begin{proof}
The steps of the shortest path determination between the ports of the quantum gates in a multilayer structure are included in Algorithm 2.
\setcounter{algocf}{1}
\begin{algo}
\DontPrintSemicolon
\caption{\textit{Quantum Shortest Path Algorithm (QSPA)}}
\textbf{Step 1}. Create the $G_{QG}^{k,r} $ multilayer structure of $QG$, with $r$ layers $l_{z} $, $z=1,\ldots ,r$, and $k$ dimensions in each layer. From $G_{QG}^{k,r} $, generate a list $L_{{\rm {\mathcal{P}}}\in {\rm {\rm Q}{\rm G}}} $ of the paths from each start quantum gate port to each end quantum gate port in the $G_{QG}^{k,r} $ structure of the $QG$ quantum circuit.
\textbf{Step 2}. Due to the hardware restrictions of quantum computers, add the decomposed quantum gate port information and its layer information to $L_{{\rm {\mathcal{P}}}\in {\rm {\rm Q}{\rm G}}} $. Add the $\beta $ physical-layer blockage information to $L_{{\rm {\mathcal{P}}}\in {\rm {\rm Q}{\rm G}}} $.
\textbf{Step 3}. For a quantum port pair $\left(x,y\right)\in G_{QG}^{k,r} $ define the $f_{c} \left(x,y\right)$ cost function, as
\[f_{c} \left(x,y\right)=\gamma \left(x,y\right)+d_{{\rm L1}} \left(x,y\right),\]
where $\gamma \left(x,y\right)$ is the actual path length from $x$ to $y$ in the multilayer grid structure $G_{QG}^{k,r} $ of $QG$, while $d_{{\rm L1}} \left(x,y\right)$ is the ${\rm L1}$ distance in the grid structure as given by \eqref{eq63}.
\textbf{Step 4}. Using $L_{{\rm {\mathcal{P}}}\in {\rm {\rm Q}{\rm G}}} $ and cost function $f_{c} \left(x,y\right)$, apply the $A^{{\rm *}} $ parallel search \cite{ref24, ref25, ref26} to determine the lowest cost path ${\rm {\mathcal{P}}}^{{\rm *}} \left(x,y\right)$.
\end{algo}
\end{proof}
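To make Step 4 concrete, we include a minimal single-layer sketch in Python. The unit edge costs, the fixed grid size, and the 4-connected moves (a subset of the up-to-8 neighbors allowed in $G_{QG}^{2,r}$) are simplifying assumptions for illustration, and the function name \texttt{qspa\_sketch} is hypothetical; this is not the exact implementation.
\begin{verbatim}
import heapq

def qspa_sketch(size, blocked, start, goal):
    """A* search with the Step-3 cost f_c = gamma + d_L1 on a single
    k = 2 layer.  `blocked` is a set of (x, y) vertices removed by a
    physical-layer blockage beta; every edge has unit cost."""
    def d_l1(a, b):  # L1 heuristic of Eq. (63)
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    frontier = [(d_l1(start, goal), 0, start, [start])]
    best = {start: 0}
    while frontier:
        f_c, gamma, v, path = heapq.heappop(frontier)
        if v == goal:
            return gamma, path            # lowest-cost path P*(x, y)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            w = (v[0] + dx, v[1] + dy)
            if not (0 <= w[0] < size and 0 <= w[1] < size):
                continue
            if w in blocked:              # blocked vertex/port
                continue
            g = gamma + 1                 # real path size gamma(start, w)
            if g < best.get(w, float("inf")):
                best[w] = g
                heapq.heappush(frontier,
                               (g + d_l1(w, goal), g, w, path + [w]))
    return None                           # goal blocked: reselect the port
\end{verbatim}
In the full multilayer case, the vertices become $(l_z ,x,y)$ triples with inter-layer edges, and the decomposed gate rounds of Step 2 appear as additional per-layer blocked vertices.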
\paragraph{Complexity Analysis}
The complexity analysis of Algorithm 2 is as follows. Since the QSPA algorithm (Algorithm 2) is based on the $A^{{\rm *}} $ search method \cite{ref24, ref25, ref26}, its complexity follows directly from that of the $A^{{\rm *}} $ search algorithm.
\section{Performance Evaluation}
\label{sec5}
In this section, we compare the performance of the proposed QTAM method with the multiobjective evolutionary algorithm NSGA-II \cite{com1}. We selected this algorithm for the comparison since it can be adapted to circuit design.
The computational complexity of NSGA-II is proven to be ${\rm {\mathcal O}}\left(N_{it} N_{obj} \left|{\rm {\mathcal P}}\right|^{2} \right)$ in general, while with an optimized nondominated sorting procedure, the complexity can be reduced to ${\rm {\mathcal O}}\left(N_{it} N_{obj} \left|{\rm {\mathcal P}}\right|\log _{2} \left|{\rm {\mathcal P}}\right|\right)$. We consider both cases in the comparison. The complexity of QTAM is given in \eqref{eq36}.
The complexity of the methods, in terms of the number of operations $N_{O}$, is compared in \fref{figA2}. The performance of QTAM is depicted in \fref{figA2}(a), while \fref{figA2}(b) and \fref{figA2}(c) illustrate the performances of the NSGA-II and optimized NSGA-II, respectively.
For the comparison, the $N_{obj} $ parameter is set to $N_{obj} =5$, while for the QTAM method, $N_{d} $ is set to $N_{d} =3$.
\begin{center}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[angle = 0,width=1\linewidth]{figP.pdf}
\caption{(a): The computational complexity ($N_{O} $: number of operations) of QTAM in function of $N_{it} $ and $\left|{\rm {\mathcal P}}\right|$, $N_{it} \in \left[1,100\right]$, $\left|{\rm {\mathcal P}}\right|\in \left[1,500\right]$. (b): The computational complexity of the NSGA-II method in function of $N_{it} $ and $\left|{\rm {\mathcal P}}\right|$, $N_{it} \in \left[1,100\right]$, $\left|{\rm {\mathcal P}}\right|\in \left[1,500\right]$. (c): The computational complexity of the optimized NSGA-II in function of $N_{it} $ and $\left|{\rm {\mathcal P}}\right|$, $N_{it} \in \left[1,100\right]$, $\left|{\rm {\mathcal P}}\right|\in \left[1,500\right]$.}
\label{figA2}
\end{center}
\end{figure*}
\end{center}
In the analyzed range, the maximized values of $N_{O} $ are $N_{O} \left({\rm QTAM}\right)\approx 2\cdot 10^{6} $, $N_{O} (\text{NSGA-II})\approx 1.25\cdot 10^{8} $, and $N'_{O} \left({\text{NSGA-II}}\right)\approx 2.25\cdot 10^{6} $ for the optimized NSGA-II scenario. In comparison to NSGA-II, the complexity of QTAM is significantly lower. Note that while the performance of QTAM and the optimized NSGA-II is closer, QTAM requires no optimization of the nondominated procedure.
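The quoted NSGA-II operation counts can be reproduced directly from the complexity formulas above (the QTAM value depends on \eqref{eq36} and is not recomputed here); a quick check in Python:
\begin{verbatim}
from math import log2

# Parameter values of the figure: N_it = 100, |P| = 500, N_obj = 5.
N_it, N_obj, P = 100, 5, 500
print(N_it * N_obj * P ** 2)        # NSGA-II: 1.25e8 operations
print(N_it * N_obj * P * log2(P))   # optimized NSGA-II: ~2.24e6
\end{verbatim}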
\section{Conclusions}
\label{sec6}
The algorithms and methods presented here provide a framework for quantum circuit designs for near-term gate-model quantum computers. Since our aim was to define a scheme for present and future quantum computers, the developed algorithms and methods were tailored for arbitrary-dimensional quantum systems and arbitrary quantum hardware restrictions. We demonstrated the results through gate-model quantum computer architectures; however, due to the flexibility of the scheme, arbitrary implementations and input constraints can be integrated into the quantum circuit minimization. The objective function that is the subject of the maximization in the method can also be selected arbitrarily. This allows a flexible implementation to solve any computational problem for experimental quantum computers with arbitrary hardware restrictions and development constraints.
\section*{Acknowledgements}
This work was partially supported by the European Research Council through the Advanced Fellow Grant, in part by the Royal Society’s Wolfson Research Merit Award, in part by the Engineering and Physical Sciences Research Council under Grant EP/L018659/1, and in part by the Hungarian Scientific Research Fund - OTKA K-112125.
\section{Introduction}
Lyman limit systems are a special class of Ly$\alpha$ absorbers which span a range of
column densities: $\NHI = 1.6\times10^{17}\pcms - 2\times10^{20}\pcms$. The lower limit
is defined by the column density which gives an optical depth of unity at the Lyman limit
and the upper limit is defined by the transition to Damped Ly$\alpha$ (DLA) systems which
are mostly neutral. They are primarily observed through quasar absorption lines although
their absorption features have also been seen in the spectra of gamma-ray bursts. See
\cite{rauch_1998,meiksin_2009} for reviews of Ly$\alpha$ absorbers and
\cite{wolfe_et_al_2005} for a review of DLAs.
Observations of LLSs and DLAs in the high-redshift universe provide a fertile ground for
comparison with theoretical work. They give a unique window into the high-redshift
universe since the quasar absorption line observations provide an area-weighted survey of
these absorbers across a large range of redshifts which makes them especially simple to
compare with simulations.
While the absorption line studies provide rich statistics of these systems when averaged
over many lines of sight, it is difficult to deduce the environment in which individual
absorbers reside. The main goal of this work is to understand the environment of LLSs, as
well as the physical mechanisms which control their properties. Many groups have studied the
properties of LLSs in simulations of varying mass resolution and with many of the
physical mechanisms which affect LLSs
\citep{kohler_gnedin_2007,altay_et_al_2011,mcquinn_et_al_2011,fumagalli_et_al_2011,yajima_et_al_2011,rahmati_et_al_2012,rahmati_et_al_2013,rahmati_et_al_2013_b}.
The simulations in this work have a relatively high mass resolution of $1.5\times 10^5
{\rm h}^{-1} M_\odot$, allowing us to study lower mass halos, $M < 10^9 {\rm h}^{-1}
M_\odot$, than has previously been achieved. This mass range is especially interesting
since H$_2$-based star formation models indicate that these halos will not form stars
\citep{gnedin_kravtsov_2010,kuhlen_et_al_2013} and hence they may only be detectable
using absorption line studies.
In addition to studying the halos in which LLSs reside, I will use these simulations to
study the self-shielding of LLSs against the UVB. LLSs are defined as having an optical depth
greater than unity to radiation at the Lyman limit, i.e. $\NHI \geq 10^{17.2}\pcms$. The
column density at which this self-shielding becomes effective is important since it
controls the turnover of the HI column density distribution as was shown in
\cite{altay_et_al_2011,mcquinn_et_al_2011,rahmati_et_al_2012}. In
\Secref{sec:effective_shielding}, I will show that due to the physical properties and
anisotropic shielding of LLSs, as well as the spectrum of the UVB, a column density of
$\NHI \sim 10^{18}\pcms$ is needed to shield against the UVB with an optical depth of
unity at $z\approx3$.
This paper is arranged as follows. In \Secref{sec:simulation_description}, I discuss the
simulations used in this paper. Next, I compare the simulation results to quasar
absorption line observations of the high-redshift universe in \Secref{sec:cdd} and find
that the simulations qualitatively reproduce the features seen in observations. In
\Secref{sec:LLS_and_Halos}, I explore the relation between LLSs and their host halos and
find that LLSs reside in halos with a large range of masses but that there is a cutoff at
low mass which is similar to the cutoff due to photoheating from the UVB. In
\Secref{sec:individual_LLS}, I investigate the physical mechanisms of individual LLSs and
test a simple model for LLSs developed in \cite{schaye_2001}. In \Secref{sec:anisotropy},
I study the anisotropy of LLSs and how this affects their self-shielding properties. In
\Secref{sec:effective_shielding}, I discuss how the physical properties of LLSs and the
spectral shape of the UVB affect the amount of self-shielding in these systems. In
\Secref{sec:other_works}, I compare the results from this work to some recent works on
LLSs. Finally, I conclude in \Secref{sec:conclusion}.
\section{Simulations} \label{sec:simulation_description}
In this work, I have used the simulation described in \cite{zemp_et_al_2012}, carried out
using the Adaptive Refinement Tree (ART) code
\citep{kravtsov_1999,kravtsov_et_al_2002,rudd_et_al_2008}. The code has adaptive mesh
refinement which gives a large dynamic range in spatial scale. These simulations follow
five different Lagrangian regions, each of five virial radii around a system which
evolves into a typical halo of an $L_*$ galaxy (${\rm M} \approx 10^{12} \Msun$) at
$z=0$. These Lagrangian regions are embedded in a cube of size $25.6$ comoving $h^{-1}$
Mpc to model the tidal forces from surrounding structures. The outer region is coarsely
resolved with a uniform $256^3$ grid. The dark matter mass resolution is $1.5 \times
10^{5} h^{-1} \Msun$ in the high-resolution Lagrangian region and the baryonic mass
resolution varies from $\sim 10^3 \Msun$ to $\sim 10^6 \Msun$ depending on cell size and
density. The maximum spatial resolution is $195$ comoving $h^{-1}$pc. The cosmological
parameters used are similar to the WMAP7 parameters: $\Omega_M = 0.28$, $\Omega_B =
0.046$, $\sigma_8 = 0.82$, $h=0.7$, and $n_s = 0.96$.
These simulations include three-dimensional radiative transfer of UV radiation from the
UVB as well as from stars formed in the simulation. This is done with the Optically Thin
Variable Eddington Tensor (OTVET) approximation \citep{gnedin_abel_2001}. The
contribution from the UVB uses the model in \cite{haardt_madau_2001}, while the
contribution from local sources uses a Miller-Scalo IMF \citep{miller_scalo_1979} and the
shape of the spectrum from local sources comes from Starburst99 modeling
\cite{leitherer_et_al_1999} and is plotted in Figure 4 of \cite{ricotti_et_al_2002}. The
OTVET method in this work follows the transfer of radiation at 4 frequencies: at the
$\HI$, $\HeI$, and $\HeII$ ionization thresholds, as well as one to follow non-ionizing
radiation at 1000 \AA. The fidelity of this RT prescription was tested in
\cite{iliev_et_al_2006,iliev_et_al_2009} where it was found to work well except for some
numerical diffusion of ionization fronts. The prescription has subsequently been improved
and numerical diffusion has been almost completely eliminated \cite{gnedin_2014}. This
detailed and faithful radiative transfer allows us to model the self-shielding of LLSs
against the UVB. It is also important for understanding the effect of local sources on
LLSs since they arise in close proximity to galaxies.
These simulations include a self-consistent, non-equilibrium chemical network of hydrogen
and helium, including the effects of ionization from photoionization (corrected for
dust-shielding), collisional ionization, and radiative recombination
\cite{gnedin_kravtsov_2011}. The chemical network also self-consistently models $H_2$,
including the formation of molecular hydrogen both in the primordial phase and on dust grains
\citep[see][for details]{gnedin_kravtsov_2011}. This physics includes the cooling and
physical mechanisms needed to correctly model the gas in LLSs and allows for a realistic
H$_2$-based star-formation model.
Finally, the simulations include thermal supernova feedback with an energy deposition of
$2\times10^{51}$ erg from Type Ia and Type II supernovae. This feedback prescription is
known to be inefficient since the supernova energy is deposited in cells with high
densities and relatively low temperatures which results in extremely efficient cooling.
While efficient feedback has been shown to increase the cross-section of LLSs
\citep[e.g.][]{faucher_et_al_2015,rahmati_et_al_2015} examining the effect of realistic
feedback is beyond the scope of this work. Note that since feedback also depends on the
mass of the host galaxy, the inclusion of more efficient feedback would also likely
affect the LLS cross-section versus halo mass which is explored below.
\section{Column Density Distribution and Incidence of LLSs} \label{sec:cdd}
Before delving into the properties of individual absorbers and their host halos, it is
useful to test how well the simulations are modeling the properties of LLSs by comparing
against observations. Two of the main statistics for LLSs measured by observers are the
number of LLSs per absorption length (the incidence frequency) and the number of systems
per unit absorption length per unit column density (the HI column density distribution).
The incidence frequency is written as,
\< l_{\rm LLS} = \frac{\der \mathcal{N}}{\der X}, \>
and the HI column density distribution is written as,
\< f(\NHI,z) = \frac{\der^2 \mathcal{N}}{\der\NHI \der X}, \>
where the absorption length is given by
\< \frac{\der X}{\der z} = \frac{H_0}{H(z)} (1+z)^2 .\>
These statistics are related since the HI column density distribution is the incidence
frequency per unit column density. The absorption length is defined this way so that
absorbers with a constant comoving number density and constant physical size have a
constant incidence frequency. Hence, any evolution in these quantities is due to
evolution in the cross-section of these systems, their number density, or a combination
of these two. Since LLSs reside in and around galaxies, their incidence can be written in
terms of the average LLS cross-section, $\sigma_{\rm LLS}(M,z)$, and the halo mass
function, $n(M,z)$, at redshift $z$ \citep[][]{gardner_et_al_1997}:
\< l_{\rm LLS} = \frac{c}{H_0}\int {\sigma}_{\rm LLS}(M,z) n(M,z) \der M
\label{eq:dNdX_halo_mf}.\>
Note that I will also consider the quantity $l_{\tau > \tau_0}$, which is the incidence
of systems with an optical depth greater than $\tau_0$ at the Lyman limit. Likewise, the
HI column density distribution can be written as
\< f(\NHI,z) = \frac{c}{H_0}\int \frac{\partial \sigma(\NHI,M,z)}{\partial \NHI} n(M,z)
\der M \label{eq:d2NdXdNHI_halo_mf} ,\>
where $\sigma(\NHI,M,z)$ is the average cross-section of absorbers with a column density
below $\NHI$ around halos of mass $M$.
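As a concrete illustration of the absorption length, a short Python sketch (assuming the flat cosmology used in this work; the function name is illustrative) evaluates $\Delta X$ between two redshifts:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Om, Ol = 0.28, 0.72                  # flat LCDM as in the simulations

def dX_dz(z):
    """dX/dz = (H0/H(z)) (1+z)^2 for a flat universe."""
    return (1 + z) ** 2 / np.sqrt(Om * (1 + z) ** 3 + Ol)

dX, _ = quad(dX_dz, 2.0, 3.0)        # absorption length from z=2 to z=3
print(f"Delta X = {dX:.2f}")         # ~3.4
\end{verbatim}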
\subsection{Observations of LLSs}
Observations of LLSs in the high-redshift universe are primarily made by using quasar
absorption lines. Since LLSs correspond to the flat portion of the curve of growth, their
column density is harder to determine than systems with lower or higher column densities.
The column densities of systems in the Ly$\alpha$ forest with $\NHI < 10^{17.2}\pcms$ can
be directly determined either from Voigt profile fits to the Ly$\alpha$ absorption, or
from fits to higher order Lyman transitions \citep[e.g.][]{rudie_et_al_2012}. For DLAs
and sub-DLAs, $\NHI > 10^{19} \pcms$, the natural line width of the Ly$\alpha$ transition
produces damping wings which make the column densities of these systems easy to determine
\citep[e.g.][]{wolfe_et_al_2005}. However, in the intermediate range, $10^{17.2} \pcms <
\NHI < 10^{19}\pcms$, the exact column density is difficult to measure and requires
precise observations of both the Ly$\alpha$ line and the Lyman limit break
\citep[e.g.][]{prochter_et_al_2010}. While the exact column density may be difficult to
determine in this range, the presence of an absorber with $\NHI > 10^{17.2}\pcms$ can be
inferred from the Lyman limit break. As a result, observers can more easily measure the
number of systems above a given threshold (typically $\NHI = 10^{17.2}\pcms$) which
provides an integral constraint on the HI column density distribution. In some works
\citep[i.e.][]{omeara_et_al_2012}, this counting is done for multiple thresholds which
can be used to constrain the column density distribution.
In \Figref{fig:dNdX}, I show observations of the incidence of LLSs over a variety of
redshifts. These come from \cite{prochaska_et_al_2010} and \cite{omeara_et_al_2012}. In
\Figref{fig:cdd}, I show the constraints on the HI column density distribution for LLSs
at $z\approx 2.4$ from \cite{omeara_et_al_2007} and \cite{omeara_et_al_2012}. Above $\NHI
= 10^{19}\pcms$ these constraints come from the detection of individual LLSs for which
the HI column density of each system can be determined. Between $\NHI = 10^{17.5} \pcms$
and $\NHI = 10^{19}\pcms$, the constraints are determined from $l_{\tau > 2}$. Below
$\NHI = 10^{17.5}\pcms$, the constraints are determined from the comparison of $l_{\tau >
2}$, $l_{\tau > 1}$, and $l_{\tau > 0.5}$ in \cite{omeara_et_al_2012}. See
\cite{omeara_et_al_2012} for a detailed discussion of these constraints.
\subsection{Measuring the Frequency and Column Density Distribution in Simulations}
Using a method similar to observations, the HI column density is computed by taking lines
of sight through the simulation, measuring the HI column density along these lines of
sight, and counting the number of absorbers in each column density bin. Observationally,
the HI column densities are determined by fitting profiles to the HI absorption lines. In
simulations, the HI column density can simply be integrated along lines of sight in the
three cartesian directions. Since systems in the simulation are randomly oriented with
respect to the simulation box, these lines of sight effectively probe random lines of
sight through systems in the simulation. This method gives the same HI column density as
fitting absorption lines as long as there are not multiple systems along each line of
sight.
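A minimal sketch of this projection step in Python, assuming a uniform grid of HI number densities (the ART mesh is adaptive, so this is illustrative only, and the function name is hypothetical):
\begin{verbatim}
import numpy as np

def column_density_map(n_HI, dl_cm, axis=2):
    """Integrate the HI number density [cm^-3] along one cartesian
    axis of a uniform grid to obtain N_HI [cm^-2] per sightline."""
    return n_HI.sum(axis=axis) * dl_cm

# Toy example: random densities on a 64^3 grid with ~250 pc cells.
rng = np.random.default_rng(0)
n_HI = 10.0 ** rng.uniform(-12, -8, size=(64, 64, 64))
N_HI = column_density_map(n_HI, dl_cm=7.7e20)
\end{verbatim}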
In order to determine the column density at which these projection effects become
important, I considered lines of sight of various lengths along the cartesian directions.
These lines of sight were placed on a regular grid separated by 4 times the highest
resolution element, $781$ comoving $h^{-1}$ pc. This sampling fixes the number of lines
of sight taken through the simulation volume but does not affect the resolution along the
line of sight, which is controlled by the size of each cell. Along each line of sight, I
found the location of the cell with the maximum HI density and defined this to be the
center of the absorber. This definition will allow us to probe the environment which is
physically close to the absorber since we can take lines of sight originating from this
point. I then considered lines of sight of length 10kpc, 50kpc, 200kpc, and the full box
length, centered on the absorber. I found that while the 10kpc and 50kpc lines of sight
differed substantially below $\NHI = 10^{16}\pcms$, the 200kpc and full box lines of
sight showed fairly similar column densities (only 2.5\% of systems differed by more than
a factor of 2) indicating that the projection effects are not substantial for these
systems. In this work I will restrict the analysis to $\NHI > 10^{16.5}\pcms$ where
projection effects are even less important. This approach was also taken in
\cite{altay_et_al_2011} and \cite{rahmati_et_al_2012} where the projected column density
was used for systems with $\NHI > 10^{17}\pcms$ and $\NHI > 10^{16}\pcms$ respectively.
Note that these shorter lines of sight target gas associated with the absorber and will
also be used to measure quantities like the characteristic size of an absorber.
\subsection{LLS Incidence Frequency}
Due to the difficulty in directly measuring the column density of LLSs, the frequency of
LLSs per unit absorption length is the natural quantity to compare against observations.
I have computed this quantity using two approaches and plotted the result in
\Figref{fig:dNdX}. First, I counted the number of LLSs with $\NHI > 10^{17.5}\pcms$
along all of the sightlines in the simulation, and then divided by the absorption length
in the simulation:
\< l_{\rm \tau > 2} = \frac{\Delta \mathcal{N}_{\tau >2}}{\Delta X} .\>
The result of this simple approach is shown in \Figref{fig:dNdX} and is consistent with
observations although it has a somewhat different evolution in redshift.
In the second approach, I attempted to account for the bias inherent in a zoom-in
simulation by rescaling the contribution from each halo mass bin. Since the zoom-in
regions are selected to have a Milky Way progenitor, the mass function in these regions
will be biased as a random volume of this size would have fewer massive galaxies. One way
to account for this is to identify each LLS with its host halo, compute the mean
cross-section in each halo mass range, $\overline{\sigma}_{\tau > 2}(M_i,z)$, and then
compute the quantity
\< l_{\tau > 2} = \frac{c}{H_0} \sum_i \overline{\sigma}_{\tau>2}(M_i,z)
\overline{n}(M_i,z), \label{eq:discrete_dNdX}\>
where
\< \overline{n}(M_i,z) = \int_{M_i}^{M_i+\Delta M} n(M,z)\der M ,\label{eq:define_nbar}\>
and $n(M,z)$ is the true halo mass function. As long as the cross-section of individual
halos is correctly modeled, this discretized version of \Eqref{eq:dNdX_halo_mf} will
partially correct for the bias of the zoom-in simulation. Note that I have restricted
this sum to be over resolved halos with $M > 10^{8}h^{-1}\Msun$ (corresponding to
$\approx$1000 particles) below which we cannot model the cross-section and that I used
the halo mass function from \cite{sheth_tormen_2002} as the true halo mass function. Also
note that this sum only covers the mass range of halos within the simulation, but due
to the rapidly falling halo mass function and the relatively constant LLS covering
fraction discussed in \Secref{sec:LLS_and_Halos}, the inclusion of higher
mass halos should not significantly change this result. The corrected incidence frequency
is plotted in \Figref{fig:dNdX}. It is lower than the basic counting result since it
lowers the contribution from more massive halos. While the simulated incidence frequency
is consistent with the observations until $z\sim 3.5$, there is significant deviation at
higher redshift. This is likely due to the zoom-in simulations used in this work which
cannot capture the contribution from the filamentary cosmic-web at high-redshifts. The
mean cross-section computed in the simulation can be found in \Figref{fig:xsec_vs_mvir}
and will be discussed in more detail in \Secref{sec:LLS_and_Halos}.
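In code, the correction replaces the raw count with the cross-section-weighted sum of \Eqref{eq:discrete_dNdX}; a sketch, where \texttt{sigma\_bar} is measured from the simulation and \texttt{n\_bar} comes from integrating an external (e.g. Sheth--Tormen) mass function over each bin:
\begin{verbatim}
import numpy as np

c_over_H0 = 2.998e5 / 70.0          # c/H0 in comoving Mpc for h = 0.7

def corrected_incidence(sigma_bar, n_bar):
    """l = (c/H0) sum_i sigma_bar(M_i) nbar(M_i), with the mean LLS
    cross-section per mass bin in comoving Mpc^2 and the true halo
    abundance per bin in comoving Mpc^-3."""
    return c_over_H0 * np.sum(np.asarray(sigma_bar) * np.asarray(n_bar))
\end{verbatim}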
This technique also relies on the properties of the galaxies in the zoom-in region being
representative of the properties of average galaxies in the universe. While this bias
cannot be addressed with individual zoom-in regions, simulations with fixed-resolution
\citep[i.e.][]{rahmati_et_al_2013_b} give similar results for the cumulative distribution
function (CDF) of LLSs with respect to halo mass, indicating that the assumption is a
reasonable one. In Figure 6 of \cite{rahmati_et_al_2013_b}, the CDF shows a similar
behavior to what is found in \Figref{fig:total_xsection} of this work with $\sim 75\%,
\sim 15\%,$ and $\sim 10\%$ of LLSs arising in halos with masses in the range $M <
10^{10} M_\odot$, $10^{10} M_\odot < M < 10^{11} M_\odot$, and $M > 10^{11} M_\odot$,
respectively, at $z=3$. In this work I find $\sim 71\%$, $\sim 22\%$, and $\sim 7\%$ of
LLSs arising in halos with the same mass range.
\begin{figure}
\centering
\includegraphics[width=8cm]{dNdX_LLS.eps}
\caption{Incidence of systems with $\NHI > 10^{17.5}\pcms$ in simulations and observations
as a function of redshift.
The short-dashed black curve shows the basic estimate from counting the number of absorbers
in the simulation and dividing by the absorption length of the simulation volume.
The long-dashed blue curve shows the result of correcting for the halo mass function.
The data are from two surveys: the squares are from \protect\cite{omeara_et_al_2012} and the triangles
are from \protect\cite{prochaska_et_al_2010}.}
\label{fig:dNdX}
\end{figure}
\subsection{Evolution of the HI Column Density Distribution}\label{sec:cdd_evolve}
In order to compute the column density distribution, I count the number of absorbers in
each HI column density bin, and divide by the total absorption length in the simulation:
\< f(\NHI) = \frac{\Delta \mathcal{N}(\NHI)}{\Delta \NHI\Delta X}. \>
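A sketch of this binning, assuming \texttt{N\_HI} holds the per-sightline column densities and \texttt{Delta\_X} the total absorption length covered by all sightlines:
\begin{verbatim}
import numpy as np

def f_NHI(N_HI, Delta_X, n_bins=30):
    """f(N_HI) = Delta N / (Delta N_HI Delta X) in logarithmic bins."""
    edges = np.logspace(16.5, 21.0, n_bins + 1)   # bin edges [cm^-2]
    counts, _ = np.histogram(N_HI, bins=edges)
    return counts / (np.diff(edges) * Delta_X), edges
\end{verbatim}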
In \Figref{fig:cdd}, I compare the HI column density distribution for LLSs in simulations
to observations. Since the column density distribution is quite steep over this range, I
have plotted the quantity $\log_{10} \NHI f(\NHI,z)$ in order to aid comparison. The HI
column density distribution in simulations has a qualitatively similar structure to the
observed HI column density distribution. The column density distribution is steep at low
$\NHI$ and then flattens out when self-shielding becomes important as I will discuss
further in \Secref{sec:individual_LLS}. Once the gas becomes sufficiently neutral, the
column density distribution steepens once again. This structure has been seen in many of
the recent simulations of Ly$\alpha$ absorbers
\citep[i.e.][]{mcquinn_et_al_2011,fumagalli_et_al_2011,altay_et_al_2011,rahmati_et_al_2012}.
In the observations, the flattening of the HI column density distribution is poorly
constrained since it occurs on the flat portion of the curve of growth where there are
only integral constraints on the HI column density distribution.
Interestingly, \Figref{fig:cdd} indicates that the HI column density distribution remains
relatively flat over a larger range than seen in the observations. A similar shape was
found in \cite{mcquinn_et_al_2011}. Note that since the quantity being plotted is
proportional to the number of absorbers per logarithmic $\NHI$ bin, \Figref{fig:cdd}
implies that there are more systems per logarithmic interval at $\NHI = 10^{20}\pcms$
than at $\NHI = 10^{19}\pcms$. A similar inversion is seen in the data although at
slightly lower column density. I will discuss the location of this turnover further in
\Secref{sec:individual_LLS}.
From \Figref{fig:cdd}, it is apparent that the shape of the HI column density
distribution undergoes little evolution between $z=2$ and $z=5$, although there is a
slight flattening at low column densities and low redshift. This lack of evolution agrees
with the previous results found by \cite{fumagalli_et_al_2011} and
\cite{rahmati_et_al_2012}. Note that this work finds slightly less evolution in the
column density distribution from $z=5$ to $z=3$ than is found in
\cite{rahmati_et_al_2012}. This difference is likely due to the same reason that I
underpredict the incidence of LLSs in \Figref{fig:dNdX}: the zoom-in simulations in this
work do not capture the large-scale filaments at high redshift.
\begin{figure}
\centering
\includegraphics[width=8cm]{LLS_NHIxCDD_vs_data.eps}
\caption{HI column density distribution compared to observations centered around $z\approx 2.4$.
Since the column density distribution is fairly steep, I plot $\log_{10} \NHI f(\NHI,z)$ so that the features
are more salient. The light blue shaded region comes from constraints on $l_{\tau > 2}$ from \protect\cite{omeara_et_al_2012}.
The dark blue shaded region comes from constraints on the slope of the column density distribution
in the range $\NHI \in 10^{16.9}-10^{17.5} \pcms$ from the constraints on $l_{\tau > 2}$,
$l_{\tau > 1}$, and $l_{\tau > 0.5}$ \protect\citep{omeara_et_al_2012}. The light red region comes from \protect\cite{omeara_et_al_2007}.
The red squares come from \protect\cite{noterdaeme_et_al_2012}. Note that the column density
distribution from the simulations has not been re-scaled in any way. Since all of the observations
are centered around $z\approx 2.5$, they should be compared with the $z=2$ and $z=3$ column
density distribution.} \label{fig:cdd}
\end{figure}
\section{LLSs and Their Host Halos}\label{sec:LLS_and_Halos}
While these observations provide relatively unbiased statistics of the incidence of LLSs,
individual lines of sight cannot easily be used to study the halos in which LLSs reside.
Previous theoretical work has attempted to identify the host halos of these systems. Much
of the early work that explored the halo mass range lacked the mass resolution to study
absorbers in low-mass halos and extrapolated their properties from those of more massive
halos \citep[i.e.][]{katz_et_al_1996,abel_mo_1998,gardner_et_al_2001}. Making use of
simulations with better mass resolution, \cite{kohler_gnedin_2007} found that LLSs are
associated with a large range of halo masses but that low-mass halos do not dominate the
total cross-section of LLSs. More recent studies with similar resolution to this work
found that while LLSs are associated with a large range of halo masses, there is a
correlation between $\NHI$ and halo mass with lower column density systems more likely to
be found near lower mass halos
\citep[e.g.][]{van_de_voort_et_al_2012,rahmati_et_al_2013_b}. Using simulations with even
better mass resolution, as well as additional physics, I will now explore the relation
between LLSs and their host halos.
\subsection{LLS Cross-Section versus Halo Mass}
A simple statistic to consider is the mean LLS cross-section as a function of halo mass.
Some previous studies connect LLSs and galaxies based on their projected separation. This choice mimics what is done in observational studies, which is most likely the main motivation for adopting it in theoretical studies that aim to compare their results against observations \citep[e.g.][]{fumagalli_et_al_2011}. However, this can potentially lead to
unphysical correlations when the gas is near multiple halos in projection. In this work,
the nearest halo is instead determined by associating a given line of sight with the halo
closest to the maximum density point along the line of sight. By associating the LLS with
the nearest halo in 3-d space, the resulting cross-section should more accurately
represent the gas residing in that halo. The cross-section for each halo is computed in
each cartesian direction and then averaged.
In \Figref{fig:xsec_vs_mvir}, I plot this mean cross-section for systems within a virial
radius of the host halo at four different redshifts. For reference, I also include a line
with a logarithmic slope of $\frac{2}{3}$. The average cross-sections have a similar
slope to this line, indicating that $\sigma_{\rm LLS} \propto r_{\rm vir}^2$ over a wide
range of halo masses. This implies that the halos have a fairly constant covering
fraction for LLSs within their virial radii. This covering fraction (both its magnitude
and mass independence) is similar to the values reported in \cite{fumagalli_et_al_2014}
with a $\sim 15\%$ covering fraction at $z=2$ and a $\sim 20\%$ covering fraction at
$z=3$ within the virial radius, in agreement with Figure 2 of their work. Given that
strong feedback is known to increase the LLS covering fractions
\citep[e.g.][]{faucher_et_al_2015,rahmati_et_al_2015} it is likely that these LLS
covering fractions are lower limits. The average cross-section also has a sharp drop-off
below a characteristic mass which I will discuss further below. The average cross-section
evolves with redshift in two ways. First, there is a decrease in the mean cross-section
at a given mass as the redshift decreases. Second, the characteristic mass below which
the cross-section drops off increases with redshift. Note that if the LLS is instead
associated with the nearest and most massive halo within a projected virial radius, the
low mass halos, $M<10^9 {\rm h}^{-1} M_\odot$, will have a slightly lower cross-section
since some of the gas which belongs to them gets associated with a larger halo instead.
\begin{figure}
\centering
\includegraphics[width=80mm]{LLS_X_section_vs_z_mean_in_rvir.eps}
\caption{Mean LLS cross-section versus the mass of the closest halo at different redshifts.
The black dashed line
is for reference and has a logarithmic slope of $\frac{2}{3}$. The curves have a similar
slope to this line, indicating that $\sigma_{\rm LLS} \propto r_{\rm vir}^2$. There is a
clear evolution in redshift with halos of a given mass having a smaller LLS cross-section
at lower redshifts. In addition, there is a cutoff at low mass which increases with decreasing redshift.}
\label{fig:xsec_vs_mvir}
\end{figure}
This characteristic mass and its evolution can be interpreted in terms of the
photoionization of halos due to the UVB, a process described in
\cite{hoeft_et_al_2006,okamoto_et_al_2008}. In \cite{okamoto_et_al_2008}, the authors
studied the baryon fraction of halos as a function of halo mass and redshift. They found
that there is a characteristic mass which evolves with redshift at which the halos retain
half of the universal baryon fraction. Below this mass, the halos are unable to retain
their gas due to photoheating from the UVB. Note that the reference simulation used in
that work had a similar mass resolution ($2.2 \times 10^{5}h^{-1}\Msun$) to the
simulations used in this work so the same effect should be seen. Instead of the baryonic
fraction, I use the LLS covering fraction within the virial radius:
\<f_{\rm LLS} = \frac{\sigma_{\rm LLS}}{\pi r_{\rm vir}^2}.\>
For large halos, this covering fraction asymptotes to a constant value which depends on
redshift (see \figref{fig:xsec_vs_mvir}). I then find the characteristic mass at which
the covering fraction drops to half of this asymptotic value, $M_{\frac{1}{2}}$. Below
this mass, the covering fraction falls rapidly. I compare the characteristic mass derived
from the LLSs covering fraction with the characteristic mass from
\cite{okamoto_et_al_2008} in \Figref{fig:okamoto_mass}. I find that they roughly agree
and have a similar evolution with redshift which suggests that the drop in the LLS
covering fraction is due to photoionization of low-mass halos. Note that this comparison
is only a qualitative one since the characteristic mass as derived from the baryonic
fraction is not expected to be the same as the characteristic mass as derived from the
LLS covering fraction.
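A sketch of this measurement; the plateau estimate and the monotonic interpolation through the cutoff are simplifying assumptions:
\begin{verbatim}
import numpy as np

def half_mass(M, f_LLS, n_plateau=5):
    """Characteristic mass M_1/2 at which the LLS covering fraction
    f_LLS = sigma_LLS / (pi r_vir^2) drops to half of its high-mass
    plateau.  Assumes M is sorted ascending and f_LLS rises
    monotonically through the cutoff."""
    f_inf = np.mean(f_LLS[-n_plateau:])       # asymptotic value
    return 10.0 ** np.interp(0.5 * f_inf, f_LLS, np.log10(M))
\end{verbatim}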
\begin{figure}
\centering
\includegraphics[width=80mm]{okamoto_mass.eps}
\caption{Characteristic mass scale for halos to retain half their gas. The solid black curve
shows the characteristic mass from \protect\cite{okamoto_et_al_2008}, $M_c$, at which halos retain half
of their baryonic mass. The dashed blue curve shows the characteristic mass, $M_\frac{1}{2}$, at which the
covering fraction within the virial radius drops to half of the asymptotic value, as
described in the text.} \label{fig:okamoto_mass}
\end{figure}
\subsection{Contribution of Different Mass Halos to the LLS Population}
Next, I compute how much each halo mass range contributes to the total LLS population.
The cumulative contribution to the LLS incidence for halos with mass less than $M$ is
given by
\< l_{\rm LLS} (< M) = \frac{c}{H_0} \sum_{M_i = M_{\rm min}}^M \overline{\sigma}_{\rm
LLS}(M_i,z) \overline{n}(M_i,z) , \label{eq:xsection_hmf}\>
where $M_{\rm min}$ is a minimum mass, given by $10^{8} h^{-1} \Msun$ in this work and
$\overline{n}(M_i,z)$ is defined as in \Eqref{eq:define_nbar}. This cumulative incidence
is plotted in \Figref{fig:total_xsection} where it has been normalized by the total
incidence. I find that a large range of halos contribute to the total LLS frequency.
Furthermore, I find that for redshifts between $z=2$ and $z=5$, low-mass halos with $M <
10^{10}h^{-1} \Msun$ contribute the majority of LLSs. While the contribution to the LLS
population from halos with $M < 10^{10}{\rm h}^{-1} \Msun$ has been studied before
\citep[i.e.][]{rahmati_et_al_2013_b}, the mass resolution used in this work allows us to
extend this to the population of LLSs residing in halos with $M < 10^9 {\rm h}^{-1}
\Msun$ which contribute $\sim 30\%$ of the total LLSs at $z=3$.
This mass range is especially interesting since H$_2$-based models of star formation
predict that these halos with $M< 10^{10}h^{-1} \Msun$ will have little star formation
and hence should be dark \citep{gnedin_kravtsov_2010,kuhlen_et_al_2013}. The results of
\Figref{fig:total_xsection} indicate that while these halos may be dark, they will
contribute the majority of systems seen in surveys of LLSs.
\begin{figure}
\centering
\includegraphics[width=80mm]{LLS_X_section_cumulative.eps}
\caption{Cumulative LLS incidence versus halo mass at different redshifts. Note that
the contribution from each mass range has been corrected by the halo mass function. While LLSs
arise in a variety of systems, most LLSs arise in low-mass halos with $M < 10^{10}h^{-1} M_\odot$.
This figure also shows a clear evolution in redshift: at later times, LLSs arise in more massive
halos.} \label{fig:total_xsection}
\end{figure}
\subsection{Distance to the Nearest Halo}
Now that I have explored the mass range of systems hosting LLSs, I will study the
distance from the LLSs to the nearest halo. In \cite{kohler_gnedin_2007}, the authors
showed that the distance to the nearest halo scaled like the virial radius, although this
relation had significant scatter due to the resolution of the simulation and the lack of
statistics. In \Figref{fig:dhalo_mvir}, I plot the median distance to the nearest halo in
units of the virial radius of the halo, as a function of halo mass. As expected from
\Figref{fig:xsec_vs_mvir}, there is a self-similar structure where LLSs can be found at a
constant fraction of the virial radius down to the cutoff mass. This plot is from the
$z=3$ snapshot which has a cutoff mass of $M_{\frac{1}{2}}=6.3 \times 10^8 h^{-1}\Msun$
(see \figref{fig:okamoto_mass}). Below this mass, the median distance to the nearest halo
is dominated by systems outside of the virial radius and hence the distance to the
nearest halo increases at low halo masses.
\begin{figure}
\centering
\includegraphics[width=80mm]{LLS_nearest_halo_distance_over_rvir_vs_Mh.eps}
\caption{3d distance to nearest halo, in units of the virial radius, as a function of halo mass.
Note that this plot is made from the $z=3$ snapshot which has a characteristic mass of $6.3\times 10^8 h^{-1}\Msun$.
The black curve is the median, the light blue (dark blue) band is the 1$\sigma$
(2$\sigma$) scatter around the median. The constancy of this ratio over a wide range of masses
indicates that the LLSs have a self-similar structure around their host halos where LLSs are found at
the same fraction of the virial radius.} \label{fig:dhalo_mvir}
\end{figure}
A related and important quantity is how the distance to the nearest halo depends on the
column density of the absorber. In \Figref{fig:dhalo_NHI} I plot the median distance to
the nearest halo as a function of $\NHI$. This shows an anti-correlation between the
distance to the halo and $\NHI$, i.e. stronger absorbers are closer to their host halo.
This trend is very similar to what was found in \cite{rahmati_et_al_2013_b}, with a fairly
weak anti-correlation for LLSs which becomes stronger in the DLA regime \citep[see Figure
2 in][]{rahmati_et_al_2013_b}.
\begin{figure}
\centering
\includegraphics[width=80mm]{LLS_nearest_halo_distance_vs_NHI.eps}
\caption{3d distance to nearest halo in kpc as a function of the column density of the absorber.
The black curve is the median, the light blue (dark blue) band is the 1$\sigma$
(2$\sigma$) scatter around the median. The trend shows an anti-correlation between distance and $\NHI$
with stronger absorbers residing closer to their host halos.} \label{fig:dhalo_NHI}
\end{figure}
\section{Physical Properties of Individual LLSs} \label{sec:individual_LLS}
Now that I have explored the observed properties of LLSs, as well as the halos in which
these systems reside, I will study the physical nature of individual LLSs. LLSs span a
wide range of column densities: from $\NHI = 10^{17.2} \pcms$ to $\NHI=10^{20.3}\pcms$.
At the lower end of this range, the systems are mostly ionized and are believed to be in
photoionization equilibrium \citep{schaye_2001}. As the column density increases, these
systems become significantly self-shielded and become mostly neutral by the DLA
threshold. In this section I will explore this transition and test the model developed in
\cite{schaye_2001}.
\subsection{Analytical Model}
\cite{schaye_2001} developed a simple model to describe the properties of LLSs. At low
column densities, the gas is taken to be in photoionization equilibrium with the UVB,
i.e.
\< \Gamma \nhi = \beta_{\rm HII} n_{\rm e} n_{\rm HII} \label{eq:PIR} \>
where $\Gamma$ is the photoionization rate, $\beta_{\rm HII}$ is the recombination
coefficient, and $\nhi, n_{\rm HII}, n_{\rm e}$ are the number densities of HI, HII, and
electrons respectively. This relation can be used to solve for the HI fraction in terms
of the photoionization rate, recombination rate, and the hydrogen density. The
recombination rate is a function of the temperature which can be found in
\cite{draine_2011}.
In addition, \cite{schaye_2001} argues that the characteristic size of the absorber is
given by the Jeans length of the system:
\< L_J = t_{\rm ff} c_{\rm s} = 0.52\: {\rm kpc}\: \Big(\frac{n_{\rm H}}{1 {\rm
cm}^{-3}}\Big)^{-1/2} T_4^{1/2}, \label{eq:jeans_length}\>
where $T_4 = T/10^4 {\rm K}$ is the temperature of the gas and I have assumed that the
gas is at the universal baryon fraction. The assumptions of this model are spelled out in
detail in \cite{schaye_2001} and require that the gas is in hydrostatic and photoionization
equilibrium and that the density distribution is uniform. Note that the temperature depends
weakly on $\NHI$ but is on the order of $10^4 {\rm K}$ for LLSs. The photoionization
equilibrium assumption breaks down as the system becomes significantly self-shielded and
at large $\NHI$, the gas becomes fully neutral. For systems at large $\NHI$, assuming
that the gas is fully neutral with a scale length given by the Jeans length gives the
correct asymptotic behavior but not the normalization.
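These two ingredients combine into the optically thin scaling $\NHI \propto n_{\rm H}^{3/2}$; a sketch, where the photoionization rate and the case-B recombination coefficient below are representative values rather than the simulation's exact rates:
\begin{verbatim}
import numpy as np

KPC_CM = 3.086e21

def N_HI_thin(n_H, T4=1.5, Gamma=1e-12):
    """Optically thin Schaye (2001) scaling: x_HI from photoionization
    equilibrium in the highly ionized limit (n_e ~ n_H), size from the
    Jeans length, so N_HI = x_HI n_H L_J ~ n_H^(3/2)."""
    beta = 2.59e-13 * T4 ** -0.7          # case-B recombination [cm^3/s]
    x_HI = beta * n_H / Gamma             # neutral fraction (<< 1)
    L_J = 0.52 * KPC_CM * n_H ** -0.5 * T4 ** 0.5   # Jeans length [cm]
    return x_HI * n_H * L_J

print(N_HI_thin(np.logspace(-4, -2, 3)))  # ~4e14 to ~4e17 cm^-2
\end{verbatim}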
\subsection{Characteristic Size}
The model developed in \cite{schaye_2001} assumes that the typical length of these
systems is given by the Jeans length. As a measure of the characteristic size of the
absorber, I take the length needed to get 90\% of the total HI absorption along a line of
sight. This mitigates the contribution of HI which is not associated with the LLS which
can lead to artificially large sizes. This scheme was used by \cite{prochaska_wolfe_1997}
where they faced a similar problem in measuring the velocity width from a metal-line
absorption profile. I implement this method by taking 500kpc lines of sight centered on
the absorber and determining the HI column density along this line of sight. I then find
the distance on each side of the center needed to enclose 45\% of the total $\NHI$. I have tested that this
characteristic length has converged by considering longer lines of sight (up to 1Mpc).
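A sketch of this measurement on a single sightline (uniform cells are assumed for simplicity, and the function name is illustrative):
\begin{verbatim}
import numpy as np

def characteristic_length(n_HI, dl):
    """Interval around the peak-n_HI cell that contains 90% of the
    total column: 45% accumulated on each side of the peak."""
    contrib = n_HI * dl                   # per-cell column [cm^-2]
    target = 0.45 * contrib.sum()
    c = int(np.argmax(n_HI))
    hi, enclosed = c, 0.0                 # grow to the right of the peak
    while hi < len(contrib) - 1 and enclosed < target:
        hi += 1
        enclosed += contrib[hi]
    lo, enclosed = c, 0.0                 # grow to the left of the peak
    while lo > 0 and enclosed < target:
        lo -= 1
        enclosed += contrib[lo]
    return (hi - lo + 1) * dl
\end{verbatim}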
In \Figref{fig:abs_length} I plot the median characteristic length as a function of
$\NHI$ along with the model from \cite{schaye_2001}. For the low $\NHI$ systems, I have
over-plotted the Jeans length assuming photoionization equilibrium. For the high $\NHI$
systems, I over-plotted the Jeans length assuming the gas is fully neutral. At low
$\NHI$, I find that the model is very close to the median. Note that the model should not
be expected to give an exact quantitative match but rather describe the scaling and
trends of the simulation results. Most importantly, the model reproduces the scaling
behavior at low $\NHI$, $L_J \propto \NHI^{-1/3}$, which follows from combining
\Eqref{eq:PIR} and \Eqref{eq:jeans_length}, i.e. assuming that the gas is in
photoionization equilibrium with the UVB and in local hydrostatic equilibrium. This is a
good assumption for optically thin gas at high redshift and explains the relation between
column density and density seen in Ly$\alpha$ forest simulations
\citep[e.g.][]{dave_et_al_2010,mcquinn_et_al_2011,altay_et_al_2011,rahmati_et_al_2012}.
\begin{figure}
\centering
\includegraphics[width=8cm]{LLS_abs_length_vs_NHI.eps}
\caption{Absorption length for 90\% of the absorption. The solid black curve is the median and the light blue (dark blue)
band is the 1$\sigma$ (2$\sigma$) scatter around the median. At low $\NHI$, the dashed red
line is the Jeans length assuming photoionization equilibrium and $T=1.5\times10^4$K - close to the average
temperature in the simulation at these $\NHI$. At high $\NHI$,
the dashed red line is the Jeans length assuming that the gas is fully neutral with an arbitrary
normalization to show that the model recovers the scaling behavior.}
\label{fig:abs_length}
\end{figure}
\subsection{Transition from Ionized to Neutral LLSs} \label{sec:shielding}
As the HI column density increases from the threshold of a LLS up to a DLA, the systems
go from mostly ionized to neutral due to self-shielding. In \Figref{fig:NHI_vs_NH}, I
plot the median HI column density versus the total hydrogen column density along 200kpc
lines of sight centered on the absorber. As in the previous plots, these quantities are
computed along lines of sight through the box. Note that I have plotted the total $\NH$
on the $x$-axis to emphasize that $\NHI$ depends on the total $\NH$. Since the average HI
fraction along a line of sight is given by $\NHI/\NH$, this plot also shows how the HI
fraction depends on $\NH$.
At low column density, $\NHI < 10^{18} \pcms$, I have included the photoionization
equilibrium model with the UVB. The gas is taken to be highly ionized and in
photoionization equilibrium with the UVB. The column densities are thus given by $\NHI = n_{\rm HI}
L_J \propto n_{\rm H}^{\frac{3}{2}}$ and $\NH = n_{\rm H}L_J \propto n_{\rm
H}^{\frac{1}{2}}$ at constant temperature. Although this model does not quantitatively
match the simulation result, it does reproduce the scaling behavior of $\NHI \propto
\NH^3$. The main reason for the discrepancy is that atomic hydrogen is more localized
than the total hydrogen since it must be self-shielded. As a result, for the 200 kpc line
of sight used in \Figref{fig:NHI_vs_NH}, $\NH$ gets a more substantial contribution from
material outside the Jeans length which offsets the relation to the right of the model at
low column densities. The quantity considered below, $\langle n_{\rm H} \rangle$, avoids
this problem and has a better match at low $\NHI$.
Above the threshold of $\NHI = 10^{18}\pcms$, there is a rapid increase in $\NHI$ for a
small increase in $\NH$ due to self-shielding of the gas. For the highest column density
systems, $\NHI
> 10^{20.3} \pcms$, the systems asymptote to fully atomic systems. To showcase this
asymptotic behavior, I have included 3 lines in \Figref{fig:NHI_vs_NH} with successively
higher neutral fractions. Note that at even higher column densities, molecular physics
becomes important and non-negligible H$_2$ fractions make $\NHI < \NH$.
\begin{figure}
\centering
\includegraphics[width=8cm]{LLS_NHI_vs_NH.eps}
\caption{$\NH$ versus $\NHI$ along 200 kpc lines of sight. The solid black curve is the median and the light blue (dark blue)
band is the 1$\sigma$ (2$\sigma$) scatter around the median. At low $\NH$ I have
assumed photoionization equilibrium. The dashed red line corresponds to $T=1.5\times10^4$K - the average
temperature at these $\NHI$ in the simulation. Although the model does not quantitatively
match the median, it does reproduce the scaling behavior of $\NHI \propto \NH^3$ which is
described in the text. At large $\NH$, the gas becomes neutral and asymptotically approaches $\NHI = \NH$ until molecular hydrogen effects and
ionization from local sources become important. To guide the eye, I have included curves with $\NHI = 0.5\NH$, $\NHI = 0.9\NH$, and $\NHI=\NH$ which
are lines of constant HI fraction. The median in the simulation is asymptoting to fully neutral gas.} \label{fig:NHI_vs_NH}
\end{figure}
A related plot found in other works
\citep[i.e.][]{mcquinn_et_al_2011,altay_et_al_2011,rahmati_et_al_2012} is the median gas
density versus $\NHI$. As in these works, I compute the integral of $\nh$ weighted by
$\nhi$:
\< \langle \nh \rangle = \frac{\int \nh \nhi dl}{\int \nhi dl} .\>
Since $\nhi$ is more sharply peaked than $\nh$ due to self-shielding, this effectively
selects the central part of the absorber. I show the median $\langle \nh \rangle$ in
\Figref{fig:nH_vs_NHI}. I find that the photoionization equilibrium model reproduces the
properties well at low $\NHI$. It matches the scaling behavior of $\langle \nh \rangle
\propto \NHI^{2/3}$ derived from \Eqref{eq:PIR} and \Eqref{eq:jeans_length}. Above $\NHI
= 10^{18}\pcms$, self-shielding becomes important and there is a large increase in $\NHI$
for a small increase in $\langle \nh \rangle$. At the highest $\NHI$, the gas is expected
to be fully neutral and the model from \cite{schaye_2001} predicts that $\langle \nh
\rangle \propto \NHI^2$. As in \Figref{fig:abs_length}, the median does not asymptote to
the model curve.
\begin{figure}
\centering
\includegraphics[width=8cm]{LLS_nH_nHI_weighted_vs_NHI.eps}
\caption{$n_{\rm H}$ weighted by $n_\HI$ averaged along 200 kpc sightlines versus $\NHI$. The
solid black curve is the median and the light blue (dark blue)
band is the 1$\sigma$ (2$\sigma$) scatter around the median.
At low $\NHI$, the dashed red curve shows the prediction from the photoionization equilibrium model with $T=1.5\times 10^4$K which
reproduces the scaling behavior of the median. At large $\NHI$, the dashed red curves show the
prediction from fully neutral gas with an arbitrary normalization to show the model recovers the scaling behavior.} \label{fig:nH_vs_NHI}
\end{figure}
For ease in comparison with other work, I also include a related quantity which is the
$n_{\rm HI}$ weighted $x_{\rm HI}$ fraction in \Figref{fig:xHI_vs_NHI}
\citep{mcquinn_et_al_2011,altay_et_al_2011}. The comparison between the results of those
works and this work is discussed in \Secref{sec:other_works}.
\begin{figure}
\centering
\includegraphics[width=8cm]{LLS_avg_xHI_vs_NHI_nHI_weighted.eps}
\caption{$x_{\rm HI}$ weighted by $n_\HI$ averaged along 200 kpc sightlines versus $\NHI$. The
solid black curve is the median and the light blue (dark blue)
band is the 1$\sigma$ (2$\sigma$) scatter around the median.} \label{fig:xHI_vs_NHI}
\end{figure}
\subsection{Effect of Self-Shielding on the Column Density Distribution}
\label{sec:cdd_fNH_vs_fNHI}
In \Secref{sec:cdd}, we saw that the HI column density distribution has a flattening at
$\NHI \sim 10^{18}\pcms$ which has been attributed to self-shielding
\citep{mcquinn_et_al_2011,altay_et_al_2011,rahmati_et_al_2012}. A priori it is unclear
whether this flattening is only due to self-shielding and not due to some feature in the
total hydrogen column density distribution. This can be checked by comparing the HI
column density distribution, $f_\HI(\NHI)$, and the total hydrogen column density
distribution $f_{\rm H}(\NH)$, where I have included additional subscripts to emphasize
that they are different distributions. These two distributions are related by
\< f_\HI(\NHI) = f_{\rm H}(\NH) \frac{d\NH}{d\NHI}. \label{eq:fNHI_fNH}\>
The relation between $\NHI$ and $\NH$ is shown in \Figref{fig:NHI_vs_NH}. Using the
median of this relation, $\frac{d\NH}{d\NHI}$ can be computed. Furthermore, $f_{\rm
H}(\NH)$ can be computed in the simulation and then \Eqref{eq:fNHI_fNH} can be used to
compute $f_\HI(\NHI)$. The result of this procedure is shown in \Figref{fig:fNH_vs_fNHI}.
$f(\NH)$ is a power-law over the range in which the transition between ionized and
self-shielded gas occurs. Therefore, these simulations show that the feature at $\NHI \sim
10^{18}\pcms$ is a signature of self-shielding and not the distribution of the total
hydrogen at the corresponding column density.
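A sketch of this chain-rule mapping, where \texttt{NH\_med} and \texttt{NHI\_med} sample the smoothed median relation of \Figref{fig:NHI_vs_NH} and \texttt{f\_H} is $f_{\rm H}(\NH)$ evaluated at \texttt{NH\_med} (all names are illustrative):
\begin{verbatim}
import numpy as np

def f_HI_from_f_H(NH_med, NHI_med, f_H):
    """f_HI(N_HI) = f_H(N_H) dN_H/dN_HI along the median relation."""
    dNH_dNHI = np.gradient(NH_med, NHI_med)   # slope of the median
    return f_H * dNH_dNHI
\end{verbatim}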
\begin{figure}
\centering
\includegraphics[width=8cm]{fNH_vs_fNHI.eps}
\caption{HI column density distribution and H column density distribution. The black solid
curve shows the HI column density distribution as computed from the simulation. The blue, short-dashed, curve shows
the total hydrogen column density distribution as computed in the simulation. Finally, the red long-dashed curve shows the result of taking the median
profile in \protect\Figref{fig:NHI_vs_NH} to compute $\frac{d\NH}{d\NHI}$ and then computing the HI column density
distribution using \protect\Eqref{eq:fNHI_fNH}. Note that the median relation between $\NHI$ and $\NH$ was smoothed
over in order to reduce the noise in the derivative.} \label{fig:fNH_vs_fNHI}
\end{figure}
\subsection{Photoionization Rate}
In the limit where we can neglect radiative recombination and local sources of radiation,
the photoionization rate of LLSs directly measures the self-shielding of the LLS against
the UVB. Since the distance of an absorber from its host galaxy is anti-correlated with
its HI column density, as shown in \Figref{fig:dhalo_NHI}, low $\NHI$ systems will not be
significantly affected by the local radiation from their host halo. The decrease in the
photoionization rate in a LLS allows us to measure the effective shielding of the LLS
against the UVB. In \Figref{fig:PI_rate}, I plot the photoionization rate averaged along
lines of sight through the LLS, weighted by $\nhi$:
\< \langle \Gamma \rangle = \frac{\int \Gamma(l) n_{\HI} dl}{\int n_{\HI} dl}
.\label{eq:PI_exact}\>
If only the contribution from the UVB is considered, this integral can be solved for a
monochromatic UVB. In this limit, the differential optical depth can be written as $d\tau
= \nhi \sigma_{\rm HI} dl$, which gives
\< \langle \Gamma \rangle = \frac{\int \Gamma_0 e^{-\tau} \sigma_{\rm HI}^{-1}
d\tau}{\NHI}, \>
where $\Gamma(\tau) = \Gamma_0 e^{-\tau}$ and $\sigma_{\rm HI}$ is independent of $\tau$.
This then gives
\< \langle \Gamma \rangle = \Gamma_0 \frac{1-e^{-\NHI \sigma_{\rm HI}}}{\NHI \sigma_{\rm
HI}} .\label{eq:PI_simplified}\>
In \Figref{fig:PI_rate}, I include this model for a slab with column density $\NHI/2$ and
find that a value of $\sigma = 10^{-17.7}{\rm cm}^2$ provides a fairly good fit at low
$\NHI$ although it does not match the slope at large $\NHI$. I use a column density of
$\NHI/2$ since the LLS is illuminated on all sides by the UVB and this model assumes that
the LLS is being illuminated from one direction. The difference between this model and
the median photoionization rate in the simulation for $\NHI > 10^{19} {\rm cm}^{-2}$ is
due to the increasingly important effects of radiative recombination and local radiation
as $\NHI$ increases
\citep[i.e.][]{miralda-escude_2005,schaye_2006,rahmati_et_al_2013,rahmati_et_al_2013_b}.
However, this effect is unimportant for determining the effective shielding of the LLS
which is determined at lower column densities.
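A sketch of the monochromatic slab model; the unattenuated rate $\Gamma_0$ below is a representative value, not the simulation's UVB amplitude:
\begin{verbatim}
import numpy as np

def mean_gamma(N_HI, Gamma0=1e-12, sigma=10 ** -17.7):
    """HI-weighted mean photoionization rate for a slab of column
    N_HI/2 under a monochromatic UVB (Eq. PI_simplified)."""
    tau = 0.5 * N_HI * sigma
    return Gamma0 * (1.0 - np.exp(-tau)) / tau

# Suppression relative to the unattenuated UVB:
print(mean_gamma(np.logspace(17, 20, 4)) / 1e-12)
\end{verbatim}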
I also include a model for the average photoionization rate for a slab with column
density $\NHI/2$ illuminated on one side by the UVB using CLOUDY v13.01
\citep{ferland_2013}. For this model, I set up a slab with a plane-parallel geometry,
irradiated by the Haardt-Madau background given in \cite{haardt_madau_2001}, with
appropriate helium and metal abundances. I varied the hydrogen density ($n_{\rm H} \in
[10^{-3},10^{-1}] {\rm cm}^{-3}$) and the metallicity ($Z/Z_\odot \in [10^{-3},10^{-1}]$)
and computed the HI photoionization rate as a function of HI column density through the
slab. I found that this relationship was robust and did not depend on the hydrogen density or
metallicity. This result gives the long-dashed green curve in \Figref{fig:PI_rate} which
can be compared to the photoionization rate in actual simulations. This model has an
effective cross-section of $\sigma_{\rm HI} = 10^{-17.6}{\rm cm}^{2}$ at low column
densities. Interestingly, this model does not quantitatively match the absorption seen in
the simulation. This discrepancy is due to the anisotropy of the LLS which I will discuss
in the next section.
\begin{figure}
\centering
\includegraphics[width=8cm]{LLS_photoionization_nHI_weighted_vs_NHI.eps}
\caption{Median photoionization rate versus $\NHI$. The photoionization rate is averaged along
sightlines and weighted by the HI density. The solid black curve is the median and the light blue (dark blue)
band is the 1$\sigma$ (2$\sigma$) scatter around the median. The short-dashed red curve is the model of the photoionization
rate from \protect\Eqref{eq:PI_simplified} which assumes a mono-chromatic UVB with $\sigma_{\rm HI} = 10^{-17.7}{\rm cm}^2$ and has a column density of $\NHI/2$.
The long-dashed green line shows the average photoionization rate, \protect\Eqref{eq:PI_exact}, of a slab with
column density $\NHI/2$ illuminated by the UVB, as computed with CLOUDY. These models are discussed
further in the text.}
\label{fig:PI_rate}
\end{figure}
\section{Anisotropic Shielding of LLSs}\label{sec:anisotropy}
In the previous section, I tested the model developed in \cite{schaye_2001} and found
that it successfully reproduced many of the properties of LLSs. In this model, LLSs are
characterized by a single column density and the self-shielding of the absorber depends
on this quantity. However, for a non-spherical absorber the column density will depend on
the angular direction. To test the importance of this column density variation, I first
identified the center of each LLS by finding the maximum density along a line of sight.
From this maximum, I then computed the HI column density along the 6 Cartesian directions
originating from this point. In
\Figref{fig:NHI_directional}, I show the column density along the original line of sight,
$\NHI$, versus the difference between $\NHI$ and the minimum/maximum column density among
these 6 Cartesian directions.
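The measurement can be sketched in a few lines of Python. The snippet below is a schematic of my own on a uniform grid (the density field, grid size and cell size are placeholder assumptions, not the simulation data), and the central cell is included in each ray for simplicity.
\begin{verbatim}
import numpy as np

def directional_columns(n_HI, center, dl):
    """HI columns [cm^-2] from `center` to the box edge along the
    6 Cartesian directions; n_HI in cm^-3, cell size dl in cm."""
    i, j, k = center
    rays = [n_HI[i:, j, k], n_HI[:i+1, j, k][::-1],   # +x, -x
            n_HI[i, j:, k], n_HI[i, :j+1, k][::-1],   # +y, -y
            n_HI[i, j, k:], n_HI[i, j, :k+1][::-1]]   # +z, -z
    return np.array([ray.sum() * dl for ray in rays])

# Example: a crude filament-like absorber, elongated along z
x = np.linspace(-1, 1, 64)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
n_HI = 1e-2 * np.exp(-(X**2 + Y**2) / 0.05 - Z**2 / 0.5)
cols = directional_columns(n_HI, (32, 32, 32), dl=3.1e21)
print(np.log10(cols.min()), np.log10(cols.max()))
\end{verbatim}
For an anisotropic absorber like this one, the minimum and maximum of the 6 directional columns differ by a substantial factor, which is exactly the spread quantified in \Figref{fig:NHI_directional}.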
\begin{figure}
\centering
\includegraphics[width=8cm]{LLS_NHI_direcitonal_comparison.eps}
\caption{Comparison between the column density along a line of sight and the column densities along different directions originating from the absorber.
The red shaded (upper) region shows the 1 and 2$\sigma$ scatter of the difference between the line-of-sight $\NHI$ and the maximum $\NHI$, while the blue
shaded (lower) region shows the same for the minimum $\NHI$,
both taken over the 6 Cartesian directions originating at the center of the absorber. The black curves in the center of each region show the median. Note that the
column density on the $x$ axis is the column density through the entire system. This was
chosen to highlight the difference between the observed $\NHI$ of a system along a random
line of sight, and the characteristic minimum/maximum $\NHI$ between the center of the absorber
and the UVB.}
\label{fig:NHI_directional}
\end{figure}
\Figref{fig:NHI_directional} shows that if a random line of sight in the system has a
column density of $\NHI$, on average there will be a line of sight originating from the
center of that system with a column density 0.6-0.7 dex lower, approximately $\NHI/4$. As
a result, systems will be more ionized than naively expected from the column density in a
single direction. This result is important for understanding the column density
distribution (\figref{fig:cdd}), as well as the relationship between $\NHI$ and $\NH$
(\figref{fig:NHI_vs_NH}).
In \Figref{fig:PI_rate_min}, I compare the average photoionization rate along a Cartesian
direction with the average rate along the direction with the lowest $\NHI$. Since the
absorbers are randomly oriented with respect to the box, this Cartesian direction probes
an effectively random direction with respect to the absorber. The average photoionization
along this direction is given by the black solid curve. Fitting this curve using
\Eqref{eq:PI_simplified} gives an effective cross-section of $\sigma_{\rm HI} =
10^{-17.7} {\rm cm}^2$ at low column densities. The second direction is the direction
originating from the center of the LLS with the lowest $\NHI$. The short-dashed blue
curve shows the average photoionization rate versus column density along this direction.
I also include a slab model using the UVB in the simulation. This is done using CLOUDY as
I described in \Secref{sec:individual_LLS} and is given by the long-dashed red curve.
\begin{figure}
\centering
\includegraphics[width=8cm]{LLS_photoionization_nHI_weighted_vs_NHI_min.eps}
\caption{Photoionization rate versus $\NHI$ in two different directions. The solid black curve
is the median of the photoionization rate along a specific Cartesian direction, and hence along
an effectively random direction. As described in the text, the short-dashed blue curve is the median along the direction with the minimum $\NHI$ originating
from the center of the LLS. The long-dashed red curve is the photoionization
rate from CLOUDY assuming the Haardt-Madau background at $z=3$ \citep{haardt_madau_2001}.}
\label{fig:PI_rate_min}
\end{figure}
By comparing the curves in \Figref{fig:PI_rate_min}, I find that the photoionization rate
from the slab model in CLOUDY falls between the rate along a random direction and the
rate along the minimum direction in the simulation. This comparison is useful since it
shows that if one takes a random line of sight through a LLS, the gas along this line of
sight is less shielded than one would expect from the HI column density. This makes sense
since, on average, there will be a line of sight to the UVB with a significantly lower
column density (see \figref{fig:NHI_directional}) allowing for more photoionization than
naively expected. Likewise, for gas along the direction with the lowest column density,
there will be lines of sight with higher column densities which will result in a lower
photoionization rate than expected.
\section{Effective Shielding of LLSs} \label{sec:effective_shielding}
Putting together the results of \Secref{sec:individual_LLS} and \Secref{sec:anisotropy},
I find that the self-shielding of LLSs against the UVB is less than naively expected.
Given a LLS with column density $\NHI$, one would expect that this system is shielded by
an optical depth of $\tau = \NHI \sigma_{\rm HI}$, where $\sigma_{\rm HI}$ is an
effective cross-section of HI to the UVB. Since the self-shielding of LLSs is known to
flatten the column density distribution \citep[e.g.][or Section 5.5 of this
work]{altay_et_al_2011,mcquinn_et_al_2011,rahmati_et_al_2012}, it is important to
understand at what column density one should expect self-shielding to become important.
There are three effects which lower the amount of shielding. First, as I discussed in
\Secref{sec:individual_LLS}, since a LLS is bathed in the UVB from all sides, a system
with a column density of $\NHI$ is effectively only shielded by a column density of
$\NHI/2$. Second, the UVB is not monochromatic but has a spectrum which extends to high
energies. Since the cross-section of HI decreases with increasing energy, these photons
can penetrate deeper into the cloud and lower the effective cross-section of the LLS to the
UVB. As I showed in \Figref{fig:PI_rate_min}, the effective cross-section against the UVB
at $z=3$ is $\sigma_{\rm HI} \approx 10^{-17.6}\,{\rm cm}^2$, 0.4 dex lower than the
cross-section at the Lyman limit. Lastly, I investigated the effect of the anisotropy of
the LLS in \Secref{sec:anisotropy} and found that, on average, a LLS with a column
density of $\NHI$ will have a line of sight with column density $\NHI/4$ from the center
of the LLS to the UVB, i.e. half of what one would expect if the LLS were isotropic. This
anisotropy means that an average LLS will be less shielded than expected from the column
density. In \Figref{fig:PI_rate_min}, I found that this results in a $0.1-0.2$ dex
decrease in the optical depth as compared to a uniform slab.
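In numbers, the three factors combine as (a back-of-the-envelope condensation of the estimates quoted in this section)
\< \tau_{\rm eff} \simeq \frac{1}{2} \cdot \frac{1}{2} \cdot \NHI\, \sigma_{\rm eff} = \frac{\NHI\, \sigma_{\rm eff}}{4}, \>
where the first factor of $1/2$ accounts for the two-sided illumination and the second for the anisotropy of the absorber. With $\sigma_{\rm eff} \approx 10^{-17.6}\,{\rm cm}^2$, setting $\tau_{\rm eff}=1$ gives $\NHI \simeq 4\times10^{17.6}\pcms \simeq 10^{18.2}\pcms$.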
Altogether, these three effects mean that a LLS needs to have a column density of $\NHI
\sim 10^{18}\pcms$ in order to have an optical depth of unity. Since the flattening of
the column density distribution is due to this self-shielding, this means that we should
expect the column density distribution to start flattening around $\NHI \sim
10^{18}\pcms$, as I find in \Figref{fig:cdd}. In addition, the onset of self-shielding
can clearly be seen in the relation between $\NHI$ and $\NH$ in \Figref{fig:NHI_vs_NH}.
Note that the effective cross-section of HI also depends weakly on the redshift of the
LLS since the spectral shape of the UVB changes slowly with redshift.
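The magnitude of this spectral effect can be illustrated with a short numerical sketch (my own illustration; the exact $\nu^{-3}$ scaling of the cross-section above the Lyman limit and the adopted power-law slope of the UVB are simplifying assumptions): averaging the transmission of a slab over a hard spectrum shows how high-energy photons keep the gas exposed well beyond the monochromatic Lyman-limit expectation.
\begin{verbatim}
import numpy as np

sigma_LL = 6.3e-18   # cm^2, HI cross-section at the Lyman limit
alpha    = 1.5       # assumed UVB slope, J_nu ~ nu^-alpha

x = np.logspace(0.0, 2.0, 4000)   # x = nu/nu_LL
w = x**(-alpha - 4.0)             # weight J_nu*sigma_nu/(h*nu)

def trap(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def transmission(N):
    """Spectrum-averaged Gamma(N)/Gamma(0) for a slab of column N."""
    tau = N * sigma_LL * x**(-3.0)
    return trap(w * np.exp(-tau), x) / trap(w, x)

for logN in (17.5, 18.0, 18.5):
    N = 10**logN
    print(logN, transmission(N), np.exp(-N * sigma_LL))
# the spectrum-averaged transmission greatly exceeds the
# monochromatic expectation once the slab is optically thick
\end{verbatim}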
\section{Comparison with Previous Work}\label{sec:other_works}
Both LLSs and DLAs have received significant attention in the literature and attempts are
now being made to quantitatively match observations. In this section, I will compare the
results in this work to papers which have made a similar attempt to understand the
properties of LLSs.
\cite{kohler_gnedin_2007} studied LLSs using simulations which had lower spatial and mass
resolution than the simulations in this work. They found many of the same trends found
here, although they were limited on the low-mass end. They also studied the properties of
absorbers as a function of their parent halo and found that LLSs reside in halos with a
large range of masses and concluded that the majority of LLSs do not reside in very
low-mass halos. As in this work, they found that LLSs remain ionized up to fairly high
column densities, $\NHI = 10^{20}\pcms$. Despite including many of the physical
mechanisms needed to model the ionization state of the gas, their column density
distribution did not show any signs of self-shielding around $\NHI = 10^{18}\pcms$.
\cite{mcquinn_et_al_2011} studied LLSs using simulations with a simulation volume similar to that of
this work. They found an HI column density distribution similar to the one found in this
work, with significant flattening due to self-shielding starting a little above $\NHI
\sim 10^{18}\pcms$. They also made comparisons to the model in \cite{schaye_2001} and
found that this model had a qualitative agreement with their results. Just as in this
work, they found that LLSs remain ionized up to high column densities, as can be seen in
the middle panel of Figure 5 in \cite{mcquinn_et_al_2011}, where they have a $n_{\rm
HI}$ weighted neutral fraction of $\sim 0.1$ at $\NHI = 10^{19} \pcms$, consistent with
the neutral fraction reported in \Figref{fig:xHI_vs_NHI} of this work.
\cite{altay_et_al_2011} studied both LLSs and DLAs, found good agreement with the
observed column density distribution over a wide range of $\NHI$, and found that self-shielding
starts to flatten the HI column density distribution above $\NHI = 10^{18}\pcms$.
Interestingly, the LLSs in their simulations are significantly less ionized than in this
work or in \cite{mcquinn_et_al_2011}. The left panel of Figure 3 in that work shows that
the $n_{\rm HI}$ weighted neutral fraction at $\NHI = 10^{19} \pcms$ is approximately
$-0.2$ dex, as compared to the $-1$ dex reported in \cite{mcquinn_et_al_2011} and
\Figref{fig:xHI_vs_NHI} of this work. Despite this difference in the ionization fraction,
their relation between $\langle n_{\rm H} \rangle$ versus $\NHI$ is very similar to what
was found in this work in \Figref{fig:nH_vs_NHI}.
\cite{rahmati_et_al_2012} studied the redshift evolution of the column density
distribution and found an evolution similar to that in \Figref{fig:cdd}. While the amplitude
decreases with decreasing redshift, they find that the column density distribution
becomes slightly shallower at lower redshifts and low column densities. As was discussed
in \Secref{sec:cdd_evolve}, the overall normalization of their HI column density
distribution evolves more than this work between $z=5$ and $z=3$. This is likely due to
the same reason this work had difficulty reproducing the frequency of LLSs at
high-redshift in \Figref{fig:dNdX}: since this work uses zoom-in simulations, it does not
capture the large-scale filamentary structure at high redshift.
\cite{rahmati_et_al_2013_b} discussed many of the same properties of LLSs as in this work
using a fixed dark matter particle mass of $6.3 \times 10^{6}$ h$^{-1} M_\odot$, as opposed
to the zoom-in simulations used in this work. The comparison between this work and
\cite{rahmati_et_al_2013_b} provides a good test of the assumption that the zoom-in
region is not overly biased. The cumulative LLS incidence with respect to halo mass is
also computed in the top right panel of Figure 6 in \cite{rahmati_et_al_2013_b} and shows
that there is not a large contribution from halos above $10^{12} M_\odot$, a range which
is inaccessible with the zoom-in simulations used in this work. On the low-mass end, the
simulations show that there is a significant contribution from halos below a halo mass of
$10^{10} M_\odot$, in agreement with \Figref{fig:total_xsection} of this work.
\cite{rahmati_et_al_2013_b} also studied the impact parameter of LLSs and found results similar
to \Figref{fig:dhalo_NHI}, namely an anti-correlation between $N_{\rm HI}$ and the
distance to the nearest halo.
\section{Summary and Conclusion} \label{sec:conclusion}
In this work, I have explored the properties of LLSs using cosmological zoom-in
simulations which include on-the-fly radiative transfer and have high mass resolution.
The simulations in this work reproduce the observed incidence frequency of LLSs as well
as the HI column density distribution, indicating that the simulations are effectively
modeling LLSs.
Using these simulations, I investigated the host halos of LLSs. The high mass resolution
of these simulations allowed me to probe the LLS content of halos down to $10^8
{\rm h}^{-1} M_\odot$. These results showed that halos have a nearly constant covering
fraction of LLSs within their virial radius over a wide range of halo masses, similar to
the results in \cite{fumagalli_et_al_2014}. However, it is important to note that the
simulations used in this work, as well as those in \cite{fumagalli_et_al_2014}, use
inefficient feedback which leads to an overproduction of stellar mass in the halos of
interest. As has been recently shown in \cite{faucher_et_al_2015} and
\cite{rahmati_et_al_2015}, including more efficient feedback which is needed to produce
realistic stellar masses also increases the covering fraction of LLSs and boosts it to
values significantly higher than what is found in this work and in
\cite{fumagalli_et_al_2014}. Efficient feedback will likely affect many of the properties
of LLSs and this will be investigated in future work.
In addition to this near-constant covering fraction, there is a cutoff at low halo masses
which increases as the redshift decreases. I argued that this evolution of the cutoff is
real since the simulations have the necessary mass resolution to adequately model these
halos and that the evolution can be explained by the photoionization of gas in the galaxy
due to the UVB. In addition, I found that between $z=2-5$, more than $50\%$ of LLSs
reside in halos with $M<10^{10} {\rm h}^{-1} M_\odot$. This is especially interesting
since H$_2$-based star formation models predict that these galaxies will be dark
\citep[e.g.][]{gnedin_kravtsov_2010,kuhlen_et_al_2013}. As a result, absorption line
studies of LLSs will be an important testing ground for simulations since they probe a
large reservoir of gas which will be difficult to detect with other means.
Next, I investigated the properties of individual LLSs. I tested a simple model from
\cite{schaye_2001} and found that it reproduced the characteristic size and HI fraction
of LLSs well for $\NHI < 10^{18} \pcms$. Above this threshold, the gas is no longer
optically thin and the model is no longer valid. However, in the DLA regime, the gas is
almost entirely neutral so the simple model is justified once again with a scale length
given by the Jeans length. Using the relation between $\NHI$ and $\NH$, I showed how
the onset of self-shielding at $\NHI = 10^{18}\pcms$ is responsible for the flattening of the
HI column density distribution which has also been shown in
\cite{mcquinn_et_al_2011,altay_et_al_2011,rahmati_et_al_2012}.
Lastly, I studied why this self-shielding occurs at a higher value than one might naively
expect for LLSs. While the hard spectrum from the UVB accounts for most of the
difference, there is also a significant effect from the anisotropic structure of LLSs.
For an absorber with a column density of $\NHI$ in a given direction, I found that on
average, there are lines of sight which have significantly less shielding to the UVB.
This results in the absorber being more ionized than expected from the column density.
Together, these effects result in the onset of self-shielding being pushed to $\NHI =
10^{18}\pcms$. One consequence of this result is that if one can independently constrain
the UVB or the anisotropic structure of LLSs, the other quantity can be constrained by
measuring the column density at which self-shielding kicks in.
I would like to thank the anonymous referee for a thoughtful and thorough report which
improved the quality of the paper. I would like to acknowledge helpful comments from Nick
Gnedin, Andrey Kravtsov, Stephan Meyer, Dan Holz, Tom Witten, Oscar Agertz and Benedikt
Diemer. This work was supported in part by the NSF grant AST-0908063, and by the NASA
grant NNX-09AJ54G. The simulations used in this work have been performed on the Joint
Fermilab - KICP Supercomputing Cluster, supported by grants from Fermilab, Kavli
Institute for Cosmological Physics, and the University of Chicago. This work made
extensive use of the NASA Astrophysics Data System and the arXiv.org preprint server. I
made use of the CAMB code to generate power spectra in the course of this work.
\bibliographystyle{mn2e_long}
\section{Introduction}
The seminal works of Dirac \cite{Dirac} on constrained Hamiltonian systems
have been developed in several important research lines. One of these
developments is due to Fradkin, Fradkina, Batalin and Tyutin
(BFFT) \cite{BFFT}, where Hamiltonian systems subject to second class constraints are conveniently considered. The BFFT method consists in
enlarging the original phase-space of the theory by adding compensating fields which make it possible to convert the second class constraints into first class
ones. In doing so, it is possible to avoid Dirac brackets, which
can present severe problems when one follows the canonical approach to quantization \cite{HT}. As first class constraints are also necessarily associated with local gauge symmetries, a system converted by the BFFT procedure can
be treated by using all the machinery associated with the BRST formalism
\cite{BRST}.
The BRST approach to the quantization of gauge theories appears with all its
power in the field-antifield formalism \cite{BV,HT,GPS}. This formalism gives an elegant and
systematic way for constructing the functional generator of any general gauge theory, with possibly reducible or open gauge algebras. At the same time, possible obstructions to the gauge symmetries due to quantum
effects are naturally taken into account within the field-antifield formalism.
\bigskip
In this work we consider, by using some tools of the field-antifield formalism, the quantization of first order gauge theories which have been obtained from
second class constrained systems by the process of
conversion developed by BFFT. We show that the compensating fields introduced by the conversion procedure do not belong to the BRST cohomology
\cite{HT} at ghost number one. Hence there is no term
in the space of fields and antifields with ghost number one that is BRST closed without being BRST exact. This means that the Wess-Zumino consistency condition \cite{GPS} is solved in a trivial way: there is no gauge anomaly for this class of systems and the quantum master equation can always be solved
with the inclusion of a proper counterterm in the quantum action.
It is useful to observe that this counterterm, if it exists, can play a non-trivial role. We give an example where massive electrodynamics couples to chiral fermions. There we show that it is necessary to introduce a non-trivial counterterm in order to solve the quantum master equation. This counterterm permits us to extract an anomalous expectation value related to the divergence of the fermion Noether chiral current.
We would like to note that
compensating fields have been largely employed directly inside Lagrangian
descriptions \cite{Compensating,dWG}. There the purpose is not to convert second class constraints, but to enlarge the symmetry content of a theory in such a way that the original description is recovered within some gauge choice. From this last point of view, BFFT and Lagrangian compensating fields play similar roles. In several examples of Lagrangian descriptions it is proved that compensating fields also do not belong to the cohomology
at ghost number one and can be used as well to extract anomalous expectation values
of physically relevant quantities \cite{us}.
We organized this work as follows: In section {\bf2} we present a brief review of the BFFT conversion of first order systems subject to pure
second class constraints. We display the local gauge invariance of the
first order action which is introduced by the BFFT compensating fields.
The functional quantization of such a system is described in section {\bf3},
by using the tools of the field-antifield formalism. We derive the BRST differential and explicitly show that the BFFT variables do not belong to
the BRST cohomology at ghost number one. This assures that the quantum master equation can be solved for any system of this class. In section {\bf4} the ideas presented in the first sections are applied to a model which
describes massive electrodynamics coupled to chiral fermions in four space-time dimensions. By using a regularization that keeps the vector symmetry as a preferential one, the quantum master equation is solved with the introduction of a specific counterterm in the quantum action. A few different gauge fixing choices are explored and covariant actions are obtained. When the gauge freedom is fixed by identifying the compensating fields with external functions, we show that the independence of the
path integral with respect to those external functions permits us to derive
expectation values which are related to the anomalous divergence of the Noether chiral current.
We reserve section {\bf5} for some general comments and concluding remarks.
\bigskip
\section{First order systems subject to second class constraints}
\setcounter{equation}{0}
\bigskip
In this section we will review a few topics on constrained Hamiltonian
systems \cite{HT} and on the BFFT conversion procedure \cite{BFFT} in order to fix
notations and to introduce some results that will be useful for further developments.
Let us start by considering a generic first order system living in a (phase) space with
discrete bosonic coordinates
$y^\mu$, $\mu=1,2,...,2N$. The extension to more general situations can be trivially done.
Its action is written as
\begin{equation}
\label{z1}
S_0=\int\,dt\,\left(B_\mu\dot y^\mu-\lambda^\alpha\chi_\alpha-H\right)\,
\end{equation}
\noindent where $B_\mu,\,H$ and $\chi_\alpha$ are in principle arbitrary functions of the coordinates but do not depend on the velocities. The Lagrange multipliers $\lambda^\alpha$ are to be regarded as independent quantities.
From the above expression one can read the symplectic form
\begin{equation}
f_{\mu\nu}={{\partial B_\nu}\over{\partial y^\mu}}-{{\partial B_\mu}\over{\partial y^\nu}}
\label{z2}
\end{equation}
\noindent which has an inverse $f^{\mu\nu}$ if the system is well defined. With its aid, we can define the brackets between any
two functions $A(y)$ and $B(y)$ as
\begin{equation}
\bigl\{A,\,B\bigr\}={\partial A\over{\partial y^\mu}}
f^{\mu\nu}{\partial B\over{\partial y^\nu}}\label{z3}
\end{equation}
It follows that
\begin{equation}
\bigl\{y^\mu,\,y^\nu\bigr\}=f^{\mu\nu}\label{z4}
\end{equation}
The brackets appearing in the above expressions can be interpreted as Poisson brackets only in a broad sense, since they take into
account the primary second class constraints of Dirac's scheme \cite{FJ}. In this sense they are primary Dirac brackets.
Let now $H$ and $\chi_\alpha$,
$\alpha=1,2,...,2n$, represent respectively a first class
Hamiltonian and a set of second class constraints.
The Hamiltonian and the constraints then satisfy
the structure
\begin{eqnarray}
\bigl\{\chi_\alpha,\chi_\beta\bigr\}&=&\Delta_{\alpha\beta}
\nonumber\\
\bigl\{H,\chi_\alpha\bigr\}&=&
V_\alpha^\beta\chi_\beta\label{3}
\end{eqnarray}
\bigskip \noindent As the $\chi$'s are second class,
the constraint
matrix $\Delta_{\alpha\beta}$ is regular.
\bigskip
It may be convenient to extend the phase-space by adding compensating variables $\phi^\alpha$, $\alpha=1,2,\dots,2n$, while at the same time converting the set of second class constraints into a first-class one. This assures that the number of degrees of freedom is not changed by the process, which also introduces local symmetries that permit one to quantize
the theory by using the powerful tools of local gauge theories.
To perform this conversion through the
BFFT procedure, it is assumed that the BFFT compensating variables $\phi^\alpha$
satisfy fundamental brackets given by
\begin{equation}
\bigl\{\phi^\alpha,\phi^\beta\bigr\}=\omega^{\alpha\beta}
\label{6}
\end{equation}
\bigskip
\noindent where $\omega$ is
some constant, antisymmetric and invertible matrix.
In order to avoid the introduction of further second class constraints, it may be convenient to
choose $\omega$ in such a way that the compensating variables form a set of canonical conjugated quantities.
In any case, it follows that in the BFFT extended space, the brackets between any two quantities
$A(y,\phi)$ and $B(y,\phi)$ are written as
\begin{equation}
\bigl\{A,\,B\bigr\}={\partial A\over{\partial y^\mu}}
f^{\mu\nu}{\partial B\over{\partial y^\nu}}+{\partial A\over{\partial \phi^\alpha}}
\omega^{\alpha\beta}{\partial B\over{\partial \phi^\beta}}
\label{7}
\end{equation}
\bigskip
\noindent as both sectors are
independent.
\bigskip
The general idea of the BFFT algorithm is to replace the old set of second class constraints and the old Hamiltonian by a new set of first class constraints
$\tilde\chi_\alpha = \tilde\chi_\alpha(y,\phi)$ and Hamiltonian
$\tilde H=\tilde H(y,\phi)$
in such a way that they become involutive:
\begin{eqnarray}
\bigl\{\tilde\chi_\alpha,\tilde\chi_\beta\bigr\}&=&0
\nonumber\\
\bigl\{\tilde H,\tilde\chi_\alpha\bigr\}&=&0
\label{8}
\end{eqnarray}
\bigskip
By requiring that $\tilde A(y,0)=A(y)$ for any quantity $A$ defined in the extended space,
it is assured that the original formulation of the theory is recovered
when the unitary
gauge $\phi^\alpha=0$ is implemented.
In Refs. \cite{BFFT} it is proved that Eqs. (\ref{8}), subject to the
above condition, always have a power series solution in the compensating variables,
with coefficients with only $y^\mu$ dependence.
The second class constraints, for instance, can be extended to
\begin{equation}
\tilde\chi_\alpha(y,\phi)=\chi_\alpha(y)+X_{\alpha\beta}(y)\phi^\beta
+X_{\alpha\beta\gamma}(y)\phi^\beta\phi^\gamma+\dots
\label{9}
\end{equation}
\bigskip
Conditions (\ref{8}) impose restrictions
on the expansion coefficients. As an example,
the regular matrices $X_{\alpha\beta}$ must satisfy the identity
\begin{equation}
\label{10}
X_{\alpha\beta}\omega^{\beta\gamma}X_{\delta\gamma}=-\Delta_{\alpha\delta}
\end{equation}
\bigskip
Even if some quantity $A(y)$ is not a second class constraint, it
can also be extended to $\tilde A(y,\phi)$ in order to be involutive with the converted constraints $\tilde\chi_\alpha$. Following the
BFFT procedure we can show that in this situation
\begin{equation}
\tilde A(y,\phi)=A(y)- \phi^\alpha\omega_{\alpha\beta}X^{\beta\gamma}\{\chi_\gamma,A\}+...
\label{10a}
\end{equation}
\bigskip
\noindent where the dots represent at least second order corrections in $\phi$ to $A(y)$. In (\ref{10a}), the matrix $X$ with contravariant indices is to be considered as the inverse of the corresponding covariant one.
Now it is possible to prove that the first order action
\begin{equation}
S_0=\int\,dt\,[B_\mu \dot y^\mu+B_\alpha\dot\phi^\alpha-\lambda^\alpha\tilde\chi_\alpha-\tilde H]
\label{11}
\end{equation}
\bigskip
\noindent is invariant under the gauge transformations
\begin{eqnarray}
\delta y^\mu&=&\{y^\mu,\tilde\chi_\alpha\}\epsilon^\alpha\nonumber\\
\delta\phi^\alpha&=&\{\phi^\alpha,\tilde\chi_\beta\}\epsilon^\beta
\nonumber\\
\delta\lambda^\alpha&=&\dot\epsilon^\alpha
\label{12}
\end{eqnarray}
\noindent By using the Jacobi identity and Eqs. (\ref{8}) we see that the transformations (\ref{12}) close in an Abelian
algebra. As in (\ref{z2}), in (\ref{11}) $B_\alpha$
is related to the inverse of $\omega^{\alpha\beta}$ through
\begin{equation}
\omega_{\alpha\beta}={{\partial B_\beta}\over{\partial\phi^\alpha}}-
{{\partial B_\alpha}\over{\partial\phi^\beta}}
\label{13}
\end{equation}
\bigskip
\noindent One can always choose
$B_\alpha={1\over2}\omega_{\beta\alpha}\phi^\beta$
without loss of generality.
By using some of the above equations, it is not difficult to show that
\begin{equation}
\delta[B_\mu \dot y^\mu+B_\alpha\dot\phi^\alpha-\lambda^\alpha\tilde\chi_\alpha-\tilde H]
={d\over{dt}}\{[B_\mu f^{\mu\nu}{{\partial\tilde\chi_\alpha}\over{\partial y^\nu}}+
B_\beta \omega^{\beta\rho}{{\partial\tilde\chi_\alpha}\over{\partial \phi^\rho}}
-\tilde\chi_\alpha]\epsilon^\alpha\}
\label{14}
\end{equation}
\bigskip
\noindent and consequently (\ref{11}) is indeed invariant under the local gauge transformations (\ref{12}), provided boundary terms can be discarded.
\bigskip
\section{Quantization}
\setcounter{equation}{0}
\bigskip
Let us perform the quantization of the system described above along the field-antifield formalism \cite{BV,HT,GPS}. To do so it is first necessary to introduce antifields $\Phi^*_A=(y^*_\mu,\phi^*_\alpha,\lambda^*_\alpha,c^*_\alpha)$ corresponding to the fields
$\Phi^A=(y^\mu, \phi^\alpha, \lambda^\alpha,c^\alpha)$. In our case,
$y^\mu$, $\phi^\alpha$ and $\lambda^\alpha$ are bosonic and have ghost number zero. The ghosts $c^\alpha$ are fermionic and have ghost number one. The corresponding antifields have opposite Grassmann parity and ghost number given by minus the ghost number of the corresponding field minus one. One can verify that the field-antifield action
\begin{equation}
\label{15}
S= S_0+\int\,dt\,[y^*_\mu\{y^\mu,\tilde\chi_\alpha\}c^\alpha+
\phi^*_\beta\{\phi^\beta,\tilde\chi_\alpha\}c^\alpha+\lambda^*_\alpha\dot c^\alpha]
\end{equation}
\bigskip
\noindent satisfies then the classical master equation
\begin{equation}
\label{16}
{1\over 2}(S,S)=0
\end{equation}
\bigskip\noindent
where the antibracket between any two quantities $X[\Phi,\Phi^*]$ and $Y[\Phi,\Phi^*]$ is defined as
$(X,Y) = {\delta_rX\over
\delta\Phi^A} {\delta_lY\over\delta\Phi^\ast_A}
- {\delta_rX\over \delta\Phi^\ast_A}
{\delta_lY\over \delta\Phi^A}$. When pertinent, we are assuming DeWitt's notation of summation and integration over intermediary variables.
\bigskip
In the BV formalism, the BRST differential is introduced through
\begin{equation}
\label{a17}
s\,X=(X,S)
\end{equation}
\noindent for any local functional $X=X[\Phi,\Phi^*]$. As a consequence of the master equation (\ref{16}) and the Jacobi identity, $s$ is nilpotent. So, saying that the BV action satisfies the master equation is equivalent to saying that it is BRST invariant.
To fix a gauge we need to
introduce trivial pairs $\bar c_\alpha\,,\bar\pi_\alpha$ as new fields,
and the corresponding antifields ${\bar c}^{*\alpha},{\bar\pi}^{*\alpha}$,
as well as a gauge-fixing fermion $\Psi$. The antifields are eliminated by choosing $\Phi^*_A = {{\partial\Psi}\over{\partial\Phi^A}}$. It is always possible to
choose
\begin{equation}
\label{17}
\Psi=\bar c_\alpha\phi^\alpha
\end{equation}
\bigskip
\noindent associated with the unitary gauge, but different
choices can be made.
It is also necessary to extend the field-antifield action to a non-minimal one
\begin{equation}
\label{18}
S\rightarrow S_{nm}=S+\int\,dt\,\bar\pi_\alpha{\bar c}^{*\alpha}
\end{equation}
\bigskip
\noindent in order to implement the gauge fixing introduced by $\Psi$. The gauge-fixed vacuum functional is then defined as
\begin{equation}
Z=\int[d\Phi^A][\det \omega]^{-{1\over2}} [\det f]^{-{1\over2}}
\exp\{\frac{i}{\hbar}\,S_{nm}[\Phi^A,\Phi^*_A - {{\partial\Psi}\over{\partial\Phi^A}}]\}
\label{19}
\end{equation}
\bigskip
In the unitary gauge, we observe that
besides the identification $\bar c^{*\alpha}=\phi^\alpha,\,\phi^*_\alpha=
\bar c_\alpha$, all the other antifields vanish. With this and the use of
Eqs. (\ref{9}-\ref{10}), we see that formally (\ref{19}) reduces to
the Senjanovic \cite{Senj} path integral
\begin{eqnarray}
Z&=&\int[dy^\mu]\vert \det\,f\vert^{-{1\over2}}\delta[\chi_\alpha]
\vert\det\Delta\vert^{1\over 2}\nonumber\\
& &\exp\left\{\frac{i}{\hbar}\int dt[B_\mu \dot y^\mu-
H]\right\}
\label{4}
\end{eqnarray}
\noindent Actually this reduction can only be done if
quantum effects do not obstruct the
gauge symmetries. Possible obstructions are related to
the dependence of the path integral on redefinitions of the gauge-fixing fermion $\Psi$. In general, if the classical
field-antifield action $S$ can be replaced by some quantum action $W$
expressed as a local functional of fields and antifields
and satisfying the so-called quantum master equation
\begin{equation}
\label{20}
{1\over 2}(W,W)\, - \, i\hbar\Delta W\
\,=\,0
\end{equation}
\bigskip
\noindent then the gauge symmetries are not obstructed at the quantum level.
In expression (\ref{20}) we have introduced the potentially singular operator
$\Delta \equiv
{\delta_r\over\delta\Phi^A}{\delta_l\over\delta\Phi^\ast_A}$
and it was assumed that $W$ can be expanded in powers
of $\hbar$ as
\begin{equation}
W[\Phi^A,\Phi^{\ast}_A ] =
S[\Phi^A ,\Phi^{\ast}_A ] +
\sum_{p=1}^\infty \hbar^p M_p [\Phi^A ,\Phi^{\ast}_A ]
\end{equation}
\noindent The two first terms of the quantum master equation (\ref{20}) are
\begin{eqnarray}
\label{21}(S,S) &=& 0\\
\label{22}
(M_1,S) &=& \,i\, \Delta S
\end{eqnarray}
As expected, the tree approximation gives (\ref{16}). Eq. (\ref{22}) is only formal, since the action of the operator
$\Delta$ must be regularized. If it vanishes when applied to $S$, the quantum action $W$ can be identified with $S$. If $\Delta S$ gives a non-trivial result but there exists
some $M_1$ expressed in terms of local fields such that (\ref{22}) is satisfied, gauge symmetries are not obstructed at one loop order. Otherwise, the theory presents an anomaly
\begin{equation}
\label{23}
{\cal A }[\,\phi, \phi^\ast \,]\, = \, \Delta S + { i }
( S , M_1 ) \,=\,a_\alpha\,c^\alpha+\dots\,.
\end{equation}
\bigskip
The nilpotency of the BRST operator implies that
$s{\cal A}=0$, which is the Wess-Zumino consistency condition.
So, looking for possible anomalies in any theory is the same as looking for local functionals with ghost number one that are BRST closed ($s{\cal A}=0$) but not BRST exact (${\cal A}\neq sB$).
\bigskip
By using cohomological arguments, we can show that the quantum master
equation, for first order systems with pure second class constraints converted with the use of the BFFT procedure, can always be solved. To prove this, let us first derive the BRST transformations of the fields and antifields for the converted system:
\begin{eqnarray}
\label{a2}
s\,y^\mu&=&\{y^\mu,\tilde\chi_\alpha\}c^\alpha\nonumber\\
s\,\phi^\beta&=&\{\phi^\beta,\tilde\chi_\alpha\}c^\alpha\nonumber\\
s\,\lambda^\alpha&=&\dot c^\alpha\nonumber\\
s\,c^\alpha&=&0\nonumber\\
s\,\bar c_\alpha&=&\bar\pi_\alpha\nonumber\\
s\bar\pi_\alpha&=&0\nonumber\\
s\, y_\mu^*&=&-{{\partial S}\over{\partial y^\mu}} \nonumber\\
s\,\phi_\alpha^*&=& -{{\partial S}\over{\partial \phi^\alpha}} \nonumber\\
s\,\lambda_\alpha^*&=& \tilde\chi_\alpha \nonumber\\
s\,c_\alpha^*&=& -y^*_\mu\{y^\mu,\tilde\chi_\alpha\}
-\phi^*_\beta\{\phi^\beta,\tilde\chi_\alpha\}-\dot\lambda^*
\nonumber\\
s\,{\bar c}_\alpha^*&=& 0 \nonumber\\
s\,{\bar \pi}^{*\alpha}&=& {\bar c}^{*\alpha}
\end{eqnarray}
\bigskip
\noindent where $S$ is given by (\ref{15}).
We see that ${\bar c_\alpha}$ and $\bar\pi_\alpha$ form
BRST doublets ($s\,B=C\,,s\,C=0$) and do not belong to the BRST cohomology \cite{HT}. The same is true for
their antifields. To show that the other fields and antifields do not contribute to the cohomology at ghost number one, it is enough to study the
cohomology of the linearized piece of $s$, which will be denoted by $s^{(1)}$ \cite{s1}. If we assume that in the process of conversion of the constraints (see Eq. (\ref{9})), the invertible matrix $X(y)$ can be written as a power series in $y$ (which will be the case for the example we are going to consider),
\begin{equation}
\label{a3}
X(y)_{\alpha\beta}=X^{(0)}_{\alpha\beta}+
X^{(1)}_{\alpha\beta\mu}y^\mu+
X^{(2)}_{\alpha\beta\mu\nu}y^\mu y^\nu+\dots
\end{equation}
\noindent we see that
\begin{eqnarray}
\label{a5}
s^{(1)}\,\phi^\alpha&=&\omega^{\alpha\gamma}X^{(0)}_{\beta\gamma}c^\beta
\nonumber\\
s^{(1)}\,c^\alpha&=&0\nonumber
\end{eqnarray}
\noindent The equations above imply that
$\phi^\alpha$ and $C^\alpha=\omega^{\alpha\gamma}X^{(0)}_{\beta\gamma}c^\beta$ form doublets under the action of $s^{(1)}$ and as a consequence they also do not belong to the cohomology. As $c^\alpha$ is trivially obtained from $C^\alpha$, and since it is the only fundamental field (or antifield) with positive ghost number,
it is not possible to construct a local functional with ghost number one that is BRST closed without being BRST exact. This means that any candidate for an anomaly can always be canceled by some counterterm $M$. So the situations found in \cite{dWG} and later explored in \cite{us} appear also here:
enlarged symmetries due to compensating fields (here the BFFT variables)
are not anomalous. This does not mean that they play a trivial role at the quantum level, since the existence of a counterterm modifies expectation values of relevant physical quantities \cite{us}. In the next section we are going to show an
example where all of these features
are carefully taken into account in order to derive consistent quantum actions.
\bigskip
\section{
Massive vector fields coupled to chiral\\
fermions }
\setcounter{equation}{0}
\bigskip
We shall now apply the ideas discussed above to massive chiral electrodynamics. Although only one chirality of the fermions couples to the connection $A_\mu$, the second class system presents no gauge anomaly since it exhibits no gauge symmetry. When it is converted to a first class one, however, the fermions come to transform in a chiral way, and such a gauge transformation is known to lead to possible anomalies \cite{ABJ}. According to the ideas discussed in the last section, however, the BFFT variables play the
role of Wess-Zumino fields and
permit us to write the anomaly candidates as BRST exact functionals,
solving in this way the quantum master equation at one loop order.
\bigskip
We start by considering the first order action
\begin{equation}
S_0= \int d^4x \left\{ \dot A_\mu\pi^\mu
+i\bar\psi\gamma^0\dot\psi
-{\cal H}
-\lambda^\alpha\chi_\alpha
\right\}
\label{y1}
\end{equation}
\noindent where the second class constraints
\begin{eqnarray}
\chi_1&=&\pi^0\nonumber\\
\chi_2&=&\partial_i\pi^i-m^2A^0+J^0
\label{y2}
\end{eqnarray}
\noindent and the first class Hamiltonian
\begin{eqnarray}
\label{y3}
H&=& \int d^{3}x \left\{
\frac{1}{2}\pi_i^2+\frac{1}{4}F_{ij}^2+\frac{1}{2}m^2\left(A_0^2+A_i^2
\right)
\nonumber\right.\\&&\left.
-i\bar\psi\gamma^iD^+_i\psi+\partial_i A^i \chi_1-A_0\chi_2
\right\}
\end{eqnarray}\noindent have been introduced. In the above expressions
we have defined the covariant derivatives $D_\mu^+$ acting on the fermion $\psi$ and the chiral projectors $P^\pm$ respectively as
\begin{eqnarray}
\label{y4}
D_\mu^+&=&\partial_\mu-ieP^+A_\mu\nonumber\\
P^{\pm}&=&\frac{1}{2}\left(1\pm\gamma^5\right)
\end{eqnarray}
We have also adopted the metric convention $\eta^{\mu\nu}=\mbox{diag}(-1,+1,+1,+1)$. Dirac matrices satisfy the usual anticommutation relation $\{\gamma^\mu,\gamma^\nu\}=2\eta^{\mu\nu}$. As one can verify, action
(\ref{y1}) is the first order version of
\begin{equation}
\label{y5}
{{\cal S}_{cov}}=\int d^4x\left[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
-\frac{1}{2}m^2A_\mu A^\mu
+i\bar\psi\gamma^\mu D^+_\mu\psi\right]
\end{equation}
From (\ref{y1}) we extract the fundamental (equal time) brackets
\begin{equation}
\label{y6}
\left\{\psi(x),\bar\psi(y)\right\}=i\gamma^0\delta^3(x-y)
\end{equation}
\noindent for the fermionic sector and
\begin{equation}
\label{y7}
\left\{A_\mu(x),\pi^\nu(y)\right\}=\delta^\nu_\mu\delta^3(x-y)
\end{equation}
\noindent for the bosonic one. By using the above expressions, one can show, for instance, that the fermionic chiral current
\begin{equation}
\label{y8}
J^\mu\equiv\bar\psi\gamma^\mu P^+\psi
\end{equation}
has brackets between its components given by
\begin{eqnarray}
\label{y9}
\left\{J^\mu(x),J^\nu(y)\right\}&=&ie^2\bar\psi M^{\mu\nu}P^+\psi\,\delta^3(x-y)\nonumber\\
M^{\mu\nu}&=&\gamma^\mu\gamma^0\gamma^\nu- \gamma^\nu\gamma^0\gamma^\mu
\end{eqnarray}
It is now easy to verify that the constraints and the Hamiltonian satisfy the bracket structure
\begin{eqnarray}
\{ \chi_1(x),\chi_2(y) \}&=& -m^2\delta^3(x-y)\nonumber\\
\{ \chi_1(x),H\}&=&\chi_2(x)\nonumber\\
\{ \chi_2(x),H\}&=&\partial_i\partial^i\chi_1(x)
\label{y10}
\end{eqnarray}
Let us now use the BFFT algorithm for implementing the Abelian conversion of the above bracket structure. As we have two second class constraints,
we introduce two BFFT variables $\phi^\alpha$, $\alpha=1,2$, and
for simplicity demand that they
satisfy
\begin{equation}
\label{51}
\{ \phi^\alpha(x),\phi^\beta(y)\}=\epsilon^{\alpha\beta}\delta^3 (x-y)
\end{equation}
which gives the matrix $\omega^{\alpha\beta}$ as in Eq. (\ref{6}). In (\ref{51}) $\epsilon^{12}=-\epsilon^{21}=1$, $\epsilon^{11}=
\epsilon^{22}=0$. A possible solution to Eqs. (\ref{8}) via (\ref{9}-\ref{10a}) is achieved with \cite{BFFT}
\begin{eqnarray}\label{53}
{\tilde\chi}_1&=&\chi_1-m^2\phi^2\nonumber\\
{\tilde\chi}_2&=&\chi_2+\phi^1\nonumber\\
{\tilde H}&=&H+\int d^3x \left[
\frac{1}{2m^2}(\phi^1)^2
+\frac{1}{2}m^2{(\partial_i\phi^2)}^2
-\frac{\phi^1}{m^2}\tilde\chi_2
-\phi^2\nabla^2\tilde\chi_1
\right]\nonumber\\
&=& \int d^3x \left[\frac{1}{2}\pi^2_i+\frac{1}{4}F^2_{ij}
+\frac{1}{2}m^2\left(\tilde A_0^2+\tilde A_i^2\right)
-i\bar\psi\gamma^i D^+_i\psi
\right.\nonumber\\&&\left.
-\tilde A_0\tilde\chi_2
+(\partial_i\tilde A^i)\tilde\chi_1 \right]
\end{eqnarray}
where we have defined the quantities
\begin{eqnarray}
\tilde A_i&=&A_i-\partial_i\phi^2\nonumber\\
\tilde A_0&=&A_0+{\phi^1\over{m^2}}
\label{tilde}
\end{eqnarray}
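As a quick consistency check, the involution of the converted constraints can be verified symbolically. The sketch below is a finite-dimensional caricature of my own (delta functions suppressed), encoding only the bracket algebra of the four generators that enter linearly in (\ref{53}):
\begin{verbatim}
import sympy as sp

# Bracket matrix B for the generators (chi_1, chi_2, phi^1, phi^2):
#   {chi_1, chi_2} = -m^2,  {phi^1, phi^2} = 1,  sectors commute.
m = sp.symbols("m", positive=True)
B = sp.zeros(4, 4)
B[0, 1], B[1, 0] = -m**2, m**2
B[2, 3], B[3, 2] = 1, -1

c1 = sp.Matrix([1, 0, 0, -m**2])   # tilde chi_1 = chi_1 - m^2 phi^2
c2 = sp.Matrix([0, 1, 1, 0])       # tilde chi_2 = chi_2 + phi^1

print((c1.T * B * c2)[0, 0])       # -> 0: first class, as required
\end{verbatim}
Since the converted constraints are linear in these generators, the vanishing of this single bracket establishes the Abelian algebra (\ref{8}) for this model.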
Correspondingly we have a first order action
\begin{eqnarray}
S_0&=& \int d^4x\left\{ \dot A_\mu\pi^\mu
+\dot\phi^1\phi^2
+i\bar\psi\gamma^0\dot\psi
-\tilde{\cal H}
-\lambda^\alpha\tilde\chi_\alpha
\right\}
\label{54}
\end{eqnarray}
which is invariant under the gauge transformations generated by $\tilde\chi_1$ and
$\tilde\chi_2$ (see Eq. (\ref{12}))
\begin{equation}
\begin{array}{ll}
\delta\psi=-ie\epsilon^2P^+\psi\,\,\,\,\,\,\,\,\,\,\,\,\, &
\delta\bar\psi=ie\epsilon^2\bar\psi P^- \\
\delta A_0=\epsilon^1 &
\delta\pi^0=-m^2\epsilon^2 \\
\delta A_i=-\partial_i\epsilon^2 &
\delta\pi^i=0 \\
\delta\phi^1=-m^2\epsilon^1 &
\delta\phi^2=-\epsilon^2 \\
\delta\lambda^1=\dot\epsilon^1 &
\delta\lambda^2=\dot\epsilon^2
\label{55}
\end{array}
\end{equation}
In the expressions above $\epsilon^\alpha$ are arbitrary space-time dependent parameters. We note that the variables $\tilde A_\mu$ are
invariant under (\ref{55}).
\bigskip
In order to quantize this system along the lines of the field-antifield formalism,
associated with the parameters $\epsilon^\alpha$ we introduce the ghosts
$c^\alpha$. We introduce also the trivial pairs $\bar\pi_\alpha$, $\bar c_\alpha$ and write down a gauge-fixed vacuum functional as in (\ref{19})
with
\begin{eqnarray}
\label{66}
S_{nm}&=&S_0+\int d^4x \left[ A^{0*}c^1-m^2\pi_0^*c^2-A^{i*}\partial_ic^2
\right.\nonumber\\&&\left.
-m^2\phi_{1}^*c^1-\phi_{2}^*c^2 +\lambda_1^*\dot c^1+\lambda_2^*\dot c^2
\right.\nonumber\\&&\left.
-ie\psi^*P^+\psi c^2+ie\bar\psi P^-\bar\psi^* c^2
+\bar\pi_\alpha \bar c^{\alpha*}
\right]
\end{eqnarray}
where some proper gauge-fixing fermion $\Psi$ is assumed.
Now observe that the terms in $S_{nm}$ which involve the matter fields are
\begin{equation}
\label{68}
i\bar\psi\left[
\gamma^0\left(\partial_0-ieP^+(\tilde A_0-\lambda^2)\right)+\gamma^iD_i^+
\right]\psi
\end{equation}
The quantities $\bar A_0=\tilde A_0-\lambda^2$ and $\bar A_i=A_i$ transform
as $s\bar A_\mu=-\partial_\mu c^2$. As the fermions also transform consistently, as can be seen from (\ref{55}), we obtain the action of
the operator $\Delta$ over $S_\Psi$ by adopting standard procedures. For instance, in a Pauli-Villars regularization scheme with a fermionic mass term
of the usual form, which means that the vector symmetry is taken as the preferential one, we see that
\bigskip
\begin{equation}
\label{58}
\Delta S_\Psi=-{1\over {96\pi}}\int d^4x c^2\epsilon^{\mu\nu\rho\sigma}\bar F_{\mu\nu}\bar F_{\rho\sigma}
\end{equation}
\bigskip
\noindent where $\bar F_{\mu\nu}=\partial_\mu\bar A_\nu-\partial_\nu\bar A_\mu$ and possible normal parity terms in the original space of fields have been discarded. Eq. (\ref{58}) represents the essential candidate to the anomaly.
It is easy to see, however, that
\begin{equation}
M_1={i\over{96\pi}}\int d^4x\phi^2\epsilon^{\mu\nu\rho\sigma}\bar F_{\mu\nu}\bar F_{\rho\sigma}
\label{M1abel}
\end{equation}
solves the one loop master equation (\ref{22}): since $s\phi^2=-c^2$ while $\bar F_{\mu\nu}$ is BRST invariant, the antibracket $(M_1,S)$ exactly reproduces $i\,\Delta S_\Psi$. This means that
we have achieved a consistent route for the quantization of the theory. The gauge fixed vacuum functional reads
\begin{equation}
\label{Z-abel}
Z=\int [d\Phi^A]
\exp\left\{\frac{i}{\hbar} W [\Phi^A,\Phi^*_A=\frac{\partial\Psi}{\partial\Phi^A}] \right\}
\end{equation}
with $ [d\Phi^A]=(A_\mu,\pi^\mu,\phi^\alpha,\psi,\bar\psi,\lambda^\alpha,c^\alpha,
\bar c_\alpha,\bar\pi_\alpha) $, and all possible information about the system can be obtained from it.
If we wish to write an effective quantum action in an explicitly covariant way we may eliminate the momenta through functional
integrations in (\ref{Z-abel}). Let us assume that the gauge fixing fermion $\Psi$ does not depend on $\lambda^1$ or $\pi^\mu$, consequently $\lambda_1^*=\pi^*_\mu=0$. Suppose also that $\Psi$ possibly depends on $\lambda^2$ only through an $\bar A_0$ dependence. Integration in $\lambda^1$ and $\pi^0$ results in the substitution
$\pi^0 \rightarrow m^2 \phi^2$ in $W$. Under the redefinition
\begin{equation}
A_0\longrightarrow A_0+\lambda^2-\frac{\phi^1}{m^2}
\label{cvabel}
\end{equation}
we obtain the intermediate auxiliary quantum action
\begin{eqnarray}
W_{aux}&=&\int d^4x \left[(A_0+\lambda^2)\dot\phi^2
+\dot A_i\pi^i
+i\bar\psi\gamma^0\dot\psi
-\frac{1}{4}F_{ij}^2
-\frac{1}{2}{\pi^i}^2
\nonumber\right.\\&&\left.
-\frac{1}{2}m^2(A_0+\lambda^2)^2
-\frac{1}{2}m^2\left(A_i-{\partial_i\phi^2}\right)^2
+i\bar\psi\gamma^i D_i\psi
\nonumber\right.\\&&\left.
+A_0\left(\partial_i\pi^i+J^0+m^2(A_0+\lambda^2)\right)\right]
+M_1+S_{\mbox{gf}}
\label{Waux}
\end{eqnarray}
where
\begin{eqnarray}
S_{\mbox{gf}}&=&\int d^4x \left[
-\frac{\delta\Psi}{\delta A_\mu}\partial_\mu c^2
-m^2\frac{\delta\Psi}{\delta \phi^{1}} c^1
-\frac{\delta\Psi}{\delta\phi^2}c^2
\right.\nonumber\\&&\left.
-ie\frac{\delta\Psi}{\delta\psi}P^+\psi c^2
+ie\bar\psi P^-\frac{\delta\Psi}{\delta\bar\psi}c^2+\bar\pi_\alpha
\frac{\delta\Psi}{\delta\bar c_\alpha}
\right]
\end{eqnarray}
and $M_1$ is given by (\ref{M1abel}) without the bars in $F_{\mu\nu}$ because of (\ref{cvabel}).
Further integration in $\lambda^2$ and $\pi^i$ results in the effective
quantum
action\bigskip
\begin{equation}
W_{\mbox{eff}}=\int d^4x \left[
-\frac{1}{4}F^2_{\mu\nu}
-\frac{1}{2}m^2{\left(A_\mu-{\partial_\mu\phi^2}\right)}^2
+i\bar\psi\gamma^\mu D^+_\mu \psi
\right]
+M_1
+S_{\mbox{gf}}
\end{equation}\bigskip
As we have already mentioned, a convenient choice of $\Psi$ fixes all the
gauge symmetry of the theory. We cite some possible choices for $\Psi$. The unitary gauge is achieved with $\Psi=\int d^4x\bar c_\alpha \phi^\alpha$ followed by functional integration on $\bar\pi_\alpha$ and $\phi^\alpha$. With this choice the quantum action reduces to the simple form
(\ref{y5}) and the path integral presents the usual Liouville measure
for the pertinent fields.
The choice
$\Psi=\int d^4x\left[\bar c_2 ({{\alpha\bar\pi_2}\over{2}}+\partial_\mu A^\mu)+\bar c_1\phi^1\right]$
leads to the usual covariant Gaussian gauge fixing depending on the arbitrary parameter $\alpha$. In this situation
\bigskip
\begin{eqnarray}
S_{\mbox{gf}}&=&\int d^4x \left[
-\partial^\mu\bar c_2\partial_\mu c^2
+\bar\pi_2({{\alpha\bar\pi_2}\over2}+\partial_\mu A^\mu)
+\bar\pi_1\phi^1-m^2\bar c_1 c^1
\right]
\end{eqnarray}
and the integration over $c^1,\,\bar c_1,\,\bar \pi_1,\,\phi^1$ is trivial.
An interesting situation arises if we fix the compensating field $\phi^2$ to
some external value, say, $\phi^2=\beta$. By choosing $\Psi=\int d^4x\left[\bar c_1\phi^1+ \bar c_2(\phi^2-\beta)\right]$, we obtain, after a few trivial integrations and the absorption of some trivial normalization factors by the measure, that
\begin{equation}
Z[\beta]=\int [d\psi][d\bar\psi][d A^\mu]\exp\left\{{i\over\hbar} W_{ext}[\psi,\bar\psi,A,\beta]\right\}
\label{zbeta}
\end{equation}
where
\begin{eqnarray}
\label{Wbeta}
W_{ext}[\psi,\bar\psi,A,\beta]&=&\int d^4x [
-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}
-\frac{1}{2}m^2\left(A_\mu-\partial_\mu\beta\right)^2\nonumber\\
&+&i\bar\psi\gamma^\mu D^+_\mu \psi
+{{i\hbar}\over{96\pi}}\beta\epsilon^{\mu\nu\rho\sigma}
F_{\mu\nu} F_{\rho\sigma}]
\end{eqnarray}
The condition that the path integral cannot depend on $\beta$, which comes from the
Fradkin-Vilkovisky theorem, gives, for instance,
that
\begin{equation}
\label{expectation}
i\hbar{{\delta Z[\beta]}\over{\delta\beta}}{\mid_{\beta=0}}
=<m^2\partial_\mu A^\mu + {{i\hbar}\over{96\pi}}\epsilon^{\mu\nu\rho\sigma}
F_{\mu\nu} F_{\rho\sigma}>_{\mid_{\beta=0}}=0
\end{equation}
which is at first sight a surprising result. If we observe, however, that
$\partial_\mu J^\mu=-m^2\partial_\mu A^\mu$ as a consequence of the equations of motion for the field $A_\mu$ in the unitary gauge,
we can interpret Eq. (\ref{expectation})
as the anomalous divergence of the Noether current (\ref{y8})
associated to the rigid chiral symmetry present in the original
theory given by actions (\ref{y1}-\ref{y5}). This is an unexpected result
derived from the quantum BFFT formalism. Similar results have recently been derived by using compensating fields at the Lagrangian level \cite{us}. In these
last approaches, the compensating fields couple directly to the chiral
current in an extended QCD which presents not only vector but also
chiral gauge symmetry.
\section{Conclusions}
In this work we have considered the BFFT quantization of first order systems subject to pure second class constraints. We have shown that the gauge symmetries introduced by the BFFT procedure are not obstructed at the quantum level, since the compensating fields do not belong to the BRST cohomology at ghost number one. A specific example has been given, where massive electrodynamics couples to chiral fermions. The quantum master equation has been
solved and the corresponding counterterm has played an essential role in extracting anomalous expectation values of physically relevant quantities.
We would like to finish by commenting that a few generalizations could have
been considered. We could have started from an already gauge invariant first order system with both first and second class constraints. Then it would be necessary to take care of both symmetry sectors, the original one and that introduced by the BFFT conversion procedure. Another
possibility could be considering examples with more involving algebraic structure, as it occurs with some of the models cited in Ref. \cite{BFFT}.
We are now studying aspects of these subjects and results will be reported elsewhere \cite{AT}.
\vskip 1cm
\noindent {\bf Acknowledgment:} We are indebted to
N. R. F. Braga for a useful discussion.
This work is supported in part by
Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico
- CNPq (Brazilian Research Agency).
\section{Introduction}
A dynamical system with time delays reacts not
only to its current state, but also to what occurred
in the past. It is well known in this context that
time-delayed dynamical systems are prone to instabilities
whenever the delay times become comparable to the time
scales needed to react to current events and
perturbations \cite{erneux2009applied,gros2015complex}.
To give an example from economics, consider just-in-time (JIT)
manufacturing, for which the time scales regulating the
delivery process are typically of the order of hours
\cite{singh2012just}. Even small perturbations in the
supply chain would lead to an immediate breakdown
of JIT manufacturing as a whole, if the management of
the involved companies needed days or weeks to react
to an outage.
The dynamics of democratic political systems shares
certain basic similarities with manufacturing processes
like just-in-time manufacturing, with the political
institutions (parliament, government) reacting to
shifts in the demands of the electorate \cite{schnellenbach2015behavioral}.
It has been noticed in particular that the temporalities
of economy and culture are driven by ever faster cycles
of innovation, change and replacement \cite{wolin1997time},
while political time, on the other hand, remains slow
\cite{goetz2014question}.
There is hence an evolving mismatch between the speed
of formal democracy \cite{fleischer2013time}
and the accelerating speed of capital \cite{tomba2014clash},
of economic decision making, of opinion dynamics
\cite{wolffsohn2001nomen} and of modern life in
general \cite{rosa2013social}.
The ongoing differentiation of societal time scales,
with opinion dynamics accelerating in contrast to
institutional decision making, manifested itself
in several political developments occurring
in 2016/2017. In French politics, to give an example,
electorate values changed so fast that the `En Marche'
movement could rise in essentially a single year from
nowhere to a central role in French politics
\cite{pain2017unusual}. The extended time scale of
three or more years, as presently envisioned, to carry
out the 2016 popular vote in favor of a Brexit
\cite{menon2016brexit}, is on the other hand exemplary
of the prolonged time political institutions need
to react to the demands of the electorate. Our aim is
here to develop a framework describing conflicts
of temporalities on a basis that abstracts from
specific circumstances. Our approach is particularly
well suited for advanced democracies, i.e.\ for societies
in an advanced state of acceleration.
Modern democracies are characterized additionally both
by an increasing level of skepticism towards political
institutions \cite{dalton2004advanced} and by the ongoing
refinement of political correctness norms \cite{hughes2011political,maass2013does}.
This continuously increasing sensitivity to deviations from
the mainstream normative order has its equivalent in economics,
where companies not adhering to normative standards of
reliability will find it difficult, in a world dominated by
JIT manufacturing, to build up profitable business relations.
Here we show that fine-tuned political correctness norms are
directly related to the underlying acceleration of societal
responses. Fast opinion dynamics and a high level of political
correctness are in our model both indicative of political
systems close to a dynamical instability. Fine-tuning a
political system consequently reduces its robustness against
perturbations.
\section{Model}
We denote by $D=D(t)$ and by $V=V(t)$
aggregate variables measuring the state of the democracy
and the values of the electorate, its cultural
dimension \cite{abdollahian2012dynamics}. The time $t$ will
be measured in years. We remain here on a relatively abstract
level, noting however that standard country-specific indicators
\cite{spaiser2014dynamics,alexander2012measuring}
for both democracy and values may be taken as proxies for
$D$ and $V$. Alternatively one may consider the level of
economic development, instead of the cultural dimension,
as the basic variable interacting with the state of
the democracy \cite{ranganathan2014bayesian}.
A political system is democratic, by definition, whenever
$D(t)$ is reactive to changes in the values $V(t)$ of the
electorate. This relation is captured by
\begin{equation}
T_D \frac{d}{dt} D(t) \ =\ V(t-T) - D(t)~,
\label{eq:dot_D}
\end{equation}
where $T_D$ denotes the time democratic institutions need to
align themselves with the demands expressed by the electorate.
There is however an additional time scale involved, the time lag
$T$. Time lags arise on one side from the circumstance that
the electorate in a representative democracy has to wait on
average several years before it can express its values forcefully
at election time \cite{goetz2014question}. Time lags also
occur generically in political decision making. It will take
about three years, if at all, to implement popular will in the
case of the Brexit \cite{inglehart2016trump}.
The overall process modeled by (\ref{eq:dot_D}) describes a
highly idealized democracy. We note, however, that the
intricacies of real-life political decision making will
enhance the effect studied here.
\begin{figure}[t]
\centering
\resizebox{0.9\columnwidth}{!}{\includegraphics{fig1_Fermi_function.pdf}}
\caption{The rescaled Fermi function (\ref{eq:D_def})
entering the evolution
(\ref{eq:dot_V}) of the values $V$ of the electorate.
The monotonic decline of $\sigma(D)$ implies that
the desire to further increase the level $D$ of
democratic participation drops with its actual level.
The slope at the inflection point $\sigma(1)=1$ is $-\beta/2$,
viz proportional to the sensibility parameter $\beta$. The
time scale for opinion dynamics is hence of the order of
$2T_V/\beta$. Alternatively one may interpret the slope
and hence $\beta$ as a proxy for the rigor of
political correctness.
}
\label{fig:fermiFunction}
\end{figure}
For the time evolution of the value $V$ we propose
\begin{equation}
T_V \frac{d}{dt} V(t) = \sigma(D(t)) - V(t),
\label{eq:dot_V}
\end{equation}
which describes a competition between a trend towards
democracy $\sim\sigma(D(t))$ and an intrinsic decay term of
the democratic values $\sim(-V(t))$. It has been observed
in this regard that support for democratic values declines
steadily in western societies \cite{foa2016democratic}.
If asked, to give an example, whether it is essential to live
in a country that is governed democratically, over 70\% of US citizens
born around 1930 would respond yes, but only about 30\% of those
born in 1980 or later \cite{foa2016democratic}. This downward
trend translates in (\ref{eq:dot_V}) to a decay time
$T_V\approx 15-20$ years.
The actual shape of the function $\sigma(D)$ entering
(\ref{eq:dot_V}) is not relevant for the following
arguments, as long as it is monotonically declining,
hence reflecting that the desire to further increase
the current amount $D(t)$ of democratic participation
declines with its actual level. A monotonically declining
$\sigma(D)$ incorporates therefore the notion of diminishing
returns, which can be traced back in turn to the logarithmic
discounting performed by the neural circuitry of the brain
\cite{dehaene2003neural,gros2012neuropsychological}.
We have chosen here for simplicity a rescaled Fermi function
(in physics jargon) for $\sigma(D)$,
\begin{equation}
\sigma(D) = \frac{2}{1+\exp(\beta(D-1))}~,
\label{eq:D_def}
\end{equation}
as illustrated in
Fig.~\ref{fig:fermiFunction}. At the inflection point $D=1$
we have $\sigma(D=1)=1$. The parameter $\beta$, which would
correspond to the inverse temperature in physics,
is a sensibility parameter, setting the slope
$d\sigma/dD=-\beta/2$ at the inflection point $D=1$.
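As a minimal numerical check of (\ref{eq:D_def}), sketched here
in Python with $\beta=20$ chosen purely for illustration, the
slope at the inflection point indeed comes out as $-\beta/2$:
\begin{verbatim}
import numpy as np

def sigma(D, beta):
    """Rescaled Fermi function sigma(D) = 2 / (1 + exp(beta (D - 1)))."""
    return 2.0 / (1.0 + np.exp(beta * (D - 1.0)))

beta = 20.0                           # illustrative value
D = np.linspace(0.5, 1.5, 2001)
s = sigma(D, beta)

# numerical slope at the inflection point D = 1; analytically -beta/2
slope = np.gradient(s, D)[np.argmin(np.abs(D - 1.0))]
print(slope, -beta / 2.0)             # -> approx. -10.0  -10.0
\end{verbatim}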
The evolution equations for $D(t)$ and $V(t)$,
Eqs.~(\ref{eq:dot_D}) and (\ref{eq:dot_V}), have
been defined such that the common fixed point
$(D,V)=(1,1)$ remains unchanged for all parameter
settings. This implies that (\ref{eq:dot_D}) and (\ref{eq:dot_V})
describe the time evolution of quantities which are
relative and not bare measures. The steady-state fixed point
would, on the other hand, evolve if $D$ and $V$ had been
measured in absolute terms \cite{spaiser2014dynamics} and
not, as done here, relatively. The renormalization of
the steady state to $(1,1)$ hence encompasses the
secular backdrop of declining democratic values
\cite{foa2016democratic}.
\begin{figure*}[t]
\centering
\resizebox{0.9\textwidth}{!}{\includegraphics{fig2_50_10_20.pdf}}
\caption{The result of numerically simulating (\ref{eq:dot_D})
and (\ref{eq:dot_V}) for $T=4$, $T_D=4$ and $T_V=15$
(years). The system starts (as denoted by the label `starting')
right after the initial function, defined for $t\in[-T,0]$,
ends, with every filled point denoting one year
(decades are red). Note that trajectories may intersect
themselves in dynamical systems with time delays, as
happens for $\beta=20$. The fixed point $(D,V)=(1,1)$ is
stable for $\beta<\beta_c\approx 11.36$.
}
\label{fig:beta_5_10_20}
\end{figure*}
\section{Simulation results}
For the parameters entering the evolution equations
for the state of the democracy and for the values of
the electorate, (\ref{eq:dot_D}) and (\ref{eq:dot_V})
respectively, we take $T_D=4$ years for the typical
adaptation time of political actors and $T_V=15$ years for
the decay time of political values \cite{foa2016democratic}.
We start with an overview of the properties of our
model, (\ref{eq:dot_D}) together with (\ref{eq:dot_V}),
for which we set the time delay to $T=4$ years. Alternative
values for $T$ will be considered subsequently together
with distinct ways to incorporate multiple time delays.
For the numerical simulations we
discretized the evolution equations (\ref{eq:dot_D}) and
(\ref{eq:dot_V}), taking one month ($\Delta t=1/12$ years)
as a basic time step. The results obtained in this way do not depend
qualitatively on the exact value of $\Delta t$.
The solution of a time-delayed system is generically contingent
on the choice of the initial function $(D(t),V(t))$, where $t\in[-T,0]$
\cite{gros2015complex,richard2003time}. We find, however, that the system
(\ref{eq:dot_D}) and (\ref{eq:dot_V}) is robust in the sense that
the long-time state converges in all cases to the same attracting
set, which may be either a fixed point or a limit cycle, even when
fully random initial functions are selected.
In Fig.~\ref{fig:beta_5_10_20} we present typical trajectories
for $\beta=5,10,20$, where the starting function was
$(D(t),V(t))=(0.8,0.9)$, with $t\in[-T,0]$, together with a
random jitter $\Delta D=\Delta V=0.02$. The system is stable,
as expected, for small values of $\beta$, with the state
$(D(t),V(t))$ of the system spiraling toward the fixed point
$(1,1)$. The overall time scale for the evolution is about
two decades, as a consequence of $T_V=15$ years.
For an advanced democracy, characterized by a high sensibility
$\beta=20$ to deviations from the political standard, the overall
attracting set is a limit cycle with a period of about 24.5 years
and an average deviation
\begin{equation}
D_F = \left\langle \sqrt{\big(D(t)-1\big)^2+
\big(V(t)-1\big)^2}\right\rangle \approx 0.24
\label{eq:D_F}
\end{equation}
from the fixed point $(1,1)$, with the brackets $\langle\dots\rangle$
denoting the time average. In order to decide whether the limit
cycle is far away from the original fixed point, or close, we may
compare the above value for $D_F$ with the functional dependence
of the response function $\sigma(D)$ entering (\ref{eq:dot_V}),
as illustrated in Fig.~\ref{fig:fermiFunction}. We observe
that $D=0.8$ or $D=1.2$ leads to responses $\sigma(D)$ which
are exponentially close to 2 and 0 respectively. This implies
that the limit
cycle observed for $\beta=20$ in Fig.~\ref{fig:beta_5_10_20}
is close to the maximal possible periodic solution supported
by (\ref{eq:dot_D}) and (\ref{eq:dot_V}). Even for a very
large $\beta=80$, to give an example, we find only a slightly
increased $D_F=0.27$.
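The time-discretized integration and the evaluation of $D_F$ can
be sketched as follows; a minimal Python implementation, assuming
a forward-Euler scheme with a delay buffer (the 200-year horizon
and the random seed are illustrative choices):
\begin{verbatim}
import numpy as np

T, T_D, T_V, beta = 4.0, 4.0, 15.0, 20.0   # years
dt = 1.0 / 12.0                            # one month, as in the text
n_lag = int(round(T / dt))                 # steps spanned by the delay
n_steps = int(round(200 / dt))             # 200 years (illustrative)

def sigma(D):
    return 2.0 / (1.0 + np.exp(beta * (D - 1.0)))

rng = np.random.default_rng(0)
D = np.full(n_lag + n_steps + 1, 0.8)      # initial function on [-T, 0],
V = np.full(n_lag + n_steps + 1, 0.9)      # plus jitter, as for Fig. 2
D[: n_lag + 1] += 0.02 * rng.standard_normal(n_lag + 1)
V[: n_lag + 1] += 0.02 * rng.standard_normal(n_lag + 1)

for i in range(n_lag, n_lag + n_steps):
    D[i + 1] = D[i] + dt / T_D * (V[i - n_lag] - D[i])  # evolution of D
    V[i + 1] = V[i] + dt / T_V * (sigma(D[i]) - V[i])   # evolution of V

tail = slice(-int(round(50 / dt)), None)   # time average over late times
D_F = np.mean(np.sqrt((D[tail] - 1.0) ** 2 + (V[tail] - 1.0) ** 2))
print(D_F)                                 # approx. 0.24 for beta = 20
\end{verbatim}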
Also shown in Fig.~\ref{fig:beta_5_10_20} is a trajectory for
$\beta=10$, which spirals in the end into the fixed point
$(D,V)=(1,1)$.
The extraordinarily long time scale needed to reach the
equilibrium state, for $\beta=10$, is a consequence of
the critical slowing down close to a phase transition,
which occurs here at $\beta_c\approx11.36$ (see discussion below).
It may hence be difficult to distinguish real-world political
systems which are subcritical, but close to an instability,
from systems which are already unstable.
Our basic presumption here is that advances in communication
and organizational structures lead to a progressive optimization
of our societies, which is inevitably accompanied by a decreasing
tolerance of non-standard behaviors and hence by an increasing
$\beta$, as it enters (\ref{eq:dot_V}). In Fig.~\ref{fig:beta_time}
we present a scenario simulation for a time-varying $\beta$,
which is held constant at $\beta=5$ for the first ten years,
at $\beta=10$ for the subsequent twenty years and at $\beta=20$
thereafter. The system tries initially to reach the equilibrium
state $(D,V)=(1,1)$, being subcritical for the first thirty years,
with the relaxation towards the fixed point slowing down dramatically
when $\beta\to10$ (compare Fig.~\ref{fig:beta_5_10_20}). Twenty years
at $\beta=10$ are not enough to equilibrate and the final
increase to $\beta=20$ therefore leads straight away to limit-cycle
oscillations.
\begin{figure*}[t]
\centering
\resizebox{0.9\textwidth}{!}{\includegraphics{fig3_change_beta.pdf}}
\caption{The result of a numerical experiment, for $T=4$, $T_D=4$
and $T_V=15$ (years), where $\beta=5$ for the first 10 years,
$\beta=10$ for $t\in[10,30]$ and $\beta=20$ thereafter.
The evolution is shown for $D(t)$ as a function of time
({\it left}) and for $(D,V)$ in state space ({\it right}).
While still subcritical for $\beta=10$, the relaxation
process slows down dramatically due to the closeness
to the phase transition occurring at $\beta_c\approx11.36$,
compare Fig.~\ref{fig:beta_5_10_20}.
}
\label{fig:beta_time}
\end{figure*}
\subsection{Diverging recovery times close to the Hopf bifurcation}
Normal forms allow one to classify the types of bifurcations
occurring in normal dynamical systems, viz in
dynamical systems without time delays \cite{gros2015complex}.
The transition observed here at $\beta_c\approx11.36$
is in this context akin to a classical supercritical Hopf
bifurcation, which involves the transition from a stable node
(fixed point) to a continuously expanding periodic
orbit (stable limit cycle) \cite{piotrowska2011nature}.
In order to corroborate this statement we have evaluated
the time dependent distance $D_F(t)$ of the trajectory from
the fixed point, as well as its long time average
(\ref{eq:D_F}). It is evident from Fig.~\ref{fig:beta_D_F}
that the size of the final limit cycle shrinks continuously
when $\beta$ approaches $\beta_c$ from above, as expected
for a second-order transition.
It is of interest to examine, for subcritical $\beta<\beta_c$,
the time scale $T_\lambda$ needed to close in on the
equilibrium state $(D,V)=(1,1)$, which is given by the
inverse of the largest Lyapunov exponent of the
fixed point \cite{wernecke2017test}. In
Fig.~\ref{fig:beta_D_F} we present alternatively the results
of a numerical experiment simulating the recovery from
an external shock. For a single trajectory, with starting
conditions as for Fig.~\ref{fig:beta_5_10_20}, the
displacement $D_F(t)$ from the steady state has been
evaluated and fitted by $\exp(-t/T_\lambda)$. We notice that
the time needed to recover from the initial displacement becomes
of the order of three decades already for $\beta\approx 7.5$, which
is still substantially below the critical $\beta_c\approx11.36$.
The system is hence very slow to recover from external events
pushing it away from the fixed point.
\subsection{Mixture of time delays}
With (\ref{eq:dot_D}) we assumed that the state
$D(t)$ tries to align itself with the values the
electorate expressed exactly $T$ years before.
In reality, a mixture of time delays may contribute.
We consider with
\begin{equation}
T_D \frac{d}{dt} D(t) \ =\ \overline V_\alpha(t) - D(t)
\label{eq:dot_D_mixture}
\end{equation}
the coupling of $D(t)$ to two specific distributions
$\alpha=1,2$ of lag times,
\begin{eqnarray}
\label{eq:dot_D_mixture_1}
\overline V_1(t) &=& \frac{1}{2T}\int_0^{2T} V(t-\tau)d\tau \\
\overline V_2(t) &=& \frac{1}{T}\int_0^{\infty} \mathrm{e}^{-\tau/T}V(t-\tau)d\tau
\label{eq:dot_D_mixture_2}
\end{eqnarray}
where $\overline V_1(t)$ and $\overline V_2(t)$ correspond respectively to a flat
distribution of lag times, with $\tau\in [0,2T]$, and to exponentially discounted
delay times. The average time delay stays at $T$ in both cases.
We find, as shown in Fig.~\ref{fig:beta_D_F}, that a flat distribution,
viz $\overline V_1$ in Eq.~(\ref{eq:dot_D_mixture}),
induces only relatively minor quantitative changes, with
all qualitative features of the original model (\ref{eq:dot_D})
remaining untouched. There is a slight upward renormalization,
when using $\overline V_1$, of the critical sensitivity from
$\beta_c\approx11.36$, as obtained for
(\ref{eq:dot_D}), to $\beta_c\approx13.5$.
For exponentially discounted lag times, describing the common but
not exclusive case that past messages are progressively discounted
in the context of political communication \cite{chong2010dynamic},
we find numerically that $\beta_c\approx 23.7$, which is now substantially
increased; otherwise there are no overall qualitative changes.
\begin{figure*}[t]
\centering
\resizebox{0.9\textwidth}{!}{\includegraphics{fig4_Hopf_bifurcation.pdf}}
\caption{The results of evaluating the Euclidean distance
$D_F$ from the fixed point $(D,V)=(1,1)$. For $\beta>\beta_c$
(dashed vertical line) the time-average $D_F$,
Eq.~(\ref{eq:D_F}), of the limit cycle is shown (multiplied
by 100).
For $\beta<\beta_c$ the relaxation time $T_\lambda$ is
shown (in years). $T_\lambda$, which is also the time
needed to recover from external shocks, has been obtained by
fitting the time-dependent Euclidean distance $D_F=D_F(t)$ by
$\exp(-t/T_\lambda)$. The data is for the model with
a single time delay $T=4$ (Eq.~(\ref{eq:dot_D}), {\it left panel})
and for the model with a uniform mixture of time delays
(Eq.~(\ref{eq:dot_D_mixture}), {\it right panel}) and
otherwise identical parameters. The respective critical
sensitivities are $\beta_c\approx11.36$ ({\it left}) and
$\beta_c\approx13.5$ ({\it right}).
}
\label{fig:beta_D_F}
\end{figure*}
\section{Stability analysis}
The stability of the fixed point $(D,V)=(1,1)$ can be examined
\cite{boukas2012deterministic} by linearizing the evolution
equations (\ref{eq:dot_D}) and (\ref{eq:dot_V})
\begin{eqnarray}
\label{eq:D_dot_linearized}
T_D\frac{d}{dt}\delta D(t) &=& \delta V(t-T)-\delta D(t),\\
T_V\frac{d}{dt}\delta V(t) &=& -\frac{\beta}{2}\delta D(t)-\delta V(t)~,
\label{eq:V_dot_linearized}
\end{eqnarray}
where $\delta D= D-1$ and $\delta V= V-1$. The Ansatz
$\delta D(t) = D_0\exp(\lambda t)$ and
$\delta V(t) = V_0\exp(\lambda t)$ leads to
$$
V_0 \mathrm{e}^{-\lambda T} = D_0(1+T_D\lambda),
\qquad
D_0 = -\frac{2V_0}{\beta}(1+T_V\lambda)~,
$$
and hence to
\begin{equation}
\mathrm{e}^{-\lambda T} = -\frac{2}{\beta}(1+T_V\lambda)(1+T_D\lambda)~.
\label{eq:DV_lambda}
\end{equation}
The Lyapunov exponent $\lambda=\lambda'+i\lambda''$ is
generically complex, becoming purely imaginary, with
$\lambda'=0$, at the bifurcation $\beta\to\beta_c$.
The real and imaginary components of (\ref{eq:DV_lambda})
then are:
\begin{eqnarray}
\label{eq:DV_real}
\cos(\lambda''T) &=& -\frac{2}{\beta_c}\left(1-T_VT_D(\lambda'')^2\right),
\\
\sin(\lambda''T) &=& \frac{2}{\beta_c}(T_V+T_D)\lambda''~,
\label{eq:DV_imag}
\end{eqnarray}
or
\begin{equation}
\tan(T\lambda'') = \frac{(T_D+T_V)\lambda''}{T_DT_V(\lambda'')^2-1},
\label{eq:Hopf_full_lambda}
\end{equation}
and
\begin{equation}
\frac{\beta_c^2}{4}
= \left(1+(T_D\lambda'')^2\right)\left(1+(T_V\lambda'')^2\right)~,
\label{eq:Hopf_full_beta_c}
\end{equation}
where we have used that
\begin{eqnarray*}
(T_DT_V(\lambda'')^2-1)^2 &+& (T_D+T_V)^2 (\lambda'')^2 \\
&=& (1+(T_D\lambda'')^2)
(1+(T_V\lambda'')^2)~.
\end{eqnarray*}
One solves first (\ref{eq:Hopf_full_lambda}) for
$\lambda''$ and then (\ref{eq:Hopf_full_beta_c}) for $\beta_c$.
The corresponding
phase diagram is presented in Fig.~\ref{fig:PD_all} for fixed
$T_D=4$ and $T_V=15$. The locus of the phase transition at $T=4$
is $\beta_c=11.4$, which differs only marginally from the one
found in the numerical simulation, $\beta_c=11.36$, for
which time had been discretized (using $\Delta t=1/12$).
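Numerically, the two conditions can be solved with a standard
root finder. A minimal Python sketch (the bracketing interval
below the first pole of the tangent is an implementation choice;
the uniform-mixture result (\ref{eq:Hopf_mix_beta_c}), derived
below, is included for comparison):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

T_D, T_V, T = 4.0, 15.0, 4.0

# tan(T x) (T_D T_V x^2 - 1) - (T_D + T_V) x = 0,  with x = lambda''
f = lambda x: np.tan(T * x) * (T_D * T_V * x**2 - 1.0) - (T_D + T_V) * x

# the relevant root lies below the first pole of tan(Tx) at x = pi/(2T)
x = brentq(f, 1e-6, np.pi / (2.0 * T) - 1e-6)

rhs = (1.0 + (T_D * x) ** 2) * (1.0 + (T_V * x) ** 2)
beta_c_single = 2.0 * np.sqrt(rhs)               # single time delay
beta_c_uniform = 2.0 * T * rhs / (T_D + T_V)     # uniform mixture
print(x, beta_c_single, beta_c_uniform)          # approx. 0.26, 11.4, 13.7
\end{verbatim}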
\begin{figure*}[t]
\centering
\resizebox{0.9\textwidth}{!}{\includegraphics{fig5_phase_diagrams.pdf}}
\caption{The Hopf bifurcation line for the case of a
single time delay (full red curve), for a uniform distribution
of delay times (dashed cyan curve) and for exponentially
distributed time delays (full blue curve).
The attracting state is a limit cycle above the respective
lines (viz in the shaded region for the case of a single
time delay), and a fixed point otherwise. The dashed rectangle
indicates the case of a single time delay $T=4$, and an
adaptation time scale for $V(t)$ of $T_V=15$.
{\it Left:} For $T_D=4$ and the original model (\ref{eq:dot_D})
and (\ref{eq:dot_V}), respectively (\ref{eq:dot_D_mixture}),
for which the bifurcation lines $\beta_c$ are determined by
(\ref{eq:Hopf_full_beta_c}), (\ref{eq:Hopf_mix_beta_c}) and
(\ref{eq:beta_c_exp_discounted}).
{\it Right:} For the adiabatic limit (\ref{eq:dot_V_adiabatic}),
obtained when $T_D\to0$. In this limit there is no
Hopf bifurcation for exponentially discounted time delays.
}
\label{fig:PD_all}
\end{figure*}
\subsection{Uniform mixture of time delays}
For the case (\ref{eq:dot_D_mixture_1}) of a uniform
mixture of time delays one replaces
$\exp(-\lambda T)$ in (\ref{eq:DV_lambda}) by
$\int \exp(-\lambda \tau)d\tau/(2T)$, obtaining
\begin{eqnarray*}
\frac{1}{2T}\int_0^{2T} \cos(\lambda''\tau) d\tau
&=& \frac{2}{\beta_c}\big[T_VT_D(\lambda'')^2-1\big],
\\
\frac{1}{2T}\int_0^{2T} \sin(\lambda''\tau) d\tau
&=& \frac{2}{\beta_c}(T_V+T_D)\lambda''~,
\end{eqnarray*}
which results in turn, after carrying out the
respective integrals, in
\begin{eqnarray*}
\sin(2T\lambda'')
&=& \frac{4T\lambda''}{\beta_c}\big[T_VT_D(\lambda'')^2-1\big],
\\
1-\cos(2T\lambda'')
&=& \frac{4T\lambda''}{\beta_c}(T_V+T_D)\lambda''~.
\end{eqnarray*}
With $\sin(2T\lambda'')=2\sin(T\lambda'')\cos(T\lambda'')$ and
$\cos(2T\lambda'')=1-2\sin^2(T\lambda'')$ we then obtain
\begin{equation}
\tan(T\lambda'') = \frac{(T_D+T_V)\lambda''}{T_DT_V(\lambda'')^2-1}
\label{eq:Hopf_mix_lambda}
\end{equation}
and
\begin{equation}
\frac{\beta_c}{2}\frac{T_D+T_V}{T} =
\left(1+(T_D\lambda'')^2\right)\left(1+(T_V\lambda'')^2\right)~.
\label{eq:Hopf_mix_beta_c}
\end{equation}
Note that the expressions (\ref{eq:Hopf_mix_lambda})
and (\ref{eq:Hopf_full_lambda}) for the imaginary
component $\lambda''$ of the Lyapunov exponents
are identical and, correspondingly, also the
right-hand sides of (\ref{eq:Hopf_mix_beta_c})
and (\ref{eq:Hopf_full_beta_c}).
For the above transformations we used
$$
[1-\cos(2T\lambda'')]/\sin(2T\lambda'')=\tan(T\lambda'')
$$
and that
\begin{eqnarray*}
4\sin^2(T\lambda'') &=&
\frac{(4T\lambda'')^2}{\beta_c^2}
\left(1+(T_D\lambda'')^2\right)\left(1+(T_V\lambda'')^2\right)
\\ &= &
\frac{4\tan^2(T\lambda'')}{1+\tan^2(T\lambda'')}
\end{eqnarray*}
can be simplified when using (\ref{eq:Hopf_mix_lambda}).
The bifurcation line resulting from (\ref{eq:Hopf_mix_beta_c}), which has
been included in Fig.~\ref{fig:PD_all}, runs somewhat parallel to
the one obtained via (\ref{eq:Hopf_full_beta_c}) for the case of a single
delay time, closing in for $T\ll T_V$, when the actual distribution
of lag times becomes unimportant. For $T=4$ we find that $\beta_c$
increases from $\beta_c=11.4$ to $\beta_c=13.68$.
Comparing (\ref{eq:Hopf_full_lambda}) and (\ref{eq:Hopf_mix_lambda}) one
finds, remarkably, that the imaginary part $\lambda''$ of
the Lyapunov exponent is identical at criticality, albeit
at different values of $\beta_c$. This implies that the
revolution frequencies of the resulting limit cycles are
identical in the respective limits $\beta\to\beta_c$
from above.
\subsection{Exponentially discounted time delays}
For exponentially discounted delay times (\ref{eq:dot_D_mixture_2})
we need
\begin{eqnarray*}
\frac{1}{T}\int_0^{\infty} \mathrm{e}^{-\tau/T}\cos(\lambda''\tau) d\tau
&=& \frac{1}{1+(T\lambda'')^2},
\\
\frac{1}{T}\int_0^{\infty} \mathrm{e}^{-\tau/T}\sin(\lambda''\tau) d\tau
&=& \frac{T\lambda''}{1+(T\lambda'')^2}~,
\end{eqnarray*}
which results respectively in
\begin{eqnarray}
\label{eq:DV_exp_discounted_real}
\frac{1}{1+(T\lambda'')^2} &=& \frac{2}{\beta_c}\left(T_VT_D(\lambda'')^2-1\right),
\\
\frac{1}{1+(T\lambda'')^2} &=& \frac{2}{\beta_c}\frac{T_V+T_D}{T}
\label{eq:DV_exp_discounted_imag}
\end{eqnarray}
\noindent
instead of (\ref{eq:DV_real}) and (\ref{eq:DV_imag}). We then find
\begin{eqnarray}
\nonumber
(\lambda'')^2&=&\frac{T_D+T_V+T}{T_DT_VT} \\
\beta_c &=& 2 \frac{T_D+T_V}{T}\big[1+(T\lambda'')^2\big]
\label{eq:beta_c_exp_discounted}
\end{eqnarray}
for the Hopf bifurcation line. The critical $\beta_c$ has been
included in Fig.~\ref{fig:PD_all}. For $T_D=4=T$ and $T_V=15$ the
resulting $\beta_c=24.1$ is again marginally larger than
the value, $\beta_c\approx23.7$, obtained from the corresponding
time-discretized numerical simulation.
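The closed-form result can be verified directly, e.g.\ with a
short Python sketch:
\begin{verbatim}
import numpy as np

T_D, T_V, T = 4.0, 15.0, 4.0
lam2 = (T_D + T_V + T) / (T_D * T_V * T)             # (lambda'')^2
beta_c = 2.0 * (T_D + T_V) / T * (1.0 + T**2 * lam2)
print(np.sqrt(lam2), beta_c)                         # approx. 0.31, 24.1
\end{verbatim}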
\subsection{Adiabatic limit}
We have shown above that our model is robust against
changes in the distribution of time delays. The nature
of the attracting states is also not sensitively
dependent on the ratio $T_D/T_V$. It is instructive,
in this context, to examine the adiabatic limit $T_D\ll T_V$
of (\ref{eq:dot_D}) and (\ref{eq:dot_V}), for which
$D(t)$ follows closely $V(t-T)$. In this case one
can substitute $D(t)$ by $V(t-T)$ in (\ref{eq:dot_V}),
obtaining
\begin{equation}
T_V \frac{d}{dt} V(t) \ =\ \sigma(V(t-T)) - V(t)~.
\label{eq:dot_V_adiabatic}
\end{equation}
The locus of the bifurcation is determined by
(\ref{eq:Hopf_full_beta_c}) in the limit $T_D\to0$,
or, alternatively, by
\begin{equation}
\tan(x\,T/T_V) = -x,
\qquad
\frac{\beta_c^2}{4}=1+x^2,
\qquad
x = T_V\lambda'' ~,
\label{eq:beta_c_adiabatic}
\end{equation}
when using rescaled variables. $\beta_c$ is then dependent only
on the ratio $T/T_V$, as shown in Fig.~\ref{fig:PD_all}. For the
case of a uniform mixture of time delays, (\ref{eq:Hopf_mix_lambda})
and (\ref{eq:Hopf_mix_beta_c}) reduce to
\begin{equation}
\tan(x\,T/T_V) = -x,
\qquad\quad
\frac{\beta_c}{2}\frac{T_V}{T}=1+x^2
\label{eq:beta_c_adiabatic_mix}
\end{equation}
in the limit $T_D\to0$. One notices, compare Fig.~\ref{fig:PD_all},
that there is a substantial quantitative difference in the
adiabatic limit between having a single time delay and a
mixture of time delays.
Interestingly, there is no phase transition in the adiabatic limit
for the case of exponentially discounted time delays,
with (\ref{eq:DV_exp_discounted_real}) having no solution
in the limit $T_D\to0$.
\subsection{Properties of the phase diagram}
The phase diagrams presented in Fig.~\ref{fig:PD_all}
have a series of common features.
\begin{itemize}
\item The Hopf bifurcation line is a monotonically decreasing
function. For small
time delays $T$ one needs a higher sensibility
$\beta>\beta_c$ for the instability to occur,
and vice versa.
\item There is no minimal time delay $T$, viz there
is a critical $\beta_c<\infty$ for any $T>0$,
with
\begin{equation}
\lim_{T/T_V\to0} \beta_c(T/T_V) \to \infty~.
\label{eq:beta_c_limit}
\end{equation}
The fixed point is hence stable for all $\beta$ when
there is no time delay, $T=0$.
\item There is a lower bound for $\beta_c$, below which the fixed point
      is stable even when $T$ is arbitrarily large. In the
adiabatic limit (\ref{eq:beta_c_adiabatic}) one needs
$\beta_c>2$.
\item The imaginary part $\lambda''$ of the Lyapunov exponent
needs to be non-zero for (\ref{eq:Hopf_full_lambda}) and
(\ref{eq:Hopf_mix_lambda}) to have a non-trivial solution.
$\lambda''$ is hence finite at the transition, the
telltale sign of a Hopf bifurcation \cite{gros2015complex}.
The revolution frequency of the limit cycle, which
is of the order of $|\lambda''|$, is hence not
critical, varying smoothly above the transition.
\end{itemize}
In the vicinity of the transition the sensibility $\beta$
induces a speed-up of the reactive value dynamics by a factor
$\beta/2$, as evident from the linearized equations
(\ref{eq:D_dot_linearized}) and (\ref{eq:V_dot_linearized}),
which may be identified with a corresponding acceleration of opinion
dynamics. The overall time needed to reach the fixed point
nevertheless diverges as $1/\lambda'\sim 1/|\beta-\beta_c|$.
This phenomenon, known as critical slowing down, is
observed generically in dynamical systems close to a tipping
point. It is observed in a wide range of settings, affecting,
e.g., the resilience of ecosystems \cite{van2007slow} as well
as the evolution of the climate prior to a major shift
\cite{dakos2008slowing}. The increased time scales needed
to react to disturbances close to the instability are also
evident in Fig.~\ref{fig:beta_D_F}.
\section{Discussion}
There are two mutually non-exclusive routes
to describe the conflict between slow political
decision making and accelerating social dynamics
\cite{goetz2014question,tomba2014clash,rosa2013social}.
In the first view politics continuously adapts, over the
course of $T_D$ years, to the current demands of the
electorate. Time lags are absent in this scenario
and the system is stable for all parameters. Politics
then evolves around a stable state, with deviations
from the fixed point driven exclusively by external events.
Here we have examined a second possibility, namely that
a certain fraction of political decision making results from
the response to demands the electorate voiced $T$ years
ago. The time delay $T$ may be either fixed or drawn
from a continuous distribution, as described by
Eqs.~(\ref{eq:dot_D}) and (\ref{eq:dot_D_mixture}) respectively.
For both cases we find that the socio-political system becomes
inherently unstable whenever the electorate responds
sensitively to political changes. This conclusion, which is
robust and independent of the details of the model used here,
results from the fact that time delays will inherently amplify
fluctuations once their influence becomes substantial.
In our model the sensitivity $\beta$ of the electorate
leads to typical reaction times $2T_V/\beta$, as evident
from the linearized evolution equation (\ref{eq:V_dot_linearized}),
where $T_V$ is the time scale for the long-term evolution
of basic political values. In order to obtain estimates for
real-world political communication we considered the
case of exponentially discounted time delays, for which
the instability occurs at
$\beta_c\approx24.1$ for $T=4$ and at
$\beta_c\approx50.7$ for $T=1$ (compare Fig.~\ref{fig:PD_all}).
Socio-political instabilities then start to manifest
themselves for $T=4$ when the corresponding time
scale $2T_V/\beta_c$ for the opinion dynamics falls
below $30/24.1$ years (about 15 months). For a time
delay of one year, $T=1$, instabilities develop when
the opinion dynamics takes place on time scale below
$30/50.7$ years (about 7 months).
Our estimates for the tipping point of political opinion
dynamics, 7-15 months when assuming mean time delays of
the order of 1-4 years, are for aggregate processes which
include the effects of fast news propagation as well as
the consequences of slowly but continuously changing
preset political beliefs. It is conceivable within our
model that western democracies have seen the unfolding of
a slow but steady long-term acceleration of opinion dynamics,
with the passing of the threshold of 7-15 months contributing
to the recent emergence of political styles disrupting
political conventions considered hitherto as fundamental
\cite{inglehart2016trump}.
External effects, such as the 2007-08 financial crisis
\cite{shiller2012subprime,funke2016going}, would induce
in this view an additional temporary but sharp rise in $\beta$.
An important aspect concerns the time needed to recover from
an external disrupting event, such as a global crisis.
Naively one may expect that the accelerating pace of opinion
formation observed in advanced democracies would reduce
typical recovery times. The contrary is however the case.
It is well known, as illustrated in Fig.~\ref{fig:beta_D_F},
that second-order instabilities lead to critical slowing down
in their proximity and hence to diverging recovery times.
As a consequence one observes long-lasting oscillations even
below the actual transition, as shown in Fig.~\ref{fig:beta_5_10_20}.
Analogous oscillations, matching both the period (about
20 years) and the magnitude (10\%-15\%), have been observed
since the early 1990s in Australian polls studying aggregate
value orientations along the materialism vs.\ postmaterialism axis
\cite{tranter2015impact}. A substantially larger corpus of
data would however be needed for an eventual validation, or
falsification, of the approach presented here. Note that
our framework describes instabilities arising within
representative democracies and not transitions to
non-democratic regimes.
The scope of the work presented here is to point
out a phenomenon of possible key importance for
the understanding of the long-term stability of
representative democracies. The instabilities we
find lead to oscillatory but not to irregular
socio-political states. One possibility to extend
our study would however be to consider time delays
varying periodically with the election cycle. It
is to be expected that such kinds of non-constant
time delays would act as periodic drivings
\cite{d1982chaotic}, which are in turn known to
induce transitions to chaotic states in non-linear
dynamical systems. We note in this context that
transitions to potentially disrupting states with
runaway opinion growth have been observed \cite{podobnik2017predicting}
in agent-based simulations examining the response of
an electorate to rising levels of immigration.
\subsection*{Acknowledgments}
We thank Karolin Kappler for discussions concerning
social acceleration, Daniel Lambach for discussions concerning
time delays in democratic structures, and Roser Valenti for
reading the manuscript.
\bibliographystyle{unsrt}
\section{Introduction}
Recently, great scientific interest has been taken in Transition Metal Dichalcogenides (TMDs) and their fascinating optical properties \cite{Mak_MoS2monofirst_PhysRevLett_2010}. TMDs materials like MoS$_2$, WS$_2$, MoSe$_2$ and WSe$_2$, being semiconductors with a bandgap in the visible wavelength range, offer many possibilities for applications in opto-electronics \cite{Zhang_TMDCtransistor_science_2014, Wang_TMDCelectronics_NatNano_2012}. In the TMDs semiconductor valleys, electron and hole pairs form stable excitons even at room temperature \cite{Chernikov_excitonBinding_PRL_2014}. Moreover, the interaction of TMDs with light is chiral, as their pseudospin allows the selective addressing of each TMDs valley by circularly polarized light with opposite handedness \cite{Cao_TMDCcircular_NatCom_2012, Xu_TMDCspins_NatPhys_2014, Zhu_WS2bilayerValleyPolarization_PNAS_2014}.
Chemical Vapor Deposition (CVD) provides a flexible platform for the fabrication of TMD nanostructures \cite{Song_CVDgrownWS2_ACSNano_2013, Zhang_CVDgrownWS2_ACSNano_2013, Cong_CVDgrownWS2_AdvOptMat_2014, Orofeo_CVDgrownWS2_APL_2014, Thangaraja_WS2crystals_MatLett_2015, Liu_CVDgrownWS2_NanoscResLett_2017}. While CVD can reproduce naturally occurring flat layered TMDs, it also offers the possibility of fabricating vertical TMDs walls \cite{Jung_verticalTMD_NanoLett_2014}, pyramids \cite{Irina_pyramids_2020} and flower-like nanostructures \cite{Li_MoS2flowers_APL_2003, Li_TMDflowers_chem_2004, Prabakaran_WS2flowers_chemCom_2014, Sabrya2020}. Potential applications of flower-like TMDs structures range from catalysis \cite{Sabrya2020, Prabakaran_WS2flowers_chemCom_2014} to using their large field emission as a potential electron source \cite{Li_MoS2flowers_APL_2003, Li_TMDflowers_chem_2004}. However, so far TMDs nanoflowers have mainly been studied using electron microscopy tools \cite{Sabrya2020}, and little is known about their interaction with light. It is interesting to note that, in contrast to flat layers, no PL but only a Raman response is reported from vertical TMDs walls \cite{Jung_verticalTMD_NanoLett_2014, Fu_verticalWS2polarization_OptLett_2014}, TMDs pyramids \cite{Irina_pyramids_2020} or flower-like TMDs structures \cite{Li_MoS2flowers_APL_2003, Li_TMDflowers_chem_2004, Prabakaran_WS2flowers_chemCom_2014}.
Raman spectroscopy offers a powerful and non-invasive tool for the investigation of TMDs materials \cite{Lee_MoS2Ramanfirst_ACSNano_2010, Zhao_RamanTMDlinear_Nanoscale_2013, Berkdemir_RamanWS2_ScientRep_2013, Molas_RamanWS2_ScientRep_2017}. Commonly studied in TMDs are the characteristic vibrational modes, namely the E$^1_{2g}$ that corresponds to the in-plane displacement of the atoms and the A$_{1g}$ that corresponds to the out-of-plane displacement of the chalcogenide atoms, as well as the longitudinal acoustic phonon LA(M). Interestingly, the TMDs' Raman response is highly enhanced when the excitation is on resonance with an excitonic transition \cite{Berkdemir_RamanWS2_ScientRep_2013, McDonnell_resonantRamanWS2_NanoLetters_2018, Corro_resonantRamanTMD_NanoLetters_2016, Golasa_multiphononMoS2_APL_2014}. As this resonance effect can be observed in the Raman response even in the absence of photoluminescence, resonance Raman spectroscopy offers a way to study the TMDs exciton indirectly \cite{Irina_pyramids_2020}. As the TMDs bandgap energy depends on temperature, varying the temperature of a TMD material enables the tuning of the resonance condition for a fixed excitation frequency. Therefore, studying TMDs at various cryogenic temperatures provides insights on the influence of the excitonic transition on the Raman response. Moreover, temperature-dependent Raman spectroscopy can shed light on the structural properties of TMDs materials \cite{Fan_resonanceRamanTMD_JApplPhys_2014, Gaur_temperatureRamanWS2_PhysChemC_2015}.
The Raman response of TMDs is influenced by the polarization of the excitation light, where the in-plane and the out-of-plane vibrations of the atoms respond differently to either orthogonal, in-plane polarization \cite{Zhao_RamanTMDlinear_Nanoscale_2013}. Furthermore, given the chirality of the TMDs valleys and the resonant influence of the excitons on the Raman response, studying the interaction of TMDs phonons with circularly polarized light is important \cite{Chen_helicityRamanTMD_NanoLett_2015, Zhao_helicityMoS2_ACSNano_2020, Drapcho_helicityTMD_PRB_2017}. As the Raman effect depends on the polarizability of the material, the interaction of TMDs with polarized light is described by a Raman polarizability tensor \cite{Zhao_helicityMoS2_ACSNano_2020, Jin_MoSe2polarization_2020, Ding_RamanTensorsMoS2_optlett_2020}. It is important to note that these tensors are defined with respect to the atomic axes, \textit{e.g.}, typically assuming flat-layered TMDs with the excitation light perpendicular to them. Thus, the polarization-resolved Raman response of for instance a vertical TMDs wall will be completely different from that of a flat layer, \textit{e.g.} modes that are usually allowed/forbidden in cross-polarization will now be absent/observed \cite{Jin_MoSe2polarization_2020, Ding_RamanTensorsMoS2_optlett_2020, Hulman_MoS2polarizationVertical_PhysChemC_2019, Fu_verticalWS2polarization_OptLett_2014}. Therefore, polarization-resolved Raman studies will provide insight into the flowers' nanogeometry and orientation.
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.65\linewidth] {introduction.pdf}
\caption{\textbf{Optical response of WS$_2$ nanoflowers} \\ \textbf{a.} SEM image of the WS$_2$ nanoflowers on a SiN membrane with circular holes. The flowers grow mainly around the holes, forming diverse flower-like shapes ranging from circles (red) and half-circles (green) to vertical walls (brown, pink) and more chaotic structures (purple, pink). In yellow the size of the excitation laser spot (\SI{500}{nm}). \textbf{b.} A schematic representation of the SiN substrate (grey) with holes (black), WS$_2$ nanoflowers (green) and the excitation laser (yellow). \textbf{c.} Map of the peak intensity of the first Raman feature (denoted with an arrow in \textbf{d}), where the shape of the flowers can be clearly correlated with the SEM image in \textbf{a} (see colored circles as a guide to the eye). \textbf{d.} Spectra at different positions indicated with stars in \textbf{c} on flowers (red and pink) and on the substrate (green). Note that, even though the flowers have a diversity in shapes, the only difference between spectra of flowers is the intensity of the Raman features.}
\label{fig_intro}
\end{figure*}
In this work, we study the polarization- and temperature-dependent optical response of WS$_2$ nanoflowers. The nanoflowers exhibit a highly reduced PL enabling the study of the thereby unobscured Raman response. At first glance, no spectral differences are observed between WS$_2$ flowers of different geometry, except for differences in Raman intensity. However, polarization- and helicity-resolved Raman spectroscopy reveals underlying structural differences between flowers. We find that petals of the flowers oriented vertically exhibit a different response to circularly polarized light than more flat flower petals. Moreover, we find that the relative in-plane orientation of the flower petals with respect to the polarization direction of linearly polarized light, affects the optical response. Surprisingly, the polarization- and helicity-dependent behaviour of the characteristic in-plane and out-of-plane WS$_2$ Raman modes is similar, indicating the similarity of the underlying Raman tensors. Studying the temperature-dependent spectral response of WS$_2$ nanoflowers, we observe the influence of the excitonic resonance on the Raman intensity, helicity and the ratio between the two characteristic WS$_2$ Raman features.
\section{Results and Discussion}
\subsection{Optical response of WS$_2$ nanoflowers}
Figure \ref{fig_intro}a depicts a Scanning Electron Microscopy (SEM) image of the WS$_2$ nanoflowers. The flowers are fabricated using CVD on a Si$_3$N$_4$ membrane (\SI{200}{nm} thickness) with an array of holes (\SI{2}{\mu m} radius and \SI{4}{\mu m} pitch, see Figure \ref{fig_other_peaks}a in the Supplementary Materials). Details about the fabrication and an in-depth study of the electronic and crystallographic properties of these nanoflowers are given by Van Heijst \textit{et al} \cite{Sabrya2020}. Just as natural flowers, these WS$_2$ nanostructures consist of randomly oriented flakes (the petals) expanding from a common point. The WS$_2$ nanoflowers arise mainly around the holes in the substrate (see Fig.\ref{fig_intro}a), forming diverse shapes ranging from circles (red) and half-circles (green) to vertical walls (brown, pink) and more complex structures (purple, pink). The larger structure to the right of Fig.\ref{fig_intro}a is probably a conglomeration of WS$_2$ grown around a dust particle. Figure \ref{fig_intro}b schematically presents the nanoflowers (green) around the holes (black) in the substrate (grey). The excitation light is along the z-axis, and the orientation of the flower petals ranges from completely flat (in x-y plane) to standing up (x-z or y-z plane). The petal thickness is estimated to be between 2 and \SI{30}{nm} \cite{Sabrya2020}. The previously performed scanning transmission electron microscopy (STEM) study reveals that the nanoflowers exhibit a crystallographic polytypism 2H/3R \cite{Sabrya2020} (see Section B.2 of the Supplementary Materials for details).
We investigate the optical response of the WS$_2$ nanoflowers, which consists mainly of a Raman response. Figure \ref{fig_intro}c presents the intensity of the first Raman feature (see arrow in Fig.\ref{fig_intro}d) of the flowers depicted in Fig.\ref{fig_intro}a. The Raman map can be correlated with the SEM image by comparing the shape and relative position of the flowers (\textit{e.g.}, compare the coloured circles in Fig.\ref{fig_intro}a and Fig.\ref{fig_intro}c). Not surprisingly, the denser flowers, for instance the circular flower (red) and the half-circle (green), exhibit a larger Raman intensity than the structures with mainly upstanding walls (brown, upper part of pink). It is important to note in this context that the size of our diffraction-limited excitation spot (\SI{500}{nm}, see Methods) is much larger than the size of an individual flower petal. For an easy comparison, the size of the excitation spot is indicated to scale in yellow in Fig.\ref{fig_intro}a. The Raman signal of the flowers in the Raman map is `smeared out', and the area of plain substrate is actually much larger than it seems on the Raman maps (compare Fig.\ref{fig_intro}c with the SEM image in Fig.\ref{fig_intro}a). The reason for the `smearing out' is that we measure a convolution of the excitation and detection volume with the spatial distribution of the optical response of the flowers. It is to be expected that the spatial distribution of the optical response of the nanoflowers is related to the size of the flower features.
Figure \ref{fig_intro}d presents optical spectra of the WS$_2$ nanoflowers. The spectra consist of 8-10 Raman features, where the first two features are the characteristic vibrational modes of WS$_2$ (see Figure \ref{fig_determine} in the Supplementary Materials). The first feature is a combination of the in-plane vibrational mode E$_{2g}$ and the longitudinal acoustic phonon 2LA(M) (in WS$_2$, the frequency of these modes is almost the same), and the second feature is the out-of-plane vibrational mode A$_{1g}$. We attribute the higher-frequency Raman features to multiphonon resonances involving the LA(M) phonon, excited because the \SI{595}{nm} laser is in resonance with the A-exciton, in accordance with the attribution for WS$_2$ pyramids \cite{Irina_pyramids_2020} (see Section B.1 of the Supplementary Materials for details).
The spectra in Fig.\ref{fig_intro}d are collected from different positions of the sample: on the Si$_3$N$_4$ substrate (green), on a dense nanoflower (red) and on a vertical-wall nanoflower (pink) (indicated with stars in Fig.\ref{fig_intro}c). It is interesting to note that the only difference between the red spectrum of the more dense flower and the pink spectrum of the vertical-wall flower is in the overall Raman intensity and not in the spectral position of the Raman peaks. In other words: there are no specific Raman features more or less pronounced for flowers with different nanogeometries.
The WS$_2$ nanoflowers exhibit a strongly reduced photoluminescence (PL) with respect to horizontally layered WS$_2$. From some flowers, no PL can be observed within our detection limit. Specific parts of some nanoflowers do exhibit a low PL, which becomes apparent especially at cryogenic temperatures (see Figure \ref{fig_backgroundPL}d in the Supplementary Materials). At \SI{4}{K}, this is at most \SI{2}{\%} of the PL of a monolayer WS$_2$. Assuming that the absorption and the effective collection efficiency remain constant, we conclude that the CVD-grown WS$_2$ nanoflowers have a lower quantum efficiency than horizontal WS$_2$ flakes. Here the assumption of a constant absorption is reasonable given the petal thickness, whereas the assumption of a constant effective collection efficiency is related to the unknown emission pattern from the nanoflower petals and therefore less strong. We attribute the decrease in the quantum efficiency to the increase in possible non-radiative loss channels due to the presence of all the edges of the nanoflower petals. This leads to a severe quenching of the exciton photoluminescence, without influencing the Raman response.
\begin{figure*}[htp]
\centering
\includegraphics[width = \linewidth] {linear_polarization.pdf}
\caption{\textbf{Excitation polarization} \\ \textbf{a.} SEM image of a WS$_2$ flower-like structure (brown circle in Fig.\ref{fig_intro}a) with mainly petals oriented in the x-z plane. \textbf{b-d,g-h} Map of the intensity of the first Raman feature of the flower-like structure upon \textbf{b,g.} vertically polarized, \textbf{c,h.} diagonally polarized and \textbf{d,i.} horizontally polarized excitation. \textbf{e.} Raman intensity of the flower in \textbf{a} (used pixels are marked with stars in \textbf{b-d}) as a function of excitation polarization angle. Note that the intensity increases drastically when the polarization direction is parallel to the WS$_2$ flower petals. \textbf{f.} SEM image of a WS$_2$ flower (purple circle in Fig.\ref{fig_intro}a) with mainly petals oriented in the y-z plane. \textbf{j.} Raman intensity of the flower in \textbf{f} (used pixels are marked with stars in \textbf{g-i}) as a function of excitation polarization angle. Note that the intensity decreases drastically when the polarization direction is perpendicular to the WS$_2$ flower petals.}
\label{fig_linearPol}
\end{figure*}
\subsection{Polarization-resolved Raman response} \label{sect_flowers_linear_polarization}
To investigate the optical differences between different flowers in more detail, we study the interaction of the flower Raman response with linearly polarized light. Here we excite the WS$_2$ nanoflowers with linearly polarized light, rotating the polarization direction from vertical to horizontal, and analyze the resulting emission intensity (see Fig.\ref{fig_helicity}a for a schematic of our set-up, where the quarter-wave plate and polarization analyzer are not used in the current section).
Figure \ref{fig_linearPol}a depicts an SEM image of a flower-like WS$_2$ structure (indicated in brown in Fig.\ref{fig_intro}a) with mainly wall-like petals, oriented in the x-z plane (see coordinate system in Fig.\ref{fig_intro}b). Figures \ref{fig_linearPol}b-d depict the intensity of the first Raman feature upon vertical polarization excitation, excitation polarization at \SI{45}{degrees} and horizontal polarization excitation. The Raman intensity is highest when the excitation polarization direction is parallel to the orientation of the nanoflower petals, in this case upon horizontal excitation (Fig.\ref{fig_linearPol}d). This becomes even more apparent in Fig.\ref{fig_linearPol}e, where the normalized Raman intensity of different parts of the nanoflower (positions are indicated in Fig.\ref{fig_linearPol}b-d) is plotted as a function of polarization angle (depicted by the arrows). The Raman intensity upon vertical polarization is 60 - 80 \% of the Raman intensity upon horizontal polarization. Note in Fig.\ref{fig_linearPol}b that the small flower petal to the right of the flower, oriented vertically in the y-z plane, can only be distinguished upon vertical polarization: it is not visible anymore in Fig.\ref{fig_linearPol}c and d.
To illustrate the correlation between the Raman intensity of differently oriented flower-like structures and the excitation polarization even more, Fig.\ref{fig_linearPol}f depicts a nanoflower (indicated in purple in Fig.\ref{fig_intro}a) which exhibits petals oriented in the y-z plane (see the coordinate system in Fig.\ref{fig_intro}b). Here, the Raman intensity upon vertical y polarization excitation (Fig.\ref{fig_linearPol}g) is higher than upon horizontal x polarization excitation (Fig.\ref{fig_linearPol}i). Figure \ref{fig_linearPol}j depicts the normalized Raman intensity of different parts of the nanoflower (positions are indicated in Fig.\ref{fig_linearPol}g-i) as a function of polarization angle (depicted by the arrows). For this flower, the Raman intensity upon horizontal excitation is now \mbox{70 - 90 \%} of the Raman intensity upon vertical polarization. The lower contrast can be explained by the fact that this nanoflower is denser, also containing petals oriented differently than strictly in the y-z plane, which demonstrates the sensitivity of this method. Flowers with petals oriented in random different directions do not exhibit a polarization dependence (see Figure \ref{fig_polarization_independence} in the Supplementary Materials).
The response of Raman modes to polarized light is described by Raman polarizability tensors, based on the crystal symmetries in the material \cite{Ding_RamanTensorsMoS2_optlett_2020, Hulman_MoS2polarizationVertical_PhysChemC_2019, Fu_verticalWS2polarization_OptLett_2014, Jin_MoSe2polarization_2020}. It is interesting to point out that the measured E$_{2g}$ and A$_{1g}$ Raman features exhibit the same polarization response (see Figure \ref{fig_polarization_modes} in the Supplementary Materials). This indicates that the Raman polarization tensors for the in-plane (E$_{2g}$) and the out-of-plane (A$_{1g}$) Raman modes are the same. We also found that the polarization dependence of the Raman intensity does not depend on temperature and is also observed upon \SI{561}{nm} excitation (see Figure \ref{fig_polarization_modes} in the Supplementary Materials). We conclude that linear-polarization-resolved Raman measurements provide a way to distinguish between differently oriented WS$_2$ petals and to identify the dominant orientation.
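As an illustration, the observed contrast can be quantified with a Malus-type fit of the Raman intensity versus the excitation polarization angle. The following minimal Python sketch uses synthetic data mimicking the roughly 70\% contrast of Fig.\ref{fig_linearPol}e; the model and the parameter values are illustrative assumptions, not part of the measurement itself:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def malus(theta, I_perp, I_par, theta0):
    """Phenomenological model: I_perp + (I_par - I_perp) cos^2(theta - theta0)."""
    return I_perp + (I_par - I_perp) * np.cos(theta - theta0) ** 2

theta = np.deg2rad(np.arange(0.0, 181.0, 15.0))
rng = np.random.default_rng(1)
I_meas = malus(theta, 0.7, 1.0, 0.0) + 0.01 * rng.standard_normal(theta.size)

popt, _ = curve_fit(malus, theta, I_meas, p0=[0.5, 1.0, 0.1])
print(popt)   # recovers I_perp ~ 0.7, I_par ~ 1.0 and the petal angle theta0
\end{verbatim}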
\begin{figure*}[hbp]
\centering
\includegraphics[width = 0.9\linewidth] {helicity.pdf}
\caption{\textbf{Helicity of Raman features} \\ \textbf{a.} Schematic of our set-up, where the excitation light (\SI{595}{nm} wavelength) passes through a quarter-wave plate and is focused on the sample. The emission is collected in epi-configuration and passes through the same quarter-wave plate. Then it is directed to a spectrometer through a polarization analyzer. \textbf{b,c} Helicity-resolved nanoflower spectra, where the flowers are excited with $\sigma_+$ light and the helicity is determined from the difference in $\sigma_+$ and $\sigma_-$ emission. In \textbf{b}, the spectrum with the same polarization as the excitation light (blue) has a higher intensity (helicity is conserved). In \textbf{c}, the spectrum with opposite polarization to the excitation light (red) has a higher intensity (helicity is reversed). \textbf{d.} Map of the intensity of the first Raman feature of the nanoflower spectra. \textbf{e.} Map of the same region of the helicity of the first Raman feature. Note that the helicity of the Raman features around the WS$_2$ nanoflowers is negative (green star), whereas the Raman helicity is positive in regions next to the larger nanoflowers (pink star). \textbf{f.} Temperature-dependent helicity of the WS$_2$ nanoflower marked in green in Fig.\ref{fig_intro}a (taking into account all pixels associated with this flower). The lines present the temperature dependence of three locations on the flower marked in green (see Figure \ref{fig_helicity_other}a,b of the Supplementary Materials for the pixels used). The helicity decreases slightly at room temperature.}
\label{fig_helicity}
\end{figure*}
\subsection{Helicity of Raman features}
Another tool to investigate potential optical differences between nanoflowers with diverse geometries is helicity-resolved Raman measurements. \mbox{Figure \ref{fig_helicity}a} depicts a schematic representation of our set-up. The excitation light (\SI{595}{nm} wavelength) passes through a quarter-wave plate and is focused on the sample through an objective lens (see Methods for details on the set-up). The emission is collected through the same objective lens, passes through a quarter-wave plate, and is directed to a spectrometer through a polarization analyzer. This allows the detection of the polarization state of the emitted light, i.e., it allows for helicity-resolved measurements. \mbox{Figures \ref{fig_helicity}b,c} depict helicity-resolved nanoflower spectra. Here, the flowers are excited with $\sigma_+$ circularly polarized light and the helicity of the Raman features is determined from the difference in $\sigma_+$ and $\sigma_-$ emission. In Fig.\ref{fig_helicity}b, the blue spectrum, with the same polarization as the excitation light, has a higher intensity ($\sigma_+$, helicity is conserved) than the red spectrum with the opposite polarization ($\sigma_-$, helicity is reversed). We calculate the helicity of the first Raman feature $H = \frac{I_{conserved} - I_{reversed}}{I_{conserved} + I_{reversed}}$ to be 0.172. In Fig.\ref{fig_helicity}c, the helicity-reversed spectrum (red) has a higher intensity than the helicity-conserved spectrum (blue), with $H = -0.083$.
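The quoted helicity values follow directly from the peak intensities of the two detection channels; a minimal sketch (the intensity values below are illustrative, chosen to reproduce the quoted $H$):
\begin{verbatim}
def helicity(I_conserved, I_reversed):
    """H = (I_co - I_rev) / (I_co + I_rev) for one Raman feature."""
    return (I_conserved - I_reversed) / (I_conserved + I_reversed)

print(round(helicity(1.415, 1.000), 3))   # ->  0.172, as in Fig. 3b
print(round(helicity(0.847, 1.000), 3))   # -> -0.083, as in Fig. 3c
\end{verbatim}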
The helicity of the Raman response of the WS$_2$ nanoflowers is position-dependent. Figure \ref{fig_helicity}d presents a map of the nanoflower intensity of the first Raman feature (compare Fig.\ref{fig_intro}c). Figure \ref{fig_helicity}e presents a map of the experimentally determined helicity of the first Raman feature (stars indicate the position of spectra in Fig.\ref{fig_helicity}b,c). Note again that the measured position-dependent Raman intensity and helicity are a convolution of the excitation and detection volume with the spatial distribution of the optical response of the flowers, related to the size of the flower features. The Raman helicity of the WS$_2$ nanoflowers is negative: the intensity is higher for the helicity-reversed spectrum. Note however that the most negative Raman helicity is located not in the middle of the nanoflower, but towards the edge (\textit{e.g.}, compare the green star in Fig.\ref{fig_helicity}e and Fig.\ref{fig_helicity}d). We therefore conclude that we detect a negative Raman helicity at locations where the excitation spot interacts with the side of a nanoflower. The helicity is most positive on locations in between the WS$_2$ nanoflowers, for instance at the position of the pink star: here the intensity is higher for the helicity-conserved spectrum. The Raman response from these regions confirms the presence of WS$_2$, i.e., this is not the bare substrate. Comparing the position of the pink star in Fig.\ref{fig_helicity}d with the SEM image in Fig.\ref{fig_intro}a, it seems that the region of positive helicity is actually related to the WS$_2$ structure to the left of the flower indicated in purple in Fig.\ref{fig_intro}a. As this structure looks flatter than the wall-like petals in other flowers, we conclude that the sign of the Raman helicity becomes positive when the WS$_2$ is oriented in the x-y plane, horizontally with respect to the surface (see Fig.\ref{fig_intro}b for a coordinate system).
The Raman helicity response of the WS$_2$ nanoflowers is completely different from that of flat layers of WS$_2$. As alluded to before, the response of Raman modes to polarized light is described by Raman tensors \cite{Zhao_helicityMoS2_ACSNano_2020, Jin_MoSe2polarization_2020, Ding_RamanTensorsMoS2_optlett_2020} (see Section E of the Supplementary Materials). In the case of TMDs materials, the Raman tensor dictates that the A$_{1g}$ mode is helicity-conserved \cite{Zhao_helicityMoS2_ACSNano_2020, Chen_helicityRamanTMD_NanoLett_2015}. This means that the second Raman feature in Fig.\ref{fig_helicity}b,c should only have had contributions with the same polarization as the excitation ($\sigma_+$), leading to $H = 1.0$. However, we observe a large contribution of light with the reversed helicity; in Fig.\ref{fig_helicity}c the helicity even becomes negative in places (see Figure \ref{fig_helicity_other} in the Supplementary Materials for a helicity map of the A$_{1g}$ mode).
Interpreting the helicity behaviour of the first Raman feature is less straightforward, as this feature contains both the 2LA(M) phonon and the E$_{2g}$, and the Raman tensor of the E$_{2g}$ depends on the resonance of the excitation. The tensor dictates that the E$_{2g}$ mode is helicity-reversed under non-resonant excitation \cite{Chen_helicityRamanTMD_NanoLett_2015} and helicity-conserved under resonant excitation \cite{Zhao_helicityMoS2_ACSNano_2020, Drapcho_helicityTMD_PRB_2017} (see Section E.2 of the Supplementary Materials). Since the nanoflowers are excited at resonance with the excitonic energy, the first Raman feature in Fig.\ref{fig_helicity}b,c should have had mainly contributions with the same polarization as the excitation. Therefore the resonance of the excitation explains why the E$_{2g}$ and the A$_{1g}$ features have a similar helicity \cite{Zhao_helicityMoS2_ACSNano_2020}. However, the observation of negative helicity is surprising for both Raman features, as the response is completely different from that of flat WS$_2$ layers.
It is important to note that the Raman polarization tensors are typically defined with respect to the crystal axes of flat TMDs layers, which for flat layers are readily connected to a suitable frame of reference of the incident light. The petals of the WS$_2$ nanoflowers exhibit a variety of orientations with respect to the incident light. Mathematically, a change of WS$_2$ flake orientation corresponds to a base transformation changing the Raman tensor, which may lead to allowed modes becoming forbidden and forbidden modes becoming allowed (see Figure \ref{fig_base_transformation} of the Supplementary Materials). From Fig.\ref{fig_helicity}e it is apparent that the Raman helicity of the WS$_2$ nanoflowers is in general slightly negative, with a larger helicity-reversed than helicity-conserved contribution. This corresponds to the nanoflowers on average having more wall-like petals (oriented in x-z or y-z plane, see Fig.\ref{fig_intro}b for a coordinate system), which is in agreement with the SEM images of the flowers. However, the fact that the helicity is at most -0.2 indicates that the contribution of both flat and vertically oriented flower petals within the diffraction-limited excitation spot is relatively large.
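The effect of such a base transformation can be illustrated with a small numerical sketch, rotating a schematic A$_{1g}$-like Raman tensor out of the layer plane and evaluating both helicity channels; the tensor elements are illustrative values, not fitted to WS$_2$:
\begin{verbatim}
import numpy as np

def rot_x(phi):
    """Rotate the crystal frame about the lab x-axis (flat petal -> wall)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

R_a1g = np.diag([1.0, 1.0, 0.4])        # schematic A_1g tensor, diag(a, a, b)

e_p = np.array([1.0, 1.0j, 0.0]) / np.sqrt(2.0)    # sigma+ (light along z)
e_m = np.array([1.0, -1.0j, 0.0]) / np.sqrt(2.0)   # sigma-

for phi in (0.0, np.pi / 2.0):          # flat petal vs. wall-like petal
    Rt = rot_x(phi) @ R_a1g @ rot_x(phi).T
    I_co = np.abs(np.conj(e_p) @ Rt @ e_p) ** 2    # helicity conserved
    I_rev = np.abs(np.conj(e_m) @ Rt @ e_p) ** 2   # helicity reversed
    print(phi, (I_co - I_rev) / (I_co + I_rev))    # H = 1 flat, H < 1 wall
\end{verbatim}
For the flat orientation the helicity-reversed channel vanishes, while the rotation of the wall-like petal transfers weight into the nominally forbidden channel and thereby reduces the helicity.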
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.65\linewidth] {temperature_dependence.pdf}
\caption{\textbf{Temperature dependence of the Raman intensity} \\ \textbf{a.} Nanoflower spectra (flower indicated in red in Fig.\ref{fig_intro}a) upon a \SI{595}{nm} excitation at temperatures ranging from \SI{4}{K} to room temperature. Note that the intensity of all Raman features increases upon cooling. \textbf{b.} Nanoflower spectra (flower indicated in red in Fig.\ref{fig_intro}a) upon a \SI{561}{nm} excitation at room temperature and \SI{4}{K}. Note that the A$_{1g}$ mode is almost absent at room temperature. \textbf{c.} \textit{inset} Temperature-dependent intensity of the first Raman feature (E$_{2g}$,2LA(M)) of the nanoflower spectra upon \SI{595}{nm} excitation (orange) and \SI{561}{nm} excitation (green). Here, at every temperature the intensity is taken from all pixels associated with the flower (indicated in red in Fig.\ref{fig_intro}d). \textit{main} The Raman intensity is plotted as a function of the wavelength difference between the WS$_2$ bandgap and the excitation. Upon cooling down, the WS$_2$ bandgap energy blue shifts. With a constant excitation energy, the difference between the excitation and the WS$_2$ bandgap energy therefore becomes smaller at lower temperatures, bringing the excitation more in resonance with the excitonic transition. \textbf{d.} Temperature-dependent ratio of the first two Raman features of the nanoflower spectra. Upon a \SI{595}{nm} excitation, the ratio changes from 0.8 at \SI{4}{K} to 1.6 at room temperature, as can already be seen by comparing the intensity of the first two Raman features in \textbf{a}. Upon a \SI{561}{nm} excitation, the A$_{1g}$ is almost absent at room temperature. Therefore, the ratio between the two WS$_2$ flower Raman features increases drastically from 1.0 at \SI{4}{K} to 7.5 at room temperature. }
\label{fig_temperature}
\end{figure*}
Based on the Raman tensor, flat flower petals (oriented in the x-y plane) should exhibit a positive helicity (see Section E of the Supplementary Materials). Comparing the helicity map with the SEM image in Fig.\ref{fig_intro}a, it is not always straightforward to correlate the regions of positive helicity with the orientation and nanogeometry of flower petals. We hypothesise that there might be flat flakes present that cannot be clearly distinguished from the Si$_3$N$_4$ substrate, but that do contribute to the positive Raman helicity. We conclude that the surprising helicity values for the nanoflower Raman response can be explained by the different orientations of the flower petals.
We determine the position-dependent helicity of the Raman features at different temperatures (see Figure \ref{fig_helicity_300K} in the Supplementary Materials). Figure \ref{fig_helicity}f depicts the temperature dependence of the Raman helicity of the flower marked in green in Fig.\ref{fig_intro}a,c. At every temperature, the helicity of the first Raman feature is taken from all spectra associated with this flower. The lines present the temperature dependence of three specific places on the flower marked in green (see Figure \ref{fig_helicity_other}a,b in the Supplementary Materials). The helicity at room temperature seems to be slightly lower than the helicity at cryogenic temperatures, but the trend is not clear-cut. The helicity of the A$_{1g}$ mode and of the first Raman feature of spectra of other flowers also decreases at room temperature (see Figure \ref{fig_helicity_other}d,e in the Supplementary Materials). The lower helicity at room temperature can be explained by the excitation energy being more out-of-resonance with the excitonic bandgap energy (see Fig.\ref{fig_temperature}). We conclude that the main mechanism that determines the Raman helicity is the flower petal orientation, which is independent of temperature. Therefore, helicity-resolved Raman spectroscopy can be used to determine the orientation of WS$_2$ flakes and the contribution of flat vs. wall-like petals in WS$_2$ nanoflowers.
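As a minimal illustration of the analysis behind these maps, the sketch below (plain Python/numpy; the array names and the masking threshold are illustrative assumptions, not our actual pipeline) assembles a per-pixel helicity map from the helicity-conserved and helicity-reversed intensity maps via $H = (I_{conserved}-I_{reversed})/(I_{conserved}+I_{reversed})$:
\begin{verbatim}
import numpy as np

def helicity_map(I_conserved, I_reversed, min_counts=10.0):
    # Per-pixel Raman helicity H = (I+ - I-)/(I+ + I-).
    # Pixels with too little total signal are masked (NaN),
    # since H is ill-defined there.
    total = I_conserved + I_reversed
    safe = np.where(total > 0, total, 1.0)
    return np.where(total > min_counts,
                    (I_conserved - I_reversed) / safe,
                    np.nan)

# Illustrative usage with random maps standing in for measured data:
rng = np.random.default_rng(0)
I_p = rng.uniform(0, 100, size=(64, 64))  # sigma+ (conserved) channel
I_m = rng.uniform(0, 100, size=(64, 64))  # sigma- (reversed) channel
H = helicity_map(I_p, I_m)
\end{verbatim}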
\subsection{Temperature-dependent Raman spectroscopy}
Given the phononic nature of Raman scattering, studying the temperature dependence of the Raman spectra of the WS$_2$ nanoflowers provides valuable information. \mbox{Figures \ref{fig_temperature}a,b} present the spectral response of the flower indicated in red in Fig.\ref{fig_intro}a (see Figure \ref{fig_polarization_independence} of the Supplementary Materials for an SEM image), upon a \SI{595}{nm} and a \SI{561}{nm} excitation at different temperatures. There are 8-10 Raman features distinguishable at room temperature and at cryogenic temperatures (see Figure \ref{fig_determine} in the Supplementary Materials), and the intensity of the features increases drastically with decreasing temperature. At \SI{4}{K} there is a broad background visible under the Raman features (at 200 - \SI{700}{cm^{-1}} in Fig.\ref{fig_temperature}a and at 1200 - \SI{1500}{cm^{-1}} in Fig.\ref{fig_temperature}b). We attribute this background to highly reduced WS$_2$ photoluminescence (see Section B.1 of the Supplementary Materials). The intensity of the Raman features is much lower for the \SI{561}{nm} excitation than for the \SI{595}{nm} excitation. This is attributed to the fact that the \SI{595}{nm} excitation light is close to the A-exciton resonance of WS$_2$, whereas the \SI{561}{nm} excitation is out-of-resonance with the A-exciton. Raman modes of TMDs can be greatly enhanced when they are excited in resonance with an excitonic transition \cite{Berkdemir_RamanWS2_ScientRep_2013,Zhao_RamanTMDlinear_Nanoscale_2013, Corro_resonantRamanTMD_NanoLetters_2016, McDonnell_resonantRamanWS2_NanoLetters_2018}.
The inset of Fig.\ref{fig_temperature}c depicts the temperature-dependent intensity of the first Raman feature (E$_{2g}$,2LA(M)) upon \SI{595}{nm} excitation (orange) and \SI{561}{nm} excitation (green). Here, for every temperature the Raman intensity is taken from all spectra associated with the nanoflower (indicated in red in Fig.\ref{fig_intro}c). The lines present the temperature dependence of three specific places on the flower. For an excitation at \SI{595}{nm}, the Raman intensity decreases with increasing temperature, whereas for an excitation at \SI{561}{nm}, the Raman intensity is independent of temperature. Figure \ref{fig_temperature}c depicts the intensity of the first Raman feature as a function of the difference between the WS$_2$ exciton wavelength and the excitation wavelength. The WS$_2$ bandgap energy, and therefore the exciton energy, is temperature dependent, experiencing a blue shift with decreasing temperature (see Figure \ref{fig_backgroundPL} of the Supplementary Materials). Varying the temperature of the WS$_2$ nanoflowers therefore enables tuning of the exciton resonance condition for a fixed excitation frequency: cooling down the WS$_2$ nanostructures brings the excitation more in resonance with the excitonic transition. It is clear in Fig.\ref{fig_temperature}c that the Raman intensity exhibits a resonant-like enhancement as the excitation wavelength approaches the excitonic transition. Since the \SI{561}{nm} excitation is relatively far away from the WS$_2$ bandgap, the resonance effect on the Raman intensity upon cooling down is much less visible.
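To make the resonance argument concrete, the following sketch estimates the detuning between a fixed excitation wavelength and the temperature-dependent exciton position, using an empirical Varshni-type shift $E(T)=E_0-\alpha T^2/(T+\beta)$. The parameter values are generic, literature-style assumptions used purely for illustration; in our analysis the exciton position is taken from the measured PL (see Figure \ref{fig_backgroundPL}).
\begin{verbatim}
HC_EV_NM = 1239.84  # E [eV] = 1239.84 / lambda [nm]

def exciton_energy(T, E0=2.016, alpha=4.0e-4, beta=200.0):
    # Varshni-type exciton energy in eV; parameters are
    # illustrative assumptions, not fitted to our sample.
    return E0 - alpha * T**2 / (T + beta)

def detuning_nm(T, lambda_exc_nm=595.0):
    # Wavelength difference between exciton and excitation.
    return HC_EV_NM / exciton_energy(T) - lambda_exc_nm

for T in (4.0, 100.0, 200.0, 295.0):
    print(f"T = {T:5.0f} K: exciton at "
          f"{HC_EV_NM / exciton_energy(T):6.1f} nm, "
          f"detuning from 595 nm = {detuning_nm(T):+6.1f} nm")
\end{verbatim}
With these assumed parameters, the exciton red shifts by roughly \SI{20}{nm} between \SI{4}{K} and room temperature, so a fixed \SI{595}{nm} line moves in and out of resonance as the sample is cooled, in line with Fig.\ref{fig_temperature}c.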
When comparing the spectra upon a \SI{595}{nm} excitation in Fig.\ref{fig_temperature}a, it becomes apparent that the ratio between the two characteristic WS$_2$ Raman features ($E_{2g}/A_{1g}$) is temperature dependent. At room temperature, the $E_{2g}$ mode is 1.5 times as intense as the $A_{1g}$ mode, whereas at \SI{4}{K}, the $A_{1g}$ mode is 1.5 times as intense as the $E_{2g}$ mode. It has been reported before that the different TMD Raman modes respond differently to the excitonic resonance \cite{Carvalho2015, McDonnell_resonantRamanWS2_NanoLetters_2018, Corro_resonantRamanTMD_NanoLetters_2016}. When comparing the nanoflower spectra upon \SI{561}{nm} excitation in Fig.\ref{fig_temperature}b, the low intensity of the second Raman feature ($A_{1g}$) at room temperature draws immediate attention. Figure \ref{fig_temperature}d depicts the temperature dependence of the $E_{2g}/A_{1g}$ ratio. At room temperature, the ratio between the characteristic WS$_2$ Raman features is around 7.0 for an excitation at \SI{561}{nm}. From Fig.\ref{fig_temperature}d we deduce that the $A_{1g}$ Raman feature is more sensitive to the resonance conditions than the $E_{2g}$,2LA(M) feature. Even though the \SI{561}{nm} excitation is relatively far away from the exciton wavelength, the A$_{1g}$ Raman mode is enhanced greatly at cryogenic temperatures, as the excitation is then closer to the excitonic resonance. Therefore we conclude that the absence of photoluminescence does not prevent an indirect study of the exciton, the presence of which is revealed by resonant Raman spectroscopy.
\section{Conclusion}
We have studied the optical response of CVD-grown WS$_2$ nanoflowers. In contrast to flat WS$_2$ flakes, the nanoflowers exhibit a highly reduced photoluminescence, enabling the study of their clear Raman response. Even though the WS$_2$ exciton emission is reduced in the nanoflowers, the presence of the excitons is still notable in the Raman response upon resonant excitation. We study the temperature-dependent Raman intensity and observe an enhancement at cryogenic temperatures, where the intensity of the out-of-plane Raman mode A$_{1g}$ is enhanced more than the intensity of the in-plane Raman mode E$_{2g}$. We conclude that, due to the temperature-dependent bandgap and thus exciton energy shift, the WS$_2$ nanoflowers are excited more in resonance with the excitonic transition at cryogenic temperatures, leading to a resonant effect on the Raman intensity.
Furthermore, we study the interplay between flower geometry and spectral response. Even though the WS$_2$ nanoflowers have completely different geometries, at first sight the only spectral difference between them seems to be the Raman intensity. However, helicity-resolved and polarization-resolved Raman spectroscopy reveals underlying structural and geometrical differences between flowers. Studying the Raman response upon excitation with circularly polarized light reveals a completely different behaviour of the Raman helicity of the flowers with respect to flat WS$_2$ flakes. The Raman helicity of nanoflowers with many vertical walls is slightly negative, whereas the Raman helicity of flat-lying WS$_2$ flower petals is slightly positive. We attribute the differences between the nanoflowers and flat WS$_2$ to a difference in the Raman polarization tensor, induced by the differently oriented flower petals. Studying the Raman response upon excitation with linearly polarized light, we observe that we can selectively address nanoflower petals oriented parallel to the used polarization. We conclude that there is an interplay between the orientation of the flower petals, the atomic vibrational modes, and the polarization direction of the excitation light.
We therefore envision that temperature-dependent Raman spectroscopy will open the way to studying excitonic resonance effects, and that polarization-resolved Raman spectroscopy will enable determination of the nanogeometry and orientation of WS$_2$ flakes.
\section{Experimental Section}
The WS$_2$ nanoflowers are directly grown on a microchip using chemical vapour deposition (CVD) techniques. The sample preparation method is described in \cite{Sabrya2020}.
The optical measurements are performed using a home-built spectroscopy set-up, depicted schematically in Fig.\ref{fig_helicity}a. The sample is placed on a piezo stage in a Montana cryostation S100. Measurements are performed at a range of temperatures between room temperature and \SI{4}{K}. The sample is illuminated through a \SI{0.85}{NA} Zeiss 100x objective. Measurements are performed using a continuous wave laser with a wavelength of \SI{595}{nm} and a power of \SI{1.6}{mW/mm^2} (Coherent OBIS LS 594-60), and the excitation light is filtered out using colour filters (Semrock NF03-594E-25 and FF01-593/LP-25). For the measurements depicted in Fig.\ref{fig_temperature}, a continuous wave laser with a wavelength of \SI{561}{nm} and a power of \SI{3.6}{mW/mm^2} is used (Cobolt 08-01/561). To avoid depolarization of the (circular) polarization by tight focusing, a \SI{2}{mm} laser beam diameter is used, slightly underfilling the objective in the excitation path. Polarizers (Thorlabs LPVIS100-MP2) and superachromatic waveplates are used to rotate the linear polarization (Thorlabs SAHWP05M-700) and create circular polarization (Thorlabs SAQWP05M-700), respectively. The sample emission is collected in reflection through the same objective as used for excitation, and projected onto a CCD camera (Princeton Instruments ProEM 1024BX3) and spectrometer (Princeton Instruments SP2358) via a 4f lens system.
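For reference, a short sketch estimating the diffraction-limited excitation spot from the stated wavelengths and numerical aperture; since the objective is slightly underfilled, the actual spot is somewhat larger, so these values should be read as lower bounds.
\begin{verbatim}
NA = 0.85  # numerical aperture of the objective

for lam_nm in (561.0, 595.0):
    d_abbe = lam_nm / (2 * NA)              # Abbe criterion
    d_rayleigh = 1.22 * lam_nm / (2 * NA)   # Rayleigh criterion
    print(f"{lam_nm:.0f} nm: Abbe ~ {d_abbe:.0f} nm, "
          f"Rayleigh ~ {d_rayleigh:.0f} nm")
\end{verbatim}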
\section{Acknowledgements}
M.C. acknowledges the financial support of the Kavli Institute of Nanoscience Delft through the KIND fellowships program. S.C.B and S.v.H. acknowledge funding from ERC Starting Grant “TESLA” No. 805021.
\clearpage
\section*{Supplementary Materials}
\subsection{Remnant photoluminescence}
As mentioned in the main text, the spectral response of the WS$_2$ nanoflowers exhibits 8-10 Raman features, in combination with a broad background. Figure \ref{fig_backgroundPL}a-d presents temperature-dependent spectra of a nanoflower upon \SI{595}{nm} excitation (in orange) and \SI{561}{nm} excitation (in light green). From Fig.\ref{fig_backgroundPL}c,d it becomes apparent that, for the spectra upon \SI{595}{nm} excitation, the maximum of the broad background is found at approximately \SI{615}{nm} and overlaps spectrally with the sharp Raman features. For the spectra upon \SI{561}{nm} excitation, the broad background is well separated from the Raman features. The spectral position of the broad background is the same for both excitations.
As a comparison, Fig.\ref{fig_backgroundPL}a-d also presents temperature-dependent PL spectra of a WS$_2$ monolayer upon \SI{595}{nm} excitation (in red) and \SI{561}{nm} excitation (dark green). The spectral position of the PL shifts from \SI{630}{nm} at room temperature to \SI{615}{nm} at cryogenic temperatures. At \SI{100}{K} and \SI{4}{K} it is clearly visible that the broad background under the Raman features of the nanoflowers is at the same spectral position as the monolayer PL spectra. The maximum peak intensity of the broad background is around 2\% of the PL intensity. At room temperature and \SI{200}{K}, the background under the Raman features is broader than the monolayer PL spectra. At these temperatures, the intensity of the background is not higher than 2\% of the monolayer photoluminescence. As the depicted spectra are taken from the nanoflowers with the highest visible background, we conclude that the upper limit for the remnant PL in the nanoflower spectra is 2\%.
As mentioned in the main text, the thickness of the nanoflower petal is estimated to be between 2 and \SI{30}{nm}. Therefore we compare the response of the nanoflowers to that of few-layer WS$_2$ in Fig.\ref{fig_backgroundPL}e. The spectra of a trilayer (in blue) and five layers of WS$_2$ (in blue-green) (exfoliated on a Si substrate) exhibit PL both from the direct transition and the indirect transition (around 800 - \SI{850}{nm}). The intensity of the PL from the direct transition is an order of magnitude lower than the PL of the WS$_2$ monolayer, but it is still clearly distinguishable from the background. The intensity of the remnant PL of the nanoflower is however another order of magnitude lower than the PL of few-layer WS$_2$.
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.9\linewidth] {PL_background_Raman.pdf}
\caption{\textbf{Comparison of the spectral response of a flower and monolayer WS$_2$} \\
\textbf{a-d.} Temperature-dependent spectral response of a WS$_2$ nanoflower upon \SI{595}{nm} excitation (orange) and \SI{561}{nm} excitation (light green), compared with the photoluminescence (PL) response of a monolayer WS$_2$ upon \SI{595}{nm} excitation (in red) and \SI{561}{nm} excitation (dark green), rescaled for an easy comparison (see legends). The spectral response of the nanoflowers contains both sharp Raman features and a broad background. In \textbf{c-d}, this broad background around \SI{615}{nm} overlaps spectrally with the sharp Raman features upon \SI{595}{nm} excitation (in orange), but is well separated from the Raman features upon \SI{561}{nm} excitation (light green). Especially in the spectra at \SI{4}{K} and \SI{100}{K} (\textbf{c-d}), the spectral background under the Raman features of the nanoflowers is at the same spectral position as the monolayer PL. At all temperatures, the remnant PL in the nanoflower spectra is at most 2\% of the monolayer PL. \textbf{e.} (Room-temperature) spectra of a WS$_2$ trilayer (in blue) and five layers of WS$_2$ (in blue-green) (exfoliated on a Si substrate), compared with a spectrum of a WS$_2$ nanoflower (in orange). For few-layer WS$_2$, the PL intensity from the direct transition is reduced by an order of magnitude with respect to a monolayer, but is still clearly distinguishable from the background. The intensity of the remnant PL from the nanoflower is however another order of magnitude lower than the PL of few-layer WS$_2$.}
\label{fig_backgroundPL}
\end{figure*}
\subsection{Characterization of Raman modes}
\subsubsection{Higher order WS$_2$ Raman modes}
The spectral positions of the sharp features in the spectra of the WS$_2$ nanoflower, taken with different excitation wavelengths in Fig.\ref{fig_backgroundPL}, do not overlap in wavelength. Instead, these features are located at the same relative frequency shift with respect to the excitation laser, as depicted in Fig.\ref{fig_determine}a, indicating that the collected light originates from Raman processes. The positions of the Raman features are indicated with arrows. Commonly, only three Raman modes are reported for horizontal TMD layers and nanostructures. Recently, we have reported the attribution of higher-frequency Raman modes in spectra of CVD-grown WS$_2$ pyramids to multiphonon resonances involving the LA(M) phonon \cite{Irina_pyramids_2020}, adopting the methodology for high-frequency Raman features in MoS$_2$ \cite{Golasa_multiphononMoS2_APL_2014}. The light grey line in Fig.\ref{fig_determine}b depicts the higher order resonances of $A_{1g}$+n*LA(M). The features $A_{1g}$+1*LA(M) (at \SI{580}{cm^{-1}}) and $A_{1g}$+2*LA(M) (at \SI{769}{cm^{-1}}) have been reported before \cite{Molas_RamanWS2_ScientRep_2017, Berkdemir_RamanWS2_ScientRep_2013, Peimyoo_temperatureRamanWS2_NanoRes_2015, Gaur_temperatureRamanWS2_PhysChemC_2015}. The dark grey line in Fig.\ref{fig_determine}b depicts the higher order resonances of n*LA(M). The feature 4*LA(M) (at \SI{702}{cm^{-1}}) has been reported before \cite{Berkdemir_RamanWS2_ScientRep_2013, Peimyoo_temperatureRamanWS2_NanoRes_2015}. Although one would also expect a feature at 3*LA(M), this is usually not reported: most experiments are performed with TMDs on a silicon substrate, and the Si resonance at \SI{520}{cm^{-1}} lies at around the same position as the mentioned WS$_2$ feature. For this experiment the flowers are positioned on a Si$_3$N$_4$ film far away from the silicon frame, so it is safe to assume that the measured feature is not related to Si but can be attributed to 3LA(M). The features at \SI{475}{cm^{-1}} and \SI{833}{cm^{-1}} have been reported before \cite{Molas_RamanWS2_ScientRep_2017, McDonnell_resonantRamanWS2_NanoLetters_2018}. The features around \SI{955}{cm^{-1}}, \SI{1057}{cm^{-1}} and \SI{1128}{cm^{-1}} (marked with dotted arrows) cannot be distinguished from the background very well, but have been reported in the spectra of WS$_2$ pyramids \cite{Irina_pyramids_2020}.
\subsubsection{Atomic structure 2H vs 3R}
Naturally occurring WS$_2$ exhibits a hexagonal atomic structure called 2H. However, a scanning transmission electron microscopy (STEM) study reveals that the nanoflowers exhibit 2H/3R crystallographic polytypism \cite{Sabrya2020}. The 2H and 3R atomic structures can be distinguished by comparing the Raman signal of shear and breathing modes \cite{Lee_MoS2Raman3R_ACSNano_2016, Baren_MoS2Raman3R_2Dmat_2019}. These Raman resonances have frequencies of 10-60 cm$^{-1}$ and therefore lie outside our experimental spectral region. Differences have been reported in the layer-dependent spectral position of the A$_{1g}$ and E$^1_{2g}$ Raman peaks as well as in the spectral position of the photoluminescence \cite{Yang_WS2RamanPL3R_Nanotech_2019, Zeng_WS2RamanPL3R_AdvFuncMat_2019}, but the reported differences are too subtle to allow drawing any conclusions based on our measurements.
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.7\linewidth] {raman_characterization.pdf}
\caption{\textbf{Characterization of Raman peaks} \\ \textbf{a.} The optical response of the WS$_2$ nanoflowers upon \SI{595}{nm} excitation (in orange) and \SI{561}{nm} excitation (in green, multiplied by 4 for an easy comparison). The spectral response for the two different lasers overlaps well, indicating that the collected light originates from Raman processes. Many more modes are observed using the more resonant \SI{595}{nm} excitation (orange) than using the \SI{561}{nm} excitation (green). The optical response of the WS$_2$ nanoflowers contains 8-10 Raman features. The signature features of WS$_2$, the 2LA(M),$E^1_{2g}$ modes around 350-\SI{355}{cm^{-1}} and the $A_{1g}$ mode at \SI{420}{cm^{-1}}, are clearly observed. \textbf{b.} The other features can be explained as multiphonon resonances involving the LA(M) phonon (see \cite{Irina_pyramids_2020}). The light grey line under the spectrum depicts the higher order resonances of $A_{1g}$+n*LA(M). The features at \SI{580}{cm^{-1}} and at \SI{769}{cm^{-1}} have been reported before \cite{Molas_RamanWS2_ScientRep_2017, Berkdemir_RamanWS2_ScientRep_2013, Peimyoo_temperatureRamanWS2_NanoRes_2015, Gaur_temperatureRamanWS2_PhysChemC_2015}. As the nanoflower is not located on top of a silicon substrate, we associate the feature around \SI{525}{cm^{-1}} with 3LA(M) rather than with the Si Raman resonance. The dark grey line above the spectrum depicts the higher order resonances of n*LA(M). The feature at \SI{702}{cm^{-1}} has been reported before \cite{Berkdemir_RamanWS2_ScientRep_2013, Peimyoo_temperatureRamanWS2_NanoRes_2015}, as have the features at \SI{475}{cm^{-1}} and \SI{833}{cm^{-1}} \cite{Molas_RamanWS2_ScientRep_2017, McDonnell_resonantRamanWS2_NanoLetters_2018}. The features around \SI{955}{cm^{-1}}, \SI{1057}{cm^{-1}} and \SI{1128}{cm^{-1}} (marked with dotted arrows) cannot be distinguished from the background very well, but have been reported in the spectra of WS$_2$ pyramids \cite{Irina_pyramids_2020}.}
\label{fig_determine}
\end{figure*}
\subsubsection{Non-WS$_2$ Raman features}
At some positions, the measured spectra exhibit Raman features from materials other than WS$_2$. As mentioned in the main text, the studied WS$_2$ nanoflowers are fabricated on a Si$_3$N$_4$ membrane with an array of holes. This membrane spans a window in the middle of a silicon frame. Figure \ref{fig_other_peaks}a depicts an SEM image of a part of the sample, where the WS$_2$ nanoflowers are grown both on the Si frame (upper part of the image) and on the Si$_3$N$_4$ membrane (lower part of the image). The holes cross both the Si and the Si$_3$N$_4$. Figure \ref{fig_other_peaks}b depicts a spectrum of the Si substrate (see green circle in Fig.\ref{fig_other_peaks}a), where the characteristic Raman features of Si are clearly present. These Raman features are not present in any other spectra presented in this work, as all the spectra are acquired from nanoflowers on the Si$_3$N$_4$ membrane.
Preceding the CVD growth procedure of the nanoflowers, WO$_3$ is deposited on the microchip \cite{Sabrya2020}. Signature Raman features of WO$_3$ can be distinguished in the spectrum in Fig.\ref{fig_other_peaks}c, acquired from the large white structure in the right corner of Fig.\ref{fig_other_peaks}a (see pink circle). We conclude that this white structure is a WO$_3$ crystal that has not reacted with sulfur. We do not measure any Raman signatures of WO$_3$ in other regions of the sample.
Besides Si and WO$_3$, signature Raman features of carbon can be distinguished in Fig.\ref{fig_other_peaks}d. We attribute these to the carbon paste that is used to attach the microchip to the sample holder. The carbon Raman features are only visible at the holes in the Si$_3$N$_4$ membrane (see blue circle in Fig.\ref{fig_other_peaks}a).
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.6\linewidth] {other_region_raman.pdf}
\caption{\textbf{Non-WS$_2$ Raman response} \\ \textbf{a.} SEM image of the WS$_2$ nanoflowers. The substrate is composed of a silicon frame (upper half of image) with a Si$_3$N$_4$ window in the middle (lower half of image). The array of holes can be distinguished more clearly in the silicon region of the sample, but the holes are also present in the Si$_3$N$_4$ membrane. \textbf{b.} Spectrum on the Si substrate (see green circle in \textbf{a}), where the characteristic \SI{520}{cm^{-1}} and \SI{955}{cm^{-1}} Raman features can be clearly distinguished. These Raman features are not present on the Si$_3$N$_4$ membrane. \textbf{c.} Spectrum of a WO$_3$ particle (see pink circle in \textbf{a}), where the characteristic \SI{715}{cm^{-1}} and \SI{807}{cm^{-1}} Raman features can be clearly distinguished. In most regions of the sample, the only measured Raman features are from WS$_2$ and not WO$_3$. \textbf{d.} Spectrum with the characteristic Raman features of carbon at \SI{1340}{cm^{-1}} and \SI{1577}{cm^{-1}}. We attribute this to the carbon paste that is used to attach the microchip to the sample holder, as the Raman features are only visible at the holes in the Si$_3$N$_4$ membrane (see blue circle in \textbf{a}). }
\label{fig_other_peaks}
\end{figure*}
\subsection{Polarization-resolved Raman response}
As mentioned in the main text, we study the interaction of the WS$_2$ nanoflower Raman response with linearly polarized light. \mbox{Figures \ref{fig_polarization_independence}a,f} depict SEM images of flower-like WS$_2$ structures (indicated in pink and red, respectively, in Fig.1a in the main text). The right upper corner of the flower-like structure in Fig.\ref{fig_polarization_independence}a contains mainly petals oriented in the y-z plane (see coordinate system in Fig.1b in the main text), whereas the petals of the flower in Fig.\ref{fig_polarization_independence}f are oriented in all directions. \mbox{Figures \ref{fig_polarization_independence}b-d and g-i} depict the intensity of the first Raman feature upon vertically polarized excitation, excitation polarized at \SI{45}{degrees}, and horizontally polarized excitation. For the upper part of the flower in Fig.\ref{fig_polarization_independence}a, the Raman intensity is highest when the excitation polarization direction is parallel to the orientation of the petals, namely vertically polarized. The Raman intensity of the lower part of this flower, and of the flower in Fig.\ref{fig_polarization_independence}f, does not depend on the excitation polarization direction. \mbox{Figures \ref{fig_polarization_independence}e,j} depict the normalized Raman intensity of different parts of the nanoflowers (positions are indicated in Fig.\ref{fig_polarization_independence}b-d and g-i) as a function of polarization angle (depicted by the arrows). In Fig.\ref{fig_polarization_independence}e, the Raman intensity upon horizontal excitation is 60 - 80 \% of the Raman intensity upon vertical polarization. No polarization dependence can be distinguished in Fig.\ref{fig_polarization_independence}j.
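As a hedged sketch of how such an angle dependence can be quantified, one may fit the normalized Raman intensity with a generic two-lobed form $I(\theta)=I_\perp+(I_\parallel-I_\perp)\cos^2(\theta-\theta_0)$, where $\theta_0$ estimates the dominant petal orientation. This functional form is an illustrative assumption, not the exact tensor prediction for an arbitrary petal geometry.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def two_lobe(theta_deg, I_perp, I_par, theta0_deg):
    # Generic cos^2 polarization dependence (illustrative model).
    t = np.deg2rad(theta_deg - theta0_deg)
    return I_perp + (I_par - I_perp) * np.cos(t) ** 2

# Synthetic angle scan standing in for measured data:
angles = np.arange(0.0, 180.0, 15.0)
rng = np.random.default_rng(1)
data = two_lobe(angles, 0.7, 1.0, 90.0) + 0.02 * rng.normal(size=angles.size)

popt, _ = curve_fit(two_lobe, angles, data, p0=(0.5, 1.0, 45.0))
print("I_perp, I_par, theta0 =", np.round(popt, 2))
\end{verbatim}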
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.9\linewidth] {other_flowers_polarization.pdf}
\caption{\textbf{Excitation polarization} \\
\textbf{a.} SEM image of a WS$_2$ flower-like structure (pink circle in Fig.1a in the main text). The right upper corner of this flower-like structure contains mainly petals oriented in the y-z plane. \textbf{b-d,g-i} Map of the intensity of the first Raman feature of the flower-like structure upon \textbf{b,g.} vertically polarized, \textbf{c,h.} diagonally polarized and \textbf{d,i.} horizontally polarized excitation. \textbf{e.} Raman intensity of the flower in \textbf{a} (used pixels are marked with stars in \textbf{b-d}) as a function of excitation polarization angle. Note that the intensity increases drastically when the polarization direction is parallel to the WS$_2$ flower petals. \textbf{f.} SEM image of a WS$_2$ flower (red circle in Fig.1a in the main text), with petals oriented in all directions. \textbf{j.} Raman intensity of the flower in \textbf{f} (used pixels are marked with stars in \textbf{g-i}) as a function of excitation polarization angle. No polarization dependence can be observed in the Raman intensity of this flower. }
\label{fig_polarization_independence}
\end{figure*}
Figure 2 in the main text displays the polarization dependence of the intensity of the first Raman feature, the combination of the E$_{2g}$,2LA(M) modes. Figures \ref{fig_polarization_modes}a,c depict polarization-dependent spectra of the nanoflowers presented in Fig.2 in the main text. The polarization response of the intensity of the first two Raman features is highly similar. This becomes apparent when comparing the polarization-dependent normalized intensity of the second Raman feature, the A$_{1g}$ mode, in Fig.\ref{fig_polarization_modes}b,d with the polarization-dependent response of the first Raman feature in Fig.2e,j in the main text. As the flower in Fig.2a contains mainly petals oriented in the x-z plane, the intensity of both the first and the second Raman feature is highest upon horizontal excitation polarization (Fig.\ref{fig_polarization_modes}b). As the flower in Fig.2f contains mainly petals oriented in the y-z plane, the intensity of the first two Raman modes is highest upon vertical excitation polarization (Fig.\ref{fig_polarization_modes}d).
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.9\linewidth] {flowers_polarization_mode_temp.pdf}
\caption{\textbf{Excitation polarization} \\
\textbf{a,c.} Polarization-dependent spectra of the WS$_2$ nanoflowers presented in Fig.2a,f in the main text. The intensity of the different Raman features has the same polarization dependence. \textbf{b,d.} Raman intensity of the A$_{1g}$ mode of the flowers in Fig.2a,f in the main text as a function of excitation polarization angle. As was the case for the Raman intensity of the first WS$_2$ mode E$_{2g}$,2LA(M), the intensity of the A$_{1g}$ mode increases drastically when the polarization direction is parallel to the WS$_2$ flower petals, e.g., for a horizontal polarization in \textbf{b} (compare Fig.2e in the main text) and a vertical polarization in \textbf{d} (compare Fig.2j in the main text). \textbf{e-h.} Intensity of the first Raman mode as a function of polarization angle of \textbf{e,f.} the flowers in Fig.2a,f in the main text at a temperature of \SI{200}{K} upon \SI{595}{nm} excitation, and \textbf{g,h.} the flowers in Fig.\ref{fig_polarization_independence}a and Fig.2f in the main text at room temperature upon \SI{561}{nm} excitation. As was the case for the Raman intensity at \SI{4}{K} upon \SI{595}{nm} excitation presented in the main text, the Raman intensity increases when the polarization direction is parallel to the flower petals. Although the noise on these data is higher, the contrast between parallel and perpendicular polarization is \textbf{e,f} the same for both temperatures upon \SI{595}{nm} excitation: around 0.60 for the flower in Fig.2a and 0.70 for the flower in Fig.2f, and \textbf{g,h} slightly larger upon \SI{561}{nm} excitation: around 0.40 in \textbf{g} and 0.60 in \textbf{h}.}
\label{fig_polarization_modes}
\end{figure*}
Whereas Fig.2 in the main text displayed the polarization dependence of the Raman intensity at \SI{4}{K}, Fig.\ref{fig_polarization_modes}e,f depicts the polarization dependence of the Raman intensity at \SI{200}{K}. Comparing Fig.\ref{fig_polarization_modes}e,f with Fig.2e,j, it becomes apparent that the Raman intensity at both temperatures increases when the polarization direction is parallel to the flower petals. Although the noise on the data at \SI{200}{K} is higher, the contrast between parallel and perpendicular polarization is the same for both temperatures, namely 0.60 for the flower in Fig.2a and 0.70 for the flower in Fig.2f. We conclude that the polarization dependence of the Raman intensity does not depend on temperature.
So far, all mentioned polarization dependences have been measured using a \SI{595}{nm} excitation. Figures \ref{fig_polarization_modes}g,h depict the polarization-dependent intensity of the first Raman feature at room temperature upon a \SI{561}{nm} excitation, for the flower in Fig.\ref{fig_polarization_independence}a and the flower in Fig.2f in the main text, respectively. As the petals in both cases are mainly oriented in the y-z plane, the Raman intensity is lowest upon horizontal excitation polarization. Although the noise on the data for a \SI{561}{nm} excitation is higher, the contrast between parallel and perpendicular polarization is slightly larger than for a \SI{595}{nm} excitation.
\subsection{Helicity of Raman features}
Figure 3d-f in the main text displayed the intensity and helicity of the first WS$_2$ Raman feature, the combination of the E$_{2g}$,2LA(M) modes. Figure \ref{fig_helicity_other}a presents a map of the nanoflower intensity of the second Raman feature, the A$_{1g}$ mode, and Figure \ref{fig_helicity_other}b presents a map of the experimentally determined helicity of the A$_{1g}$ mode. When comparing Fig.\ref{fig_helicity_other}b with Fig.3e in the main text, note that the helicity of the two Raman features has very similar position-dependent values. Comparing the position of the (bright) nanoflowers on the intensity map with the helicity map, it becomes apparent that the Raman helicity of the nanoflowers is slightly negative. Figure \ref{fig_helicity_other}c depicts the helicity of the first Raman feature as a function of intensity. At low intensity, the helicity values are spread over a range between -0.10 and +0.20. At high intensity, the spread in the helicity becomes smaller and the helicity converges to a value around -0.05.
Figure \ref{fig_helicity_other}d depicts the temperature dependence of the helicity of the A$_{1g}$ mode for the flower marked in green in Fig.1a. At every temperature, the helicity is taken from all spectra associated with this flower. The lines present the temperature dependence of three specific places on the flower, marked by green stars in Fig.\ref{fig_helicity_other}a,b. Figure \ref{fig_helicity_other}e depicts the temperature dependence of the helicity of the first Raman mode of another flower. The lines present the temperature dependence of three places on the flower, marked by grey stars in Fig.\ref{fig_helicity_other}a,b. As for the flower and the Raman feature in the main text, the helicity of these flowers slightly decreases from \SI{4}{K} to room temperature.
For comparison, we determine the position-dependent helicity of the Raman features at different temperatures. Figures \ref{fig_helicity_300K}a,b depict helicity-resolved nanoflower spectra taken at room temperature. Here, the flowers are excited with $\sigma_+$ light and the helicity of the Raman features is determined from the difference in $\sigma_+$ and $\sigma_-$ emission (see Fig.3a in the main text). In Fig.\ref{fig_helicity_300K}a, the blue spectrum, with the same polarization as the excitation light, has a higher intensity (the helicity is conserved) than the red spectrum with the opposite polarization (the helicity is reversed). In Fig.\ref{fig_helicity_300K}b, the helicity-reversed spectrum (in red) has a higher intensity than the helicity-conserved spectrum (in blue).
Figure \ref{fig_helicity_300K}d,e present a map of the nanoflower intensity and helicity of the first Raman feature, taken at room temperature. The stars indicate the position of the spectra in Fig.\ref{fig_helicity_300K}a,b (compare Fig.3d,e in the main text). As was the case at \SI{4}{K} (see Fig.3 in the main text), at room temperature the Raman helicity is also negative at the position of the WS$_2$ nanoflowers, and positive on locations in between the flowers (compare position of green and pink star in Fig.\ref{fig_helicity_300K}d,e). As depicted in Fig.3f in the main text, the Raman helicity is on average more negative at room temperature than at cryogenic temperatures. Figure \ref{fig_helicity_300K}c depicts the helicity of the first Raman feature as a function of intensity. As was the case at \SI{4}{K}, the spread in the helicity values is large for low intensities, but the spread becomes smaller at high intensity. Comparing Fig.\ref{fig_helicity_other}c and Fig.\ref{fig_helicity_300K}c, it becomes apparent that the helicity at room temperature has lower values: the maximum helicity is only 0.12 instead of 0.20, and the value at high intensity is -0.10 instead of -0.05.
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.8\linewidth] {helicity_Amode.pdf}
\caption{\textbf{Raman helicity of A$_{1g}$ mode} \\
\textbf{a.} Map of the intensity of the second Raman feature, the A$_{1g}$, of the nanoflower spectra presented in Fig.3b,c in the main text. \textbf{b.} Map of the helicity of the A$_{1g}$ mode in the same region. The helicity of this Raman feature has values similar to those of the first Raman feature presented in the main text, being negative around the WS$_2$ nanoflowers and positive in regions next to the larger nanoflowers (see Fig.3e in the main text). \textbf{c.} Distribution of the Raman helicity (of the first Raman feature, presented in the main text) as a function of Raman intensity. At low intensity, the helicity can take a broad range of values between -0.1 and +0.2. At high intensity, the helicity converges to a value of around -0.05. \textbf{d,e.} Temperature-dependent helicity \textbf{d.} of the A$_{1g}$ mode and \textbf{e.} of the first Raman feature of the WS$_2$ nanoflowers marked in \textbf{d.} green and \textbf{e.} grey in \textbf{a,b} (taking into account all pixels associated with each flower). The lines present the temperature dependence of three specific places on the flowers, denoted with stars in \textbf{a,b}. As in Fig.3f in the main text, the helicity decreases slightly at room temperature.}
\label{fig_helicity_other}
\end{figure*}
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.8\linewidth] {helicity_300K.pdf}
\caption{\textbf{Raman helicity at room temperature} \\
\textbf{a,b} Helicity-resolved nanoflower spectra, where the flowers are excited with $\sigma_+$ light and the helicity is determined from the difference in $\sigma_+$ and $\sigma_-$ emission. The spectra are taken at room temperature at the same positions as in Fig.3 in the main text. In \textbf{a}, the spectrum with the same polarization as the excitation light (blue) has a higher intensity (helicity is conserved). In \textbf{b}, the spectrum with opposite polarization to the excitation light (red) has a higher intensity (helicity is reversed). \textbf{c.} Distribution of the helicity of the first Raman feature as a function of Raman intensity. At low intensity, the helicity can take a broad range of values between -0.15 and +0.1. At high intensity, the helicity goes to a value of around -0.1. \textbf{d.} Map of the intensity of the first Raman feature, taken at room temperature. \textbf{e.} Map of the same region of the Raman helicity. As in Fig.3, the helicity of the Raman features around the WS$_2$ nanoflowers is negative (green star), whereas the Raman helicity is positive in regions next to the larger nanoflowers (pink star). }
\label{fig_helicity_300K}
\end{figure*}
\subsection{Raman polarizability tensors}
\subsubsection{Tensors and Jones calculus}
As mentioned in the main text, the interaction of TMD materials with polarized light can be described by Raman polarizability tensors. The Jones vectors for circularly polarized light are \cite{Zhao_helicityMoS2_ACSNano_2020}:
\begin{equation}
\begin{split}
\sigma_+ = 1/\sqrt{2} \begin{bmatrix} 1\\i\\0 \end{bmatrix} \textrm{and} \;
\sigma_- = 1/\sqrt{2} \begin{bmatrix} 1\\-i\\0 \end{bmatrix}.
\end{split}
\end{equation}
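The short algebra in this section can also be verified numerically. A minimal helper (plain numpy; the amplitude values used later are arbitrary placeholders) defining the Jones vectors and the Raman matrix element reads:
\begin{verbatim}
import numpy as np

sigma_p = np.array([1, 1j, 0]) / np.sqrt(2)   # sigma_+
sigma_m = np.array([1, -1j, 0]) / np.sqrt(2)  # sigma_-

def raman_amplitude(e_out, R, e_in):
    # Matrix element e_out^dagger . R . e_in for a Raman tensor R;
    # np.vdot conjugates its first argument.
    return np.vdot(e_out, R @ e_in)
\end{verbatim}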
The first two Raman features of WS$_2$ are the combination of the E$_{2g}$ and 2LA(M) mode, and the A$_{1g}$ mode. We start with the Raman tensor of the second feature, the A$_{1g}$ mode \cite{Chen_helicityRamanTMD_NanoLett_2015}:
\begin{equation}
R_A = \begin{bmatrix}
a & 0 & 0 \\
0 & a & 0 \\
0 & 0 & b
\end{bmatrix} .
\end{equation}
If the incident and outgoing light have the same polarization handedness, the calculation yields a non-zero matrix element \cite{Chen_helicityRamanTMD_NanoLett_2015}: \mbox{$\sigma_{+}^\dagger R_A \sigma_{+} = a $.} If the incident and outgoing light have different polarization handedness, the matrix element is zero \cite{Chen_helicityRamanTMD_NanoLett_2015}: $\sigma_{+}^\dagger R_A \sigma_{-} = 0$.
Calculating the polarization response of the E$_{2g}$ mode is less straightforward, as the Raman tensors depend on the resonance conditions of the excitation light with respect to the excitonic transition \cite{Zhao_helicityMoS2_ACSNano_2020}. If the excitation is out-of-resonance, the Raman tensor is \cite{Chen_helicityRamanTMD_NanoLett_2015}:
\begin{equation}
R_E = \begin{bmatrix}
0 & d & 0 \\
d & 0 & 0 \\
0 & 0 & 0
\end{bmatrix} .
\end{equation}
In this case, if the incident and outgoing light have the same polarization handedness, the matrix element is zero \cite{Chen_helicityRamanTMD_NanoLett_2015}: $\sigma_{+}^\dagger R_E \sigma_{+} = 0 $. If the incident and outgoing light have different polarization handedness, the matrix element is non-zero: $\sigma_{+}^\dagger R_E \sigma_{-} = d $.
In the main text, we calculate the helicity of the measured Raman features: \mbox{$H = \frac{I_{conserved} - I_{reversed}}{I_{conserved} + I_{reversed}}$}, where $I_{conserved}$ has a $\sigma_+$ and $I_{reversed}$ a $\sigma_-$ polarization. Note that the helicity is calculated based on intensities; therefore the moduli of the matrix elements need to be squared. We calculate the helicity as:
\begin{equation}
H = \frac{I_{\sigma+\sigma+}-I_{\sigma+\sigma-}}{I_{\sigma+\sigma+}+I_{\sigma+\sigma-}} .
\end{equation}
The helicity of the A$_{1g}$ mode is \mbox{$H = \frac{a^2-0}{a^2+0} = +1$}: the helicity is conserved. The helicity of the E$_{2g}$ mode is \mbox{$H = \frac{0-d^2}{0+d^2} = -1$}: the helicity is reversed. In summary: when the excitation light is out of resonance with the excitonic transition of a TMD material, the first two Raman features respond to circular polarization in opposite ways: the helicity is reversed for the first feature and conserved for the second.
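Using the helper defined above (repeated here for self-containedness), the non-resonant result can be checked directly; the amplitudes $a$, $b$, $d$ below are arbitrary placeholders, only the resulting helicities matter:
\begin{verbatim}
import numpy as np

sigma_p = np.array([1, 1j, 0]) / np.sqrt(2)
sigma_m = np.array([1, -1j, 0]) / np.sqrt(2)

def helicity(*tensors):
    # H from the summed conserved and reversed intensities.
    I_cons = sum(abs(np.vdot(sigma_p, R @ sigma_p))**2 for R in tensors)
    I_rev = sum(abs(np.vdot(sigma_p, R @ sigma_m))**2 for R in tensors)
    return (I_cons - I_rev) / (I_cons + I_rev)

a, b, d = 1.0, 0.5, 0.8
R_A = np.diag([a, a, b])
R_E = np.array([[0, d, 0], [d, 0, 0], [0, 0, 0]])
print("H(A_1g) =", helicity(R_A))  # +1.0: helicity conserved
print("H(E_2g) =", helicity(R_E))  # -1.0: helicity reversed
\end{verbatim}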
\subsubsection{Resonant excitation}
The measured helicity-resolved Raman response of the WS$_2$ nanoflowers exhibits a different type of behaviour. We experimentally observe that the first two Raman features exhibit the same response to circularly polarized light, both being either helicity conserved or helicity reversed. Part of this difference between theory and experiment can be explained by the fact that the light incident on the nanoflowers is in resonance with the excitonic transition.
In this case, the Raman tensor for the A$_{1g}$ mode remains the same, but the polarization response of the E$_{2g}$ mode has two contributions \cite{Zhao_helicityMoS2_ACSNano_2020, Drapcho_helicityTMD_PRB_2017}. The interaction between electrons, photons and excitons is governed by the so-called deformation potential (DP) and the Fr\"{o}hlich interaction (FI) \cite{Zhao_helicityMoS2_ACSNano_2020}, leading to the following Raman tensors:
\begin{equation}
\label{equat_tensors_resonant}
\begin{split}
R_{LO} =
\begin{bmatrix}
a_F & a_{DP} & 0 \\
a_{DP} & a_F & 0 \\
0 & 0 & a_F
\end{bmatrix} ,
R_{TO} =
\begin{bmatrix}
a_{DP} & 0 & 0 \\
0 & -a_{DP} & 0 \\
0 & 0 & 0
\end{bmatrix} .
\end{split}
\end{equation}
\begin{figure*}[htp]
\centering
\includegraphics[width = 0.6\linewidth] {base_coordination.pdf}
\caption{\textbf{Coordinate systems} \\
\textbf{a.} Coordination system with a flat WS$_2$ layer in the x-y plane, and circularly polarized excitation propagating along z. \textbf{b.} Exciting a wall-like WS$_2$ flower petal is like performing a base transform. Here the new base vectors are: $\hat{x}' = \hat{x}$, $\hat{y}' = -\hat{z}$ and $\hat{z}' = \hat{y}$. }
\label{fig_base_transformation}
\end{figure*}
If the incident and outgoing light have the same polarization handedness, \mbox{$\sigma_{+}^\dagger R_{LO} \sigma_{+} = a_F $} and \mbox{$\sigma_{+}^\dagger R_{TO} \sigma_{+} = 0 $}. If the incident and outgoing light have different polarization handedness, \mbox{$\sigma_{+}^\dagger R_{LO} \sigma_{-} = -ia_{DP} $} and \mbox{$\sigma_{+}^\dagger R_{TO} \sigma_{-} = a_{DP}$}. Therefore, independent of the polarization handedness, the interaction will always contain a non-zero matrix element. The helicity of the E$_{2g}$ mode is \mbox{$H = \frac{a_F^2-2a_{DP}^2}{a_F^2+2a_{DP}^2}$}. Depending on the relative contributions of the DP and FI interactions, the helicity of the E$_{2g}$ mode can be either conserved or reversed \cite{Zhao_helicityMoS2_ACSNano_2020}.
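The crossover between helicity-conserved and helicity-reversed behaviour of the E$_{2g}$ mode can be made explicit by evaluating this expression for a range of interaction ratios (the ratios below are arbitrary illustrative values):
\begin{verbatim}
a_F = 1.0
for r in (0.0, 0.5, 0.5**0.5, 1.0, 2.0):  # r = a_DP / a_F
    a_DP = r * a_F
    H = (a_F**2 - 2 * a_DP**2) / (a_F**2 + 2 * a_DP**2)
    print(f"a_DP/a_F = {r:.3f}: H = {H:+.2f}")
\end{verbatim}
The sign of $H$ flips at $a_{DP}/a_F = 1/\sqrt{2}$, i.e., the same mode can appear helicity conserved or helicity reversed depending on which interaction dominates.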
In summary: when the excitation light is in resonance with the excitonic transition of a TMDs material, both the E$_{2g}$ and the A$_{1g}$ mode can be helicity conserved (H $>$ 0) \cite{Zhao_helicityMoS2_ACSNano_2020, Drapcho_helicityTMD_PRB_2017}.
\subsubsection{Base transformation}
Still, the theory above does not adequately describe the measured helicity-resolved Raman response of the WS$_2$ nanoflowers. Although the resonance condition explains why both Raman features exhibit the same helicity response, it does not explain the observed helicities between -0.20 and +0.20 for the A$_{1g}$ mode, where a helicity of +1.0 would be expected. As mentioned in the main text, the Raman polarizability tensors are defined with respect to the crystal axes of flat TMD layers, i.e., a frame of reference in which the excitation light propagates perpendicular to the layer. As the petals of the WS$_2$ flowers exhibit a variety of orientations with respect to the incident light, the Raman tensor needs to be defined in a different frame of reference (see \cite{Jin_MoSe2polarization_2020, Ding_RamanTensorsMoS2_optlett_2020, Hulman_MoS2polarizationVertical_PhysChemC_2019}). Figure \ref{fig_base_transformation}a presents schematically a WS$_2$ flake in the horizontal x-y plane, excited by circularly polarized light propagating along z. Figure \ref{fig_base_transformation}b presents a wall-like WS$_2$ petal in the x-z plane. The WS$_2$ Raman tensor needs to be defined in this rotated coordinate system, where the base vectors transform as: $\hat{x}' = \hat{x}$, $\hat{y}' = -\hat{z}$ and $\hat{z}' = \hat{y}$. The Raman tensor of the A$_{1g}$ mode then changes to:
\begin{equation}
\label{equat_Amode}
R_A' = \begin{bmatrix}
a & 0 & 0 \\
0 & -a & 0 \\
0 & 0 & -b
\end{bmatrix} .
\end{equation}
If the incident and outgoing light have the same polarization handedness, the matrix element is zero: $\sigma_{+}^\dagger R_A' \sigma_{+} = 0 $. If the incident and the outgoing light have different polarization handedness, the matrix element is non-zero: $\sigma_{+}^\dagger R_A' \sigma_{-} = a $. Note that now the A$_{1g}$ mode is helicity reversed instead of helicity conserved: H = -1.0.
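Plugging the transformed tensor of Eq. (\ref{equat_Amode}) into the same numerical helper confirms the sign flip (the amplitudes $a$, $b$ are placeholders):
\begin{verbatim}
import numpy as np

sigma_p = np.array([1, 1j, 0]) / np.sqrt(2)
sigma_m = np.array([1, -1j, 0]) / np.sqrt(2)

a, b = 1.0, 0.5
R_A_wall = np.diag([a, -a, -b])  # base-transformed A_1g tensor
I_cons = abs(np.vdot(sigma_p, R_A_wall @ sigma_p))**2  # -> 0
I_rev = abs(np.vdot(sigma_p, R_A_wall @ sigma_m))**2   # -> a**2
print("H =", (I_cons - I_rev) / (I_cons + I_rev))      # -1.0
\end{verbatim}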
Applying the same base transformation to the tensors of the E$_{2g}$ mode yields:
\begin{equation}
\begin{split}
R_{LO}' =
\begin{bmatrix}
a_F & 0 & -a_{DP} \\
0 & a_F & 0 \\
-a_{DP} & 0 & a_F
\end{bmatrix} , \\
R_{TO}' =
\begin{bmatrix}
a_{DP} & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & -a_{DP}
\end{bmatrix} .
\end{split}
\end{equation}
If the incident and outgoing light have the same polarization handedness, the matrix elements are: \mbox{$\sigma_{+}^\dagger R_{LO}' \sigma_{+} = a_F $} and \mbox{$\sigma_{+}^\dagger R_{TO}' \sigma_{+} = a_{DP}/2 $}. If the incident and outgoing light have different polarization handedness, the matrix elements are: \mbox{$\sigma_{+}^\dagger R_{LO}' \sigma_{-} = 0 $} and \mbox{$\sigma_{+}^\dagger R_{TO}' \sigma_{-} = a_{DP}/2 $}. Therefore the helicity again depends on the relative contributions of the DP and FI interactions \cite{Zhao_helicityMoS2_ACSNano_2020}: $H = \frac{a_F^2}{a_F^2+a_{DP}^2/2}$.
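Adding the LO and TO intensities of both transformed tensors reproduces this expression numerically; the short check below uses placeholder amplitudes:
\begin{verbatim}
import numpy as np

sigma_p = np.array([1, 1j, 0]) / np.sqrt(2)
sigma_m = np.array([1, -1j, 0]) / np.sqrt(2)

a_F, a_DP = 1.0, 0.8
R_LO = np.array([[a_F, 0, -a_DP], [0, a_F, 0], [-a_DP, 0, a_F]])
R_TO = np.array([[a_DP, 0, 0], [0, 0, 0], [0, 0, -a_DP]])

I_cons = sum(abs(np.vdot(sigma_p, R @ sigma_p))**2 for R in (R_LO, R_TO))
I_rev = sum(abs(np.vdot(sigma_p, R @ sigma_m))**2 for R in (R_LO, R_TO))
print((I_cons - I_rev) / (I_cons + I_rev))  # numerical H
print(a_F**2 / (a_F**2 + 0.5 * a_DP**2))    # analytic H: they agree
\end{verbatim}
Note that $H$ remains positive here but can be substantially smaller than $+1$ when $a_{DP}$ is comparable to $a_F$.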
\bibliographystyle{unsrt}
\section{INTRODUCTION}
One of the most intriguing predictions of quantum electrodynamics
is the existence of irreducible vacuum fluctuations of the
electromagnetic (e.m.) field. It was Casimir's fundamental
discovery \cite{casimir} to realize that this purely quantum
phenomenon was not confined to the atomic scale, as in the Lamb
shift, but would rather manifest itself also at the macroscopic
scale, in the form of a force of attraction between two discharged
plates. For the idealized case of two perfectly reflecting
plane-parallel plates at zero temperature, placed at a distance
$a$ in vacuum, Casimir obtained the following remarkably simple
estimate of the force per unit area \be F_{C}=\frac{\pi^2 \hbar c
}{240 \;a^4}\;.\label{paral}\ee An important step forward was made
a few years later by Lifshitz and co-workers \cite{lifs}, who
obtained a formula for the force between two homogeneous
dielectric plane-parallel slabs, at finite temperature. In this
theory, of macroscopic character, the material properties of the
slabs were fully characterized in terms of the respective
frequency dependent electric permittivities $\epsilon(\omega)$,
accounting for the dispersive and dissipative properties of real
materials. In this way, it was possible for the first time to
investigate the influence of material properties on the magnitude
of the Casimir force.
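To give a sense of scale, Eq. (\ref{paral}) is easily evaluated numerically; the sketch below (Python, with SI constants from scipy) prints the ideal-metal pressure at two representative separations.
\begin{verbatim}
from scipy.constants import hbar, c, pi

def casimir_pressure(a):
    # Ideal-metal Casimir force per unit area; a in metres.
    return pi**2 * hbar * c / (240 * a**4)

for a_nm in (100, 1000):
    print(f"a = {a_nm:4d} nm: P = {casimir_pressure(a_nm * 1e-9):.2e} Pa")
\end{verbatim}
This gives about 13 Pa at 100 nm and about 1.3 mPa at 1 micron, illustrating the steep $a^{-4}$ dependence of the force.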
Over ten years ago, a series of brilliant experiments
\cite{lamor,roy} exploiting modern experimental techniques
provided the definitive demonstration of the Casimir effect. These
now historical experiments spurred enormous interest in the
Casimir effect, and were soon followed by many other experiments.
The subsequent experiments were aimed at diverse objectives. Some
of them explored new geometries: while the works \cite{lamor,roy}
used a sphere-plate setup, the original planar geometry
investigated by Casimir was adopted in the experiment
\cite{Bressi}, and a setup with crossed cylinders was considered
in \cite{ederth}. The important issue of the nontrivial geometry
dependence of the Casimir effect is also being pursued
experimentally, using elaborate micro-patterned surfaces
\cite{Bao}. Other experiments aimed at demonstrating new possible
uses of the Casimir force, like for example the actuation of
micromachines \cite{capasso}, or at demonstrating the possibility
of a large modulation of the Casimir force \cite{chen,iannuzzi},
which could also result in interesting technological applications.
There are also experiments using superconducting Casimir cavities
that aim at measuring the change of the Casimir energy across the
superconducting phase transition \cite{bimonte}. The experiments
performed in the last ten years are just too numerous to mention
them all here. For an updated account we refer the reader to the
very recent review paper \cite{Mohid}.
Apart from exploring new manifestations of the Casimir effect, a
large experimental effort is presently being made also to increase
the precision of Casimir force measurements, in simple geometries.
Already in the early experiment \cite{roy} a precision of up to one
percent was obtained. More recently, a series of experiments with
microtorsional oscillators \cite{decca} reached an amazing
precision of 0.2 percent. The reader may wonder what the interest
is in achieving such a high precision in this kind of
experiment. There are several reasons why this is important. On
one hand, in the theory of dispersion forces puzzling conceptual
problems have recently emerged that are connected with the
contribution of free charges to the thermal Casimir force, whose
resolution crucially depends on the precision of the
theory-experiment comparison \cite{Mohid}. On the other hand, the
ability to accurately determine the Casimir force is also
important for the purpose of obtaining stronger constraints on
hypothetical long-range forces predicted by certain theoretical
scenarios going beyond the Standard Model of particle physics
\cite{Mohid}.
The remarkable precision achieved in the most recent experiments
poses a challenging demand on the theorist: is it possible to
predict the magnitude of the Casimir force with a comparable level
of precision, say of one percent? Assessing the theoretical error
affecting present estimates of the Casimir force is a difficult
problem indeed, because many different factors must be taken into
account \cite{Mohid}. Consider the typical experimental setting of
most of the current experiments, where the Casimir force is
measured between two bodies covered with gold, placed in vacuum at
a distance of a (few) hundred nanometers. In this separation
range, the main factor to consider is the finite penetration depth
of electromagnetic fields into the gold layer \footnote{In typical
experiments, the thickness of the gold layers covering the bodies
is large enough that one can neglect the material of the
substrate, and treat the bodies as if they were made just of
gold.}, resulting from the finite conductivity of gold. The tool
to analyze the influence of such material properties as the
conductivity on the Casimir effect is provided by Lifshitz theory
\cite{lifs}. This theory shows that for a separation of 100 nm,
the finite conductivity of gold determines a reduction in the
magnitude of the Casimir force of about fifty percent in
comparison with the perfect metal result \cite{lambr}. Much
smaller corrections, which must nevertheless be considered if the
force is to be estimated with percent precision, arise from the
finite temperature of the plates and from their surface roughness.
Moreover, geometric effects resulting from the actual shape of the
plates should be considered. We should also mention that the
magnitude of residual electrostatic forces between the plates,
resulting from contact potentials and patch effects, must be
carefully accounted for. For a discussion of all these issues,
which received much attention in the recent literature on the
Casimir effect, we again address the reader to Ref. \cite{Mohid}.
See also the recent work \cite{rudy}.
In this paper, we focus our attention on the influence of the
optical properties of the plates which, as explained above, is by
far the most relevant factor to consider.
As we pointed out earlier, in Lifshitz theory the optical
properties of the plates enter via the frequency-dependent
electric permittivity $\epsilon(\omega)$ of the material
constituting the plates. In order to obtain an accurate prediction of
the force, it is therefore of the utmost importance to use
accurate data for the electric permittivity. The common practice
adopted in all recent Casimir experiments with gold surfaces is to
use tabulated data for gold (most of the time those quoted in
Refs. \cite{palik}), suitably extrapolated at low frequencies,
where optical data are not available, by simple analytic models
(like the Drude model or the so-called generalized plasma model).
However, already ten years ago Lamoreaux observed \cite{lamor2}
that using tabulated data to obtain an accurate prediction of the
Casimir force
may not be a reliable practice, since
optical properties of gold films may vary significantly from
sample to sample, depending on the conditions of deposition. The
same author stressed the importance of measuring the optical data
of the films actually used in the force measurements, in the
frequency range that is relevant for the Casimir force. The
importance of this point was further stressed in \cite{Piro} and
received clear experimental support in a recent paper
\cite{sveto}, where the optical properties of several gold films
of different thicknesses, and prepared by different procedures,
were measured ellipsometrically in a wide range of wavelengths,
from 0.14 to 33 microns, and it was found that the frequency
dependent electric permittivity changes significantly from sample
to sample. By using the zero-temperature Lifshitz formula, the
authors estimated that the observed sample dependence of the
electric permittivity implies a variation in the theoretical value
of the Casimir force, from one sample to another, easily as large
as ten percent, for separations around 100 nm. It was concluded
that in order to achieve a theoretical accuracy better than ten
percent in the prediction of the Casimir force, it is necessary to
determine the optical properties of the films actually used in the
experiment of interest.
The aim of this paper is to improve the mathematical procedure
that is actually needed to obtain reliable estimates of the
Casimir force, starting from experimental optical data on the
material of the plates, like those presented in Ref. \cite{sveto}.
The necessity of such an improvement stems from the very simple
and unavoidable fact that experimental optical data are never
available in the entire frequency domain, but are always
restricted to a finite frequency interval $\omega_{\rm min} <
\omega < \omega_{\rm max}$.
To see why this constitutes a problem, we recall that the Lifshitz
formula, routinely used to interpret current experiments,
expresses the Casimir force between two parallel plates as an
integral over {\it imaginary} frequencies ${\rm i} \xi$ of a
quantity involving the dielectric permittivities of the plates
$\epsilon({\rm i \xi})$. For finite temperature, the continuous
frequency integration is replaced by a sum over discrete so-called
Matsubara frequencies $\omega_n={\rm i} \xi_n$, where $\xi_n=2 \pi
n k_B T /\hbar$, with $n$ a non-negative integer, and $T$ the
temperature of the plates. In any case, whatever the temperature,
one needs to evaluate the permittivity of the plates for certain
imaginary frequencies. We note that, in principle, recourse to
imaginary frequencies is not mandatory because it is possible to
rewrite the Lifshitz formula in a mathematically equivalent form,
involving an integral over the real frequency axis. In this case,
however, the integrand becomes a rapidly oscillating function of
the frequency, which hampers any possibility of numerical
evaluation. In practice, the real-frequency form of the Lifshitz
formula is never used, and only its imaginary-frequency version is
considered. We remark that the occurrence of imaginary frequencies
in the expression of the Casimir force is a general feature of
all recent formalisms, extending Lifshitz theory to non-planar
geometries \cite{emig,kenneth,lambr2}. The problem is that the
electric permittivity $\epsilon(i \xi)$ at imaginary frequencies
cannot be measured directly by any experiment. The only way to
determine it is by means of dispersion relations, which allow one to
express $\epsilon(i \xi)$ in terms of the observable
real-frequency electric permittivity $\epsilon(\omega)$. In the
standard version of dispersion relations \cite{lifs}, adopted so
far in all works on the Casimir effect, $\epsilon({\rm i} \xi)-1$
is expressed in terms of an integral of a quantity involving the
imaginary part $\epsilon''(\omega)$ of the electric permittivity:
\be \epsilon({\rm i} \xi)-1=\frac{2}{\pi}\int_0^{\infty} d \omega
\frac{\omega\,
\epsilon''(\omega)}{\omega^2+\xi^2}\;.\label{disp}\ee The above
formula shows that, in principle, a determination of
$\epsilon({\rm i} \xi)$ requires knowledge of $\epsilon''(\omega)$
at all frequencies while, as we said earlier, optical data are
available only in some interval $\omega_{\rm min} < \omega <
\omega_{\rm max}$. In practice, the problem is not so serious on
the high frequency side, because the fall-off properties of
$\epsilon''(\omega)$ at high frequencies, together with the
$\omega^2$ factor in the denominator of the integrand, ensure that
the error made by truncating the integral at a suitably large
frequency $\omega_{\rm max}$ is small, provided that $\omega_{\rm
max}$ is large enough. Typically, an $\omega_{\rm max}$ larger
than, say, $15 c/(2 a)$, is good enough for practical purposes.
Things are not so easy though on the low frequency side. In the
case of insulators, optical data are typically available down to
frequencies $\omega_{\rm min}$ much smaller than the frequencies of
all resonances of the medium. Because of this,
$\epsilon''(\omega)$ is almost zero for $\omega < \omega_{\rm
min}$, and therefore the error made by truncating the integral at
$\omega_{\rm min}$ is again negligible. Problems arise however in
the case of ohmic conductors, because then $\epsilon''(\omega)$
has a $1/\omega$ singularity at $\omega=0$. As a result
$\epsilon''(\omega)$ becomes extremely large at low frequencies,
in such a way that the integral in Eq. (\ref{disp}) receives a
very large contribution from low frequencies. For typical values
of $\omega_{\rm min}$ that can be reached in practice (for example
for gold, the tabulated data in \cite{palik} begin at $\omega_{\rm
min }=125$ meV$/\hbar$, while the data of \cite{sveto} start at 38
meV$/\hbar$) truncation of the integral at $\omega_{\rm min}$
results in a large error. The traditional remedy to this problem
is to make some analytical extrapolation of the data, typically
based on Drude model fits of the low-frequency region of data,
from $\omega_{\rm min}$ to zero, and then use the extrapolation to
estimate the contribution of the integral in the interval $0 <
\omega <\omega_{\rm min}$ where data are not directly available.
It is important to observe that this contribution is usually very
large. For example, even in the case of Ref. \cite{sveto},
the relative contribution of
the extrapolation is about fifty percent of the total value of the
integral, in the entire range of imaginary frequencies that are
needed for estimating the Casimir force.
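As a rough numerical illustration of this point (our own sketch, not
taken from Ref. \cite{sveto}), one can compare the two pieces of the
integral in Eq. (\ref{disp}) for a pure Drude permittivity,
$\epsilon''(\omega)=\omega_p^2\gamma/[\omega(\omega^2+\gamma^2)]$,
with the bulk gold parameters quoted in Sec. III and with $\xi$ set to
the first room-temperature Matsubara frequency:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

wp, gam = 9.0, 0.035     # Drude parameters for gold, eV/hbar
w_min = 0.038            # lowest measured frequency, eV/hbar
xi = 0.162               # first Matsubara frequency at T = 300 K

def integrand(w):
    eps2 = wp**2 * gam / (w * (w**2 + gam**2))  # Drude eps''
    return (2.0 / np.pi) * w * eps2 / (w**2 + xi**2)

low, _ = quad(integrand, 1e-12, w_min)    # extrapolated region
rest, _ = quad(integrand, w_min, np.inf)  # measured region
print(low / (low + rest))  # fraction supplied by the extrapolation
\end{verbatim}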
Clearly, this procedure is not very satisfactory. The use of
analytical extrapolations of the data introduces an uncertainty in
the obtained values of $\epsilon({\rm i} \xi)$, that is not easy
to quantify. The result may in fact depend strongly on the form of
the extrapolation, and there is no guarantee that the chosen
expression is good enough. Consider for example Ref. \cite{sveto},
which constitutes the most accurate existing work on this
problem. It was found there that the simple Drude model does not
fit the data of all samples equally well, making it necessary to
improve it by the inclusion of an additional Lorentz oscillator.
Moreover, it was found that for each sample the Drude parameters
extracted from the data depended on the fitting procedure used,
and were inconsistent with each other within the estimated
errors, which is again an indication of the probable inadequacy of
the analytical expression chosen for the interpolation. This state
of affairs led us to investigate whether it is possible to determine
$\epsilon({\rm i \xi})$ accurately, solely on the basis of the
available optical data, without any recourse to data
extrapolations. We shall see below that this is indeed possible,
provided that Eq. (\ref{disp}) is suitably modified, in a way that
involves multiplying the integrand by an appropriate analytical
window function $f(\omega)$, which suppresses the contribution of
frequencies not belonging to the interval $\omega_{\rm min} <
\omega < \omega_{\rm max}$. As a result of this modification, the
error made by truncating the integral to the frequency range
$\omega_{\rm min} < \omega < \omega_{\rm max}$ can be made
negligible at both ends of the integration domain, rendering
unnecessary any extrapolation of the optical data outside the
interval where they are available.
The procedure outlined in this paper should make it possible to better
evaluate the theoretical uncertainty of Casimir force estimates
resulting from experimental errors in the optical data.
The plan of the paper is as follows: in Sec. II we derive a
generalized dispersion relation for $\epsilon({\rm i} \xi)$,
involving analytic window functions $f(z)$, and we provide a
simple choice for the window functions. In Sec.~III we present the
results of a numerical simulation of our window functions for the
experimentally relevant case of gold, and in Sec.~IV we estimate
numerically the error on the Casimir pressure resulting from the
use of our window functions. Sec.~V contains our conclusions and a
discussion of the results.
\section{Generalized dispersion relations with window functions}
As is well known \cite{lifs}, analyticity properties satisfied
by the electric permittivity $\epsilon(\omega)$ of any causal
medium (and, more generally, by any causal response function, the
magnetic permeability $\mu(\omega)$ being another example) imply
certain integral relations between the real part
$\epsilon'(\omega)$ and imaginary part $\epsilon''(\omega)$ of
$\epsilon(\omega)$, known as Kramers-Kronig or dispersion
relations. The dispersion relation of interest to us is the one
that permits one to express the value $\epsilon({\rm i} \xi)$ of the
response function at some imaginary frequency ${\rm i} \xi$ in
terms of an integral along the positive frequency axis, involving
$\epsilon''(\omega)$. It is convenient to briefly review here the
simple derivation of this important result, which is an easy
exercise in contour integration. For our purposes, it is more
convenient to start from an arbitrary complex function $u(z)$,
with the following properties: $u(z)$ is analytic in the upper
complex plane ${\cal C}^+=\{z: {\rm Im}(z)>0\}$, falls off to
zero for large $|z|$ like some power of $|z|$, and admits at most
a simple pole at $z=0$. Consider now the closed integration
contour $\Gamma$ obtained by closing in the upper complex plane
the positively oriented real axis, and let $z_0$ be any complex
number in ${\cal C}^+$. It is then a simple matter to verify the
identity: \be \int_{\Gamma} dz \frac{z \, u(z)}{z^2-z_0^2}= {\rm
i} \pi u(z_0)\;.\label{intuno}\ee The assumed fall-off property
of $u(z)$ ensures that the half-circle of infinite radius forming
$\Gamma$ contributes nothing to the integral, and then from Eq.
(\ref{intuno}) we find: \be
u(z_0)=\frac{1}{{\rm i} \pi}\int_{-\infty}^{\infty} d\omega \frac{\omega
\, u(\omega)}{\omega^2-z_0^2}\;.\label{intdue}\ee Consider now a
purely imaginary complex number $z_0={\rm i} \xi$, and assume in
addition that along the real axis $u(\omega)$ satisfies the
symmetry property $u(-\omega)=u^*(\omega)$. From Eq.
(\ref{intdue}) we then find: \be u({\rm i} \xi)=\frac{2}{
\pi}\int_0^{\infty} d\omega \frac{\omega \,
u''(\omega)}{\omega^2+\xi^2}\;,\label{gendisp}\ee which is the
desired result.
The standard dispersion relation Eq. (\ref{disp}) used to compute
the electric permittivity for imaginary frequencies is a special
case of the above relation, corresponding to choosing
$u(z)=\epsilon(z)-1$. We note that Eq. (\ref{disp}) is valid both
for insulators, which have a finite permittivity at zero
frequency, as well as for ohmic conductors, whose permittivity has
a $1/\omega$ singularity at the origin. As we explained in the
Introduction, Eq. (\ref{disp}), even though perfectly correct from
a mathematical standpoint, has serious drawbacks when it is
used to numerically estimate $\epsilon(i \xi)$ for ohmic
conductors, starting from optical data available only in some
interval $\omega_{\rm min} < \omega < \omega_{\rm max}$, because
the integral on the r.h.s. of Eq. (\ref{disp}) receives a large
contribution from frequencies near zero, where data are not
available. This difficulty can however be overcome in a very
simple way, as we now explain. Consider a window function $f(z)$,
enjoying the following properties: $f(z)$ is analytic in ${\cal
C}^+$, it has no poles in ${\cal C}^+$ except possibly a simple
pole at infinity, and satisfies the symmetry property \be
f(-z^*)=f^*(z)\;.\label{sym} \ee Consider now Eq. (\ref{gendisp}),
for $u(z)=f(z)(\epsilon(z)-1)$. Since for any medium
$(\epsilon(z)-1)$ falls off like $z^{-2}$ at infinity
\cite{lifs2}, the quantity $u(z)$ falls off at least like $z^{-1}$
at infinity, and it satisfies all the properties required for Eq.
(\ref{gendisp}) to hold. For any $\xi$ such that $f(i \xi) \neq
0$, we then obtain the following generalized dispersion relation:
\be \epsilon({\rm i} \xi)-1= \frac{2}{\pi \, f({\rm i} \xi)}
\int_0^{\infty} d\omega \frac{\omega \, }{\omega^2+\xi^2}{\rm
Im}[f(\omega)(\epsilon(\omega)-1)]\;.\label{generdisp}\ee
We note that the above relation constitutes an exact result,
generalizing the standard dispersion relation Eq. (\ref{disp}), to
which it reduces with the choice $f(z)=1$. Another form of
dispersion relation, frequently used
in the case of conductors or superconductors
\cite{bimonte,bimonte2} is obtained by inserting $f(z)={\rm i}\,z$
into Eq. (\ref{generdisp}). Recalling the relation \cite{lifs2}
\be \epsilon(\omega)=1+\frac{4 \pi i}{\omega}
\,\sigma(\omega)\;,\ee it reads: \be \epsilon({\rm i} \xi)-1=
\frac{8}{ \xi} \int_0^{\infty} d\omega \frac{\omega \,
}{\omega^2+\xi^2}{\rm Im}\,[\sigma(\omega)]\;.\label{sigma}\ee The
above form is especially convenient in the case of
superconductors, because it avoids the $\delta(\omega)$
singularity characterizing the real part of the conductivity of
these materials \cite{bimonte2}.
We observe now, and this is the key point, that there is no reason
to restrict the choice of the function $f(z)$ to these two
possibilities. Indeed, we can take advantage of the freedom in the
choice of $f(z)$, to suppress the unwanted contribution of low
frequencies (as well as of high frequencies), where experimental
data on $\epsilon(\omega)$ are not available. In order to do that,
it is sufficient to choose a window function that goes to zero
fast enough for $\omega \rightarrow 0$, as well as for $\omega
\rightarrow \infty$. A convenient family of window functions which
do the job is the following: \be f(z)=A\, z^{2 p+1}\left[
\frac{1}{(z-w)^{2 q+1}} +\frac{1}{(z+w^*)^{2 q+1}}
\right]\;,\label{winfun}\ee where $w$ is an arbitrary complex
number such that ${\rm Im}(w) <0$, and $p$ and $q$ are integers
such that $p < q$. The constant $A$ is an arbitrary
normalization constant, which drops out of the generalized
dispersion formula Eq. (\ref{generdisp}). As we see, in the limit
$z \rightarrow 0$, these functions vanish like $z^{2p+1}$, and
therefore by taking sufficiently large values for $p$ we can
obtain suppression of low frequencies to any desired level. On the
other hand, for $z \rightarrow \infty$, $f(z)$ vanishes like
$z^{2(p-q)}$, and therefore by taking sufficiently large values of
$q$, we can obtain suppression of high frequencies. Moreover, by
suitably choosing the free parameter $w$, we can also adjust the
range of frequencies that effectively contribute to the integral
on the r.h.s. of Eq. (\ref{generdisp}). In Figs. 1 and 2 we plot
the real and imaginary parts (in arbitrary units) of our window
functions $f(\omega)$, versus the frequency $\omega$ (expressed in
eV$/\hbar$). The two curves displayed correspond to the choices
$p=1,\,q=2$ (dashed line) and $p=1,\,q=3$ (solid line). In both
cases, the parameter $w$ has the value $w=(1-2\, {\rm i}) \,{\rm
eV}/\hbar$.
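For concreteness, a minimal numerical sketch (illustrative only, not
the code used for the figures) of the window function in Eq.
(\ref{winfun}) and of the corresponding estimator, obtained by
truncating the integral in Eq. (\ref{generdisp}) to the experimental
window, might read as follows; here \texttt{eps\_data} stands for the
measured permittivity interpolated over
$[\omega_{\rm min},\omega_{\rm max}]$, and all frequencies are in
eV$/\hbar$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def f(z, p=1, q=3, w=1.0 - 2.0j):
    # window function of the family defined above, with A = 1
    return z**(2*p + 1) * (1.0/(z - w)**(2*q + 1)
                           + 1.0/(z + np.conj(w))**(2*q + 1))

def eps_imaginary_axis(xi, eps_data, w_min=0.038, w_max=30.0,
                       p=1, q=3, w=1.0 - 2.0j):
    f_ix = f(1j*xi, p, q, w).real   # f(i xi) is real by symmetry
    def integrand(om):
        return (om / (om**2 + xi**2)
                * np.imag(f(om, p, q, w) * (eps_data(om) - 1.0)))
    val, _ = quad(integrand, w_min, w_max, limit=200)
    return 1.0 + 2.0 * val / (np.pi * f_ix)
\end{verbatim}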
\begin{figure}
\includegraphics{Fig1}
\caption{\label{fig1} Real part $f'(\omega)$ (in arbitrary
units) of the window functions in Eq. (\ref{winfun}) versus
frequency $\omega$ (in eV/$\hbar$). The window parameters are
$p=1,\,q=2$ (dashed line), $p=1,\,q=3$ (solid line). In both
cases, the parameter $w$ has the value $w=(1-2\, {\rm i}) \,{\rm
eV}/\hbar$.}
\end{figure}
\begin{figure}
\includegraphics{Fig2}
\caption{\label{fig2} Imaginary part $f''(\omega)$ (in arbitrary
units) of the window functions in Eq. (\ref{winfun}) versus
frequency $\omega$ (in eV/$\hbar$). The window parameters are
$p=1,\,q=2$ (dashed line), $p=1,\,q=3$ (solid line). In both
cases, the parameter $w$ has the value $w=(1-2\, {\rm i}) \,{\rm
eV}/\hbar$.}
\end{figure}
\begin{figure}
\includegraphics{Fig3}
\caption{\label{fig3} Plots (in arbitrary units) of the window
functions $f({\rm i} \xi)$ in Eq. (\ref{winfun}) versus the
imaginary frequency $\xi$ (in eV/$\hbar$). The window parameters
are $p=1,\,q=2$ (dashed line), $p=1,\,q=3$ (solid line). In both
cases, the parameter $w$ has the value $w=(1-2\, {\rm i}) \,{\rm
eV}/\hbar$.}
\end{figure}
We observe that along the real frequency axis, our window
functions have non-vanishing real and imaginary parts. This is not
a feature of our particular choice of the window functions, but
it is an unavoidable consequence of our demand of analyticity on
$f(z)$. Indeed, for real frequencies $\omega$ the real and
imaginary parts of $f(\omega)$ are related to each other by the
usual Kramers-Kronig relations \cite{lifs} that hold for the
boundary values of analytic functions. In the case when $f(z)$
vanishes at infinity, they read: \be f'(\omega)=\frac{1}{\pi}{\rm
P} \int_{-\infty}^{\infty} d \xi\,
\frac{f''(\xi)}{\xi-\omega}\;,\label{reKK}\ee \be
f''(\omega)=-\frac{1}{\pi}{\rm P} \int_{-\infty}^{\infty} d \xi\,
\frac{f'(\xi)}{\xi-\omega}\;,\label{imKK}\ee where the symbol
${\rm P}$ in front of the integrals denotes the principal value.
These relations show that the vanishing of $f'(\omega)$ implies that of
$f''(\omega)$ and vice versa, and therefore neither $f'(\omega)$
nor $f''(\omega)$ can be identically zero. By virtue of this
property of the window functions, it follows from Eq.
(\ref{generdisp}) that both the real and imaginary parts of
$\epsilon(\omega)$ are needed to evaluate $\epsilon({\rm i} \xi)$
(unless the standard choices $f(z) \equiv 1$ or $f(z)={\rm i}\,z$
are made). We also note (see Figs. 1 and 2) that the real and
imaginary parts of $f(\omega)$ do not have a definite sign. This
feature, too, is a general consequence of our key demand that
$f(z)$ vanish at the origin, as can be seen by setting
$\omega=0$ in Eqs. (\ref{reKK}) and (\ref{imKK}): since the left-hand
sides of both equations are required to vanish, the integrands on the
right-hand sides cannot have a definite sign. Finally, in Fig. 3 we show
plots of two of our window functions $f({\rm i} \xi)$, versus the
imaginary frequency $\xi$, expressed in eV$/\hbar$, for the same two
choices of parameters as in Figs. 1 and 2. It is important to observe
that the window functions $f(z)$ are real along the imaginary axis
(as it must be, as a consequence of the symmetry property Eq.
(\ref{sym})). However, the sign of $f({\rm i} \xi)$ is not
definite, and as a result of this $f({\rm i} \xi)$ admits zeros
along the imaginary axis. When using Eq. (\ref{generdisp}) for
estimating $\epsilon({\rm i \xi})$ it is then important to choose
the window function such that none of its zeroes coincides with
the value of $\xi$ for which $\epsilon({\rm i \xi})$ is being
estimated.
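In practice, a simple sign scan along the imaginary axis suffices to
locate these zeroes before a given window is used; the following
sketch (ours, with frequencies in eV$/\hbar$) illustrates the idea:
\begin{verbatim}
import numpy as np

def f_imag_axis(xi, p, q, w):
    z = 1j * xi
    val = z**(2*p + 1) * (1.0/(z - w)**(2*q + 1)
                          + 1.0/(z + np.conj(w))**(2*q + 1))
    return val.real   # exactly real, by the symmetry property

xi = np.linspace(1e-3, 10.0, 4000)
vals = np.array([f_imag_axis(x, 1, 3, 1.0 - 2.0j) for x in xi])
flip = np.sign(vals[:-1]) != np.sign(vals[1:])
print(xi[:-1][flip])  # approximate zeroes; adjust w if a xi_n is close
\end{verbatim}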
\section{A numerical simulation}
In this Section, we perform a simple simulation to test the degree
of accuracy with which the quantity $\epsilon({\rm i} \xi)$ can be
reconstructed using our window functions, starting from data on
$\epsilon(\omega)$ referring to a finite frequency interval. To
do that we can proceed as follows.
According to the standard dispersion relation Eq. (\ref{disp}),
the quantity $\epsilon({\rm i} \xi)-1$ is equal to the integral on
the r.h.s. of Eq. (\ref{disp}). Following Refs. \cite{Piro,sveto},
we can split this integral into three pieces, as follows: \be
\frac{2}{\pi}\int_0^{\infty} d \omega \frac{\omega\,
\epsilon''(\omega)}{\omega^2+\xi^2}=I_{\rm low}(\xi)+I_{\rm
exp}(\xi)+I_{\rm high}(\xi)\;,\label{split}\ee where we set: \be
I_{\rm low}(\xi)=\frac{2}{\pi}\int_0^{\omega_{\rm min}} d \omega
\frac{\omega\, \epsilon''(\omega)}{\omega^2+\xi^2}\;,\ee \be
I_{\rm exp}(\xi)=\frac{2}{\pi}\int_{\omega_{\rm min}}^{\omega_{\rm
max}} d \omega \frac{\omega\,
\epsilon''(\omega)}{\omega^2+\xi^2}\;,\ee and \be I_{\rm
high}(\xi)=\frac{2}{\pi}\int_{\omega_{\rm max}}^{\infty} d \omega
\frac{\omega\, \epsilon''(\omega)}{\omega^2+\xi^2}\;.\ee By
construction, we obviously have: \be \epsilon({\rm i}
\xi)-1=I_{\rm low}(\xi)+I_{\rm exp}(\xi)+I_{\rm high}(\xi)\;.\ee
An analogous split can be performed in the integral on the r.h.s.
of the other standard dispersion relation involving the
conductivity Eq. (\ref{sigma}): \be \frac{8}{ \xi}
\int_0^{\infty} d\omega \frac{\omega \, }{\omega^2+\xi^2}{\rm
Im}\,[\sigma(\omega)]=K_{\rm low}(\xi)+K_{\rm exp}(\xi)+K_{\rm
high}(\xi)\;,\label{splitsigma}\ee with an obvious meaning of the
symbols. Again, we have the identity: \be \epsilon({\rm i}
\xi)-1=K_{\rm low}(\xi)+K_{\rm exp}(\xi)+K_{\rm high}(\xi)\;.\ee
On the other hand, according to our generalized dispersion
relation Eq. (\ref{generdisp}), the quantity $\epsilon({\rm i}
\xi)-1$ is also equal to the integral on the r.h.s. of Eq.
(\ref{generdisp}). We can split this integral too in a way
analogous to Eq. (\ref{split}):
$$ \frac{2}{\pi \, f({\rm i} \xi)} \int_0^{\infty} d\omega
\frac{\omega \, }{\omega^2+\xi^2}{\rm
Im}[f(\omega)(\epsilon(\omega)-1)]$$ \be =J_{\rm
low}^{(p,q)}(\xi)+J_{\rm exp}^{(p,q)}(\xi)+J_{\rm
high}^{(p,q)}(\xi)\;,\ee where we set: \be J_{\rm
low}^{(p,q)}(\xi)=\frac{2}{\pi\, f({\rm i}
\xi)}\int_0^{\omega_{\rm min}} d\omega \frac{\omega \,
}{\omega^2+\xi^2}{\rm Im}[f(\omega)(\epsilon(\omega)-1)]\;,\ee \be
J_{\rm exp}^{(p,q)}(\xi)=\frac{2}{\pi\, f({\rm i}
\xi)}\int_{\omega_{\rm min}}^{\omega_{\rm max}} d\omega
\frac{\omega \, }{\omega^2+\xi^2}{\rm
Im}[f(\omega)(\epsilon(\omega)-1)]\;,\ee and \be J_{\rm
high}^{(p,q)}(\xi)=\frac{2}{\pi \, f({\rm i}
\xi)}\int_{\omega_{\rm max}}^{\infty} d\omega \frac{\omega \,
}{\omega^2+\xi^2}{\rm Im}[f(\omega)(\epsilon(\omega)-1)]\;.\ee
Then by construction we also have: \be \epsilon({\rm i}
\xi)-1=J_{\rm low}^{(p,q)}(\xi)+J_{\rm exp}^{(p,q)}(\xi)+J_{\rm
high}^{(p,q)}(\xi)\;.\ee The quantities $I_{\rm exp}(\xi)$,
$K_{\rm exp}(\xi)$ and $J_{\rm exp}^{(p,q)}(\xi)$ evidently
represent the contribution of the experimental data. On the
contrary the quantities $I_{\rm low}(\xi)$, $K_{\rm low}(\xi)$
and $J_{\rm low}^{(p,q)}(\xi)$ can be determined only by
extrapolating the data in the low frequency region $0 \le \omega
\le \omega_{\rm min}$, while determination of the quantities
$I_{\rm high}(\xi)$, $K_{\rm high}(\xi)$ and $J_{\rm
high}^{(p,q)}(\xi)$ is only possible after we extrapolate the data
in the high frequency interval $\omega_{\rm max} \le \omega <
\infty$. Ideally, we would like to have $I_{\rm low}(\xi)$,
$I_{\rm high}(\xi)$, $K_{\rm low}(\xi)$, $K_{\rm high}(\xi)$,
$J_{\rm low}^{(p,q)}(\xi)$ and $J_{\rm high}^{(p,q)}(\xi)$ as
small as possible.
To see how things work, we can perform a simple simulation of real
experimental data. We imagine that the electric permittivity of
gold is described by the following six-oscillator approximation
\cite{Mohid}, which is known to provide a
rather good description of the permittivity of gold for the
frequencies that are relevant to the Casimir effect: \be
\epsilon(\omega)=1-\frac{\omega_p^2}{\omega(\omega+{\rm i}
\gamma)}+\sum_{j=1}^6 \frac{g_j}{\omega_j^2-\omega^2-{\rm i}
\gamma_j \omega}\,.\label{sixosc}\ee Here, $\omega_p$ is the
plasma frequency and $\gamma$ is the relaxation frequency for
conduction electrons, while the oscillator terms describe core
electrons. The values of the parameters $g_j$, $\omega_j$ and
$\gamma_j$ can be found in the second of Refs. \cite{decca}. For
$\omega_p$ and $\gamma$ we use the reference values for
crystalline bulk samples, $\omega_p=9$ eV$/\hbar$ and
$\gamma=0.035$ eV$/\hbar$. Of course with such a simple model for
the permittivity of gold, there is no need to use dispersion
relations to obtain the expression of $\epsilon({\rm i}\xi)$, for
this can be simply done by the substitution $\omega \rightarrow
{\rm i} \xi$ in the r.h.s. of Eq. (\ref{sixosc}): \be
\epsilon({\rm i} \xi)=1+\frac{\omega_p^2}{\xi(\xi+
\gamma)}+\sum_{j=1}^6 \frac{g_j}{\omega_j^2+\xi^2+ \gamma_j
\xi}\,.\label{sixoscim}\ee To simulate the real experimental
situation, let us however pretend that the optical data of gold
are described by Eq. (\ref{sixosc}) only in some interval
$\omega_{\rm min} < \omega < \omega_{\rm max}$. Assuming that we do
not want to extrapolate the data outside the experimental
interval, let us see how well the
quantities $I_{\rm exp}(\xi)$, $K_{\rm exp}(\xi)$ and $J_{\rm
exp}^{(p,q)}(\xi)$ defined earlier reconstruct the exact value of
$\epsilon({\rm i} \xi)-1$ given by Eq. (\ref{sixoscim}). In our
simulation we took $\omega_{\rm min }=0.038$ eV$/\hbar$
(representing the minimum frequency value for which data for gold
films were measured in \cite{sveto}) while for $\omega_{\rm max}$
we chose the value $\omega_{\rm max}=30$ eV$/\hbar$. The chosen
value of $\omega_{\rm max}$ is about thirty times the
characteristic frequency $c/(2 a)$ for a separation $a=100$ nm.
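For reference, the ground truth of the simulation can be coded
directly from the two expressions above; the following is a sketch,
with the oscillator parameters $g_j$, $\omega_j$, $\gamma_j$ left as
dummy placeholders to be replaced by the values of the second of
Refs. \cite{decca}:
\begin{verbatim}
import numpy as np

wp, gam = 9.0, 0.035   # eV/hbar, crystalline bulk gold
# dummy placeholders for the six oscillator parameters:
g  = np.ones(6)
w0 = np.linspace(3.0, 30.0, 6)
gj = np.ones(6)

def eps_real(w):
    # six-oscillator permittivity on the real frequency axis
    return (1.0 - wp**2 / (w * (w + 1j*gam))
            + np.sum(g / (w0**2 - w**2 - 1j*gj*w)))

def eps_imag(xi):
    # exact continuation to the imaginary axis (omega -> i xi)
    return (1.0 + wp**2 / (xi * (xi + gam))
            + np.sum(g / (w0**2 + xi**2 + gj*xi)))
\end{verbatim}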
The results of our simulation are summarized in Figs. 4 and 5.
\begin{figure}
\includegraphics{Fig4}
\caption{\label{fig4} Numerical simulation of the errors (in
percent) in the estimate of $\epsilon({\rm i}\xi_n)-1$ for gold,
resulting from using the quantities $I_{\rm exp}(\xi_n)$ (black
squares) and $K_{\rm exp}(\xi_n)$ (grey triangles) as estimators,
under the hypothesis that data are available from $\omega_{\rm min
}=0.038$ eV$/\hbar$ to $\omega_{\rm max}=30$ eV$/\hbar$. The
integer on the abscissa labels the Matsubara mode $\xi_n=2 \pi n
k_B T/\hbar$ ($T=300$ K).}
\end{figure}
In Fig. 4, we report the relative percent errors $\delta_I=100
\,[1-I_{\rm exp}(\xi_n)/(\epsilon({\rm i}\xi_n)-1)]$ (black
squares) and $\delta_K=100 \,[1-K_{\rm exp}(\xi_n)/(\epsilon({\rm
i}\xi_n)-1)]$ (grey triangles) which are made if the quantities
$I_{\rm exp}(\xi_n)$ or $K_{\rm exp}(\xi_n)$ are used,
respectively, as estimators of $\epsilon({\rm i}\xi_n)-1$. The
integer number on the abscissa labels the Matsubara mode $\xi_n=2
\pi n k_B T/\hbar$ ($T=300$ K). Only the first sixty modes are
displayed, which are sufficient to estimate the Casimir force at
room temperature, for separations larger than 100 nm, with a
precision better than one part in ten thousand. As we see, both
$I_{\rm exp}(\xi_n)$ and $K_{\rm exp}(\xi_n)$ provide a poor
approximation to $\epsilon({\rm i}\xi_n)-1$, with $I_{\rm
exp}(\xi_n)$ performing somewhat better at higher imaginary
frequencies, and $K_{\rm exp}(\xi_n)$ doing better at lower
imaginary frequencies. Indeed, $I_{\rm exp}(\xi_n)$ and $K_{\rm
exp}(\xi_n)$ suffer from opposite problems. On the one hand, the large
error affecting $I_{\rm exp}(\xi_n)$ arises mostly from the neglect
of the large low-frequency contribution $I_{\rm low}(\xi_n)$, and
to a much lesser extent from the neglect of the high-frequency
contribution $I_{\rm high}(\xi_n)$ (the magnitude of
$I_{\rm high}(\xi_n)$ is less than two
percent of $\epsilon({\rm i}\xi_n)-1$ for all $n \le 60$). The
situation is quite the opposite in the case of $K_{\rm
exp}(\xi_n)$. This difference is of course due to the opposite
limiting behaviors of the imaginary parts of the permittivity
$\epsilon''(\omega)$ in the limits $\omega \rightarrow 0$, and
$\omega \rightarrow \infty$, as compared to those of the imaginary
part of the conductivity $\sigma''(\omega)$. Indeed, for $\omega
\rightarrow 0$, $\epsilon''(\omega)$ diverges like $\omega^{-1}$,
while $\sigma''(\omega)$ approaches zero like $\omega$. This
explains why the low-frequency contribution $I_{\rm low}(\xi_n)$
is much larger than $K_{\rm low}(\xi_n)$. On the other hand, in
the limit $\omega \rightarrow \infty$, $\epsilon''(\omega)$
vanishes like $\omega^{-3}$, while $\sigma''(\omega)$ vanishes
only like $\omega^{-1}$. This implies that large frequencies are
much less of a problem for $I_{\rm exp}(\xi_n)$ than for $K_{\rm
exp}(\xi_n)$. The conclusion to be drawn from these considerations
is that, if either of the two standard forms Eq. (\ref{disp}) or
Eq. (\ref{sigma}) of dispersion relations are used, in order to
obtain a good estimate of $\epsilon({\rm i}\xi_n)-1$, one is
forced to somehow extrapolate the experimental data, both to
frequencies below $\omega_{\rm min}$ and to frequencies above
$\omega_{\rm max}$.
We can now consider our windowed dispersion relation, Eq.
(\ref{generdisp}), with our choice of the window functions $f(z)$
in Eq. (\ref{winfun}). In Fig. 5, we display the relative percent
error $\delta^{(p,q)}=100 \,[1-J_{\rm
exp}^{(p,q)}(\xi_n)/(\epsilon({\rm i}\xi_n)-1)]$ which is made if
the quantity $J_{\rm exp}^{(p,q)}(\xi_n)$ is used as an estimator
of $\epsilon({\rm i}\xi_n)-1$. We considered two choices of
parameters for our window functions in Eq. (\ref{winfun}), i.e.
$p=1,\,q=2$ (grey triangles) and $p=1,\,q=3$ (black squares). In
both cases, we took for the parameter $w$ the constant value
$w=(1-2\, {\rm i}) \,{\rm eV}/\hbar$ (see Figs. 1, 2 and 3).
\begin{figure}
\includegraphics{Fig5}
\caption{\label{fig5} Numerical simulation of the error (in
percent) in the estimate of $\epsilon({\rm i}\xi_n)-1$ for gold,
resulting from using the quantity $J_{\rm exp}^{(p,q)}(\xi_n)$ as
an estimator, under the hypothesis that data are available from
$\omega_{\rm min }=0.038$ eV$/\hbar$ to $\omega_{\rm max}=30$
eV$/\hbar$. The integer on the abscissa labels the Matsubara mode
$\xi_n=2 \pi n k_B T/\hbar$ ($T=300$ K). Grey triangles are for
the window function having $p=1,\,q=2$, black squares for
$p=1,\,q=3$. In both cases $w=(1-2\, {\rm i}) \,{\rm eV}/\hbar$.}
\end{figure}
It is apparent from Fig. 5 that both window functions perform very
well, for all considered Matsubara modes. The error made by using
$J_{\rm exp}^{(1,2)}(\xi_n)$ is less than one percent, in absolute
value, while the error made by using $J_{\rm exp}^{(1,3)}(\xi_n)$
is less than 0.25 percent. The jumps displayed by the relative
errors in Fig. 5 (around $n=6$ for the grey triangles, and $n=14$ for
the black squares) correspond to the approximate positions of the
zeroes of the respective window functions $f({\rm i} \xi)$ (see
Fig. 3). Such jumps can be easily avoided, further reducing at the
same time the error, by making a different choice of the free
parameter $w$ for each value of $n$. We did not do this here for
the sake of simplicity. It is clear that in concrete cases one is
free to choose for each value of $n$, different values of all the
parameters $p, q$ and $w$, in such a way that the error is as
small as possible.
\section{Simulation of the Casimir force}
In this Section, we investigate the performance of our window
functions with respect to the determination of the Casimir force.
We consider for simplicity the prototypical case of two identical
plane-parallel homogeneous and isotropic gold plates, placed in
vacuum at a distance $a$. As it is well known, the Casimir force
per unit area is given by the following Lifshitz formula: \be
P(a,T)= \frac{k_B T}{ \pi} \sum_{n \ge 0}{\,'} \int \!\! d{
k_{\perp}} { k_{\perp}} q_n\!\!\!\! \sum_{\alpha={\rm TE,TM}}
\left(\frac{e^{2 a q_n}}{r_{\alpha}^2({\rm i} \xi_n,{ k_{\perp}})}
-1 \right)^{-1},\label{lifs} \ee where a positive value of $P$
corresponds to an attraction between the plates. In this equation,
the prime over the $n$-sum means that the $n=0$ term has to be taken
with a weight of one half, $T$ is the temperature, ${ k_{\perp}}$ denotes
the magnitude of the projection of the wave-vector onto the plane
of the plates and $q_n =\sqrt{k_{\perp}^2+\xi_n^2/c^2}$, where
$\xi_n= 2 \pi n\,k_B T /\hbar$ are the Matsubara frequencies. The
quantities $ r_{\alpha}({\rm i} \xi_n,{ k_{\perp}})$ denote the
familiar Fresnel reflection coefficients of the slabs for
$\alpha$-polarization, evaluated at imaginary frequencies $i
\xi_n$. They have the following expressions: \be r_{\rm TE}({\rm
i} \xi_n,{ k_{\perp}})=\frac{q_n-k_n}{q_n+k_n}\;,\ee \be r_{\rm
TM}({\rm i} \xi_n,{ k_{\perp}})=\frac{\epsilon({\rm i}
\xi_n)\, q_n-k_n}{\epsilon({\rm i} \xi_n)\,q_n+k_n}\;,\ee where
$k_n=\sqrt{k_{\perp}^2+\epsilon({\rm i} \xi_n)\xi_n^2/c^2}$.
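A compact numerical sketch of Eq. (\ref{lifs}) (ours, illustrative
only) then reads as follows; it works in SI units,
\texttt{eps\_imag} returns $\epsilon({\rm i}\xi)$ for $\xi$ in rad/s
(e.g. from the windowed estimator above), and the $n=0$ term is
evaluated in the Drude prescription ($r_{\rm TE}\to 0$,
$r_{\rm TM}\to 1$); other prescriptions, such as the generalized
plasma model discussed in Sec. V, would modify this term:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

hbar, kB, c = 1.054571817e-34, 1.380649e-23, 2.99792458e8

def pressure(a, eps_imag, T=300.0, n_max=100):
    P = 0.0
    for n in range(n_max + 1):
        xi = 2.0 * np.pi * n * kB * T / hbar
        eps = eps_imag(xi) if n > 0 else None
        def integrand(kp):
            q = np.sqrt(kp**2 + (xi / c)**2)
            if n == 0:
                r2 = (0.0, 1.0)  # (r_TE^2, r_TM^2), Drude limit
            else:
                k = np.sqrt(kp**2 + eps * (xi / c)**2)
                r2 = (((q - k)/(q + k))**2,
                      ((eps*q - k)/(eps*q + k))**2)
            s = sum(1.0/(np.exp(2*a*q)/r - 1.0) for r in r2 if r > 0)
            return kp * q * s
        val, _ = quad(integrand, 0.0, 30.0/a, limit=200)
        P += (0.5 if n == 0 else 1.0) * val
    return kB * T * P / np.pi  # in Pa; positive means attraction
\end{verbatim}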
We have simulated the error made in the estimate of $P(a,T)$ if
the estimate of $\epsilon({\rm i}\xi_n)$ provided by the
window-approximations $J_{\rm exp}^{(p,q)}(\xi_n)$ is used: \be
\epsilon({\rm i}\xi_n) \simeq 1+ J_{\rm exp}^{(p,q)}(\xi_n)\;,\ee
again assuming the simple six-oscillator model of Eq.
(\ref{sixosc}) for $\epsilon(\omega)$. The results are summarized
in Fig. 6, where we plot the relative error $\delta_P^{(p,q)}$ in
percent, as a function of the separation $a$ (in microns). The
window functions that have been used are the same as in Fig. 5.
\begin{figure}
\includegraphics{Fig6}
\caption{\label{fig6} Simulation of the error (in percent),
versus plate separation (in $\mu$m) in the estimate of the Casimir
force per unit area, between two plane-parallel gold plates in
vacuum at a temperature $T=300$ K, resulting from using $J_{\rm
exp}^{(p,q)}(\xi_n)$ as an estimator of $\epsilon({\rm
i}\xi_n)-1$. The window functions are the same as in Fig. 5: the
dashed line is for $p=1, q=2$ and the solid line for $p=1, q=3$.
All values of the other parameters are the same as in Fig. 5.}
\end{figure}
We see from the figure that already with this simple and
not-optimized choice of window functions, the error is much less
than one part in a thousand in the entire range of separations
considered, from 100 nm to one micron.
\section{Conclusions and discussion}
In recent years, much effort has been devoted to measuring the
Casimir force accurately. At the time of writing, the
most precise experiments using gold-coated micromechanical
oscillators claim a precision better than one percent
\cite{decca}. It is therefore important to see if an analogous
level of precision in the prediction of the Casimir force can be
obtained at the theoretical level. A precise determination of the
theoretical error is indeed as important as reducing the
experimental error, in order to address controversial questions
that have emerged in the recent literature on dispersion forces,
regarding the influence of free charges on the thermal correction
to the Casimir force \cite{Mohid}.
Addressing the theoretical error in the magnitude of the Casimir
force is indeed difficult, because many physical effects must be
accounted for. However, it has recently been pointed out
\cite{sveto} that perhaps the largest theoretical uncertainty
results from incomplete knowledge of the optical data for the
surfaces involved in the experiments. On the one hand, the large
variability, depending on the preparation procedure, of the optical
properties of the gold coatings routinely used in Casimir
experiments makes it necessary to accurately characterize the
coatings actually used in any experiment. On the other hand, even
when this characterization is done, another problem arises,
because for evaluating the Casimir force one needs to determine
the electric permittivity $\epsilon({\rm i} \xi)$ of the coatings
for certain imaginary frequencies ${\rm i} \xi$. This quantity is
not directly accessible to any optical measurement, and the only
way to determine it is by exploiting dispersion relations, which
permit one to express $\epsilon({\rm i} \xi)$ in terms of the
measurable values of the permittivity $\epsilon(\omega)$ for real
frequencies $\omega$. When doing this, one is faced with the
difficulty that optical data are necessarily known only in a
finite interval of frequencies $\omega_{\rm min} < \omega <
\omega_{\rm max}$. This practical limitation constitutes a severe
problem in the experimentally relevant case of good conductors,
because of their large conductivity at low frequencies. With the
standard forms of dispersion relations Eq. (\ref{disp}) and Eq.
(\ref{sigma}), one finds that for practical values of $\omega_{\rm
min}$ and $\omega_{\rm max}$, low frequencies less than
$\omega_{\rm min}$ and/or large frequencies larger than
$\omega_{\rm max}$ give a very large contribution to
$\epsilon({\rm i} \xi)$. In order to estimate $\epsilon({\rm i}
\xi)$ accurately, one is then forced to extrapolate available
optical data outside the experimental region, on the basis of some
theoretical model for $\epsilon(\omega)$. Of course, this
introduces a further element of uncertainty in the obtained values
of $\epsilon({\rm i} \xi)$, and the resulting theoretical error is
difficult to estimate quantitatively.
In this paper we have shown that this problem can be resolved by
suitably modifying the standard dispersion relation used to
compute $\epsilon({\rm i} \xi)$, in terms of appropriate analytic
window functions $f(z)$ that suppress the contributions both of
low and large frequencies. In this way, it becomes possible to
accurately estimate $\epsilon({\rm i} \xi)$ solely on the basis of
the available optical data, rendering unnecessary any
uncontrollable extrapolation of data. We have checked numerically
the performance of simple choices of window functions, by making a
numerical simulation based on an analytic fit of the optical
properties of gold, that has been used in recent experiments on
the Casimir effect \cite{Mohid}. We found that already very simple
forms of the window functions permit one to estimate the Casimir
pressure with an accuracy better than one part in a thousand, on
the basis of reasonable intervals of frequencies for the optical
data. It would be interesting to apply these methods to the
accurate optical data for thin gold films quoted in Ref.
\cite{sveto}.
Before closing the paper, we should note that the relevance of
the sample-to-sample dependence of the optical data observed in
\cite{sveto} for the theory of the Casimir effect has been
questioned by the authors of Ref. \cite{Mohid}, who observed
that this dependence mostly originates from relaxation processes
of free conduction electrons at infrared and optical frequencies,
due for example to different grain sizes in thin films. The main
consequence of these sample-dependent features is the large
variability of the Drude parameters, extracted from fits of the
low-frequency optical data of the films, which constitutes the
basic source of variation of the computed Casimir force reported
in Ref. \cite{sveto}. According to the authors of Ref.
\cite{Mohid}, relaxation properties of conduction electrons in
thin films, described by the fitted values of the Drude
parameters, are not relevant for the Casimir effect. Indeed,
according to these authors the quantity $\epsilon(\omega)$ to be
used in Lifshitz formula should not be understood as the actual
electric permittivity of the plate, as derived from optical
measurements on the sample, but should rather be regarded as a
phenomenological quantity connected to, but not identical with, the
optical electric permittivity of the film. The ansatz offered by
them for $\epsilon(\omega)$ is dubbed the generalized plasma model,
and following Ref. \cite{Mohid} we denote it as $\epsilon_{\rm
gp}(\omega)$. This quantity is a semianalytical mathematical
construct, defined by the formula: \be \epsilon_{\rm
gp}(\omega)=\epsilon_c(\omega)-\omega_p^2/\omega^2\;,\label{genpla}\ee
where $\epsilon_c(\omega)$ represents the contribution of core
electrons, while the term proportional to the square of the plasma
frequency $\omega_p$ describes conduction electrons. The most
striking qualitative feature of this expression is the neglect of
ohmic dissipation in the contribution from conduction electrons,
but this is not all. Indeed, the ansatz prescribes that only the
core-electron contribution $\epsilon_c(\omega)$ should be
extracted from optical data of the film. On the contrary, and more
importantly, according to Ref. \cite{Mohid} the value of the
plasma frequency $\omega_p$ to be used in Eq. (\ref{genpla})
should be the one pertaining to a perfect crystal of the {\it
bulk} material, and not the one obtained by a Drude-model fit of
the low-frequency optical data of the film actually used in the
experiment. The justification provided for this choice of the
plasma frequency by the authors of Ref. \cite{Mohid} is that the
contribution of conduction electrons to the Casimir force should
depend only on properties determined by the structure of the
crystal cell, which are independent of the sample-to-sample
variability determined by the peculiar grain structure of the
film, reported in Ref. \cite{sveto}. It should be noted that for
gold, the value of the plasma frequency advocated in \cite{Mohid},
$\omega_p=9$ eV/$\hbar$, is much higher than the fit values quoted
in Ref. \cite{sveto}, which range from 6.8 to 8.4 eV/$\hbar$. As a
result, the approach advocated in Ref. \cite{Mohid} leads to
larger magnitudes of the Casimir force, as compared to the values
derived in Ref. \cite{sveto}, with differences ranging, depending
on the sample, from 5\% to 14\% at 100 nm. There is no room
here to further discuss the merits and faults of these
approaches, and we refer the reader to \cite{Mohid} for a thorough
analysis. It is fair to note though that a series of recent
experiments by one experimental group \cite{decca} appears to
favor the generalized plasma approach, and to rule out the more
conventional approach based on actual optical data followed in
Refs. \cite{lamor2,sveto}.
The future will tell which description is correct. In the
meanwhile, we remark that whatever approach is followed, the
methods proposed in this paper may prove useful to obtain more
reliable estimates of the Casimir force for future experiments.
\noindent {\it Acknowledgements} The author thanks the ESF
Research Network CASIMIR for financial support.
\section{Introduction}
\label{sec:intro}
Amyotrophic lateral sclerosis (ALS), also known as motor neuron disease, is a progressive, ultimately fatal disease causing loss of motor function~\cite{Oliver2017}. ALS progression is heterogeneous in terms of the pattern of spread across body parts and the rate of functional decline \cite{green2021}. Between 80\% and 95\% of people living with ALS (PALS) experience progressive dysarthria and increasing difficulty communicating daily needs via speech~\cite{Beukelman2011}. Speech decline is fastest for individuals first presenting symptoms in the head and neck muscles~\cite{Eshghi2022,Makkonen2018}, and dysarthria can progress rapidly, rendering speech unusable within 23 months from diagnosis~\cite{Eshghi2022}.
Automatic speech recognition (ASR) may significantly extend functional communication in PALS~\cite{Cave2021}. However, the speech of PALS may be challenging to recognize due to progressing dysarthria~\cite{Caballero2014}. Dysarthria due to ALS is characterized by spectral and temporal alterations to the speech signal, resulting in prolonged, distorted, and less distinct phonemes~\cite{Rowe2022}, increased nasal resonances~\cite{Eshghi2021}, decreased vocal harmonics~\cite{tomik2015}, and increased duration and frequency of pauses~\cite{green2004}.
Recent work shows that ASR systems trained on typical speech generalize poorly to dysarthric speech~\cite{DeRussis2019}. In contrast, personalized models trained using samples from the end-user speaker can be highly accurate -- even for severe dysarthria~\cite{green2021, Shor2019, doshi2021} -- under some speaking conditions (i.e., short, prompted phrases) and with a limited amount of data to personalize on~\cite{tobin2022}. However, the performance of these models is likely to degrade over time in PALS as speech becomes slower and less intelligible. Little is known about the tolerance of personalized ASR models to progressive speech changes, or about when models need to be updated to optimize accuracy. Specialized training strategies and recording schedules may be needed to boost performance during advanced disease progression. Performance might be enhanced by using recordings collected during the early stage of progression for training. For this study, we identified four speakers from the Euphonia Corpus~\cite{macdonald2021} for whom patterns of degenerating speech could be observed. We then analyzed how speaker-independent and speaker-dependent ASR models degrade over time as a function of speech severity, and explored strategies to improve personalized models over the course of progression with limited amounts of new data.
\section{Methods}
\label{sec:methods}
\begin{table*}[t]
\centering
\small
\begin{tabular}{|l|l|l|l|l|l|}
\hline
& & bin 1 & bin 2 & bin 3 & bin 4\\
\hline
Subject 1 & \# test utterances & 1211 & 45 & 213 & 487 \\
& severity & MILD & MODERATE & SEVERE & PROFOUND \\
& days from baseline & 0 - 55 & 83 - 89 & 191 - 216 & 324 - 421 \\
\hline
Subject 2 & \# test utterances & 61 & 100 & 295 & 49 \\
& severity & MODERATE & SEVERE & PROFOUND & ANARTHRIC\\
& days from baseline & 0 - 14 & 17 - 56 & 42 - 132 & 133 - 194 \\
\hline
Subject 3 & \# test utterances & 262 & 121 & 48 & \\
& severity & MILD & MODERATE & SEVERE & \\
& days from baseline & 0 - 29 & 191 - 292 & 314 - 314 & \\
\hline
Subject 4 & \# test utterances & 387 & 233 & 88 & \\
& severity & MILD & MODERATE & SEVERE & \\
& days from baseline & 0 - 50 & 55 - 79 & 214 - 220 & \\
\hline
\end{tabular}
\caption{Severity bins per speaker and resulting test set sizes.}
\label{tab:severity_bins}
\end{table*}
\vspace{-1em}
\subsection{Subjects and speech recordings}
Four subjects with progressive dysarthria were identified from the Euphonia dataset, a corpus of over 1 million speech samples from over 1000 individuals with impaired speech~\cite{macdonald2021}. The Euphonia dataset was collected over several years, and many of the subjects recorded over multiple months, allowing us to find cases with declining speech. The four subjects who were selected had (1) at least a 10\% drop in ASR performance on the U-SI model over time (Section~\ref{ss:asr_models}), and (2) an increase in speech severity by at least two points between their first and last recording sessions (Section~\ref{ss:severity_rating}). Speech recordings
were binned into successive 30-day intervals so that, for example, speech recorded between the first and the 30th day
was coded as bin 1, and speech recorded between the 31st and the 60th day from the first recording was coded as bin 2. After the speech severity ratings were first assigned to each 30-day bin, we then re-grouped these into fewer, purely severity-based bins. For example, for a speaker with a mild severity rating across two consecutive 30-day bins, all recordings were labeled as “mild” and collapsed into one bin. We obtained 3--4 severity-based bins per speaker (Table~\ref{tab:severity_bins}).
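A minimal sketch of this two-stage binning, with hypothetical field names rather than the actual project code, is:
\begin{verbatim}
import pandas as pd

def severity_bins(df):
    # df columns (hypothetical): days_from_baseline, severity
    df = df.copy()
    df["bin30"] = df["days_from_baseline"] // 30
    labels = df.sort_values("bin30").groupby("bin30")["severity"].first()
    # start a new severity bin whenever the SLP label changes
    change = (labels != labels.shift()).cumsum()
    df["severity_bin"] = df["bin30"].map(change)
    return df
\end{verbatim}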
\subsection{Perceptual speech severity ratings}
\label{ss:severity_rating}
Speech severity rating was completed by two licensed speech-language pathologists (SLPs), who listened to at least 10 utterances from each original 30-day bin and rated overall speech severity on a 5-point Likert scale (typical, mild, moderate, severe, and profound)~\cite{stipancic2021}. The raters used professional-grade headphones and were allowed to adjust the gain as needed. Interrater reliability was assessed by computing a two-way, single-measure intraclass correlation coefficient,
which resulted in an ICC of 0.88~\cite{koo2016}. For the reliability analysis, the two SLPs rated speech severity for the same 50 recordings from a different dataset that is part of the parent project.
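The two-way, single-measure model used here corresponds to ICC(2,1) in the Shrout--Fleiss convention; a minimal sketch, assuming \texttt{ratings} is an $n \times 2$ array of numeric severity scores, is:
\begin{verbatim}
import numpy as np

def icc_2_1(ratings):
    n, k = ratings.shape
    grand = ratings.mean()
    row_m = ratings.mean(axis=1)   # per-recording means
    col_m = ratings.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_m - grand)**2) / (n - 1)
    msc = n * np.sum((col_m - grand)**2) / (k - 1)
    sse = np.sum((ratings - row_m[:, None]
                  - col_m[None, :] + grand)**2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1)*mse + k*(msc - mse)/n)
\end{verbatim}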
\subsection{Selecting Utterances for experimentation}
The Euphonia dataset consists of recordings from different domains\footnote{e.g., home automation, caregiver requests, and conversational phrases} that vary in average utterance length. To control for confounding effects of utterance length, we included only short phrases of 3--5 words in length. Such short phrases were chosen because they are the most prevalent in the Euphonia dataset and we wanted to maximize the number of utterances per speaker (see Table~\ref{tab:severity_bins} for a summary). Because the goal of this analysis was to compare model performance across different levels of speech degradation, we held the amount of training data per speaker constant so as not to confound the model comparisons with the total volume of training data (see~\cite{tobin2022} on how training set size influences model personalization on impaired speech). We chose the training set size of each speaker to be as large as possible while leaving enough utterances per bin to be used as a test set. These training set sizes can be thought of as a “budget” (i.e., the maximum number of utterances we assume a speaker records). Table~\ref{tab:training_data} shows the training set size per speaker.
Training sets of the identified size were then randomly sampled from the recordings in each severity bin. From the remaining utterances we created test sets for each speaker, ensuring no phrase overlap with their training sets. The same resulting test sets per speaker and per severity bin were used across all experiments, ensuring comparability (see Table~\ref{tab:severity_bins}).
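A sketch of this sampling step (with hypothetical data structures; utterances are \texttt{(phrase, audio\_path)} pairs grouped by severity bin) is:
\begin{verbatim}
import random

def make_split(utts_by_bin, budget_by_bin, seed=0):
    rng = random.Random(seed)
    train = []
    for b, n in budget_by_bin.items():
        train += rng.sample(utts_by_bin[b], n)
    train_phrases = {phrase for phrase, _ in train}
    # test sets: remaining utterances with no phrase overlap
    test = {b: [u for u in utts if u[0] not in train_phrases]
            for b, utts in utts_by_bin.items()}
    return train, test
\end{verbatim}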
\begin{table}[h]
\centering
\small
\begin{tabular}{|l|l|l|l|}
\hline
Subject 1 & Subject 2 & Subject 3 & Subject 4\\
\hline
400 & 100 & 300 & 300 \\
\hline
\end{tabular}
\caption{Number of training utterances per subject.}
\label{tab:training_data}
\end{table}
\vspace{-1.4em}
\subsection{Evaluation}
We calculated the word error rate (WER) per severity bin and applied bootstrap sampling (1000 repetitions with replacement) to obtain estimates of the mean WER as well as 95\% confidence intervals approximated by $\pm 2$ standard deviations.\footnote{Analysis showed that the WER across samples was normally distributed.}
For comparing personalized models, 95\% confidence intervals (CIs) for the difference in WER within speaker were likewise generated using bootstrap sampling; CIs not overlapping 0 indicate a significant difference between two strategies.
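A sketch of this bootstrap, assuming per-utterance word-error counts \texttt{errs} and reference word counts \texttt{lens} as numpy arrays, is:
\begin{verbatim}
import numpy as np

def bootstrap_wer(errs, lens, reps=1000, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(errs), size=(reps, len(errs)))
    wers = errs[idx].sum(axis=1) / lens[idx].sum(axis=1)
    m, s = wers.mean(), wers.std()
    return m, (m - 2*s, m + 2*s)  # mean and approximate 95% CI
\end{verbatim}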
\subsection{ASR Models}
\label{ss:asr_models}
We used end-to-end ASR models based on the well-studied RNN-T architecture~\cite{graves2013sequence}, with an encoder network consisting of 8 layers and a predictor network consisting of 2 layers of uni-directional LSTM cells. Inputs were 80-dimensional log-mel filterbank energies. Outputs were probability distributions over a 4k word-piece vocabulary.
The Unadapted Speaker Independent (U-SI) model was trained as described in~\cite{narayanan2018}, using $\approx 162k$ hours of typical speech (from Google's internal production dataset).
For the Adapted Speaker Independent (A-SI) model, we further fine-tuned the U-SI model on a large subset of the whole Euphonia dataset with the goal of providing a model that works better out of the box for impaired speech. For this, the Euphonia dataset was split into a training and a test set across all speakers. There was no speaker or phrase overlap between these two sets, and we also excluded the speakers used in this study from the training portion. The training portion consisted of $\approx 1200$ speakers with a wide range of speech impairment types and severities (including $\approx 20\%$ ALS), and included $\approx 900k$ utterances from various domains.
Lastly, the Adapted Speaker Dependent (A-SD) models were also adaptations of the U-SI model, but fine-tuned only with data from the specific speaker. We followed the general personalization recipe as described in \cite{green2021}. We explored variants of the A-SD model based on sampling data from different severity bins.
\section{Results}
\label{sec:results}
\subsection{Impact of Degenerating Speech on ASR performance}
\label{ss:impact}
\begin{figure*}[t]
\centering
\subfloat{\includegraphics[width=2.9in]{subject1.png}}
\subfloat{\includegraphics[width=2.9in]{subject2.png}}\\
\subfloat{\includegraphics[width=2.9in]{subject3.png}}
\subfloat{\includegraphics[width=2.9in]{subject4.png}}
\caption{WER across all severity bins and 3 different models for all four speakers. Each dot is the mean WER on the speaker’s test set with the given model and the shaded area represents the 95\% CIs.}
\label{fig:wer_across_bins}
\end{figure*}
We first compared the performance of the U-SI, the A-SI and the A-SD model personalized on the training data of the first severity bin per speaker to analyze the impact of degenerating speech on recognition performance. Figure~\ref{fig:wer_across_bins} displays the progression chart for each speaker.
As expected, the U-SI models consistently performed the worst across all speakers and severity levels, while the A-SD models performed best and the A-SI models were often somewhere in between. As a general observation, an increase in severity consistently led to an increase in WER, especially at the higher severity levels.
For Subject 2, recognition performance of the A-SD model was almost as poor as that of the U-SI model; in this case, even the A-SI model performed slightly better. For Subject 3, the A-SI model was not much worse than the A-SD model. Note that the A-SI model did not include any data from the target speaker.
\subsection{Mitigation Strategies}
Based on our finding that the performance of personalized models degrades when they are trained only on speech recorded during the early stage of speech impairment, we explored the effectiveness of two practical mitigation strategies for optimizing recognition during the most severe stages of speech decline, when fewer training samples are typically available. In these experiments, the overall maximum number of training utterances was held constant (Table~\ref{tab:training_data}), and we tested the performance of the A-SD models only on the last severity bin, when WERs are highest. We compared the following four variants of the A-SD model:
\begin{itemize}
\item Baseline -- 100\% of the data from bin 1 (as in Section~\ref{ss:impact}).
\item Mitigation Strategy 1 (“start-over”) -- Assuming we have used 50\% of our recording budget on bin 1, we now use another 50\% of recordings from the last severity bin. In the start-over scenario, we use only this second 50\% of recordings for adaptation.
\item Mitigation Strategy 2 (“continued training”) -- The data allocation is the same as in the start-over scenario, but here we simulate continued training by using both the bin 1 and the bin 4 recordings for adaptation.
\item Upper Bound -- 100\% of the recordings from bin 4; this is an idealized and impractical scenario, used only to show the best possible performance when all training data come from the most recent severity level. (A minimal sketch of these four data allocations follows the list.)
\end{itemize}
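A minimal sketch (hypothetical helper, not the project code) of the four training-set variants, given per-bin utterance lists and the per-speaker budget, is:
\begin{verbatim}
def training_sets(bin1, bin_last, budget):
    half = budget // 2
    return {
        "baseline":    bin1[:budget],                  # all from bin 1
        "start_over":  bin_last[:half],                # recent half only
        "continued":   bin1[:half] + bin_last[:half],  # old + recent
        "upper_bound": bin_last[:budget],              # all most recent
    }
\end{verbatim}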
Figure~\ref{fig:mitigation_strategies} shows the results for Subject 1 and Subject 2.\footnote{The other subjects were omitted because they had too few utterances and severity stages to allow for meaningful experiments.} The baseline model performed the worst of the four scenarios, while training on as much data as possible from the last severity phase resulted in the best recognition. While these findings were expected, they clearly show the negative impact of “outdated” data and how much, in contrast, a model can be improved by the same amount of more recent data.\footnote{Note that for both speakers, the final WERs were high even when data from the most recent severity bin was included. This can be attributed to the relatively small number of recordings in these bins (Table~\ref{tab:severity_bins}). In a typical recording scenario, one would probably want to increase the recording budgets for speakers with such severe levels of speech impairment.}
Comparing the two mitigation strategies (start-over and continued training), we found both to significantly improve WER over the baseline approach for both subjects. The continued training strategy provided a significant improvement over the start-over strategy for Subject 1 but not for Subject 2 (see Table~\ref{tab:mitigation_ci}). The upper bound scenario was significantly better than both mitigation strategies for both subjects, which emphasizes that doubling the number of recordings in later stages may be beneficial (if larger amounts of recordings are possible). Table~\ref{tab:mitigation_ci} shows the 95\% CIs for these comparisons.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{mitigation_strategies.png}
\caption{WERs on the highest severity bin for A-SD models.}
\label{fig:mitigation_strategies}
\end{figure}
\begin{table}[h]
{ \centering
\footnotesize
\begin{tabular}{c | T{0.04\textwidth} | T{0.04\textwidth} |T{0.04\textwidth} | T{0.045\textwidth} |T{0.045\textwidth} |T{0.045\textwidth} }
Subject & Baseline - Upper Bound & Baseline - Start Over & Baseline - Cont'd & Upper Bound - Start Over & Upper Bound - Cont'd & Cont'd - Start Over \\ \hline
1 & 19.76, 39.42 & 6.92, 25.03 & 14.37, 31.75 & -22.60, -4.63 & -12.36, -0.69 & -12.82, -1.35 \\ \hline
2 & 15.71, 19.93 & 8.53, 12.87 & 10.35, 14.67 & -8.94, -5.30 & -7.02, -3.60 & -3.65, 0.03
\end{tabular}
\caption{95\% confidence intervals for the difference in WER between mitigation strategies.}
\label{tab:mitigation_ci}
}
\end{table}
\section{Discussion}
\label{sec:discussion}
This study investigated the impact of degrading speech on ASR accuracy in individuals with progressive dysarthria. Speech samples were recorded over time and selected to represent a substantial within-subject decline in speech. Recognition accuracy of the three ASR models decreased as speech degraded, particularly during the more severe stages of speech decline.
To the best of our knowledge, this is the first time that the impact of speech degeneration on personalized models has been studied systematically.
Our experiments suggest that personalized models become less effective over the course of progression unless updated with more current recordings. Both the start-over scenario, which discards “outdated” recordings, and continued training, which adds more recent data while keeping the outdated data, significantly improved recognition compared with recording all utterances up front without updating. Our experiments also suggest that, in the absence of more recent data, keeping data from previous severity stages
does not seem to do any harm, and may even improve performance.
Overall, our findings emphasize the importance of continued recording and model retraining when providing personalized models for individuals with progressive speech impairments.
Amassing speech recordings during the early stage of the disease may be unnecessary if the sole aim is to improve future recognition once speech becomes more severe.
Our finding that A-SI models can perform similarly to, or even better than, un-updated personalized models suggests that A-SI models may be a worthwhile option if re-recording is not possible.
These experiments were performed on a relatively small cohort of speakers. In the future, we plan to extend this work to more speakers and to other etiologies that lead to degenerating speech. We are currently recording a more controlled, longitudinal dataset with additional participants.
|
2,869,038,154,603 | arxiv | \section{Introduction}
If electroweak symmetry breaking in the Standard Model (SM) arises solely from
the presence of a fundamental scalar, the scale of the electroweak interactions
requires a severe fine-tuning. The economy of the Higgs mechanism thus comes
at the cost of making the SM unnatural. Technicolor models \cite{TC} aim to
ameliorate this instability by considering the Higgs as a composite
state; however, these simplest models are ruled out by their large
oblique corrections \cite{TCoblique}. A new approach to a composite
Higgs is provided by the AdS/CFT correspondence
\cite{AdS/CFT}, in particular as represented by Randall-Sundrum (RS1)-type
setups \cite{RS1}. Typically the Higgs has been confined to a
particular brane in the 5D picture, thus corresponding to a 4D state of
infinite scaling dimension \cite{dimension}. This, however, is more than
is necessary to avoid issues of extreme fine-tuning. Even if the Higgs is
localized somewhere near the IR brane of RS1, the corresponding 4D state is
interpreted as a composite and can be light with tuning at only the percent
level.
This particular
relaxation of the usual assumptions is the salient feature of the Gaugephobic
Higgs model \cite{Gaugephobic} we consider below (see also \cite{bulkhiggs} for
other treatments of a 5D Higgs). The crucial aspect of this model that we
exploit is that the Higgs can be made light (e.g. $m_H < 10$ GeV) while
simultaneously suppressing its couplings to fermions and weak gauge bosons,
such that current experimental constraints are evaded.
\section{The Gaugephobic Higgs Model}
The Gaugephobic model is described in \cite{Gaugephobic}; here we
review only the features important for Higgs production at B-factories. As in
RS1, we have a slice of AdS$_5$ with conformally flat metric (taking
$z$ to denote the coordinate of the extra spatial dimension):
\begin{equation}
ds^2 = \left(\frac{R}{z}\right)^2 (\eta_{\mu \nu} dx^\mu dx^\nu-dz^2).
\end{equation}
$R$ corresponds to the position of the UV brane and sets the curvature scale of
the extra dimension. The second boundary is at $z=R'$ with $R' \gg R$
generating the weak-Planck hierarchy due to the warp factor. $R$ is a free
parameter, while $R'$ is set by the masses of the weak gauge bosons. The bulk
gauge group $SU(2)_L \times SU(2)_R \times U(1)_X$ is broken to
$U(1)_{EM}$ by boundary conditions and a bi-fundamental Higgs
with zero $X$ charge. With the Higgs taken to be a bulk field, we choose the three
parameters $\beta, m_H, V$ to describe it. In our analysis we parameterize the
effect of the Higgs bulk mass $\mu$ by $\beta \equiv \sqrt{4+\mu^2}$.
Conventional RS1 is described by the limit $\beta \to \infty$.
The profile of the vacuum expectation value (VEV) is
controlled by UV brane boundary conditions to be
\begin{equation}
\label{VEVprofile}
v(z) = \sqrt{\frac{2(1+\beta)\log R'/R}{1-(R/R')^{2(1+\beta)}}}
\frac{g V}{g_5} \frac{R^\prime}{R} \left(\frac{z}{R^\prime} \right)^{2+\beta},
\end{equation}
where $g$ is the SM $SU(2)$ gauge coupling, and $g_5$ is the 5-dimensional
$SU(2)_{L/R}$ gauge
coupling. The normalization $V$ of the VEV is chosen such that the SM is
recovered as one takes $V\to 246$ GeV: in this limit the gauge
boson profiles are flat, with all mass coming from direct overlap with the Higgs.
Conversely, in the limit $V\to \infty$ the profiles
of the gauge bosons are pushed towards the UV (away from the IR-localized VEV)
so that their mass comes entirely from momentum in the fifth dimension.
This corresponds to the Higgsless limit \cite{higgsless}: in this case the
Kaluza-Klein (KK) scale is lowered, so that the appearance of weakly-coupled KK states
fulfills the Higgs boson's additional role of restoring unitarity in $WW$-scattering.
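To make the interpolation between these limits concrete, the profile in Eq.~(\ref{VEVprofile}) is easy to evaluate numerically. The following sketch is purely illustrative; the parameter values and units are assumptions chosen for display, not the result of a fit:
\begin{verbatim}
import numpy as np

# Illustrative evaluation of the VEV profile v(z); all inputs are assumptions.
R, Rp  = 1e-8, 1.0    # UV and IR positions (R' >> R), arbitrary units
beta   = 2.0          # controlled by the Higgs bulk mass
V      = 300.0        # VEV normalization in GeV
gratio = 1.0          # g / g_5 in these units (assumption)

def v(z):
    norm = np.sqrt(2*(1+beta)*np.log(Rp/R) / (1 - (R/Rp)**(2*(1+beta))))
    return norm * gratio * V * (Rp/R) * (z/Rp)**(2+beta)

z = np.linspace(0.2*Rp, Rp, 5)
print(v(z))  # grows steeply towards the IR brane z = R', as (z/R')^(2+beta)
\end{verbatim}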
The other ingredient that establishes the profile (\ref{VEVprofile}) is
the Higgs quartic coupling $\lambda$, which is confined to the IR brane
to ensure that electroweak symmetry breaking takes place there. We
trade this parameter for the mass $m_H$ of the physical Higgs mode via
the effective potential's minimization condition, in the same way as in
the SM. The couplings between the Higgs and other states are provided by
the overlap of the corresponding 5D profiles, so field localization
governs interaction strength.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|c}
Parameter & Range \\ \hline
$m_h$ [GeV] & [0, \ 10] \\ \hline
$\beta$ & [2, \ 10] \\ \hline
$V$ [GeV] & [250, \ 1500] \\ \hline
$c_L$($b$) & [0, \ 0.5] \\ \hline
$c_R$($b$) & [-0.79, \ -0.7]
\end{tabular}
\caption{Range of the scanned parameter space with the AdS scale set
by $R^{-1}=10^8$ GeV. The range of $\beta$ is chosen to localize
the Higgs VEV towards the IR brane, while the range of $V$ is chosen
to interpolate between the SM and ``almost Higgsless'' limits. The
bulk masses for the left- and right-handed bottom quark are
constrained by the required precision of their coupling to the $Z$.}
\label{tab:params}
\end{center}
\end{table}
The light fermions in the model are arranged in doublets of the bulk
gauge group. The 5D fermions must be
vector-like due to the nature of the 5D realization of the Dirac algebra,
so that bulk mass terms are allowed for them and will dictate their localization.
They each have dimensionless bulk
masses $c_L$ and $c_R$ for the left- and right-handed pieces as well as
a UV kinetic term to split the masses within a given multiplet.
The inclusion of the third
quark generation requires more care, however, since the heavy top quark
requires a large overlap with the Higgs VEV. With the top and bottom
arranged together in doublets, this would lead to an unacceptable deviation in
the $Zb_L {\bar b}_L$ coupling. We choose to solve this problem as in
\cite{custodian} where non-universal corrections to the $Z$-couplings
are avoided by representing the left-handed bottom quark in a bi-doublet
of the bulk $SU(2)_L \times SU(2)_R$. The total field content of the
third generation thus contains the new fields $T$ and $X$,
where the quantum numbers of the $T$ allow it to mix with $t$.
The new exotic quark $X$ has electric charge 5/3 and therefore does not mix
with the other fields. The lowest lying $X$ state enters at
$m_X \sim 1 \ {\rm TeV}$.
\section{Parameter Space and Constraints}
\begin{figure}[t]
\centering
\includegraphics[width=8.4cm]{Fig12_XiHZZ2_XiHbb2_vs_V}
\hspace{0pt}
\caption{$\xi^2$ vs. $V$. As $V\rightarrow 246$ GeV from above the
SM is approached, i.e. $g_{HZZ}\rightarrow g_{HZZ}^{\rm SM}$ while
as $V$ is increased the gauge bosons decouple from the Higgs.}
\label{fig:HZZHbbvsV}
\end{figure}
The Gaugephobic model is described by the five parameters shown in
Table~\ref{tab:params}, with the ranges we considered. In
Fig.~\ref{fig:HZZHbbvsV}
we scan over
the parameter space imposing the constraints in this section. We find
that {\it all} of the Higgs couplings are suppressed in this
model.
LEP searched for the Higgs in the Higgsstrahlung mode in which it is
radiated off a $Z$ boson through the $HZZ$ coupling. If the Higgs is
sufficiently decoupled from the $Z$, the Higgsstrahlung rate is too small
for LEP to have discovered it~\cite{LEP}. We apply the decay-mode-independent bound on the
Higgsstrahlung cross section. This limit varies by a factor of two as a function of mass; we apply $\xi_{HZZ}^2 < 2.1 \times 10^{-2}$, which is the weakest value of the limit in the range $2 m_\tau < m_H < m_{\Upsilon(3S)}$, where
we define the suppression relative to the SM of $Z$ bosons and bottom
quarks as
\begin{equation}
\xi^2_{HZZ} \equiv \left(g_{HZZ}/g_{HZZ}^{SM}\right)^2;
\qquad
\xi_{bbH}^2 \equiv \left(y_b/y_b^{SM}\right)^2,
\label{eq:xiHZZ}
\end{equation}
with $g_{HZZ}$
denoting the $H \to ZZ$ coupling and $y_b$ the bottom Yukawa. These
suppression factors are shown in
Fig.~\ref{fig:HZZHbbvsV}
and are uncorrelated with
the Higgs mass. The LEP constraint depends only on the
$HZZ$ coupling and is independent of other modifications which would
change the Higgs decays.
With the Higgs decoupled from the $Z$, the next most relevant
constraints come from radiating the Higgs off $b$ quarks. For $2 m_\mu
< m_H < 2 m_\tau$, the SM Higgs was first ruled out by
ARGUS~\cite{Alam:1989mta} in the channels $B \to K H$ and $B \to K^* H$
with the assumption that $m_t = 50$ GeV. However today we know from CDF
and D0~\cite{Yao:2006px} that $m_t = 172$ GeV, which strongly enhances this branching
ratio. For a SM Higgs in this mass range, these channels
would be dominant~\cite{Grinstein:1988yu} because of an $m_t^4$
enhancement in the rate:
\begin{equation}
\label{eq:GammabHs}
\frac{\Gamma(b \to H s)}{\Gamma(b \to c e \overline{\nu}_e)} =
\frac{27 \sqrt{2}}{64 \pi^2}\, G_F m_b^2\,
\frac{\left(1-\frac{m_H^2}{m_b^2}\right)^2}{f(m_c/m_b)} \left|\frac{V_{st}^\dagger
V_{tb}}{V_{cb}}\right|^2\left(\frac{m_t}{m_b}\right)^4,
\end{equation}
where $f(m_c/m_b)\sim 0.5$ is the dimensionless phase space factor for
$b \to ce\overline{\nu}_e$. We use this standard result to approximate
the rate even in this model. New contributions coming from KK quarks
will contain suppression not only from the top Yukawa couplings, but also
from both gauge couplings appearing in the diagram: the overall
suppression from these three couplings makes their contribution
substantially smaller than that of Eq.~\ref{eq:GammabHs}. The exotic $X$ quark
does not contribute to this process.
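For orientation, the size of the ratio in Eq.~\ref{eq:GammabHs} can be checked with a few lines; the numerical inputs below (quark masses, phase space, CKM elements) are rough representative values, not a careful extraction:
\begin{verbatim}
import math

# Rough numerical evaluation of the b -> H s rate; all inputs approximate.
GF, mb, mt, mH = 1.166e-5, 4.8, 172.0, 1.0   # GeV units; mH illustrative
f_ps = 0.5                                    # phase-space factor f(mc/mb)
ckm2 = (0.040 * 1.0 / 0.041)**2               # |V_ts V_tb / V_cb|^2 (approx.)

ratio = (27*math.sqrt(2)/(64*math.pi**2)) * GF * mb**2 \
        * (1 - mH**2/mb**2)**2 / f_ps * ckm2 * (mt/mb)**4
print(ratio)   # comes out >> 1: b -> H s would dominate the semileptonic rate
\end{verbatim}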
Thus to avoid regions that are tightly constrained to have an extremely
weak Higgs coupling, we prefer $m_H > 2 m_\tau$. However, as can be seen
in Fig.~\ref{fig:HZZHbbvsV}, the
couplings of the Higgs become arbitrarily small as $V \to \infty$, so that a
large enough VEV could provide an adequate suppression in the top Yukawa
coupling to explain the observed rate. With the measured rate~\cite{Yao:2006px}
of $B \to s \mu^+ \mu^-$ and assuming $BR(H\to \mu^+ \mu^-) = 5\%$,
the Gaugephobic Higgs with $m_H<2 m_\tau$ is allowed when
$V > 3.1$ TeV.
At this point we have a suppression of the top Yukawa coupling
$\xi^2_{ttH} \sim 10^{-5}$ while $\xi^2_{bbH} \sim 10^{-4}$.
For $m_H > 2 m_\tau$ the most profitable mode to search is in
$\Upsilon(nS) \to \gamma H$~\cite{Wilczek} where $n = 1,2,3$, which we discuss in
detail in the next section. Once the $HZZ$ constraints
are taken into account, the Gaugephobic Higgs also has suppressed
couplings to $b$ quarks and therefore $\Upsilon$'s. This mode was not
as vigorously pursued as Higgsstrahlung and $B$ meson decays because of
the sizable theoretical uncertainty in its predicted rate. Even accounting
for these uncertainties, the experimental sensitivity only barely reached
the expected SM level, so LEP data were used instead to rule out the SM
Higgs in the $m_B-m_K < m_H < M_\Upsilon$ region. Searches were
performed by the CLEO collaboration using $\Upsilon(1S)$ decays to
mono-energetic photons~\cite{Besson:1985xw}. They set the limit \[ BR(\Upsilon(1S)
\to \gamma H) < 0.4\%; \qquad 8.4~{\rm GeV} < m_H < 9.4~{\rm GeV}. \]
The CUSB Collaboration measured the entire photon spectrum from Upsilon
decays~\cite{Franzini:1987pv}. They rule out earlier claims from
Mark III~\cite{Baltrusaitis:1985pu} and
Crystal Ball~\cite{xiclaim} of evidence for Higgs resonances at 2.2 GeV
and 8.3 GeV respectively.
This limit just barely reaches the SM expectation
$BR(\Upsilon \to \gamma H) \sim 2 \times 10^{-4}$ for $m_H \to 0$ and
weakens to $BR(\Upsilon \to \gamma H) < 1.5 \times 10^{-3}$ as
$m_H$ increases.
Finally the ARGUS collaboration searched for a monochromatic photon
line~\cite{Albrecht:1985qz} in the ranges
\begin{eqnarray}
\nonumber
BR(\Upsilon(1S) \to \gamma H) < 0.1\%;&& \quad 2.1~{\rm GeV} < m_H < 8.9~{\rm GeV} \\
\nonumber
BR(\Upsilon(2S) \to \gamma H) < 0.5\%;&& \quad 3.2~{\rm GeV} < m_H < 9.5~{\rm GeV}
\label{eq:ARGUSlimits}
\end{eqnarray}
where the limits quoted are at the lowest $m_H$ and worsen slightly for
higher $m_H$.
Additionally, there is an important indirect constraint from the coupling of
the $Z$ to $b$ quarks, $g_{Zbb}$: for left-handed $b$'s this is
constrained to be within $\sim$0.25\% of its SM value \cite{custodian}
while for the right-handed fields the constraint is relaxed to
$\sim$30\%~\cite{Choudhury:2001hs}. This accuracy is possible only with
the third generation incorporated in the representations described
above, and even then provides a stringent condition on the bulk masses
of those fields.
We point out that a complete analysis of electroweak precision
parameters is lacking for this model. However it has been shown that in
the Higgsless limit, the large contributions to the $S$-parameter
typical of Technicolor models can in fact be cancelled in a holographic
model by an appropriate ``de-localization'' (i.e. tuning of the bulk
masses) of the bulk fermions \cite{delocalization}. The effect of
de-localization on our results is small: we have confirmed numerically
that adding restrictions to the localization of the light fermions does
not qualitatively change our results.
\section{A Light Higgs in $\Upsilon$ Decays}
At low masses, the Gaugephobic Higgs is produced by radiation from the
heaviest fermion available. Data with heavy fermions comes dominantly
from producing $\Upsilon$ and $J/\Psi$ resonances. BaBar has collected
30.2 fb$^{-1}$ on the $\Upsilon(3S)$ and 14.45 fb$^{-1}$ on the
$\Upsilon(2S)$, complementing the 3 fb$^{-1}$ collected by Belle, and
older results from CLEO.
The Higgs is radiated from vector resonances $V \to \gamma
H$~\cite{Wilczek}. The photon is monochromatic with an energy
\begin{equation}
E_\gamma = \frac{M_V^2-M_H^2}{2 M_V}
\label{eq:egamma}
\end{equation}
because the Higgs is extremely narrow ($\Gamma_H < 1$ MeV) for these
masses.
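For orientation, a Higgs of mass $m_H = 5$ GeV radiated in $\Upsilon(3S)$ decays ($M_{\Upsilon(3S)} \simeq 10.36$ GeV) would give
\begin{equation*}
E_\gamma = \frac{(10.36\ {\rm GeV})^2-(5\ {\rm GeV})^2}{2\times 10.36\ {\rm GeV}} \simeq 3.97\ {\rm GeV},
\end{equation*}
a hard, isolated photon line.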
The relative rate assuming a Coulomb-like potential for the $b {\bar b}$
state is~\cite{Wilczek}
\begin{eqnarray}
\frac{\Gamma(\Upsilon \to H \gamma)}{\Gamma (\Upsilon \to \mu \mu)} &=&
\frac{G_F \ m_b^2}{\sqrt{2} \pi \alpha} \left(1-\frac{m_H^2}{m_\Upsilon^2}\right)\,
\xi_{bbH}^2 \epsilon \, ; \\
BR(\Upsilon \to H \gamma) &\simeq&
1 \times 10^{-4} \left( 1-\frac{m_H^2}{m_\Upsilon^2}\right)\, \xi_{bbH}^2\epsilon \, ,
\end{eqnarray}
where $\xi_{bbH}$ is the suppression relative to
the SM. The factor $\epsilon$ includes any next-to-leading order
corrections, most notably the leading one-loop QCD
correction~\cite{Vysotsky:1980cz,Barbieri:1975ki,Nason:1986tr}
and relativistic correction~\cite{aznauryan}.
All of these corrections reduce the branching ratio to the Higgs over
the entire mass range, but there is considerable uncertainty as to how
to combine the various contributions; see \cite{HHG} for further
discussion. Since the two corrections arise respectively from hard and
soft gluon effects, we simply combine them to find the approximate
branching fraction for $\Upsilon (3S) \to H \gamma$ shown
in Fig.~\ref{fig:BR}. The relative uniformity of this plot
reflects the fact that the suppression of the bottom Yukawa coupling has
little direct dependence on the mass of the physical Higgs. Numerical differences
between this rate for the $3S$ state and the same rate for the lighter
$n=1,2$ resonances can be determined from the difference in the partial
width $\Gamma(\Upsilon \to \mu \mu)$ of each.
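The mass dependence of the signal is mild and easy to tabulate; in the sketch below the suppression factor $\xi_{bbH}^2$ and the correction factor $\epsilon$ are placeholders, not model outputs:
\begin{verbatim}
# Illustrative Wilczek-mode branching ratio; xi2 and eps are placeholders.
m_ups = 10.355    # M_Upsilon(3S) in GeV
xi2   = 0.1       # assumed Yukawa suppression xi_{bbH}^2
eps   = 0.5       # assumed net QCD + relativistic correction

def br_wilczek(mH):
    return 1e-4 * (1.0 - mH**2 / m_ups**2) * xi2 * eps

for mH in (4.0, 6.0, 8.0):
    print(mH, br_wilczek(mH))
\end{verbatim}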
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{mass_BR}
\hspace{0pt}
\caption{Branching ratio of the $\Upsilon(3S)$ to a photon and
Higgs, as a function of Higgs mass.}
\label{fig:BR}
\end{figure}
Unfortunately the $\Upsilon(4S)$ data is almost useless in the Wilczek
mode because its width is so much larger. For the $\Upsilon(4S)$ data
to be competitive with $\Upsilon(3S)$ data, one needs approximately
$\Gamma_{\Upsilon(4S)}/\Gamma_{\Upsilon(3S)} \simeq 1000$ times more
data because the $\Upsilon(4S)$ is above threshold for decay into a pair
of $B$ mesons and consequently has a very large width. However, one can
profitably search for a Higgs in $B$ meson decays using $\Upsilon(4S)$
decays, albeit with reduced kinematic reach $m_H < 4.8$ GeV.
\section{Conclusions}
A light Higgs boson is experimentally excluded only when its couplings
to other SM fields are sufficiently large. There still exists a class of
viable models in which these couplings are suppressed in an ``almost
Higgsless'' scenario, allowing for the potential discovery of a light
Higgs at B-Factories. This discovery would be associated with the
discovery at the LHC of heavy $Z^\prime$ and $W^\prime$ Kaluza-Klein
resonances and no Higgs. We show the range of viable parameters
within the Gaugephobic Higgs model. For a Higgs lighter than 10 GeV,
the relevant signal would be an excess of monochromatic photons in
$\Upsilon(nS)$ data, associated with a pair of heavy fermions such as
charm or tau. A Higgs lighter than the $B$ meson is much more tightly
constrained to be nearly Higgsless, and can be discovered in $B \to K H$
using $\Upsilon(4S)$ data.
\section{Acknowledgements}
We thank Christophe Grojean, Jack Gunion, Damien
Martin, and John Terning for discussions.
The work
of J.G. and J.M. is supported by the US Department of Energy under
contract DE-FG03-91ER40674.
|
2,869,038,154,604 | arxiv | \section{Introduction}\label{section1}
Let $f$ be a locally integrable function on $\mathbb R^n$. The Hardy-Littlewood maximal operator of $f$ is defined by
\begin{align}
M(f)(x)=\sup\limits_{Q}\frac{1}{|Q|}\int_Q|f(y)|dy,\;\;x\in \mathbb R^n,
\end{align}
where the supremum is taken over all cubes containing $x$. It is well known that the Hardy-Littlewood maximal operator is one of the most important operators in harmonic analysis, since maximal operators control crucial quantitative information concerning the given functions. It is a very powerful tool for solving central problems in analysis, with applications to differentiation theory, the theory of singular integral operators, and partial differential equations (see \cite{FS1971}, \cite{St1993}, \cite{T1986} for more details).
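For intuition, a crude discrete analogue of the operator $M$ can be computed in dimension one; the following sketch is purely illustrative (grid, data, and normalization are arbitrary choices):
\begin{verbatim}
import numpy as np

def maximal_1d(f):
    """Discrete 1D Hardy-Littlewood maximal function on a uniform grid."""
    n = len(f)
    pref = np.concatenate(([0.0], np.cumsum(np.abs(f))))  # prefix sums
    M = np.zeros(n)
    for a in range(n):                 # loop over all intervals [a, b]
        for b in range(a, n):
            avg = (pref[b + 1] - pref[a]) / (b - a + 1)
            M[a:b + 1] = np.maximum(M[a:b + 1], avg)
    return M

f = np.zeros(101); f[50] = 1.0         # discrete point mass
print(maximal_1d(f)[45:56])            # decays like 1/(distance to the mass)
\end{verbatim}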
\vskip 5pt
It is very important to study weighted estimates for maximal operators in harmonic analysis. B. Muckenhoupt \cite{Mu1972} first discovered the weighted norm inequality for the Hardy-Littlewood maximal operator in the real setting. More precisely, it was proved that
for $1<p<\infty$,
\begin{align}\label{maximal function}
\int\limits_{\mathbb R^n}\left|M(f)(x)\right|^p\omega(x)dx
\leq C \int\limits_{\mathbb R^n}\left|f(x)\right|^p\omega(x)dx,
\end{align}
holds for all $f$ in the weighted Lebesgue space $L^p(\omega(x)dx)$ if and only if $\omega$ belongs to the class of Muckenhoupt weights denoted by $A_p$.
\vskip 5pt
Later, Coifman and Fefferman \cite{CF1974} extended the theory of Muckenhoupt weights to general Calder\'{o}n-Zygmund operators. They also proved that $A_p$ weights satisfy the crucial reverse H\"{o}lder condition. The weighted norm inequalities for the maximal operators were also extended to the vector-valued setting by Andersen and John in \cite{AJ1981}, and to the Lorentz spaces by Chung, Hunt, and Kurtz in \cite{CHK1982}. It is well known that the theory of weighted functions plays an important role in the study of boundary value problems on Lipschitz domains, in the theory of extrapolation of operators, and in applications to certain classes of nonlinear partial differential equations.
\vskip 5pt
It is also useful to remark that in 2012, Tang \cite{Ta2012} established the weighted norm inequalities for maximal operators and pseudodifferential operators with smooth symbols associated to a new class of weighted functions $A_p(\varphi)$ (see Section \ref{section2} below for more details), which includes the Muckenhoupt weighted functions.
It should be pointed out that the class of $A_p(\varphi)$ weights does not satisfy the doubling condition.
\vskip 5pt
It is well known that Morrey \cite{Mo1938} introduced the classical Morrey spaces to study the local behavior of solutions to second order elliptic partial differential equations. Moreover, it has been found that many properties of solutions to partial differential equations can be attributed to the boundedness of some operators on Morrey spaces. Also, the Morrey spaces have many important applications to Navier-Stokes and Schr\"{o}dinger equations, elliptic equations with discontinuous coefficients, and potential theory (see, for example, \cite{Adams1975}, \cite{Caffarelli1988}, \cite{Fan1998}, \cite{Mazzucato03}, \cite{Ruiz1991}, \cite{T1992} and the references therein).
During the last decades, the theory of Morrey spaces has been significantly developed
in different contexts, including the study of classical operators of harmonic analysis, for instance, maximal functions, potential operators, singular integrals, pseudodifferential operators, Hausdorff operators, and their commutators on generalizations of these spaces (see \cite{AGL2000}, \cite{Ch2018}, \cite{Guliyev2011}, \cite{G2016}, \cite{KS2009}). In particular, Wang, Zhou and Chen \cite{WZC2017} recently established an interesting connection between the $A_p$ weights and Morrey spaces. More precisely, some new characterizations of Muckenhoupt weights are given by replacing the Lebesgue spaces by the Morrey spaces. Motivated by all of the above mentioned facts, the first main aim of this paper is to give some new characterizations of Muckenhoupt type weights such as $A_p$, $A(p,1)$, and $A_p(\varphi)$ by establishing the boundedness of maximal operators on the weighted Morrey and Lorentz spaces. In particular, we give a weighted norm inequality of weak type for new dyadic maximal operators associated to the $A_p^{\Delta,\eta}(\varphi)$ dyadic weights. These results are given in Section \ref{section3} of the paper.
\vskip 5pt
The second main aim of this paper is to study the boundedness of sublinear operators, including many interesting operators in harmonic analysis such as the Calder\'{o}n-Zygmund operator, the Hardy-Littlewood maximal operator, strongly singular integrals, and so on, on the weighted Morrey spaces.
\vskip 5pt
Let us first give the definition of sublinear operators with strongly singular kernels. Let the operator $\mathcal{T}$ be well defined on the space $C^\infty_c(\mathbb R^n)$ of all infinitely differentiable functions with compact support. It is said that $\mathcal{T}$ is a strongly singular sublinear operator if it is a linear or sublinear operator satisfying the following size condition
\begin{align}\label{ineq-sub}
\left|\mathcal{T}f(x)\right|\leq C\int_{\mathbb R^n}\frac{|f(y)|}{|x-y|^{n+\lambda}}dy, \text{\; for\; a.e \;} x\not\in \text{supp}{f},
\end{align}
for all $f\in C^\infty_c(\mathbb R^n)$, where $\lambda$ is a non-negative real number.
\vskip 5pt
For a measurable function $b$, the commutator operator $[b, \mathcal{T}]$
is defined as a linear or a sublinear operator such that
\begin{align}\label{ineq-sub-com}
\left|[b, \mathcal{T}]f(x)\right|\leq C\int_{\mathbb R^n}\frac{|f(y)||b(x)-b(y)|}{|x-y|^{n+\lambda}}dy, \text{\; for\; a.e \;} x\not\in \text{supp}{f},
\end{align}
for every $f\in C^\infty_c(\mathbb R^n)$. For $\lambda\leq 0$, the sublinear operators $\mathcal{T}$ and $[b, \mathcal{T}]$ have been investigated by many authors; see, for example, the works \cite{Guliyev2011}, \cite{Kokilashvili2016}, \cite{Soria1994} and the references therein. In Section \ref{section4} of the paper, we establish the boundedness of the sublinear operators $\mathcal{T}$ and $[b, \mathcal{T}]$ for $\lambda\geq 0$ on the weighted Morrey type spaces. As an application, we obtain some new results on the boundedness of strongly singular integral operators and their commutators with symbols in the BMO space on the weighted Morrey spaces. Moreover, maximal singular integral operators of Andersen-John type are studied on two-weighted Morrey spaces of vector-valued functions in Section \ref{section4}.
\section{Some notations and definitions}\label{section2}
Throughout the whole paper, we denote by $C$ a positive geometric constant that is independent of the main parameters, but can change from line to line.
We also write $a\lesssim b$ to mean that there is a positive constant $C$, independent of the main parameters, such that $a \le Cb$. The symbol $f\simeq g$ means that $f$ is equivalent to $g$ (i.e., $C^{-1} f\leq g \leq Cf$). As usual, $\omega(\cdot)$ is a non-negative weighted function on $\mathbb{R}^n$. Denote $\omega(B)^{\alpha }=\big(\int_B\omega(x)dx\big)^{\alpha}$, for $\alpha\in\mathbb R$. Remark that if $\omega(x) = |x|^{\beta}$ for $\beta > -n$, then we have
\begin{align}\label{ineq-power}
\omega(B_r(0))=\int_{B_r(0)}|x|^{\beta}dx\simeq r^{\beta+n}.
\end{align}
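Indeed, passing to polar coordinates gives, for $\beta>-n$,
\begin{align*}
\int_{B_r(0)}|x|^{\beta}dx=\sigma(\mathbb S^{n-1})\int_0^r \rho^{\beta+n-1}d\rho=\frac{\sigma(\mathbb S^{n-1})}{\beta+n}\,r^{\beta+n},
\end{align*}
where $\sigma(\mathbb S^{n-1})$ is the surface measure of the unit sphere.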
We also denote by $B_r(x_0)=\{x\in\mathbb R^n:|x-x_0|<r\}$ the ball of radius $r$ centered at $x_0$, and let $rB$ denote the ball with the same center as $B$ whose radius is $r$ times the radius of $B$.
Now, we are in a position to give some notations and definitions of weighted Morrey spaces.
\begin{definition}
Let $1 \le q < \infty$, $0 < \kappa < 1$, and let $\omega_1$ and $\omega_2$ be two weighted functions. Then the two-weighted Morrey space is defined by
\begin{align*}
\mathcal{B}^{q,\kappa}_{\omega_1,\omega_2}(\mathbb{R}^n)=\{f\in L^q_{\omega_2,{\rm loc}}(\mathbb{R}^n):\|f\|_{\mathcal{B}^{q,\kappa}_{\omega_1,\omega_2}(\mathbb{R}^n)}<\infty \},
\end{align*}
where
\begin{align*}
\|f\|_{\mathcal{B}^{q,\kappa}_{\omega_1,\omega_2}(\mathbb{R}^n)}=\sup\limits_{\rm ball\,B} \Big(\frac{1}{\omega_1(B)^{\kappa}}\int_{B}|f(x)|^q\omega_2(x)dx \Big)^{\frac{1}{q}}.
\end{align*}
\end{definition}
It is easy to see that $\mathcal{B}^{q,\kappa}_{\omega_1,\omega_2}(\mathbb{R}^n)$ is a Banach space. Note that if $\omega_1=\omega, \omega_2=1$, we then write $\mathcal B^{q,\kappa}(\omega,\mathbb R^n):=\mathcal B^{q,\kappa}_{\omega_1,\omega_2}(\mathbb R^n)$. Also, if $\omega_1=\omega_2=\omega$, then we denote $\mathcal B^{q,\kappa}
_\omega(\mathbb R^n):= {\mathcal B}^{q,\kappa}_{\omega_1,\omega_2}(\mathbb R^n)$. In particular, for $\omega=1$ we write $\mathcal B^{q,\kappa}
(\mathbb R^n):=\mathcal B^{q,\kappa}_\omega(\mathbb R^n)$.
\begin{definition}
Let $1 \le q < \infty, 0 < \kappa < 1$. The local Morrey space is defined by
\begin{align*}
\mathcal{B}^{q,\kappa}_{\rm loc}(\mathbb{R}^n)=\{f\in L^q_{{\rm loc}}(\mathbb{R}^n):\|f\|_{\mathcal{B}^{q,\kappa}_{\rm loc}(\mathbb{R}^n)}<\infty \},
\end{align*}
where
\begin{align*}
\|f\|_{\mathcal{B}^{q,\kappa}_{\rm loc}(\mathbb{R}^n)}=\sup\limits_{x\in\mathbb{R}^n,0<R<1} \Big(\frac{1}{|B_R(x)|^{\kappa}}\int_{B_R(x)}|f(y)|^q dy \Big)^{\frac{1}{q}}.
\end{align*}
\end{definition}
Note that for $1\leq q\leq p<\infty$, the local Morrey space $\mathcal{B}^{q,1-\frac{q}{p}}_{\rm loc}(\mathbb{R}^n)$ has some important applications to the Navier-Stokes equations and other evolution equations (see \cite{Fe1993, T1992} for more details).
\begin{definition}
Let $1 \le q < \infty$ and $0 < \kappa < 1$. The weighted inhomogeneous Morrey space is defined by
\begin{align*}
{B}^{q,\kappa}_\omega(\mathbb{R}^n)=\{f\in L^q_{{\omega, \rm loc}}(\mathbb{R}^n):\|f\|_{{B}^{q,\kappa}_\omega(\mathbb{R}^n)}<\infty \},
\end{align*}
where
\begin{align*}
\|f\|_{{B}^{q,\kappa}_{\omega}(\mathbb{R}^n)}=\sup\limits_{x\in\mathbb{R}^n,R\geq 1} \Big(\frac{1}{\omega(B_R(x))^{\kappa}}\int_{B_R(x)}|f(x)|^q\omega(x) dx \Big)^{\frac{1}{q}}.
\end{align*}
\end{definition}
If $\omega=1$ and $1\leq q\leq p<\infty$, then the inhomogeneous Morrey space ${B}^{q,1-\frac{q}{p}}_\omega(\mathbb{R}^n)$ was introduced by Alvarez, Guzm\'{a}n-Partida and Lakey (see \cite{AGL2000} for more details). Note that $\mathcal{B}^{q,\kappa}_{\rm loc}(\mathbb{R}^n)$ and ${B}^{q,\kappa}_\omega(\mathbb{R}^n)$ are two Banach spaces.
\begin{definition}
Let $1 \le q < \infty, 0 < \kappa < 1$ and $\omega$ be a weighted function. The weighted Morrey space is defined by
\begin{align*}
\mathcal{L}^{q,\kappa}_{\omega}(\mathbb{R}^n)=\{f\in L^q_{\omega,{\rm loc}}(\mathbb{R}^n):\|f\|_{\mathcal{L}^{q,\kappa}_{\omega}(\mathbb{R}^n)}<\infty \},
\end{align*}
where
\begin{align*}
\|f\|_{\mathcal{L}^{q,\kappa}_{\omega}(\mathbb{R}^n)}=\sup\limits_{\rm cube ~ Q} \Big(\frac{1}{\omega(Q)^{\kappa}}\int_{Q}|f(x)|^q\omega(x) dx \Big)^{\frac{1}{q}}.
\end{align*}
\end{definition}
From this, for convenience, we denote $M^p_{q,\omega}(\mathbb{R}^n):=\mathcal{L}^{q,1-\frac{q}{p}}_{\omega}(\mathbb{R}^n)$ for the case $0 < q < p < \infty$.
\begin{definition}
Let $0 < q \le p < \infty$ and $\omega$ be a weighted function. Then the weighted weak Morrey space is defined by
\begin{align*}
WM^{p}_{q,\omega}(\mathbb{R}^n)=\{f\in L^q_{\omega,{\rm loc}}(\mathbb{R}^n):\|f\|_{WM^{p}_{q,\omega}(\mathbb{R}^n)}<\infty \},
\end{align*}
where
\begin{align*}
\|f\|_{WM^{p}_{q,\omega}(\mathbb{R}^n)}=\sup\limits_{\rm cube ~ Q}\frac{1}{\omega(Q)^{\frac{1}{q}-\frac{1}{p}}}\sup\limits_{\lambda>0}\lambda \Big(\int_{\{x\in Q:|f(x)|>\lambda \}}\omega(x) dx \Big)^{\frac{1}{q}}.
\end{align*}
\end{definition}
For a measurable function $f$ on $\mathbb{R}^n$, the distribution function of $f$ associated with the measure $\omega(x)dx$ is defined as follows
\begin{align*}
d_f(\alpha)=\omega\left(\{x\in \mathbb{R}^n: |f(x)|>\alpha \}\right).
\end{align*}
The decreasing rearrangement of $f$ with respect to the measure $\omega(x)dx$ is the function $f^*$ defined on $[0, \infty)$ as follows
\begin{align*}
f^*(t)=\inf \{s>0:d_f(s)\le t \}.
\end{align*}
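For example, if $f=\chi_E$ for a measurable set $E$ with $\omega(E)<\infty$, then $d_f(s)=\omega(E)$ for $0\le s<1$ and $d_f(s)=0$ for $s\ge 1$, so that $f^*=\chi_{[0,\omega(E))}$.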
\begin{definition}
(Section 2 in \cite{CHK1982}). Let $0 < p, q \le \infty$. The weighted Lorentz space $L^{p,q}_\omega (\mathbb{R}^n)$ is defined as the set of all measurable functions $f$ such that
$\|f\|_{L^{p,q}_\omega (\mathbb{R}^n)}<\infty$, where
\begin{align*}
\|f\|_{L^{p,q}_\omega (\mathbb{R}^n)}=\begin{cases}
\left(\frac{q}{p}\int_0^\infty \left[t^{\frac{1}{p}}f^*(t) \right]^q\frac{dt}{t} \right)^{\frac{1}{q}}, &{\rm if} ~ 0<q<\infty,\\
\sup\limits_{t>0}t^{\frac{1}{p}}f^*(t), &{\rm if}~ q=\infty.\end{cases}
\end{align*}
\end{definition}
Remark that if either $1 < p < \infty$ and $1 \le q \le \infty$, or $p = q = 1$, or $p = q = \infty$ then $L^{p,q}_\omega (\mathbb{R}^n)$ is a quasi-Banach space. Moreover, there is a constant $C > 0$ such that
\begin{align}\label{Lor-ineq}
C^{-1}\|f\|_{L^{p,q}_\omega (\mathbb{R}^n)}\le \sup\limits_{\|g\|_{L^{p',q'}_\omega(\mathbb R^n)}\le 1}\left|\int_{\,\mathbb{R}^n}f(x)g(x)\omega(x)dx \right|\le C\|f\|_{L^{p,q}_\omega (\mathbb{R}^n)}.
\end{align}
\begin{corollary}\label{Cor2.3-WZC2017}
{\rm(page 253 in \cite{H1966} and Corollary 2.3 in \cite{WZC2017})} If $0 < r < q < p < \infty$, $1\leq q_1\leq q_2\leq \infty$ and $\omega$ is a non-negative weighted function on $\mathbb{R}^n$, then there exists a constant $C > 0$ such that
\begin{align*}
C\|\cdot\|_{M^p_{r,\omega}(\mathbb{R}^n)}&\le \|\cdot\|_{WM^p_{q,\omega}(\mathbb{R}^n)}\le \|\cdot\|_{M^p_{q,\omega}(\mathbb{R}^n)}\le \|\cdot\|_{WM^p_{p,\omega}(\mathbb{R}^n)}\nonumber
\\
&=\|\cdot\|_{L^{p,\infty}_\omega (\mathbb{R}^n)}\leq \|\cdot\|_{L^{p,q_2}_\omega (\mathbb{R}^n)}\leq \|\cdot\|_{L^{p,q_1}_\omega (\mathbb{R}^n)}.
\end{align*}
\end{corollary}
Next, we present some basic facts on the class of weighted functions $A(p, 1)$ with $1 < p < \infty$. For further information on the weights, the interested readers may refer to the work \cite{CHK1982}. The weighted function $\omega(x)$ is in $A(p, 1)$ if there exists a positive constant $C$ such that for any cube $Q$, we have
\begin{align*}
\|\chi_Q \|_{L^{p,1}_\omega(\mathbb{R}^n)} \|\chi_Q\omega^{-1} \|_{L^{p',\infty}_\omega (\mathbb{R}^n)}\le C|Q|.
\end{align*}
\begin{lemma}\label{Lem2.8-CHK1982}
{\rm(Lemma 2.8 in \cite{CHK1982})} For $1\leq p<\infty$, we have $\omega \in A(p, 1)$ if and only if there exists a constant $C$ such that for any cube $Q$ and subset $E \subset Q$,
\begin{align*}
\frac{|E|}{|Q|}\le C\left(\frac{\omega(E)}{\omega(Q)} \right)^{\frac{1}{p}}.
\end{align*}
\end{lemma}
Remark that if $\omega\in A(p,1)$ with $1\leq p<\infty$ and $0<\kappa<1$, then $\mathcal{B}^{p,\kappa}_\omega(\mathbb R^n)=\mathcal{L}^{p,\kappa}_{\omega}(\mathbb{R}^n)$ with equivalence of norms.
\\
Let $1 \le r < \infty$ and $\vec{f}=\{f_k\}$ be a sequence of measurable functions on $\mathbb{R}^n$.
We denote
\begin{align*}
|\vec{f}(x)|_r=\left(\sum\limits_{k=1}^\infty |f_k(x)|^r \right)^{\frac{1}{r}}.
\end{align*}
As usual, the vector-valued space $X(\ell^r,\mathbb{R}^n)$ is defined as the set of all sequences of measurable functions $\vec{f}=\{f_k\}$ such that
\begin{align*}
\|\vec{f} \|_{X(\ell^r,\mathbb{R}^n)}=\||\vec{f}(\cdot)|_r \|_X<\infty,
\end{align*}
where $X$ is an appropriate Banach space.
Let us recall the definition of the BMO space of John and Nirenberg. For further
information on this space as well as its deep applications in harmonic analysis,
one can consult the famous book of Stein \cite{St1993}.
\begin{definition}
The bounded mean oscillation space $BMO(\mathbb{R}^n)$ is defined as the set of all functions $b\in L^1_{\rm loc}(\mathbb{R}^n)$ such that
\begin{align*}
\|b\|_{BMO(\mathbb{R}^n)}=\sup\limits_{\rm cube ~Q}\frac{1}{|Q|}\int_Q |b(x)-b_Q|dx<\infty,
\end{align*}
where $b_Q=\frac{1}{|Q|}\int_Q b(x)dx$.
\end{definition}
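A standard example of an unbounded function in this space is $b(x)=\log|x|$, which belongs to $BMO(\mathbb{R}^n)$ but not to $L^\infty(\mathbb{R}^n)$.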
\begin{lemma}\label{BMO-Lemma}
{\rm (\cite{St1993})}
If $1<p<\infty$, we then have
\begin{align*}
\|b\|_{BMO(\mathbb{R}^n)}\simeq \sup\limits_{\rm cube ~Q}\Big(\frac{1}{|Q|}\int_Q |b(x)-b_Q|^pdx\Big)^{\frac{1}{p}}:=\|b\|_{BMO^p(\mathbb R^n)}.
\end{align*}
\end{lemma}
\begin{proposition}\label{Pro-T1986}
{\rm (Proposition 3.2 in \cite{T1986})}
If $b\in BMO(\mathbb R^n)$, then
$$
|b_{2^{j+1}B}-b_B|\leq 2^n(j+1)\|b\|_{BMO(\mathbb R^n)},\,\textit{\rm for all}\, j\in\mathbb N.
$$
\end{proposition}
Let us recall the definition of $A_p$ weights. For further reading on $A_p$ weights, the reader may consult the interesting book \cite{Grafakos2008}.
\begin{definition}
Let $1 < p < \infty$. It is said that a weight $\omega \in A_p(\mathbb{R}^n)$ if there exists a constant $C$ such that for all cubes $Q$,
\begin{align*}
\left(\frac{1}{|Q|}\int_Q \omega(x)dx \right) \left(\frac{1}{|Q|}\int_Q\omega(x)^{-\frac{1}{p-1}}dx \right)^{p-1}\le C.
\end{align*}
\end{definition}
A weight $\omega \in A_1(\mathbb{R}^n)$ if there is a constant $C$ such that
\begin{align*}
M(\omega)(x)\le C\omega(x), \;{\rm for\; a.e.}~x\in \mathbb{R}^n.
\end{align*}
We denote $A_\infty(\mathbb{R}^n)=\mathop\cup\limits_{1\le p<\infty}A_p(\mathbb{R}^n)$.
\\
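A basic family of examples is given by the power weights: $\omega(x)=|x|^{\alpha}$ belongs to $A_p(\mathbb{R}^n)$, $1<p<\infty$, if and only if $-n<\alpha<n(p-1)$, and belongs to $A_1(\mathbb{R}^n)$ if and only if $-n<\alpha\le 0$.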
A notion closely related to $A_\infty(\mathbb{R}^n)$ is the reverse H\"{o}lder condition. If there exist $r > 1$ and a fixed constant $C$ such that
\begin{align*}
\Big(\frac{1}{|B|}\int_B \omega(x)^rdx \Big)^{\frac{1}{r}}\le \frac{C}{|B|}\int_B \omega(x) dx,
\end{align*}
for all balls $B \subset \mathbb{R}^n$, we then say that $\omega$ satisfies the reverse H\"{o}lder condition of order $r$ and write $\omega\in RH_r (\mathbb{R}^n)$. According to Theorem 19 and Corollary 21 in \cite{IMS2015}, $\omega\in A_\infty (\mathbb{R}^n)$ if and only if there exists some $r > 1$ such that
$\omega\in RH_r (\mathbb{R}^n)$. Moreover, if $\omega\in RH_r (\mathbb{R}^n),r>1$, then $\omega\in RH_{r+\varepsilon} (\mathbb{R}^n)$ for some $\varepsilon>0$. We thus write $r_\omega = \sup\{r > 1 : \omega\in RH_r (\mathbb{R}^n)\}$ to denote the
critical index of $\omega$ for the reverse H\"{o}lder condition.
\begin{proposition}\label{rever-Holder}
Let $\omega \in A_p(\mathbb{R}^n) \cap RH_r(\mathbb{R}^n), p \ge 1$ and $r > 1$. Then, there exist two constants $C_1, C_2 > 0$ such that
\begin{align*}
C_1\left( \frac{|E|}{|B|}\right)^p\le \frac{\omega(E)}{\omega(B)} \le C_2\left( \frac{|E|}{|B|}\right)^{\frac{r-1}{r}},
\end{align*}
for any ball $B$ and for any measurable subset $E$ of $B$.
\end{proposition}
\begin{proposition}\label{pro2.4DFan}
If $\omega\in A_p(\mathbb R^n)$, $1 \leq p < \infty$, then for any $f\in L^1_{\rm loc}(\mathbb R^n)$ and any ball $B \subset \mathbb R^n$, we have
$$
\dfrac{1}{|B|}\int_{B}|f(x)|dx\leq C\Big(\dfrac{1}{\omega(B)}\int_{B}|f(x)|^p\omega(x)dx \Big)^{\frac{1}{p}}.
$$
\end{proposition}
Next, we write $\omega\in \Delta_2$, the class of doubling weights, if there exists $D > 0$ such that for any cube $Q$, we have
\begin{align*}
\omega(2Q)\le D\omega(Q).
\end{align*}
It is known that if $\omega\in A_\infty(\mathbb{R}^n)$ then $\omega\in \Delta_2$.
Now, let us recall the class of $ A_p(\varphi)$ weights proposed by Tang in the work \cite{Ta2012}.
\begin{definition}
Let $1 < p < \infty$ and $\varphi(t)=(1+t)^{\alpha_0}$ for $\alpha_0>0$ and $t\ge 0$. We say that a weight $\omega\in A_p(\varphi)$ if there exists a constant $C$ such that for all cubes $Q$,
\begin{align*}
\Big(\frac{1}{\varphi(|Q|)|Q|}\int_Q \omega(x)dx \Big).\Big(\frac{1}{\varphi(|Q|)|Q|}\int_Q\omega(x)^{-\frac{1}{p-1}}dx \Big)^{p-1}\le C.
\end{align*}
A weight $\omega\in A_1(\varphi)$ if there is a constant $C$ such that
$$
M_\varphi(f)(x)\le C\omega(x), {\rm\; for\; a. e} ~x\in \mathbb{R}^n,
$$
where
\begin{align*}
M_\varphi(f)(x)=\sup\limits_{x\in {\rm cube }~Q}\frac{1}{\varphi(|Q|)|Q|}\int_Q |f(y)|dy.
\end{align*}
\end{definition}
Denote $A_\infty(\varphi)=\mathop\cup\limits_{1\le p<\infty} A_p(\varphi)$. It is useful to remark that the $A_p(\varphi)$ weights need not satisfy the doubling condition. For instance, $\omega(x)=(1+|x|)^{(-n+\eta)}$ for $0\leq\eta\leq n\alpha_0$ is in $A_1(\varphi)$, but does not belong to any $A_p$ class, and $\omega(x)dx$ is not a doubling measure. It is also important to note that $M_\varphi$ may fail to be bounded on the weighted Lebesgue space $L^p_\omega(\mathbb R^n)$ for $\omega\in A_p(\varphi)$; see Lemma 2.3 in \cite{Ta2012} for the precise statement. Similarly, in this paper we also introduce a class of dyadic weighted functions associated to the function $\varphi$ as follows.
\begin{definition}
Let $1<p<\infty$ and $0<\eta<\infty$. A weight $\omega\in A_p^{\Delta,\eta}(\varphi)$ if there exists a constant $C$ such that for all dyadic cubes $Q$,
\begin{align*}
\Big(\frac{1}{\varphi(|Q|)^{\eta}|Q|}\int_Q \omega(x)dx \Big).\Big(\frac{1}{\varphi(|Q|)^{\eta}|Q|}\int_Q\omega(x)^{-\frac{1}{p-1}}dx \Big)^{p-1}\le C.
\end{align*}
\end{definition}
It is obvious that $A_{p_1}^{\Delta,\eta}(\varphi)\subset A_{p_2}^{\Delta,\eta}(\varphi)$ for all $1<p_1<p_2<\infty$. It is also easy to show that $A_p(\mathbb{R}^n)\subset A_p(\varphi)\subset A_p^{\Delta,\eta}(\varphi)$ with $1<p<\infty$ and $0< \eta<\infty$; the first inclusion holds simply because $\varphi(t)\geq 1$ for all $t\geq 0$. In particular, $A_1(\mathbb R^n) \subset A_1(\varphi)$.
Next, we give the definitions of the maximal operators $M_\omega$ and $M^\Delta_{\varphi,\eta}$ as follows
$$
M_\omega(f)(x)=\sup\limits_{x\in\text{ball\;} B}\frac{1}{\omega(5B)}\int_B|f(y)|\omega(y)dy,$$
$$M^\Delta_{\varphi,\eta}(f)(x)=\sup\limits_{x\in {\rm dyadic\,cube}\,Q}\frac{1}{\varphi(|Q|)^\eta |Q|}\int_Q |f(y)|dy, {\;\rm for~all}~0<\eta<\infty.
$$
Remark that by arguments similar to those of Lemma 2.1 in \cite{Ta2012}, we also have
$$M^\Delta_{\varphi,\eta}(f)(x)\lesssim \left(M_\omega(|f|^p)(x)\right)^{\frac{1}{p}}, \;\;x\in\mathbb R^n,$$
where $\omega\in A_{p}^{\Delta,\eta}(\varphi)$ for $0<\eta<\infty$ and $1<p<\infty$. Moreover, we also get the same result as in Lemma 2.3 of the paper \cite{Ta2012}.
\begin{lemma}\label{Lemma-dyadic}
Let $1<p<\infty$ and $\omega\in A_p^{\Delta,\eta}(\varphi)$. Then, for any $p<r<\infty$, we have
$$
\|M^\Delta_{\varphi,\eta}(f)\|_{L^{r}_\omega(\mathbb R^n)}\leq C \|f\|_{L^{r}_\omega(\mathbb R^n)}.
$$
\end{lemma}
It should be noted that the inequality in Lemma \ref{Lemma-dyadic} may fail for $r=p$.
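To see how the damping factor $\varphi(|Q|)^{\eta}$ acts on averages over large cubes, one can evaluate a one-dimensional analogue of $M^{\Delta}_{\varphi,\eta}$ on a dyadic grid; the sketch below is purely illustrative, with arbitrary data and parameters:
\begin{verbatim}
import numpy as np

# Illustrative 1D dyadic varphi-maximal operator on [0, 1); inputs arbitrary.
alpha0 = 1.0
phi = lambda t: (1.0 + t)**alpha0

def dyadic_phi_maximal(f, eta=1.0):
    n = len(f)                       # n assumed to be a power of two
    M = np.zeros(n)
    length = 1
    while length <= n:
        size = length / n            # |Q| for dyadic cubes at this level
        for start in range(0, n, length):
            avg = np.abs(f[start:start+length]).mean() / phi(size)**eta
            M[start:start+length] = np.maximum(M[start:start+length], avg)
        length *= 2
    return M

f = np.random.default_rng(1).normal(size=256)
print(dyadic_phi_maximal(f, eta=1.0)[:4])
\end{verbatim}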
\begin{theorem}\label{Theo3.1-AJ1981}
{\rm (Theorem 3.1 in \cite{AJ1981})} If $1 < p < \infty$, then the operator $M$ is bounded from $L^p_\omega(\ell^r, \mathbb{R}^n)$ to itself if and only if $\omega\in A_p$.
\end{theorem}
\begin{theorem}\label{Theo2.12-CDD2017}
{\rm(Theorem 2.12 in \cite{CDD2017})} If $1 < p < r < \infty$, then the operator $M$ is
bounded from $L^{p,1}_\omega(\ell^r, \mathbb{R}^n)$ to $L^{p,\infty}_\omega(\ell^r, \mathbb{R}^n)$ if and only if $\omega\in A(p, 1)$.
\end{theorem}
\begin{lemma}\label{Lem2.3-Ta2012}
{\rm(Lemma 2.3 in \cite{Ta2012})} If $1 \le p < \infty$, then the operator $M_\varphi$ is bounded from $L^p_\omega(\mathbb{R}^n)$ to $L^{p,\infty}_\omega(\mathbb{R}^n)$ if and only if $\omega\in A_p(\varphi)$.
\end{lemma}
In 1981, Andersen and John \cite{AJ1981} established the weighted norm inequalities for vector-valued maximal functions and maximal singular integrals on the space $L^p_\omega(\ell^r,\mathbb R^n)$. Now, let us recall the definition of the maximal singular integrals associated to kernels of Andersen-John type. For more details, see the work \cite{AJ1981}.
\begin{definition}
Let $K$ be the kernel such that
\begin{align}
|K(x)|&\le \frac{A}{|x|^n}, \quad |\hat{K}(x)|\le A;\\
|K(x-y)-K(x)|&\le \mu (|y|/|x|)|x|^{-n}, \quad \text{for all } |x|\ge 2|y|;
\end{align}
where $A$ is a constant and $\mu$ is non-decreasing on the positive real half-line,
$\mu(2t)\le C\mu (t)$ for all $t > 0$, and satisfies the Dini condition
\begin{align}
\int_0^1\frac{\mu(t)}{t}dt<\infty.
\end{align}
Then, the maximal singular integral operator $T^*$ is defined by
\begin{align*}
T^*(f)(x)= \mathop{\rm sup}\limits_{\varepsilon >0}\Big|\int_{\,\,|x-y|\geq \varepsilon}K(x-y)f(y)dy\Big|.
\end{align*}
\end{definition}
If $\{K_k (x)\}$ denotes a sequence of singular convolution kernels satisfying the above conditions (2.3)-(2.5) with a uniform constant $A$ and a fixed function $\mu$ independent of $k$, then we write $T^*(\vec{f})=\{T^*_k(f_k) \}$, where $T^*_k$ is the operator above corresponding to the kernel $K_k$.
\begin{theorem}\label{Theo-AJ1981}
{\rm(Theorem 5.2 in \cite{AJ1981})} Let $1 < r < \infty, 1 < p < \infty,$ and suppose $\omega\in A_p$. There exists a constant $C$ such that
\begin{align*}
\|T^*(\vec{f}) \|_{L^p_\omega(\ell^r,\mathbb{R}^n)}\le C\|\vec{f} \|_{L^p_\omega(\ell^r,\mathbb{R}^n)}, \text{ for all} ~f\in L^p_\omega(\ell^r,\mathbb{R}^n).
\end{align*}
\end{theorem}
Let $b$ be a measurable function. We denote by $\mathcal{M}_b$ the multiplication operator defined by $\mathcal{M}_bf (x)=b(x) f (x)$ for any measurable function $f$. If $\mathcal{H}$ is a linear or sublinear operator on some measurable function space, the commutator of Coifman-Rochberg-Weiss type formed by $\mathcal{M}_b$ and $\mathcal{H}$ is defined by $[\mathcal{M}_b, \mathcal{H}]f (x)=(\mathcal{M}_b\mathcal{H}-\mathcal{H}\mathcal{M}_b) f (x)$.
\section{The results about the boundedness of maximal operators}\label{section3}
By using Theorem \ref{Theo3.1-AJ1981} and arguing as in Theorem 1.1 in \cite{WZC2017}, we immediately have the following useful characterization of the Muckenhoupt weights through the boundedness of the Hardy-Littlewood maximal operator on vector-valued function spaces.
\begin{theorem}
Let $1 < q < p < \infty, 1 \le r < \infty$. Then, the following statements are equivalent:
\begin{enumerate}
\item[(1)] $\omega\in A_p$;
\item[(2)] $M$ is a bounded operator from $L^p_\omega(\ell^r,\mathbb{R}^n)$ to $L^{p,\infty}_\omega(\ell^r,\mathbb{R}^n)$;
\item[(3)] $M$ is a bounded operator from $L^p_\omega(\ell^r,\mathbb{R}^n)$ to $M^{p}_{q,\omega}(\ell^r,\mathbb{R}^n)$;
\item[(4)] $M$ is a bounded operator from $L^p_\omega(\ell^r,\mathbb{R}^n)$ to $WM^{p}_{q,\omega}(\ell^r,\mathbb{R}^n)$.
\end{enumerate}
\end{theorem}
Now, we give a new characterization for the class of $A(p,1)$ weights.
\begin{theorem}
Let $1 < q < p < r < \infty$. The following statements are equivalent:
\begin{enumerate}
\item[(1)] $\omega\in A(p,1)$;
\item[(2)] $M$ is a bounded operator from $L^{p,1}_\omega(\ell^r,\mathbb{R}^n)$ to $L^{p,\infty}_\omega(\ell^r,\mathbb{R}^n)$;
\item[(3)] $M$ is a bounded operator from $L^{p,1}_\omega(\ell^r,\mathbb{R}^n)$ to $M^{p}_{q,\omega}(\ell^r,\mathbb{R}^n)$;
\item[(4)] $M$ is a bounded operator from $L^{p,1}_\omega(\ell^r,\mathbb{R}^n)$ to $WM^{p}_{q,\omega}(\ell^r,\mathbb{R}^n)$.
\end{enumerate}
\end{theorem}
\begin{proof}
Note that Theorem \ref{Theo2.12-CDD2017} gives the equivalence of (1) and (2). By Corollary \ref{Cor2.3-WZC2017}, we immediately have (2) $\Rightarrow$ (3) $\Rightarrow$ (4). Therefore, to complete the proof of the theorem, we need to prove (4) $\Rightarrow$ (1). For any cube $Q$, by the relation (\ref{Lor-ineq}), we find a function $f$ such that $\|f\|_{L^{p,1}_\omega(\mathbb{R}^n)}\le 1$ and
\begin{align}\label{ineq-Ap1}
\int_Q|f(x)|dx\ge \left|\,\int_{\mathbb{R}^n}f(x)\chi_Q\omega^{-1}\omega dx \right|\gtrsim \|\chi_Q\omega^{-1} \|_{L^{p',\infty}_\omega(\mathbb{R}^n)}.
\end{align}
It is obvious that $Q=\{x\in Q: M(f)(x)>\lambda \}$, where $\lambda=\frac{1}{2|Q|}\int_Q|f(x)|dx$. Thus, because $M$ is a bounded operator from $L^{p,1}_\omega(\mathbb{R}^n)$ to $WM^{p}_{q,\omega}(\mathbb{R}^n)$, we have
\begin{align*}
\lambda\omega(Q)^{\frac{1}{p}}&=\frac{1}{\omega(Q)^{\frac{1}{q}-\frac{1}{p}}}\lambda\Big(\int_{\{x\in Q: M(f)(x)>\lambda \}}\omega(x)dx \Big)^{\frac{1}{q}}\\
&\le \|M(f) \|_{WM^p_{q,\omega}(\mathbb{R}^n)}\le \|f\|_{L^{p,1}_\omega(\mathbb{R}^n)}\le 1.
\end{align*}
As a consequence, by (\ref{ineq-Ap1}), we get
\begin{align*}
\frac{1}{2|Q|}\|\chi_Q\omega^{-1} \|_{L^{p',\infty}_\omega(\mathbb{R}^n)}.\omega(Q)^{\frac{1}{p}}\lesssim 1.
\end{align*}
From this, we have
\begin{align*}
\|\chi_Q\|_{L^{p,1}_\omega(\mathbb{R}^n)}\|\chi_Q\omega^{-1} \|_{L^{p',\infty}_\omega(\mathbb{R}^n)}\lesssim |Q|.
\end{align*}
This implies that $\omega\in A(p,1)$, and the theorem is completely proved.
\end{proof}
Next, we establish the boundedness results for pseudo-differential operators of order $0$ on weighted Lorentz spaces.
For $m\in\mathbb R$, we say that the function $a(x,\xi)\in C^{\infty}(\mathbb R^n\times \mathbb R^n)$ is a
symbol of order $m$ if it satisfies the following inequality
$$
|\partial^{\beta}_x\partial^{\alpha}_\xi a(x,\xi)|\leq C_{\alpha,\beta}(1+|\xi|)^{m-|\alpha|},
$$
for all multi-indices $\alpha$ and $\beta$, where $C_{\alpha,\beta} > 0$ is independent of $x$ and $\xi$. Then, a pseudo-differential operator is a mapping $f\to T_{a}(f)$ given by
\begin{align*}
T_a(f)(x)=\int_{\mathbb R^n} a(x,\xi)\widehat f(\xi)e^{2\pi i x\xi}d\xi.
\end{align*}
Remark that $T_a$ is well defined on the space of Schwartz functions $S(\mathbb R^n)$ or on the space of all infinitely differentiable functions with compact support $C^{\infty}_c(\mathbb R^n)$, where $\widehat f$ is the Fourier transform of the function $f$.
\begin{lemma}\label{pseudo1}
Let $1 < q \leq p < \infty$, $\omega\in A_p(\mathbb R^n)$ and $T_a$
be a pseudo-differential operator of order $0$. Then, $T_a$ extends to a bounded operator from $L^{p,q}_\omega(\mathbb R^n)$ to $L^{p,\infty}_\omega(\mathbb R^n)$.
\end{lemma}
\begin{proof}
By Theorem 2 and Theorem 4 in \cite{CHK1982}, we have
\begin{align}
\|M(f)\|_{L^{p,\infty}_\omega(\mathbb R^n)}\lesssim \|f\|_{L^{p,q}_\omega(\mathbb R^n)},\,\textit{\rm for all}\,f\in C^{\infty}_c(\mathbb R^n).\nonumber
\end{align}
Next, by arguing as in Theorem 2 in \cite{G2016}, we see that
$$
|T_a(f)(\cdot)|\lesssim M(f)(\cdot),\,\textit{\rm for all}\,f\in C^{\infty}_c(\mathbb R^n).
$$
Thus,
\begin{align}
\|T_a(f)\|_{L^{p,\infty}_\omega(\mathbb R^n)}\lesssim \|f\|_{L^{p,q}_\omega(\mathbb R^n)},\,\textit{\rm for all}\,f\in C^{\infty}_c(\mathbb R^n).\nonumber
\end{align}
As mentioned above, since $C^{\infty}_c(\mathbb R^n)$ is dense in $L^{p,q}_\omega(\mathbb R^n)$ (see Corollary 3.2 in \cite{NTY2004}), we immediately have the desired result.
\end{proof}
By using Lemma \ref{pseudo1} and Corollary \ref{Cor2.3-WZC2017}, and applying the Lorentz-space version of the Marcinkiewicz interpolation theorem as in the proof of Theorem 3 in \cite{CHK1982}, we obtain the following useful result.
\begin{theorem}\label{pseudo2}
Let $1 < p < \infty$, $1<q\leq \infty$, $\omega\in A_p(\mathbb R^n)$ and $T_a$
be a pseudo-differential operator of order $0$. Then, the following statements are true:
\begin{enumerate}
\item[(1)] $T_a$ extends to a bounded operator from $L^{p,q}_\omega(\mathbb R^n)$ to $L^{p,q}_\omega(\mathbb R^n)$;
\item[(2)] $T_a$ extends to a bounded operator from $L^{p,q}_\omega(\mathbb{R}^n)$ to $M^{p}_{q,\omega}(\mathbb{R}^n)$;
\item[(3)] $T_a$ extends to a bounded operator from $L^{p,q}_\omega(\mathbb{R}^n)$ to $WM^{p}_{q,\omega}(\mathbb{R}^n)$.
\end{enumerate}
\end{theorem}
For $1 < p < \infty$, by Lemma \ref{Lem2.8-CHK1982}, we observe that $\omega\in A(p,1)$ implies $\omega\in \Delta_2$. Thus, combining this with Theorem 3.1 in \cite{KS2009}, we can get the following result.
\begin{theorem}
If $1 < p < \infty$, $0 < \kappa < 1$ and $\omega\in A(p,1)$, then the operator $M_\omega$ is bounded on $\mathcal{L}^{p,\kappa}_\omega(\mathbb{R}^n)$.
\end{theorem}
Similarly to the known characterizations of the $A_p$ weights given in \cite{WZC2017}, we also have other characterizations of the $A_p(\varphi)$ weights as follows.
\begin{theorem}\label{max-phi}
Let either $1 < q < p < \infty$ or $0 < q < p = 1$. Then, the following statements are equivalent:
\begin{enumerate}
\item[(1)] $\omega\in A_p(\varphi)$;
\item[(2)] $M_\varphi$ is a bounded operator from $L^{p}_\omega(\mathbb{R}^n)$ to $L^{p,\infty}_\omega(\mathbb{R}^n)$;
\item[(3)] $M_\varphi$ is a bounded operator from $L^{p}_\omega(\mathbb{R}^n)$ to $M^{p}_{q,\omega}(\mathbb{R}^n)$;
\item[(4)] $M_\varphi$ is a bounded operator from $L^{p}_\omega(\mathbb{R}^n)$ to $WM^{p}_{q,\omega}(\mathbb{R}^n)$.
\end{enumerate}
\end{theorem}
\begin{proof}
By Lemma \ref{Lem2.3-Ta2012} and Corollary \ref{Cor2.3-WZC2017}, it is clear that (1) $\Leftrightarrow$ (2) and (2) $\Rightarrow$ (3) $\Rightarrow$ (4). Thus, to complete the proof, it remains to prove (4) $\Rightarrow$ (1), as follows.
\vskip 5pt
In the case $1 < q < p < \infty$, let $Q$ be any cube and take $f_\varepsilon=(\omega+\varepsilon)^{1-p'}\chi_Q$ for $\varepsilon>0$, where $p'$ is the
conjugate exponent of $p$, i.e. $\frac{1}{p}+\frac{1}{p'}=1$. It immediately follows that $f_\varepsilon\in L^p_\omega(\mathbb{R}^n)$.
For any $0 < \lambda < \frac{(\omega+\varepsilon)^{1-p'}(Q)}{\varphi(|Q|)|Q|}$ and any $x\in Q$, it is clear that
\begin{align*}
M_\varphi(f_\varepsilon)(x)\ge \frac{1}{\varphi(|Q|)|Q|}\int_Q|f_\varepsilon(y)|dy=\frac{1}{\varphi(|Q|)|Q|}\int_Q (\omega+\varepsilon)^{1-p'}dy>\lambda.
\end{align*}
Hence, we obtain
\begin{align*}
Q=\{x\in Q: M_\varphi(f_\varepsilon)(x)>\lambda \}.
\end{align*}
Consequently, because $M_\varphi$ is a bounded operator from $L^p_\omega(\mathbb{R}^n)$ to $WM^p_{q,\omega}(\mathbb{R}^n)$, we infer
\begin{align}\label{omegaQ-weak}
\lambda\omega(Q)^{\frac{1}{p}}&=\frac{1}{\omega(Q)^{\frac{1}{q}-\frac{1}{p}}}\lambda\Big(\int_{\{x\in Q: M_\varphi(f_\varepsilon)(x)>\lambda \}}\omega(x)dx \Big)^{\frac{1}{q}}\notag\\
&\le \|M_\varphi(f_\varepsilon) \|_{WM^p_{q,\omega}(\mathbb{R}^n)}\lesssim \|f_\varepsilon\|_{L^{p}_\omega(\mathbb{R}^n)}=\Big(\int_Q (\omega+\varepsilon)^{-p'}\omega(x)dx \Big)^{\frac{1}{p}}.
\end{align}
Thus, by choosing $\lambda=\frac{(\omega+\varepsilon)^{1-p'}(Q)}{2\varphi(|Q|)|Q|}$, we get
\begin{align*}
\Big(\frac{1}{\varphi(|Q|)|Q|} \int_Q (\omega+\varepsilon)^{-p'}\omega(x)dx\Big)^p.\Big(\int_Q\omega(x)dx \Big)\lesssim \int_Q (\omega+\varepsilon)^{-p'}\omega(x)dx,
\end{align*}
which implies that
\begin{align*}
\Big(\frac{1}{\varphi(|Q|)|Q|} \int_Q\omega(x)dx\Big) \Big( \frac{1}{\varphi(|Q|)|Q|} \int_Q (\omega+\varepsilon)^{-p'}\omega(x)dx\Big)^{p-1}\lesssim 1,
\end{align*}
for all $\varepsilon>0$. By letting $\varepsilon\to 0^+$ and using the monotone convergence theorem, we obtain $\omega\in A_p(\varphi)$.
\vskip 5pt
In the case $0 < q < p = 1$, let us fix $Q$ and take any cube $Q_1 \subset Q$. Thus, we choose $f = \chi_{Q_1}$. For any $0 < \lambda <\frac{|Q_1|}{\varphi(|Q|)|Q|}$, by estimating as (\ref{omegaQ-weak}) above, we immediately have
\begin{align*}
\lambda\Big( \int_{Q}\omega(x)dx \Big)\le \int_{Q_1}\omega(x)dx.
\end{align*}
Next, by choosing $\lambda=\frac{|Q_1|}{2\varphi(|Q|)|Q|}$, we infer
\begin{align*}
\frac{1}{|Q|}\int_Q\omega(x)dx\lesssim \frac{1}{|Q_1|}\int_{Q_1}\omega(x)dx, \text{ for any} ~Q_1\subset Q.
\end{align*}
Hence, by the definition of operator $M_\varphi$ and the Lebesgue differentiation theorem, it follows that
\begin{align*}
M_\varphi(\omega)(x)\lesssim \omega(x), \text{ for \;a.e.}\; x\in\mathbb{R}^n,
\end{align*}
which gives $\omega\in A_1(\varphi)$.
\end{proof}
In the final part of this section, we give a weighted norm inequality of weak type for the new dyadic maximal operator $M^\Delta_{\varphi,2\eta}$ on the vector-valued Lebesgue spaces with weighted functions in $A_p^{\Delta,\eta}(\varphi)$.
\begin{theorem}\label{max-Delta-2eta}
If $1 < p < r < \infty$ and $\omega \in A_p^{\Delta,\eta}(\varphi)$ for $\eta>0$, then the operator $M^\Delta_{\varphi,2\eta}$ is bounded from $L^p_\omega(\ell^r,\mathbb{R}^n)$ to $L^{p,\infty}_\omega(\ell^r,\mathbb{R}^n)$.
\end{theorem}
\begin{proof}
Let $\vec{f}\in \vec S$ and $\alpha > 0$, where $\vec S$ is the linear space of sequences $\vec f = \{f_k\}$ such that each $f_k(x)$ is a simple function on $\mathbb R^n$ and $f_k(x)\equiv 0$ for all sufficiently large $k$. By using Lemma 2.5 in \cite{Ta2012}, there exists a disjoint union of maximal dyadic cubes $\{Q_j\}$ such that
\begin{align}\label{ineq-fr}
|\vec{f}(x)|_r \le \alpha,x\notin \Omega=\cup_{j=1}^\infty Q_j;
\end{align}
\begin{align}\label{ineq-fr-alpha}
\alpha \le \frac{1}{\varphi(|Q_j|)^\eta|Q_j|}\int_{Q_j}|\vec{f}(x)|_rdx &\le 2^n\varphi(4n).\alpha, \text{for all} ~j\in\mathbb{Z}^+.
\end{align}
Now, we decompose $\vec{f}=\vec{f'}+\vec{f^{''}}$, where $\vec{f'}=\{f'_k\}$ with $f'_k(x)=f_k(x)\chi_{\mathbb{R}^n\backslash \Omega}(x)$. This gives
\begin{align*}
|M^\Delta_{\varphi,2\eta}(\vec{f})(x)|_r\le |M^\Delta_{\varphi,2\eta}(\vec{f'})(x)|_r +|M^\Delta_{\varphi,2\eta}(\vec{f^{''}})(x)|_r.
\end{align*}
As a consequence, it suffices to prove the following two estimates
\begin{align}\label{esti-omega1}
\omega\left(\{x\in\mathbb{R}^n: |M^\Delta_{\varphi,2\eta}(\vec{f'})(x)|_r>\alpha \} \right)\lesssim \alpha^{-p}\|\vec{f}\|^p_{L^p_\omega(\ell^r, \mathbb R^n)},
\end{align}
and
\begin{align}\label{esti-omega2}
\omega\left(\{x\in\mathbb{R}^n: |M^\Delta_{\varphi,2\eta}(\vec{f^{''}})(x)|_r>\alpha \} \right)\lesssim \alpha^{-p}\|\vec{f}\|^p_{L^p_\omega(\ell^r, \mathbb R^n)}.
\end{align}
By Lemma \ref{Lemma-dyadic}, for $\omega \in A_p^{\Delta,\eta}(\varphi)$ we have
$$\int_{\mathbb{R}^n}|M^\Delta_{\varphi,2\eta}(f'_k)(x)|^r\omega(x)dx\leq \int_{\mathbb{R}^n}|M^\Delta_{\varphi,\eta}(f'_k)(x)|^r\omega(x)dx
\lesssim \int_{\mathbb{R}^n}|f'_k(x)|^r\omega(x)dx.$$
This implies that
\begin{align*}
\int_{\mathbb{R}^n}|M^\Delta_{\varphi,2\eta}(\vec f')(x)|^r_r\omega(x)dx&=\int_{\mathbb{R}^n}\sum_{k=1}^\infty |M^\Delta_{\varphi,2\eta}(f'_k)(x)|^r\omega(x)dx\\
&=\sum_{k=1}^\infty \int_{\mathbb{R}^n}|M^\Delta_{\varphi,2\eta}(f'_k)(x)|^r\omega(x)dx\\
&\lesssim\sum_{k=1}^\infty \int_{\mathbb{R}^n}|f'_k(x)|^r\omega(x)dx\\
&\lesssim\int_{\mathbb{R}^n}|\vec f'(x)|_r^r\omega(x)dx.
\end{align*}
Hence, by the Chebyshev inequality, it immediately follows that
\begin{align}\label{ineq-Chebysev}
\omega\left(\{x\in\mathbb{R}^n: |M^\Delta_{\varphi,2\eta}(\vec{f'})(x)|_r>\alpha \} \right)\lesssim \alpha^{-r}\|\vec{f'}\|^r_{L^r_\omega(\ell^r,\mathbb R^n)}.
\end{align}
On the other hand, by (\ref{ineq-fr}), we infer
\begin{align*}
|\vec{f'}(x)|_r^r\le \alpha^{r-p}|\vec{f}(x)|_r^p,
\end{align*}
which, together with (\ref{ineq-Chebysev}), implies that the inequality (\ref{esti-omega1}) holds.
It remains to show that the inequality (\ref{esti-omega2}) is true. To this end, we define $\overline{f}=\{\overline{f}_k \}$ as follows
\begin{align*}
\overline{f}_k(x)=\begin{cases}\frac{1}{\varphi(|Q_j|)^\eta|Q_j|}\int_{Q_j}|f_k(y)|dy,& x\in Q_j, j=1,2,...,\\
0,& \text{otherwise}. \end{cases}
\end{align*}
Then, we have the following key inequality
\begin{align}\label{Fefferman-Stein}
M^\Delta_{\varphi,2\eta}(f^{''}_k)(x)\le M^\Delta_{\varphi,\eta}(\overline{f}_k)(x),x\notin \Omega.
\end{align}
Indeed, let $x\notin \Omega$ and $Q$ be any dyadic cube such that $x \in Q$. Thus, one has
\begin{align*}
\int_Q|f^{''}_k(y)|dy=\int_{Q\cap \Omega}|f_k(y)|dy=\sum\limits_{j\in J}\int_{Q\cap Q_j}|f_k(y)|dy,
\end{align*}
where $J = \{j \in \mathbb{N} : Q_j \cap Q \ne \emptyset\}$. Since $\{Q_j\}$ and $Q$ are dyadic cubes and $x \in Q\backslash\Omega$, no cube $Q_j$ can contain $Q$, and hence $J = \{j \in \mathbb{N} : Q_j \subset Q\}$.
Hence, we infer
\begin{align}\label{int_f''}
\int_Q|f^{''}_k(y)|dy=\sum\limits_{j\in J}\int_{Q_j}|f_k(y)|dy.
\end{align}
On the other hand, we get
\begin{align*}
\int_{Q_j}\overline{f}_k(y)dy=\int_{Q_j}\Big(\frac{1}{\varphi(|Q_j|)^\eta|Q_j|} \int_{Q_j}|f_k(t)|dt\Big)dy=\frac{1}{\varphi(|Q_j|)^\eta}\int_{Q_j}|f_k(t)|dt.
\end{align*}
Therefore, by (\ref{int_f''}), one has
\begin{align}
\frac{1}{\varphi(|Q|)^{2\eta}|Q|}\int_Q|f^{''}_k(y)|dy&=\frac{1}{\varphi(|Q|)^{2\eta}|Q|}\sum\limits_{j\in J}\Big(\varphi(|Q_j|)^\eta\int_{Q_j}\overline{f}_k(y)dy \Big)\nonumber
\\
&=\frac{1}{\varphi(|Q|)^{\eta}|Q|}\sum\limits_{j\in J}\Big(\frac{\varphi(|Q_j|)^\eta}{\varphi(|Q|)^\eta}\int_{Q_j}\overline{f}_k(y)dy \Big)\nonumber
\\
&\le \frac{1}{\varphi(|Q|)^{\eta}|Q|}\int_Q\overline{f}_k(y)dy.\nonumber
\end{align}
In the last step we used that $\varphi(|Q_j|)^\eta\le\varphi(|Q|)^\eta$ for $Q_j\subset Q$ (recall that $\varphi$ is nondecreasing) together with the pairwise disjointness of the cubes $Q_j$. This implies that inequality (\ref{Fefferman-Stein}) is true.
\\
Next, for any $x \in \Omega$ there exists exactly one cube $Q_j$ such that $x \in Q_j$. Thus, by the Minkowski inequality and (\ref{ineq-fr-alpha}), we have
\begin{align*}
|\overline{f}(x)|_r&=\Big(\sum\limits_{k=1}^\infty\Big(\frac{1}{\varphi(|Q_j|)^{\eta}|Q_j|}\int_{Q_j}|f_k(y)|dy \Big)^r \Big)^{\frac{1}{r}}\\
&\le \frac{1}{\varphi(|Q_j|)^{\eta}|Q_j|}\int_{Q_j}|\vec{f}(y)|_rdy\le 2^n\varphi(4n)\,\alpha.
\end{align*}
Hence, by using (\ref{Fefferman-Stein}) and estimating as in (\ref{ineq-Chebysev}), it follows that
\begin{align*}
&\omega\big(\{x\notin\Omega: |M^\Delta_{\varphi,2\eta}(\vec{f^{''}})(x)|_r>\alpha \} \big)\le \omega\left(\{x\notin\Omega: |M^\Delta_{\varphi,\eta}(\overline{f})(x)|_r>\alpha \} \right)
\\
&\leq \omega\left(\{x\in\mathbb R^n: |M^\Delta_{\varphi,\eta}(\overline{f})(x)|_r>\alpha \} \right)\lesssim \alpha^{-r}\|\overline{f} \|^r_{L^r_\omega(\ell^r,\mathbb R^n)} \lesssim \omega(\Omega),
\end{align*}
which leads to
\begin{align}\label{omega_f"}
&\omega\big(\{x\in\mathbb{R}^n: |M^\Delta_{\varphi,2\eta}(\vec{f^{''}})(x)|_r>\alpha \} \big)\notag\\
&\le \omega(\Omega)+\omega\big(\{x\notin\Omega: |M^\Delta_{\varphi,2\eta}(\vec{f^{''}})(x)|_r>\alpha \}\big)\lesssim \omega(\Omega).
\end{align}
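For transparency, the bound $\alpha^{-r}\|\overline{f}\|^r_{L^r_\omega(\ell^r,\mathbb R^n)}\lesssim\omega(\Omega)$ used above is immediate from the pointwise estimate on $|\overline{f}|_r$: since $\overline{f}$ is supported in $\Omega$,
\begin{align*}
\alpha^{-r}\|\overline{f}\|^r_{L^r_\omega(\ell^r,\mathbb R^n)}=\alpha^{-r}\int_{\Omega}|\overline{f}(x)|_r^r\,\omega(x)dx\le\big(2^n\varphi(4n)\big)^r\,\omega(\Omega)\lesssim\omega(\Omega).
\end{align*}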
In addition, by using (\ref{ineq-fr-alpha}), the H\"{o}lder inequality and $\omega\in A_p^{\Delta,\eta}(\varphi)$, we get
\begin{align*}
&\omega(Q_j)\le \alpha^{-p}\Big(\frac{1}{\varphi(|Q_j|)^{\eta}|Q_j|}\int_{Q_j}|\vec{f}(x)|_rdx \Big)^p\int_{Q_j}\omega(x)dx\\
&\le \alpha^{-p}\Big(\frac{1}{\varphi(|Q_j|)^{\eta}|Q_j|} \Big)^p \Big(\int_{Q_j} |\vec{f}(x)|_r^p\omega(x)dx \Big) \Big(\int_{Q_j}\omega^{-\frac{p'}{p}}(x)dx \Big)^{\frac{p}{p'}}\int_{Q_j}\omega(x)dx
\\
& \le \alpha^{-p}\Big( \int_{Q_j} |\vec{f}(x)|^p_r\omega(x)dx\Big) \Big(\frac{1}{\varphi(|Q_j|)^{\eta}|Q_j|} \int_{Q_j}\omega(x)dx \Big) \Big(\frac{1}{\varphi(|Q_j|)^{\eta}|Q_j|} \int_{Q_j}\omega(x)^{-\frac{1}{p-1}}dx \Big)^{p-1}\\
&\lesssim \alpha^{-p}\Big( \int_{Q_j} |\vec{f}(x)|^p_r\omega(x)dx\Big) \quad \text{for all } j\in\mathbb{N}.
\end{align*}
From the above inequality, we infer
\begin{align*}
\omega(\Omega)&=\sum\limits_{j=1}^\infty\omega(Q_j)\lesssim \alpha^{-p} \sum\limits_{j=1}^\infty \int_{Q_j} |\vec{f}(x)|^p_r\omega(x)dx=\alpha^{-p}\int_\Omega |\vec{f}(x)|^p_r\omega(x)dx
\\
&\le \alpha^{-p}\int_{\mathbb{R}^n}|\vec{f}(x)|^p_r\omega(x)dx.
\end{align*}
Consequently, by (\ref{omega_f"}), the proof of inequality (\ref{esti-omega2}) is finished. Finally, since $\vec S$ is dense in $L^p_\omega(\ell^r, \mathbb R^n)$ (see \cite{BP1961}), the proof of the theorem is complete.
\end{proof}
As a consequence, by combining Theorem \ref{max-Delta-2eta} and Corollary \ref{Cor2.3-WZC2017} and arguing in the same way as in Theorem \ref{max-phi}, we also obtain a necessary condition and a sufficient condition for the class of $A_p^{\Delta,\eta}(\varphi)$ weights. More precisely, the following is true.
\begin{theorem}
Let $1 < p < r < \infty$ and $\eta >0$. The following statements are true:
\begin{enumerate}
\item[(i)] If $\omega\in A_p^{\Delta,\eta}(\varphi)$, then $M^\Delta_{\varphi,2\eta}$ is a bounded operator from $L^p_\omega(\ell^r,\mathbb{R}^n)$ to $L^{p,\infty}_\omega(\ell^r,\mathbb{R}^n)$ and from $M^{p}_{q,\omega}(\ell^r,\mathbb{R}^n)$ to $WM^{p}_{q,\omega}(\ell^r,\mathbb{R}^n)$.
\item [(ii)] If $\omega\notin A_p^{\Delta,\eta}(\varphi)$, then $M^\Delta_{\varphi,\eta}$ is not a bounded operator from $L^p_\omega(\ell^r,\mathbb{R}^n)$ to $WM^{p}_{q,\omega}(\ell^r,\mathbb{R}^n)$.
\end{enumerate}
\end{theorem}
\section{The results about the boundedness of sublinear operators generated by singular integrals and its commutators}\label{section4}
Let us recall that the two-weighted Morrey space $\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb R^n)$ of vector-valued functions is defined as the set of all sequences of measurable functions $\vec{f}=\{f_k\}$ such that
\begin{align*}
\|\vec{f} \|_{\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb R^n)}=\||\vec{f}(\cdot)|_r \|_{\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\mathbb R^n)}<\infty.
\end{align*}
It is not difficult to show that $\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb R^n)$ is a Banach space. Our first main result in this section gives the boundedness of maximal singular integral operators, with kernels of the type proposed by Anderson and John, on the space $\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb R^n)$. More precisely, we have the following useful result.
\begin{theorem}
Let $1< r<\infty$, $1<p<\infty$, $\omega_1\in A(p,1)$, $\omega_2\in A_p$, $\delta\in (1,r_{\omega_2})$ and $0<\kappa<\frac{\delta-1}{\delta p}$. Then, $T^*$ is a bounded operator on $\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb R^n)$.
\end{theorem}
\begin{proof}
Let us choose any $\vec f\in \mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb R^n)$ and a ball $B:=B_R(x_0)$. Next, we decompose $\vec{f}=\vec{f}_1+\vec{f}_2$, where $\vec{f}_1=\{f_{1,k}\}$ with $f_{1,k}(x)=f_k(x)\chi_{2B}(x)$. This implies that
\begin{align}\label{J12}
&\frac{1}{\omega_1(B)^{\kappa}}\int_B |T^*(\vec{f})(x)|_r^p\omega_2(x)dx\le \frac{1}{\omega_1(B)^{\kappa}}\int_B |T^*(\vec{f}_1)(x)|_r^p\omega_2(x)dx +\notag\\
&+ \frac{1}{\omega_1(B)^{\kappa}}\int_B |T^*(\vec{f}_2)(x)|_r^p\omega_2(x)dx:=J_1+J_2.
\end{align}
By Theorem \ref{Theo-AJ1981} and Lemma \ref{Lem2.8-CHK1982}, we have
\begin{align}\label{J1}
J_1&\le \frac{1}{\omega_1(B)^{\kappa}}\int_{\mathbb{R}^n} |T^*(\vec{f}_1)(x)|_r^p\omega_2(x)dx\lesssim \frac{1}{\omega_1(B)^{\kappa}}\int_{2B} |\vec{f}(x)|_r^p\omega_2(x)dx\notag\\
&\le \frac{\omega_1(2B)^{\kappa}}{\omega_1(B)^{\kappa}}\|\vec{f}\|^p_{\mathcal{B}^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb{R}^n)}\lesssim \|\vec{f}\|^p_{\mathcal{B}^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb{R}^n)}.
\end{align}
Now, for $x \in B$ and $y \in (2B)^c$, it is clear that $2R\leq |x_0 - y| \le 2|x - y|$. From this, we get
$$
|T^*(f_{2,k})(x)|\leq \int_{(2B)^c}\frac{A}{|x-y|^n}|f_k(y)|dy\lesssim \int_{(2B)^c}\frac{1}{|x_0-y|^n}|f_k(y)|dy,$$ for all $k\in\mathbb N$. Hence, by the Minkowski inequality, the H\"{o}lder inequality and the assumption $\omega_2\in A_p$, we obtain
\begin{align}
&|T^*(\vec f_2)(x)|_r\lesssim \int_{(2B)^c}\frac{|\vec f(y)|_r}{|x_0-y|^n}dy=\sum\limits_{j=1}^{\infty}\,\int_{2^{j}R\leq |x_0-y|<2^{j+1}R}\frac{|\vec f(y)|_r}{|x_0-y|^n}dy\nonumber
\\
&\lesssim \sum\limits_{j=1}^{\infty}\frac{1}{|2^jB|}\int_{2^{j+1}B}|\vec f(y)|_rdy\leq \sum\limits_{j=1}^{\infty}\frac{1}{|2^jB|}\Big(\int_{2^{j+1}B}|\vec f(y)|_r^p\omega_2(y)dy\Big)^{\frac{1}{p}}\Big(\int_{2^{j+1}B}\omega_2(y)^{1-p'}dy\Big)^{\frac{p-1}{p}}\nonumber
\\
&\lesssim \sum\limits_{j=1}^{\infty}\frac{1}{|2^jB|}\Big(\int_{2^{j+1}B}|\vec f(y)|_r^p\omega_2(y)dy\Big)^{\frac{1}{p}}\frac{|2^{j+1}B|}{\omega_2(2^{j+1}B)^{\frac{1}{p}}}\lesssim \|\vec f\|_{\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb R^n)}\sum\limits_{j=1}^{\infty}\frac{\omega_1(2^{j+1}B)^{\frac{\kappa}{p}}}{\omega_2(2^{j+1}B)^{\frac{1}{p}}}.\nonumber
\end{align}
Thus,
\begin{align}
J_2&\lesssim \frac{\omega_2(B)}{\omega_1(B)^{\kappa}} \|\vec f\|^p_{\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb R^n)}\Big(\sum\limits_{j=1}^{\infty}\frac{\omega_1(2^{j+1}B)^{\frac{\kappa}{p}}}{\omega_2(2^{j+1}B)^{\frac{1}{p}}}\Big)^{p}=\|\vec f\|^p_{\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb R^n)}\,\mathcal K^p,\nonumber
\end{align}
where
$$
\mathcal K= \sum\limits_{j=1}^{\infty}\frac{\omega_1(2^{j+1}B)^{\frac{\kappa}{p}}}{\omega_1(B)^{\frac{\kappa}{p}}}\cdot\frac{\omega_2(B)^{\frac{1}{p}}}{\omega_2(2^{j+1}B)^{\frac{1}{p}}}.
$$
Next, by applying Lemma \ref{Lem2.8-CHK1982}, we have
$
\big(\frac{\omega_1(2^{j+1}B)}{\omega_1(B)}\big)^{\frac{\kappa}{p}}\lesssim \big(\frac{|2^{j+1}B|}{|B|}\big)^{\kappa}\lesssim 2^{{(j+1)n\kappa}}.
$
On the other hand, by using Proposition \ref{rever-Holder}, we infer
$$
\Big(\frac{\omega_2(B)}{\omega_2(2^{j+1}B)}\Big)^{\frac{1}{p}}\lesssim \Big(\frac{|B|}{|2^{j+1}B|}\Big)^{\frac{\delta-1}{\delta p}}\lesssim 2^{\frac{-(j+1)n(\delta-1)}{\delta p}}.
$$
Hence, by $\kappa<\frac{\delta-1}{\delta p}$, one has
$
\mathcal K\lesssim \sum\limits_{j=1}^{\infty} 2^{(j+1)n(\kappa-\frac{\delta-1}{\delta p})}<\infty.
$
Thus,
$$
J_2\lesssim \|\vec f\|^p_{\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r,\mathbb R^n)}.
$$
Combining this with (\ref{J12}) and (\ref{J1}) above, we obtain
$$
\|T^*(\vec f)\|_{\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r, \mathbb R^n)}\lesssim \|\vec f\|_{\mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r, \mathbb R^n)}\quad \text{for all } \vec f\in \mathcal B^{p,\kappa}_{\omega_1,\omega_2}(\ell^r, \mathbb R^n),
$$
which completes the proof of the theorem.
\end{proof}
\vskip 5pt
Our second main result in this section is to establish the boundedness of sublinear operators generated by strongly singular operators on certain weighted Morrey spaces. As an application, we obtain the boundedness of some strongly singular integral operators on the weighted Morrey spaces.
Let us recall the definition of the weighted central Morrey spaces. Let $1 \le q < \infty$, $0 < \kappa < 1$ and let $\omega$ be a weight function. Then the weighted central Morrey space is defined as the set of all functions in $L^q_{{\rm loc}}(\mathbb{R}^n)$ such that
\begin{align*}
\|f\|_{\mathcal{{\mathop B\limits^.}}^{q,\kappa}(\omega, \mathbb{R}^n)}=\sup\limits_{R>0} \Big(\frac{1}{\omega(B_R(0))^{\kappa}}\int_{B_R(0)}|f(x)|^qdx \Big)^{\frac{1}{q}}<\infty.
\end{align*}
It is evident that $\mathcal{{\mathop B\limits^.}}^{q,\kappa}(\omega, \mathbb{R}^n)$ is a Banach space. We denote by $\mathfrak{\mathop B\limits^.}^{q,\kappa}(\omega, \mathbb{R}^n)$ the closure of $L^{q}(\mathbb R^n)\cap\mathcal{{\mathop B\limits^.}}^{q,\kappa}(\omega, \mathbb{R}^n)$ with respect to the norm in $\mathcal{{\mathop B\limits^.}}^{q,\kappa}(\omega, \mathbb{R}^n)$.
We also recall the central weighted local Morrey space $\mathcal{\mathop B\limits^.}^{q,\kappa}_{\rm loc}(\omega,\mathbb{R}^n)$, defined as the set of all functions in $L^q_{{\rm loc}}(\mathbb{R}^n)$ such that
\begin{align*}
\|f\|_{\mathcal{{\mathop B\limits^.}}^{q,\kappa}_{\rm loc}(\omega, \mathbb{R}^n)}=\sup\limits_{0<R<1} \Big(\frac{1}{\omega(B_R(0))^{\kappa}}\int_{B_R(0)}|f(x)|^qdx \Big)^{\frac{1}{q}}<\infty.
\end{align*}
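Before stating the next result, it may help to record the standard computation for power weights that lies behind estimates of the type (\ref{ineq-power}) used below (we include it only as a reminder): for $\omega(x)=|x|^{\beta}$ with $\beta>-n$, integration in polar coordinates gives
\begin{align*}
\omega(B_R(0))=\int_{B_R(0)}|x|^{\beta}dx=\sigma(\mathbb S^{n-1})\int_0^R r^{n+\beta-1}dr=\frac{\sigma(\mathbb S^{n-1})}{n+\beta}\,R^{n+\beta},
\end{align*}
so that $\omega(B_R(0))\simeq R^{n+\beta}$.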
\begin{theorem}\label{Theo-sublinear1}
Let $1<p<\infty$, $\lambda>0$, $0<\kappa<1$, and $\omega(x)=|x|^{\beta}$ for $-n+\frac{\lambda p}{\kappa}<\beta< \frac{\lambda p +(1-\kappa)n}{\kappa}$ and $\kappa_1 \in (0,\kappa-\frac{\lambda p}{n+\beta}]$. Then, the following is true:
{\rm(i)} If $\mathcal{T}$ extends to a bounded operator on $L^p(\mathbb R^n)$, then $\mathcal{T}$ also extends to a bounded operator from $\mathfrak{\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)$ to $\mathcal{{\mathop B\limits^.}}^{p,\kappa_1}_{\rm loc}(\omega, \mathbb{R}^n)$.
{\rm(ii)} Let $b \in L^{\eta}_{\rm loc}(\mathbb R^n)\cap BMO(\mathbb R^n)$ with $\eta >p'$. If the commutator $[b, \mathcal{T}]$ extends to a bounded operator on $L^p(\mathbb R^n)$, then it also extends to a bounded operator from $\mathfrak{\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)$ to $\mathcal{{\mathop B\limits^.}}^{p,\kappa_1}_{\rm loc}(\omega, \mathbb{R}^n)$.
\end{theorem}
\begin{proof} It is sufficient to prove the theorem for all $f\in L^p(\mathbb R^n)\cap\mathcal{{\mathop B\limits^.}}^{p,\kappa}(\omega, \mathbb{R}^n)$.
(i) Fix a ball $B:=B_R(x_0)$ (with $x_0=0$) and $0<R<1$, and decompose $f=f_1+f_2$, where $f_1= f\chi_{2B}$; then one has
\begin{align}\label{I12-strong}
\frac{1}{\omega(B)^{\kappa_1}}\int_{B}|\mathcal{T}(f)(x)|^pdx &\leq \frac{1}{\omega(B)^{\kappa_1}}\int_{B}|\mathcal{T}(f_1)(x)|^pdx +\nonumber
\\
&\,\,\,+\frac{1}{\omega(B)^{\kappa_1}}\int_{B}|\mathcal{T}(f_2)(x)|^pdx:=I_1+I_2.
\end{align}
To estimate $I_1$, since $\mathcal{T}$ extends to a bounded operator on $L^p(\mathbb R^n)$, by the inequality (\ref{ineq-power}) we have
\begin{align}\label{I1-strong}
I_1&\leq \frac{1}{\omega(B)^{\kappa_1}}\int_{\mathbb R^n}|\mathcal{T}(f_1)(x)|^pdx \lesssim \frac{1}{\omega(B)^{\kappa_1}}\int_{2B}|f(x)|^pdx \leq \frac{\omega(2B)^{\kappa}}{\omega(B)^{\kappa_1}}\|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\nonumber
\\
&\lesssim R^{(n+\beta)(\kappa-\kappa_1)}\|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\leq \|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}.
\end{align}
On the other hand, since $f_2\in L^{p}(\mathbb R^n)$, one has $g_m =f\chi_{(2B)^c\cap {(2mB)}}\to f_2$ in $L^p(\mathbb R^n)$. Thus, since $\mathcal{T}$ is bounded on $L^p(\mathbb R^n)$, there exists a subsequence $(\mathcal T(g_{m_k}))$, still denoted by $(\mathcal T(g_m))$, such that $\mathcal T(g_m)\to \mathcal T(f_2)$ a.e. on $\mathbb R^n$. From this, since $\mathcal{T}$ still satisfies (\ref{ineq-sub}) on $L^{p}_{\rm {comp}} (\mathbb R^n)$, letting $x\in B$ and $m$ large enough, we obtain
\begin{align}\label{Nakai}
|{\mathcal{T}}{(f_2)}(x)|=\mathop{\rm lim}\limits_{m\to \infty} |{\mathcal{T}}{(g_m)}(x)| \lesssim \mathop{\rm lim}\limits_{m\to \infty} \int_{\mathbb R^n}\frac{|g_m(y)|}{|x-y|^{n+\lambda}}dy= \int_{(2B)^c}\frac{|f(y)|}{|x-y|^{n+\lambda}}dy.
\end{align}
Notice that for $x\in B$ and $y\in (2B)^c$, we have $2R\leq |x_0 - y|\leq 2|x - y|$. This implies that
\begin{align}\label{Tf2}
|\mathcal{T}(f_{2})(x)|&\lesssim \int_{(2B)^c}\frac{1}{|x_0-y|^{n+\lambda}}|f(y)|dy=\sum\limits_{j=1}^{\infty}\,\int_{2^{j}R\leq |x_0-y|<2^{j+1}R}\frac{|f(y)|}{|x_0-y|^{n+\lambda}}dy\nonumber
\\
&\lesssim \sum\limits_{j=1}^{\infty}\frac{1}{|2^jB|^{(1+\frac{\lambda}{n})}}\int_{2^{j+1}B}|f(y)|dy.
\end{align}
From this, by the H\"{o}lder inequality, we deduce
\begin{align}\label{pre-I2}
|\mathcal{T}(f_{2})(x)|&\lesssim \sum\limits_{j=1}^{\infty}\frac{1}{|2^jB|^{(1+\frac{\lambda}{n})}}\Big(\int_{2^{j+1}B}|f(y)|^pdy\Big)^{\frac{1}{p}}|2^{j+1}B|^{\frac{1}{p'}}.\nonumber
\\
&\lesssim \|f\|_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\sum\limits_{j=1}^{\infty}\frac{\omega(2^{j+1}B)^{\frac{\kappa}{p}}|2^{j+1}B|^{\frac{1}{p'}}}{|2^{j}B|^{(1+\frac{\lambda}{n})}}.
\end{align}
As a consequence, by (\ref{ineq-power}), we obtain
\begin{align}\label{I2-strong}
I_2&\lesssim \frac{|B|}{\omega(B)^{\kappa_1}} \|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\Big(\sum\limits_{j=1}^{\infty}\frac{\omega(2^{j+1}B)^{\frac{\kappa}{p}}.|2^{j+1}B|^{\frac{1}{p'}}}{|2^{j}B|^{(1+\frac{\lambda}{n})}}\Big)^{p}\nonumber
\\
&\lesssim \|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega, \mathbb R^n)}\Big(\sum\limits_{j=1}^{\infty}2^{j(\frac{\kappa(n+\beta)-n}{p}-\lambda)}.R^{\frac{(n+\beta)(\kappa-\kappa_1)}{p}-\lambda}\Big)^p\lesssim \|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}.
\end{align}
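For the reader's convenience, we make the exponent bookkeeping in the last step explicit (a direct computation using $\omega(2^{j}B)\simeq(2^{j}R)^{n+\beta}$):
\begin{align*}
\beta<\frac{\lambda p+(1-\kappa)n}{\kappa}\iff\frac{\kappa(n+\beta)-n}{p}-\lambda<0,
\qquad
\kappa_1\le\kappa-\frac{\lambda p}{n+\beta}\iff\frac{(n+\beta)(\kappa-\kappa_1)}{p}-\lambda\ge 0.
\end{align*}
The first equivalence makes the geometric series convergent, while the second, combined with $0<R<1$, yields $R^{\frac{(n+\beta)(\kappa-\kappa_1)}{p}-\lambda}\le 1$.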
Therefore, by (\ref{I12-strong})--(\ref{I2-strong}), we immediately have
$$
\|\mathcal{T}(f)\|_{\mathcal {\mathop B\limits^.}^{p,\kappa_1}_{\rm loc}(\omega,\mathbb R^n)}\lesssim \|f\|_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\quad \text{for all } f\in L^p(\mathbb R^n)\cap\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n),
$$
which completes the proof of part (i).
(ii) As in the proof of part (i), we fix a ball $B:=B_R(x_0)$ (with $x_0=0$) and $0<R<1$, and write $f=f_1+f_2$ with $f_1= f\chi_{2B}$. Thus, we get
\begin{align}\label{K12-strong}
&\frac{1}{\omega(B)^{\kappa_1}}\int_{B}|[b,\mathcal{T}](f)(x)|^pdx \leq \frac{1}{\omega(B)^{\kappa_1}}\int_{B}|[b,\mathcal{T}](f_1)(x)|^pdx +\nonumber
\\
&\,\,\,+\frac{1}{\omega(B)^{\kappa_1}}\int_{B}|[b,\mathcal{T}](f_2)(x)|^pdx:=K_1+K_2.
\end{align}
Next, since $[b, \mathcal{T}]$ extends to a bounded operator on $L^p(\mathbb R^n)$, by the relation (\ref{ineq-power}) again we obtain
\begin{align}\label{K1-strong}
K_1&\leq \frac{1}{\omega(B)^{\kappa_1}}\int_{\mathbb R^n}|[b,\mathcal{T}](f_1)(x)|^pdx \lesssim \frac{\|b\|^p_{BMO(\mathbb R^n)}}{\omega(B)^{\kappa_1}}\int_{2B}|f(x)|^pdx
\nonumber
\\
&\leq \frac{\omega(2B)^{\kappa}\,\|b\|^p_{BMO(\mathbb R^n)}}{\omega(B)^{\kappa_1}}\|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\lesssim \|b\|^p_{BMO(\mathbb R^n)}\,\|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}.
\end{align}
Next, by $b\in L^{\eta}_{\rm loc}(\mathbb R^n)$ with $\eta> p'$ and the inequality (\ref{ineq-sub-com}), we get
\begin{align}
\left|[b,\mathcal{T}](g)(x)\right|\lesssim \int_{\mathbb R^n}\frac{|g(y)|\,|b(x)-b(y)|}{|x-y|^{n+\lambda}}dy \quad \text{a.e. } x\not\in \operatorname{supp}(g),\ \text{for all } g\in L^{p}_{\rm comp}(\mathbb R^n).\nonumber
\end{align}
Thus, by estimating as (\ref{Nakai}) above and letting $x\in B$ and $y\in (2B)^c$, we have
\begin{align}
|[b,\mathcal{T}](f_{2})(x)|&\lesssim \int_{(2B)^c}\frac{1}{|x-y|^{n+\lambda}}|f(y)|\,|b(x)-b(y)|dy
\nonumber
\\
&\lesssim \int_{(2B)^c}\frac{1}{|x_0-y|^{n+\lambda}}|f(y)|\,|b(x)-b(y)|dy\nonumber
\\
&\leq \Big(\int_{(2B)^c}\frac{|f(y)|}{|x_0-y|^{n+\lambda}}dy\Big)|b(x)-b_B|+\int_{(2B)^c}\frac{|f(y)|\,|b_B-b(y)|}{|x_0-y|^{n+\lambda}}dy.\nonumber
\end{align}
This leads to
\begin{align}\label{K21-K22}
K_2&\lesssim \frac{1}{\omega(B)^{\kappa_1}}\Big(\int_{(2B)^c}\frac{|f(y)|}{|x_0-y|^{n+\lambda}}dy\Big)^p\Big(\int_{B}|b(x)-b_B|^pdx\Big)+\nonumber
\\
&\,\,\,+\frac{|B|}{\omega(B)^{\kappa_1}}\Big(\int_{(2B)^c}\frac{|f(y)|\,|b_B-b(y)|}{|x_0-y|^{n+\lambda}}dy\Big)^p :=K_{2,1}+K_{2,2}.
\end{align}
For the term $K_{2,1}$, by using (\ref{Tf2}), (\ref{pre-I2}), (\ref{I2-strong}) and Lemma \ref{BMO-Lemma}, we infer
\begin{align}\label{K21-strong}
K_{2,1}&\lesssim \frac{|B|}{\omega(B)^{\kappa_1}}\|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\Big(\sum\limits_{j=1}^{\infty}\frac{\omega(2^{j+1}B)^{\frac{\kappa}{p}}\,|2^{j+1}B|^{\frac{1}{p'}}}{|2^{j}B|^{(1+\frac{\lambda}{n})}}\Big)^{p}\Big(\frac{1}{|B|}\int_{B}|b(x)-b_B|^pdx\Big)\nonumber
\\
&\lesssim \|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\,\|b\|^p_{BMO(\mathbb R^n)}.
\end{align}
For the term $K_{2,2}$, by the H\"{o}lder inequality, we have
\begin{align}
K_{2,2}&\lesssim \frac{|B|}{\omega(B)^{\kappa_1}}\Big(\sum\limits_{j=1}^{\infty}\frac{1}{|2^jB|^{(1+\frac{\lambda}{n})}}\int_{2^{j+1}B}|f(y)|\,|b_B-b(y)|dy\Big)^p\nonumber
\\
&\leq\frac{|B|}{\omega(B)^{\kappa_1}}\Big(\sum\limits_{j=1}^{\infty}\frac{1}{|2^jB|^{(1+\frac{\lambda}{n})}}\Big(\int_{2^{j+1}B}|f(y)|^pdy\Big)^{\frac{1}{p}}\Big(\int_{2^{j+1}B}|b_B-b(y)|^{p'}dy\Big)^{\frac{1}{p'}}\Big)^p\nonumber
\\
&\lesssim \frac{|B|}{\omega(B)^{\kappa_1}}\Big(\sum\limits_{j=1}^{\infty}\frac{\omega(2^{j+1}B)^{\frac{\kappa}{p}}}{|2^jB|^{(1+\frac{\lambda}{n})}}(L_{1,j}+L_{2,j})\Big)^p\|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)},\nonumber
\end{align}
where $L_{1,j}=\Big(\int_{2^{j+1}B}|b(y)-b_{2^{j+1}B}|^{p'}dy\Big)^{\frac{1}{p'}}$ and $L_{2,j}=\Big(\int_{2^{j+1}B}|b_B-b_{2^{j+1}B}|^{p'}dy\Big)^{\frac{1}{p'}}$.
On the other hand, by Lemma \ref{BMO-Lemma} and Proposition \ref{Pro-T1986}, we also get
$$
L_{1,j}\leq \|b\|_{BMO(\mathbb R^n)}\,|2^{j+1}B|^{\frac{1}{p'}}
$$
and
$$
L_{2,j}\leq \Big(\int_{2^{j+1}B}\Big(2^n(j+1)\|b\|_{BMO(\mathbb R^n)}\Big)^{p'}dy\Big)^{\frac{1}{p'}}\leq 2^n(j+1)\,\|b\|_{BMO(\mathbb R^n)}\,|2^{j+1}B|^{\frac{1}{p'}}.
$$
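The second bound above is the standard telescoping estimate for averages of $BMO$ functions (this is how we read Proposition \ref{Pro-T1986}; we sketch the argument for completeness): for each $i$,
\begin{align*}
|b_{2^{i}B}-b_{2^{i+1}B}|\le\frac{1}{|2^{i}B|}\int_{2^{i}B}|b(y)-b_{2^{i+1}B}|dy\le\frac{2^n}{|2^{i+1}B|}\int_{2^{i+1}B}|b(y)-b_{2^{i+1}B}|dy\le 2^n\|b\|_{BMO(\mathbb R^n)},
\end{align*}
and summing over $i=0,\dots,j$ gives $|b_B-b_{2^{j+1}B}|\le 2^n(j+1)\|b\|_{BMO(\mathbb R^n)}$.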
Thus, by estimating as (\ref{I2-strong}) above, we immediately have
\begin{align}
K_{2,2}&\lesssim \frac{|B|}{\omega(B)^{\kappa_1}}\Big(\sum\limits_{j=1}^{\infty}\frac{(j+2)\,\omega(2^{j+1}B)^{\frac{\kappa}{p}}\,|2^{j+1}B|^{\frac{1}{p'}}}{|2^jB|^{(1+\frac{\lambda}{n})}}\Big)^p\|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\,\|b\|^p_{BMO(\mathbb R^n)}\nonumber
\\
&\lesssim \Big(\sum\limits_{j=1}^{\infty}(j+2)2^{j(\frac{\kappa(n+\beta)-n}{p}-\lambda)}\Big)^p\|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\,\|b\|^p_{BMO(\mathbb R^n)}\nonumber
\\
&\lesssim \|f\|^p_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\,\|b\|^p_{BMO(\mathbb R^n)}.\nonumber
\end{align}
Combining the above estimates with (\ref{K12-strong})--(\ref{K21-strong}), we conclude
$$
\|[b,\mathcal{T}](f)\|_{\mathcal {\mathop B\limits^.}^{p,\kappa_1}_{\rm loc}(\omega,\mathbb R^n)}\lesssim \|b\|_{BMO(\mathbb R^n)}\,\|f\|_{\mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)}\quad \text{for all } f\in L^p(\mathbb R^n)\cap \mathcal {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n).
$$
Therefore, the proof of this theorem is completed.
\end{proof}
Now, let us give some applications of Theorem \ref{Theo-sublinear1}. Note that Hirschman \cite{Hirschman1959}, Wainger \cite{Wainger1965}, and Cho and Yang \cite{CY2010} studied, in the context of $L^p(\mathbb R^n)$ spaces, the strongly singular convolution operators defined as follows.
\begin{definition}
Let $0 < s < \infty$ and $0 < \lambda < \frac{ns}{2}$. The strongly singular integral operator $T^{s,\lambda}$ is defined by
\begin{align*}
T^{s,\lambda}(f)(x)=p.v.\int_{\mathbb{R}^n}\frac{e^{i|x-y|^{-s}}}{|x-y|^{n+\lambda}}\chi_{\{|x-y|<1 \}}f(y)dy.
\end{align*}
\end{definition}
\begin{theorem}\label{Theo-strong}
{\rm (see in \cite{Fefferman1970, Hirschman1959, Wainger1965})} Let $0<s<\infty$, $1<p<\infty$, $0<\lambda<\frac{ns}{2}$, $|\frac{1}{p}-\frac{1}{2}|<\frac{1}{2}-\frac{\lambda}{ns}$. Then $T^{s,\lambda}$ extends to a bounded operator from $L^p(\mathbb R^n)$ to itself.
\end{theorem}
\begin{definition}
Let $0 < \zeta, s,\lambda < \infty$, and $k$ be an integer with $k\geq 2$. The strongly singular integral operator $T_{\zeta,s,\lambda}$ is defined by
\begin{align*}
T_{\zeta,s, \lambda}(f)(x)=p.v.\int_{\mathbb{R}}\frac{e^{i\{\zeta.(x-y)^k+|x-y|^{-s}\}}}{(x-y)|x-y|^{\lambda}}f(y)dy.
\end{align*}
\end{definition}
\begin{theorem}\label{Theo-strong-CY2010}
{\rm (see in \cite{CY2010})} Let $0 < \zeta, s, \lambda < \infty$, $k\in\mathbb N$ with $k\geq 2$ and $s\geq 2\lambda$. Then $T_{\zeta,s,\lambda}$ extends to a bounded operator from $L^2(\mathbb R)$ to itself.
\end{theorem}
On the other hand, Li and Lu \cite{LL2006} also studied the Coifman-Rochberg-Weiss type commutator of strongly singular integral operator defined as follows
\begin{definition}
Let $0 < s < \infty$ and $0 < \lambda < \frac{ns}{2}$. The Coifman-Rochberg-Weiss type commutator of strongly singular integral operator is defined by
\begin{equation}\label{commuatator-strong}
[b,T^{s,\lambda}](f)(x)=p.v.\int_{\mathbb{R}^n}\frac{e^{i|x-y|^{-s}}}{|x-y|^{n+\lambda}}\chi_{\{|x-y|<1 \}}\big(b(x)-b(y)\big)f(y)dy,
\end{equation}
where $b$ is a locally integrable function on $\mathbb R^n$.
\end{definition}
Moreover, Li and Lu \cite{LL2006} proved the following interesting result.
\begin{theorem}\label{Theo1.1-LL2006}
{\rm (Theorem 1.1 \cite{LL2006})} Let $0 < s < \infty$, $1<p<\infty$, $0<\lambda<\frac{ns}{2}$, $|\frac{1}{p}-\frac{1}{2}|<\frac{1}{2}-\frac{\lambda}{ns}$ and $b\in BMO(\mathbb R^n)$. Then the commutator $[b, T^{s,\lambda}]$ extends to a bounded operator on $L^p(\mathbb R^n)$.
\end{theorem}
From Theorem \ref{Theo-sublinear1}, Theorem \ref{Theo-strong}, Theorem \ref{Theo-strong-CY2010} and Theorem \ref{Theo1.1-LL2006}, we obtain the following useful results.
\begin{corollary}\label{Theo-strong1}
Let $0<s<\infty$, $1<p<\infty$, $0<\lambda<\frac{ns}{2}$, $0<\kappa<1$, $|\frac{1}{p}-\frac{1}{2}|<\frac{1}{2}-\frac{\lambda}{ns}$, $-n+\frac{\lambda p}{\kappa}<
\beta< \frac{\lambda p +(1-\kappa)n}{\kappa}$, $\omega(x)=|x|^{\beta}$ and $\kappa_1\in (0,\kappa-\frac{\lambda p}{n+\beta}]$. Let $b\in L^{\eta}_{\rm loc}(\mathbb R^n)\cap BMO(\mathbb R^n)$ with $\eta>p'$.
Then $T^{s,\lambda}$ and $[b, T^{s,\lambda}]$ extend to bounded operators from $\mathfrak {\mathop B\limits^.}^{p,\kappa}(\omega,\mathbb R^n)$ to $\mathcal {\mathop B\limits^.}^{p,\kappa_1}_{\rm loc}(\omega,\mathbb R^n)$.
\end{corollary}
\begin{corollary}\label{Coro-CY-new}
Let $0<\zeta,\lambda,s<\infty$, $k\in\mathbb N$ with $k\geq 2$, $s\geq 2\lambda$, $0<\kappa<1$, $-1+\frac{2\lambda}{\kappa}<
\beta< \frac{2\lambda +(1-\kappa)}{\kappa}$, $\omega(x)=|x|^{\beta}$ and $\kappa_1\in (0,\kappa-\frac{2\lambda}{1+\beta}]$. Then $T_{\zeta, s,\lambda}$ extends to a bounded operator from $\mathfrak {\mathop B\limits^.}^{2,\kappa}(\omega,\mathbb R)$ to $\mathcal {\mathop B\limits^.}^{2,\kappa_1}_{\rm loc}(\omega,\mathbb R)$.
\end{corollary}
Remark that in the special case when the weight function in Theorem \ref{Theo-sublinear1}, Corollary \ref{Theo-strong1} and Corollary \ref{Coro-CY-new} is a constant function, we can remove the central condition in the spaces; that is, we may replace
$\mathfrak {\mathop B\limits^.}^{p,\kappa}(\mathbb R^n)$ and $\mathcal {\mathop B\limits^.}^{p,\kappa_1}_{\rm loc}(\mathbb R^n)$ by $\mathfrak {M}^{p,\kappa}(\mathbb R^n)$ and $\mathcal { B}^{p,\kappa_1}_{\rm loc}(\mathbb R^n)$, respectively. Here ${\mathfrak{M} }^{p,\kappa}(\mathbb R^n)$ is the closure of $L^{p}(\mathbb R^n)\cap {\mathcal B}^{p,\kappa}(\mathbb R^n)$ in the space ${\mathcal B}^{p,\kappa}( \mathbb R^n)$.
\vskip 5pt
Finally, we give the boundedness of sublinear operators in the setting where the weight function belongs to the class of Muckenhoupt weights. It is worth pointing out that when $\omega_1=\omega_2=\omega$, the space $C^\infty_0(\mathbb R^n)$ is contained in ${\mathcal B}^{q,\kappa}_\omega(\mathbb R^n)$. The space ${\mathfrak{B} }^{q,\kappa}_\omega(\mathbb R^n)$ is defined as the closure of $L^{q}(\mathbb R^n)\cap {\mathcal B}^{q,\kappa}_\omega(\mathbb R^n) $ in the space ${\mathcal B}^{q,\kappa}_\omega(\mathbb R^n)$.
\begin{theorem}\label{Theo-Sublinear2}
Let $1<p<\infty$, $0<\kappa<1$, $1\leq p^*,\zeta<\infty$, and $\omega\in A_{\zeta}$ with finite critical index $r_\omega$ for the reverse H\"{o}lder condition. Assume that $p > p^{*}\zeta {r^{'}_\omega}$, $\delta\in (1,r_\omega)$ and $\kappa^*=\frac{p^*(\kappa-1)}{p}+1$. If $\mathcal{T}$ extends to a bounded operator on $L^p(\mathbb R^n)$, then $\mathcal{T}$ also extends to a bounded operator from ${\mathfrak{B} }^{p,\kappa}_\omega(\mathbb R^n)$ to ${B}^{p^*,\kappa^*}_\omega(\mathbb R^n)$.
\end{theorem}
\begin{proof}
Let us fix $f\in L^{p}(\mathbb R^n)\cap {\mathcal B}^{p,\kappa}_\omega(\mathbb R^n)$ and a ball $B:=B_R(x_0)$ with $R\geq 1$. Since $p > p^*\zeta r^{'}_\omega$, there exists $r\in(1, r_\omega)$ satisfying $p = \zeta p^* r'$. Hence, by the H\"{o}lder inequality and the reverse H\"{o}lder condition, we obtain
\begin{align}\label{T_p*}
\Big(\int_{B}|\mathcal{T} (f)(x)|^{p^*}\omega(x)dx\Big)^{\frac{1}{p^*}}&\leq \Big(\int_{B}|\mathcal{T} (f)(x)|^{\frac{p}{\zeta}}dx\Big)^{\frac{\zeta}{p}}\Big(\int_B \omega(x)^r dx\Big)^{\frac{1}{rp^*}}\nonumber
\\
&\lesssim \Big(\int_{B}|\mathcal{T}(f)(x)|^{\frac{p}{\zeta}}dx\Big)^{\frac{\zeta}{p}}\omega(B)^{\frac{1}{p^*}}|B|^{\frac{-\zeta}{p}}.
\end{align}
Next, we decompose $f=f_1+f_2$ where $f_1=f.\chi_{2B}$. Thus,
\begin{align}\label{A12}
\Big(\int_{B}|\mathcal{T}(f)(x)|^{\frac{p}{\zeta}}dx\Big)^{\frac{\zeta}{p}}&\lesssim \Big(\int_{B}|\mathcal{T} (f_1)(x)|^{\frac{p}{\zeta}}dx\Big)^{\frac{\zeta}{p}}+ \Big(\int_{B}|\mathcal{T}(f_2)(x)|^{\frac{p}{\zeta}}dx\Big)^{\frac{\zeta}{p}}\nonumber
\\
&:=A_1+A_2.
\end{align}
Since $\mathcal{T}$ extends to a bounded operator on $L^p(\mathbb R^n)$, using Proposition \ref{pro2.4DFan} we get
\begin{align}\label{A1}
A_1&\leq \Big(\int_{\mathbb R^n}|\mathcal{T}( f_1)(x)|^{\frac{p}{\zeta}}dx\Big)^{\frac{\zeta}{p}}\lesssim \Big(\int_{2B}|f(x)|^{\frac{p}{\zeta}}dx\Big)^{\frac{\zeta}{p}}.\nonumber
\\
&\lesssim \Big(\int_{2B}|f(x)|^{p}\omega(x)dx\Big)^{\frac{1}{p}}\omega(2B)^{\frac{-1}{p}}|2B|^{\frac{\zeta}{p}}\lesssim \|f\|_{\mathcal B^{p,\kappa}_{\omega}(\mathbb R^n)}\,\omega(2B)^{\frac{(\kappa-1)}{p}}\,|B|^{\frac{\zeta}{p}}.
\end{align}
Next, let $x\in B$ and $y\in (2B)^c$. By applying the relation (\ref{Nakai}) above and estimating as in (\ref{Tf2}) and (\ref{A1}), we obtain
\begin{align}
&|\mathcal{T}(f_2)(x)|\nonumber
\\
&\lesssim \sum\limits_{j=1}^{\infty}\frac{1}{|2^jB|^{(1+\frac{\lambda}{n})}}\int_{2^{j+1}B}|f(y)|dy\leq \sum\limits_{j=1}^{\infty}\frac{1}{|2^jB|^{(1+\frac{\lambda}{n})}}\Big(\int_{2^{j+1}B}|f(y)|^{\frac{p}{\zeta}}dy\Big)^{\frac{\zeta}{p}}|2^{j+1}B|^{1-\frac{\zeta}{p}} \nonumber
\\
&\leq \sum\limits_{j=1}^{\infty}\frac{1}{|2^jB|^{(1+\frac{\lambda}{n})}}\|f\|_{\mathcal B^{p,\kappa}_{\omega}(\mathbb R^n)}\,\omega(2^{j+1}B)^{\frac{(\kappa-1)}{p}}\,|2^{j+1}B|\lesssim \|f\|_{\mathcal B^{p,\kappa}_{\omega}(\mathbb R^n)}\sum\limits_{j=1}^{\infty}\frac{\omega(2^{j+1}B)^{\frac{(\kappa-1)}{p}}}{|2^jB|^{\frac{\lambda}{n}}}.\nonumber
\end{align}
Hence,
\begin{align}\label{A2}
A_2\lesssim \|f\|_{\mathcal B^{p,\kappa}_{\omega}(\mathbb R^n)}\Big(\sum\limits_{j=1}^{\infty}\frac{\omega(2^{j+1}B)^{\frac{(\kappa-1)}{p}}}{|2^jB|^{\frac{\lambda}{n}}}\Big)|B|^{\frac{\zeta}{p}}.
\end{align}
Next, by Proposition \ref{rever-Holder} and $\kappa\in (0,1)$, we deduce
\begin{align}
\Big(\frac{\omega(2^{j+1}B)}{\omega(B)}\Big)^{\frac{(\kappa-1)}{p}}\lesssim \Big(\frac{|2^{j+1}B|}{|B|}\Big)^{\frac{(\kappa-1)(\delta-1)}{p\delta}}\lesssim 2^{\frac{jn(\kappa-1)(\delta-1)}{p\delta}}.\nonumber
\end{align}
From this, by using (\ref{T_p*})-(\ref{A2}), $\kappa^*=\frac{p^*(\kappa-1)}{p}+1$ and $R\geq 1$, we get
\begin{align}
&\Big(\frac{1}{\omega(B)^{\kappa^*}}\int_{B}|\mathcal{T}(f)(x)|^{p^*}\omega(x)dx\Big)^{\frac{1}{p^*}}\nonumber
\\
&\lesssim \frac{\|f\|_{\mathcal B^{p,\kappa}_{\omega}(\mathbb R^n)}}{\omega(B)^{\frac{\kappa^*}{p^*}}}\Big(\omega(2B)^{\frac{(\kappa-1)}{p}}+\sum\limits_{j=1}^{\infty}2^{-j\lambda}\omega(2^{j+1}B)^{\frac{(\kappa-1)}{p}}\Big)\omega(B)^{\frac{1}{p^*}}\nonumber
\\
&\lesssim \|f\|_{\mathcal B^{p,\kappa}_{\omega}(\mathbb R^n)}\Big(\sum\limits_{j=0}^{\infty}2^{-j\lambda}\Big(\frac{\omega(2^{j+1}B)}{\omega(B)}\Big)^{\frac{(\kappa-1)}{p}}\Big)
\lesssim \|f\|_{\mathcal B^{p,\kappa}_{\omega}(\mathbb R^n)}\Big(\sum\limits_{j=0}^{\infty}2^{j(\frac{n(\kappa-1)(\delta-1)}{p\delta}-\lambda)}\Big)\nonumber
\\
&\lesssim \|f\|_{\mathcal B^{p,\kappa}_{\omega}(\mathbb R^n)}.\nonumber
\end{align}
Here the last series converges since $\kappa<1$, $\delta>1$ and $\lambda>0$, so that the exponent $\frac{n(\kappa-1)(\delta-1)}{p\delta}-\lambda$ is strictly negative. Therefore,
$$
\|\mathcal{T}(f)\|_{ B^{p^*,\kappa^*}_\omega(\mathbb R^n)}\lesssim \|f\|_{\mathcal B^{p,\kappa}_\omega(\mathbb R^n)}\quad \text{for all } f\in L^p(\mathbb R^n)\cap {\mathcal B}^{p,\kappa}_\omega(\mathbb R^n).
$$
This implies that the theorem is proved.
\end{proof}
By Theorem \ref{Theo-Sublinear2}, we obtain the following interesting corollary.
\begin{corollary}\label{Theo-SIO-strong-2}
Let $s,\lambda,\kappa,p$ be as in Corollary \ref{Theo-strong1}, let $1\leq p^*,\zeta<\infty$, and let $\omega\in A_{\zeta}$ with finite critical index $r_\omega$ for the reverse H\"{o}lder condition. Assume that $p > p^{*}\zeta {r^{'}_\omega}$, $\delta\in (1,r_\omega)$ and $\kappa^*=\frac{p^*(\kappa-1)}{p}+1$. Then $T^{s,\lambda}$ extends to a bounded operator from ${\mathfrak{B} }^{p,\kappa}_\omega(\mathbb R^n)$ to ${B}^{p^*,\kappa^*}_\omega(\mathbb R^n)$.
\end{corollary}
\bibliographystyle{amsplain}
The role of strangeness in low and medium energy nuclear physics is currently
of considerable interest, as it has the potential to deepen our
understanding of the relevant strong interaction mechanisms in
the non-perturbative regime of QCD.
For example, the system of a strange baryon (hyperon $Y$) and a
nucleon ($N$) is in principle an ideal testing ground for investigating
the importance of SU(3)$_{flavor}$ symmetry in hadronic interactions.
Existing meson exchange models of the $YN$ force usually assume SU(3)
symmetry for the hadronic coupling constants, and in some
cases \cite{Holz,Reu} even the SU(6) symmetry of the quark model.
The symmetry requirements provide relations between couplings of
mesons of a given multiplet to the baryon current, which greatly
reduce the number of free model parameters.
Specifically, coupling constants at the strange vertices are then
connected to nucleon-nucleon-meson coupling constants, which in
turn are constrained by the wealth of empirical information on $NN$
scattering. Essentially all these $YN$ interaction models can reproduce the
existing $YN$ scattering
data, so that at present the assumption of SU(3) symmetry for the
coupling constants cannot be ruled out by experiment.
One should note, however, that the various models differ dramatically
in the treatment of the scalar-isoscalar meson sector, which describes
the baryon-baryon interaction at intermediate ranges.
For example, the Nijmegen group \cite{NijII,NijIII,NijIV,NijV} views this
interaction as being generated by genuine scalar meson exchange.
In their model D \cite{NijII} an $\epsilon(760)$ is exchanged as an
SU(3)$_{\it flavor}$ singlet.
In models F~\cite{NijIII}, NSC~\cite{NijIV}, and NSC97 \cite{NijV} a scalar
SU(3) nonet is exchanged --- namely, two isospin-0 mesons (besides the $\epsilon(760)$,
the $\epsilon '(1250)$ in model F and $S^*(975)$ ($f_0(980)$) in model NSC (NSC97)),
an isospin-1 meson ($\delta$ or $a_0(980)$) and an isospin-1/2 strange meson $\kappa$
with a mass of 1000 MeV.
A genuine scalar SU(3) nonet is also present in the so-called Ehime
potential \cite{Ehime}, where besides the $S^*(975)$ and $\delta$ (or $a_0(980)$)
the $f_0(1581)$ and the $K_0^*(1429)$ are included. In addition the model incorporates
two effective scalar-meson exchanges, $\sigma (484)$ and $\kappa (839)$, that stand
for $(\pi\pi)_{I=0}$ and $(K\pi)_{I=1/2}$ correlations but are treated
phenomenologically. The T\"ubingen model, on the other hand, which is
essentially a constituent quark model supplemented by $\pi$ and
$\sigma$ exchange at intermediate and short ranges, treats the
$\sigma$ meson as an SU(3) singlet with a mass of 520 MeV \cite{Tueb}
or 675 MeV \cite{Zhang1}, respectively. Finally, in the quark models of
Zhang et al. \cite{Zhang2} and Fujiwara et al. \cite{Fujiwara}
a scalar SU(3) nonet is exchanged, though in this case between
quarks and not between the
baryons.
In the (full) Bonn $NN$ potential~\cite{MHE} the intermediate range
attraction is provided by uncorrelated and correlated $\pi\pi$ exchange
processes (Figs.~\ref{fig1}(a)--(b) and Fig.~\ref{fig1}(c), respectively),
with $NN$, $N\Delta$ and $\Delta\Delta$ intermediate states.
{}From earlier studies of the $\pi\pi$ interaction it is known that
$\pi\pi$ correlations are important mainly in the scalar-isoscalar
and vector-isovector channels.
In one-boson-exchange (OBE) potentials these are included effectively
via exchange of sharp mass $\sigma$ and $\rho$ mesons.
One disadvantage of such a simplified treatment is that this
parameterization cannot be transferred into the hyperon sector
in a well defined manner.
Therefore in the earlier $YN$ interaction models of the J\"ulich
group~\cite{Holz}, which start from the Bonn $NN$ potential,
the coupling constants of the fictitious $\sigma$ meson at the
strange vertices ($\Lambda\Lambda\sigma$, $\Sigma\Sigma\sigma$)
are free parameters --- a rather unsatisfactory feature of the
models.
This is especially true for the extension to the strangeness $S=-2$
channels, interest in which initiated with the prediction of the
H-dibaryon by Jaffe~\cite{Jaffe}.
These problems can be overcome by an explicit evaluation of correlated
$\pi\pi$ exchange in the various baryon-baryon channels.
A corresponding calculation was initially done only for the $NN$ case
(Fig.~\ref{fig1}(c)) in Ref. \cite{Kim},
but was extended in a recent paper \cite{REUBER}
by the J\"ulich group so that now a full and consistent
microscopic derivation of correlated $\pi\pi$ exchange in various
baryon-baryon ($BB'$) channels with strangeness $S=0, -1$ and $-2$
is available.
The starting point was a field theoretical model for both the
$N\anti{N}\to\pi\pi$ Born amplitudes and the $\pi\pi$ and $K\anti{K}$
elastic scattering~\cite{Lohse,Janssen,Schutz}. Thus,
the $K\anti{K}$ channel is treated on an equal footing with the
$\pi\pi$ channel in order to reliably determine the influence of
$K\anti{K}$ correlations in the relevant $t$-channels.
Then, with the help of unitarity and dispersion relations the amplitude
for the correlated $\pi\pi$ exchange in the $NN$ channel but also
for the $YN$ and $YY$ systems were computed. Thus, within
this approach one can replace the phenomenological $\sigma$
and $\rho$ exchanges in the Bonn $NN$ \cite{MHE} and J\"ulich $YN$
\cite{Holz} models by correlated processes, i.e. eliminate undetermined
parameters such as the $BB'\sigma$ coupling constants.
In the present paper a new $YN$ model is presented that utilizes
this microscopic model of correlated $\pi\pi$ and $K\bar K$ exchange
to fix the contributions in the scalar-isoscalar ($\sigma$) and
vector-isovector ($\rho$) channels. The model incorporates also
the standard one boson exchange contributions of the lowest
pseudoscalar and vector meson multiplets with coupling constants determined
by SU(6) symmetry relations. Assuming the SU(6) symmetry means that
also the so-called $F/(F+D)$ ratios are fixed. In addition, there are further
new ingredients as compared to the original J\"ulich $YN$ model \cite{Holz}.
First of all, the contribution from the $a_0(980)$ meson is taken into
account. Secondly, we consider the exchange of a strange scalar meson, the
$\kappa$, with mass $\sim 1000$~MeV.
Let us emphasize, however, that in analogy with the $\sigma$ meson these particles
are likewise not viewed as being members of a scalar meson SU(3) multiplet, but
rather as representations of strong meson-meson correlations in the
scalar--isospin-1/2 ($\pi K$) \cite{Lohse} and
scalar--isovector ($\pi\eta$--$K\bar K$) \cite{Janssen} channels
respectively.
In principle, their contributions can also be evaluated along the lines
of Ref.~\cite{REUBER}, however, for simplicity in the present model they
are effectively parameterized by one-boson-exchange diagrams with the
appropriate quantum numbers assuming the coupling constants to be free
parameters.
In the next two sections we describe the principal steps of the
derivation of correlated $\pi\pi$ and $K\anti{K}$ exchange potentials for
the baryon--baryon amplitudes in the $\sigma$ and $\rho$ channels.
In particular, in Sect. 2 we give a short outline of the microscopic model for
the required $B\anti{B'}\to\pi\pi,\,K\anti{K}$ amplitudes.
The derivation of the potentials themselves is indicated in Section 3.
Furthermore, we introduce and discuss the parameterization of correlated
$\pi\pi$ and $K\anti{K}$ exchange potentials by an effective $\sigma$
and $\rho$ exchange for the $YN$ channels. These effective
parameterizations are then adopted for the construction of the new
$YN$ model.
In Sect. 4 the other ingredients of our $YN$ model are introduced. Specifically,
we comment on the employed strategy for fixing the parameters of the model.
Then we present and discuss numerical results of the model for $YN$
scattering observables, phase shifts and effective range parameters.
Finally, some concluding remarks are made in Section 5.
\section{Model for correlated $2\pi$ exchange}
Based on a $\pi\pi - K\bar K$ amplitude
the evaluation of diagrams such as in
Fig.~\ref{fig1}(c) for any $BB'$ system can be done in two steps.
Firstly the $N\anti{N} \ (\Lambda \anti{\Lambda}, \ \Sigma \anti{\Sigma},
\ {\rm etc}.) \rightarrow 2\pi, K\bar K$ amplitudes
are determined in the pseudophysical region ($t \leq 4 m^2_\pi$)
and then dispersion theory and unitarity are applied to connect those
amplitudes with the corresponding physical amplitudes in the various
baryon-baryon channels.
Figure \ref{fig5} shows a graphic representation of our dynamical model
for correlated $2\pi - K\bar K$ exchange.
Here $B\anti{B'}$ stands for $N\anti{N}$, $\Lambda \anti{ \Lambda}$,
$\Lambda \anti{ \Sigma}$/$\Sigma \anti{ \Lambda}$
or $\Sigma \anti{ \Sigma}$. Formally the amplitudes for the processes
$B\anti{B'} \rightarrow \alpha$ (with $\alpha = \pi\pi$, $K\anti{K}$) are obtained
from solving the scattering equation
\begin{equation}
T_{B,\overline{B'} \to \alpha} =
V_{B,\overline{B'} \to \alpha} +
\sum_{\beta = \pi\pi, K\bar K} T_{\alpha,\beta} \,
G_{\beta} \, V_{B,\overline{B'} \to \beta} \ .
\label{DWBA}
\end{equation}
Here $T_{\alpha,\beta}$ is the $\pi\pi-K\bar K$ (coupled-channel)
reaction amplitude, $V_{B,\overline{B'} \to \beta}$ the
$B\overline{B'} \to \pi\pi,K\bar K$ transition Born amplitude and
$G_{\beta}$ the free ($\pi\pi$ or $K\bar K$) Green's function.
The first two quantities are the basic ingredients of the model.
These amplitudes have to be known in the so-called pseudophysical region,
i.e. for energies below the $B\anti{B'}$ threshold. While for $N\anti{N} \rightarrow
\pi\pi$ the corresponding amplitudes can be derived from empirical
information on $\pi N$ and $\pi\pi$ scattering via an analytic continuation,
this is not possible for the transitions $Y\anti{Y'} \rightarrow \pi\pi, K\bar K$.
Thus, a microscopic model for the $B\anti{B'} \rightarrow \pi\pi$,$K\anti{K}$
is a pre-requisite for the evaluation of the correlated $\pi\pi$ and $K\bar K$
exchange in the $YN$ and $YY$ channels. Such a model was constructed by
Reuber et al. in Ref. \cite{REUBER}. A main feature of this model
is the completely consistent treatment of its two components, namely
the $B\anti{B'} \rightarrow \pi\pi, K\anti{K}$ Born amplitudes and the
$\pi\pi$-$K\anti{K}$ correlations with respect to the $\sigma$- and $\rho$-channels.
Both components are derived in field theory from an ansatz for the hadronic
Lagrangians \cite{REUBER}. The considered contributions are briefly described
in the subsections below. The subsequent evaluation of the baryon-baryon interaction
via dispersion theory and unitarity is summarized in the next section.
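Although the actual computation is performed with the machinery described below, it may be useful to indicate schematically how Eq.~(\ref{DWBA}) is evaluated in practice: once the meson-meson amplitude $T_{\alpha\beta}$ and the Born terms are given on a momentum grid, the equation reduces to a quadrature. The following Python fragment is a purely illustrative sketch of this structure; the grid, the toy propagator and all names are our own choices and are not taken from the model code of Ref.~\cite{REUBER}.
\begin{verbatim}
# Schematic sketch: T_{BB'->alpha} = V_alpha + sum_beta T_{alpha,beta} G_beta V_beta
# evaluated on a momentum grid (toy setup, for illustration only).
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(48)
q = 1.0 + nodes          # map Gauss-Legendre nodes from [-1,1] to (0,2) (GeV)
w = weights

def greens(e_cm, m):
    # schematic two-meson propagator; real and nonsingular below threshold
    return 1.0 / (e_cm - 2.0 * np.sqrt(m**2 + q**2))

def transition_amplitude(V_alpha, V_betas, T_mm, e_cm, masses):
    # V_alpha:  Born term B Bbar' -> alpha on the grid (length-48 vector)
    # V_betas:  list of Born terms B Bbar' -> beta, one per channel beta
    # T_mm:     list of 48 x 48 matrices T_{alpha,beta} (meson-meson amplitude)
    T = V_alpha.copy()
    for V_b, T_ab, m in zip(V_betas, T_mm, masses):
        T += T_ab @ (w * q**2 * greens(e_cm, m) * V_b)   # quadrature sum
    return T
\end{verbatim}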
\subsection{The $\pi\pi - K\bar K$ amplitudes}
The dynamical model used for the $\pi\pi - K\bar K$ amplitudes is
derived within the meson exchange framework and involves the
$\pi\pi$ and $K\anti{K}$ coupled channels \cite{Lohse,Janssen,Schutz}.
The driving terms for the diagonal interactions consist of ($t$-channel) exchange
diagrams ($\rho$ and $\rho$, $\omega$, $\phi$, respectively) and ($s$-channel)
pole diagrams with $\epsilon \equiv f_0(1440)$, $\rho \equiv \rho (770)$ and
$f_2 \equiv f_2(1274)$ intermediate states.
The coupling $\pi\pi \rightarrow K\anti{K}$ is provided by $K^*(892)$
exchange. The corresponding diagrams are shown in Fig. \ref{pipi}.
The potentials derived from those diagrams are iterated in a coupled-channel
Lippmann-Schwinger-type scattering equation. The free parameters of the $\pi\pi - K\bar K$
model were adjusted to the empirical $\pi\pi$ phase shifts and inelasticities.
For details on the model and a comparison of the resulting $\pi\pi$
phase shifts with experimental values we refer the reader to Refs.
\cite{Janssen,Schutz}.
\subsection{The $B\anti{B} \rightarrow 2\pi,K\bar K$ Born amplitudes}
The Born amplitudes for the transition $B\anti{B} \rightarrow \alpha$ with
$\alpha = \pi\pi,K\bar K$ are built up from an ($s$-channel)
$\rho$-pole diagram and all possible diagrams involving the exchange of baryons out of the
$J^P = {1\over 2}^+$ octet or the $J^P = {3\over 2}^+$ decuplet \cite{REUBER}.
For illustration we show in Fig. \ref{Born} those diagrams that contribute to
the transition amplitude for $\Sigma\anti{\Sigma} \rightarrow 2\pi,K\bar K$.
In the construction of the model the number of free parameters has been kept to
a minimum. Specifically,
the coupling constants at the various vertices involving the pseudoscalar mesons
were fixed by SU(6) symmetry relations. As far as the $\rho$-pole diagram is concerned
the (bare) coupling constants and form factors at the $\pi\pi \rho^{(0)}$ and
$K\bar K \rho^{(0)}$ vertices were already determined in the model for the
$\pi\pi-K\bar K$ interaction \cite{Schutz} and were taken over from there. Then,
assuming that the bare $\rho$-meson couples universally to the isospin current all
vector couplings $g^{(0)}_{BB'\rho}$ to the baryonic vertices were fixed as well.
For the tensor couplings $f^{(0)}_{BB'\rho}$ again SU(6) symmetry relations were
applied.
The four remaining free parameters (the tensor coupling constant $f^{(0)}_{NN\rho}$,
the parameter $x_\Delta$ characterizing the strength of the off-shell
part in the $\Delta N\pi$ Lagrangian\footnote{In a more modern language, it
can be shown that such off-shell parameters really correspond to low-energy
constants of non-propagating contact interactions, see e.g. \cite{BKMdelta}},
and form-factor parameters for the
exchanged baryons for the octet and decuplet, respectively \cite{REUBER})
were fixed by adjusting the model predictions to the
quasi-empirical information on the amplitudes $N\bar N \to \pi\pi$ obtained by
H\"ohler at el.~\cite{Hoehler2} by analytically continuing the $\pi N$
and $\pi \pi$ scattering data.
Once this is done the model can be used to generate the amplitudes for
any $B\bar B' \to \pi\pi, K\bar K$ channel. Though it should have become
clear already from the discussion above, we want to emphasize again that the
extrapolation of the model for the $N\bar N \to \pi\pi$ amplitudes to other
channels depends crucially on the assumption of SU(3) symmetry
for the pseudo-scalar sector and it is based also on the hope that the
correct description of the quasiempirical $N\bar N \to \pi\pi$ amplitudes
guarantees a reasonable description of the other baryon-antibaryon channels,
for which no empirical information is available.
\section{Potential from correlated $\pi\pi$ and $K\anti{K}$ exchange}
Assuming analyticity for the amplitudes dispersion relations
can be formulated for the baryon-baryon amplitudes, which connect
physical amplitudes in the $s$-channel with singularities and
discontinuities of these amplitudes in the pseudophysical region of
the $t$-channel processes for the $J^P = 0^+$ ($\sigma$) and $1^-$ ($\rho$)
channel:
\begin{equation}
V^{(0^+,1^-)}_{B_1,B_2 \to B_1',B_2'}(t) \propto \int_{4m^2_\pi}^\infty
dt'
{ {\rm Im} V^{(0^+,1^-)}_{B_1,\overline{B_1'} \to \overline{B_2},B_2'}(t') \over t'-t}, \ \ t < 0 .
\label{dispersion}
\end{equation}
Via unitarity relations the singularity structure of the baryon-baryon
amplitudes for $\pi\pi$ and $K\anti{K}$ exchange is fixed and can be
written in terms of products of the
$B\anti{B'}\to\pi\pi,\,K\anti{K}$ amplitudes:
\begin{equation}
{\rm Im} V^{(0^+,1^-)}_{B_1,\overline{B_1'} \to \overline{B_2},B_2'}(t') \propto
\sum_{\alpha = \pi\pi, K\bar K} T^{*,(0^+,1^-)}_{B_1,\overline{B_1'} \to \alpha}
\, T^{(0^+,1^-)}_{\overline{B_2},B_2' \to \alpha}.
\label{unitarity}
\end{equation}
Thus,
from the $B\anti{B'} \rightarrow 2\pi $ helicity amplitudes the
spectral functions can be calculated
\begin{equation}
\rho^{(0^+,1^-)}_{B_1,B_2 \to B_1',B_2'}(t') \propto
\sum_{\alpha = \pi\pi, K\bar K} T^{*,(0^+,1^-)}_{B_1,\bar{B_1'} \to \alpha}
\, T^{(0^+,1^-)}_{\bar{B_2},B_2' \to \alpha}
\label{spectral}
\end{equation}
which are then inserted into dispersion integrals to
obtain the (on-shell) baryon-baryon interaction:
\begin{equation}
V^{(0^+,1^-)}_{B_1,B_2 \to B_1',B_2'}(t) \propto \int_{4m^2_\pi}^\infty
dt'
{\rho^{(0^+,1^-)}_{B_1,B_2 \to B_1',B_2'}(t') \over t'-t}, \ \ t < 0 .
\label{potential}
\end{equation}
The underlying formalism is quite involved and has been outlined
in detail already in Ref.~\cite{REUBER}. Thus, we refrain from
repeating it here. Rather we want to provide only some more
general information.
Since the dispersion-theoretical evaluation is restricted to the
contribution of (correlated) $\pi\pi$ and $K\anti{K}$ exchange to the
baryon-baryon amplitudes only those singularities are taken into
account which are generated by $\pi\pi$ and $K\anti{K}$ intermediate
states, namely the discontinuities due to the
$\pi\pi$ and $K\anti{K}$ unitarity cut (the so-called right-hand cut).
The left-hand cuts, which are due to unitarity constraints
for the $u$-channel reaction, can be neglected in the baryon-baryon
channels considered here, since they start at large, negative
$t$-values (from which they extend to $-\infty$) and are therefore far
away from the physical region relevant for low-energy $s$-channel
processes.
The $B\anti{B'}\to\alpha$ amplitudes, which enter
in Eq.~(\ref{unitarity}), are derived from a microscopic model which is
based on the hadron-exchange picture, cf. Sect. II.
Of course, this model has a limited range of validity: for energies
far beyond $t'_{max}\approx 100\,m_\pi^2$ it cannot provide reliable
results.
The dispersion integral for the invariant amplitudes extending in
principle along the whole $\pi\pi$ right-hand cut has therefore to be
limited to an upper bound, $t'_{max}$, which has been put to
$t'_{max}$ = 120~$m_\pi^2$ in Ref. \cite{REUBER}.
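To give the reader a concrete feeling for this truncated dispersion integral, we note that Eq.~(\ref{potential}) amounts numerically to an elementary quadrature. The following toy Python fragment illustrates this; the spectral shape and all parameter values are invented for illustration and are in no way those of Ref.~\cite{REUBER}.
\begin{verbatim}
# Toy evaluation of V(t) ~ int_{4 m_pi^2}^{t'_max} rho(t')/(t'-t) dt' for t < 0
import numpy as np

m_pi = 0.138                                   # pion mass in GeV
t_lo, t_hi = 4.0 * m_pi**2, 120.0 * m_pi**2    # truncation as quoted in the text

def rho_toy(tp, m_sig=0.55, width=0.12):
    # invented scalar-isoscalar spectral shape, peaked near m_sigma^2
    return np.exp(-((tp - m_sig**2) / width) ** 2)

def V_dispersive(t, n=2000):
    tp = np.linspace(t_lo, t_hi, n)
    return np.trapz(rho_toy(tp) / (tp - t), tp)

print(V_dispersive(-0.1))    # strength at the spacelike point t = -0.1 GeV^2
\end{verbatim}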
The spectral function (\ref{spectral}) for the ($0^+$) $\sigma$-channel
has only one component but the one for the ($1^-$) $\rho$-channel
consists of four linearly independent components, which reflects
the more complicated spin structure of this channel.
Finally,
we should note that the helicity amplitudes obtained according to
Fig.~\ref{fig5} still generate the uncorrelated (first diagram on
the r.h.s. of Fig.~\ref{fig5}), as well as the correlated pieces
(second and third diagrams).
Thus, in order to obtain the contribution of the truly correlated
$\pi\pi$ and $K\anti{K}$ exchange one must eliminate the former from
the spectral function.
This is done by calculating the spectral function generated by
the Born term and subtracting it from the total spectral function:
\begin{equation}
\rho^{(0^+,1^-)} \longrightarrow \rho^{(0^+,1^-)} -
\rho^{(0^+,1^-)}_{\rm Born} .
\end{equation}
In practice this means that, e.g., for the full Bonn $NN$ model
contributions involving spin-1/2 as well as spin-3/2 baryons
have to be subtracted since corresponding contributions are already
treated explicitly in the $s$-channel in this model, namely via box
diagrams with intermediate $\Delta$-states as shown in
Fig.~\ref{fig1}(a). On the other hand only
uncorrelated contributions involving spin-1/2 baryons are to be
subtracted from the discontinuities of the invariant baryon-baryon
amplitudes in order to avoid double counting if a simple
OBE-model is used in the $s$-channel. This is the relevant procedure
for the $YN$ model that will be presented in the next section.
Note that the spectral functions characterize both the strength
and range of the interaction.
Clearly, for sharp mass exchanges the spectral function becomes
a $\delta$-function at the appropriate mass.
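As an elementary cross-check of this statement (our own remark): inserting a sharp-mass spectral function $\rho^{(0^+)}(t')\propto\delta(t'-m_\sigma^2)$ into Eq.~(\ref{potential}) yields
\begin{equation*}
V^{(0^+)}(t)\ \propto\ \int_{4m_\pi^2}^{\infty}\frac{\delta(t'-m_\sigma^2)}{t'-t}\,dt'\ =\ \frac{1}{m_\sigma^2-t}\ ,
\end{equation*}
which is precisely the $t$-dependence of the OBE expression (\ref{formd}) below.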
For convenience the authors of Ref. \cite{REUBER} have presented
their results in terms of effective coupling strengths, by
parameterizing the correlated processes
by (sharp mass) $\sigma$ and $\rho$ exchanges.
The interaction potential resulting from the exchange of a
$\sigma$ meson with mass $m_\sigma$ between two $J^P=1/2^+$
baryons $A$ and $B$ has the structure:
\begin{equation}
V^{\sigma}_{A,B \to A,B}(t) \ = \ g_{AA\sigma} g_{BB\sigma}
{F^2_\sigma (t) \over t - m^2_\sigma} ,
\label{formd}
\end{equation}
where a form factor $F_\sigma(t)$ is applied at each vertex,
taking into account the fact that the exchanged $\sigma$ meson is
not on its mass shell.
This form factor is parameterized in the conventional monopole form,
\begin{equation}
F_\sigma (t) = {\Lambda ^2_\sigma - m^2_\sigma \over
\Lambda ^2_\sigma - t} \ ,
\label{form}
\end{equation}
with a cutoff mass $\Lambda_\sigma$ assumed to be the same
for both vertices.
The correlated potential as given in Eq.~(\ref{dispersion}) can now be
parameterized in terms of $t$-dependent strength functions
$G_{B_1',B_2' \to B_1,B_2}(t)$, so that for the $\sigma$ case:
\begin{equation}
V^{(0^+)}_{A,B \to A,B}(t) =
G^{\sigma}_{AB \to AB}(t) F^2_\sigma(t) {1 \over t - m^2_\sigma}.
\label{sigma}
\end{equation}
The effective coupling constants are then defined as:
\begin{equation}
g_{AA\sigma}g_{BB\sigma} \quad\longrightarrow \quad G_{AB\to
AB}^\sigma (t)= {(t-m_\sigma^2)\over\pi F^2_\sigma(t)}
\int_{4m_\pi^2}^{\infty} {\rho^{(0^+)}_{AB \to AB}(t') \over t'-t} dt' .
\label{effccsig}
\end{equation}
Similar relations can be also derived for the correlated exchange
in the isovector-vector channel \cite{REUBER}, which in this case
will involve vector as well as tensor coupling pieces.
It should be stressed that, so far, this parameterization does not involve
any approximations as long as the full $t$-dependence of the effective
coupling strengths is taken into account.
The parameters of the $\sigma$ and $\rho$ exchange have been chosen to have
the same values in all particle channels.
The masses $m_\sigma$ and $m_\rho$ of the exchanged particles have
been set to the values used in the Bonn-J\"ulich models of the
$NN$~\cite{MHE} and $YN$~\cite{Holz} interactions,\
$m_\sigma=550$ MeV, $m_\rho=770$ MeV.
The cutoff masses $\Lambda_{\sigma}$ and $\Lambda_{\rho}$ have been
chosen so that the coupling strengths in the $S=0, -1$ baryon-baryon
channels vary only weakly with $t$.
The resulting values ($\Lambda_\sigma=2.8$ GeV, $\Lambda_\rho=2.5$ GeV)
are quite large compared to the values of the phenomenological
parameterizations used in Refs.~\cite{Holz,MHE}, and thus represent
rather hard form factors.
Note that in the OBE framework the contribution of a genuine
(SU(3)) $\sigma$ meson to the three reactions $NN\rightarrow NN$,
$YN\rightarrow YN$, $YY \rightarrow YY$ is determined by two parameters
(coupling constants), namely $g_{NN\sigma}$
and $g_{YY\sigma}$, whereas the correlated exchange is
characterized by three independent strength functions
($G_{NN\to NN}$, $G_{YN\to YN}$, $G_{YY\to YY}$) so that vertex
coupling constants cannot be determined uniquely. This implies directly
that the strength parameters cannot fulfill SU(3) relations.
In the physical region the strength of the contributions is to a large
extent governed by the value of $G$ at $t=0$.
Those values for the various channels were tabulated in Ref. \cite{REUBER}
(cf. Tables 5-7) for the case of the full model calculation
and also
when uncorrelated contributions involving spin-1/2 baryons only
are subtracted from the spectral function of the invariant baryon-baryon
amplitudes. The latter are the proper values to be used for constructing
a $YN$ model based on simple OBE-exchange diagrams.
\begin{table}[h]
\begin{tabular}{|rcccccccc|}
\hline
&\multicolumn{8}{c|}{$G_{YN\to Y'N}/4\pi$} \\
\hline
\hline
&\multicolumn{4}{c}{$\Lambda N$}&\multicolumn{4}{c|}{$\Sigma N$}\\
\hline
\mbox{$\sigma$ channel} &\multicolumn{4}{c}{3.52}&\multicolumn{4}{c|}
{2.92}\\
\hline
\hline
&\multicolumn{4}{c}{$\Sigma N$}&\multicolumn{4}{c|}{$\Lambda N \to \Sigma N$}\\
\hline
&VV & VT & TV & TT & VV & VT & TV & TT \\
\hline
\mbox{$\rho$ channel} & \ 1.26 \ & \ 1.24 \ & \ 8.09 \ & \ 8.07 \ &
\ $-$0.43 \ & \ 3.72 & \ $-$1.32 \ & \ 21.00 \\
\hline
\end{tabular}
\caption{Effective $\sigma$ and $\rho$ coupling strengths $G_{YN\to Y'N}(t=0)$
for correlated $\pi\pi$ and $K\anti{K}$ exchange in the various
nucleon-hyperon channels. $VV$, $VT$, etc. stand for the vector-vector,
vector-tensor, etc. combinations of the $\rho$ coupling, cf.
Ref.~\cite{REUBER}.}
\label{coup0}
\end{table}
In principle,
the average size of the effective coupling strengths is only an
approximate measure of the strength of correlated $\pi\pi$ and
$K\anti{K}$ exchange in the various particle channels.
The precise energy dependence of the correlated exchange as well
as its relative strength in the different partial waves of the
$s$-channel reaction is determined by the spectrum of exchanged
invariant masses, or spectral functions, leading to a different
$t$-dependence of the effective coupling strengths.
This was demonstrated in Ref. \cite{Melni1} where the on-shell
$NN$, $\Lambda N$ and $\Sigma N$ potentials in spin-singlet states
with angular momentum $L=0, 2$ and 4, generated
directly by the scalar-isoscalar part of the correlated $\pi\pi$ and
$K\anti{K}$ exchange, were compared to the corresponding
results based on a $\sigma$ exchange with sharp mass.
It could be seen that the correlated $2\pi$ exchange is significantly stronger
in high partial waves because
the $\sigma$ exchange, which corresponds to a spectral function proportional
to $\delta(t'-m^2_\sigma)$, does not contain the long-range part of the
correlated processes. Thus, parameterizing the results derived from
the microscopic model by $\sigma$ exchange with a sharp mass, but using
the effective coupling strength $G^\sigma_{NN\to NN}$ at $t=0$ one can
obtain rough agreement with the exact result in the $S$ waves, say,
but usually underestimates the magnitude considerably in the high partial
waves. Obviously the replacement of correlated $\pi\pi$ and $K\anti{K}$
exchanges by an exchange of a sharp mass $\sigma$ meson with a
$t$-independent coupling cannot provide a simultaneous description
of both low and high partial waves.
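The difference can be made explicit with a small numerical toy model -- a
sketch for illustration only, not part of the microscopic calculation of
Ref. \cite{Melni1}: a sharp-mass propagator is compared with a dispersion
integral over a broad (purely hypothetical) spectral shape, with both
normalized to the same strength at $t=0$. The different $t$-dependence of
the two parameterizations is then evident.
\begin{verbatim}
import numpy as np

# Toy comparison: sharp-mass sigma propagator versus a dispersion integral
# V(t) = int dt' rho(t')/(t' - t) over a broad, purely illustrative spectral
# shape; both are normalized to the same value at t = 0 (masses in GeV).
m_pi, m_sigma = 0.138, 0.550
tp = np.linspace(4*m_pi**2 + 1e-4, 4.0, 4000)             # t' above the 2pi cut
rho = np.exp(-(np.sqrt(tp) - m_sigma)**2/(2*0.25**2))     # toy spectral shape

def trap(y, x):                                           # trapezoidal rule
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

def V_broad(t):
    return trap(rho/(tp - t), tp)

def V_sharp(t):
    return 1.0/(t - m_sigma**2)

norm = V_sharp(0.0)/V_broad(0.0)                          # equal strength at t = 0
for t in [0.0, -0.5, -1.0, -2.0]:
    print(f"t = {t:5.2f}: sharp = {V_sharp(t):7.3f}  broad = {norm*V_broad(t):7.3f}")
\end{verbatim}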
These features are important for investigations of the $NN$ systems where
the phase shifts are known quantitatively even for rather high partial waves.
In this case the results of the correlated exchange should be used
directly \cite{Kim}. However, for the $\Lambda N$ and $\Sigma N$ systems
only scattering observables are available, and those (total and differential cross
sections) are primarily sensitive to $S$- and $P$-wave contributions.
Thus, here it is reasonable to simplify the calculation and use only an
effective parametrization of the results derived from
the microscopic model in terms of a $\sigma$ and $\rho$ exchange with a sharp
mass. Specifically, combining Eqs. (\ref{formd}) and (\ref{sigma}) we use the
expression
\begin{equation}
V^{(0^+)}_{A,B \to A,B}(t) =
G^{\sigma}_{AB \to AB} \tilde F_{\sigma}^2 (t) {1 \over t - m^2_\sigma} ,
\label{corrpot}
\end{equation}
with
\begin{equation}
\tilde F_{\sigma} (t) = {\Lambda ^2_{\sigma} \over
\Lambda ^2_{\sigma}- t} \
\label{form1}
\end{equation}
and a similar one for the $\rho$ exchange contribution.
The effective coupling strength $G^\sigma_{YN\to Y'N}$ (and
$^{ij}G^\rho_{YN\to Y'N}$) is deduced via Eq. (\ref{effccsig})
(and via a similar one for the $\rho$ channel, cf. Ref. \cite{REUBER}) for the
form factor (\ref{form1}) and adjusted to the value at $t=0$.
The different prescription for the vertex form factor as compared to Ref. \cite{REUBER},
i.e.\ to Eq. (\ref{form}), is adopted here because it guarantees that the
on-shell behaviour of the potential (which is fully determined by the
dispersion integral) is not modified strongly as long as the energy is not too high.
At the same time smaller cutoff masses than those mentioned above (and employed in
\cite{REUBER}) can be used to ensure sufficient convergence when the potential
(\ref{corrpot}) is iterated in the scattering equation. The concrete values
used for the cutoff masses are $\Lambda_\sigma$ = 2.5 (1.6) GeV for the
$\Lambda N$ ($\Sigma N$) channels and $\Lambda_\rho$ = 1.25 (1.8) GeV for the
$\Lambda N \to \Sigma N$ transition ($\Sigma N$ channel).
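To make the parameterization concrete, the following minimal Python sketch
(ours; it is not the actual model code) evaluates the effective
$\sigma$-exchange potential of Eqs. (\ref{corrpot}) and (\ref{form1}) in the
$\Lambda N$ channel, using the coupling strength of Table \ref{coup0} and the
cutoff mass quoted above.
\begin{verbatim}
import numpy as np

# Effective sigma exchange, Eq. (corrpot), in the Lambda N channel:
# G/(4 pi) = 3.52 (Table coup0), m_sigma = 0.55 GeV, Lambda_sigma = 2.5 GeV.
G = 4.0*np.pi*3.52
m_sigma, Lam = 0.550, 2.5

def F_tilde(t):                       # monopole form factor, Eq. (form1)
    return Lam**2/(Lam**2 - t)

def V_sigma(t):                       # attractive for spacelike t
    return G*F_tilde(t)**2/(t - m_sigma**2)

for t in [0.0, -0.2, -0.5, -1.0]:     # momentum transfers in GeV^2
    print(f"t = {t:5.2f}  ->  V = {V_sigma(t):8.2f} GeV^-2")
\end{verbatim}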
The effective coupling strengths employed in our new $YN$ model are compiled in
Table \ref{coup0}. Though these values differ slightly from those given
in Tables 5-7 of Ref.~\cite{REUBER}, due to the different choice
of the form factor, we would like to emphasize that the strengths
of the interactions at $t=0$ are the same in both cases and
coincide with the one derived from the microscopic model of
$\pi\pi$ and $K\bar K$ correlations.
In order to demonstrate that, we show in Fig.~\ref{fig:5_6_4} the
corresponding on-shell potential matrix elements
for the $^1S_0$ partial wave of the $\Lambda N$ and $\Sigma N$ channels.
One can see that in case of the $\Lambda N$ system the result generated by
the scalar-isoscalar part of correlated $\pi\pi$ and $K\anti{K}$ exchange
is similar to the one of the $\sigma$ exchange used in the
J\"ulich $YN$ model~A. In fact, correlated $\pi\pi$ exchange is
marginally stronger. It is also obvious that the parameterization of
the interaction generated by correlated $\pi\pi$ exchange by
an effective $\sigma$ exchange, c.f. the dotted line, works rather well.
{}From the corresponding results for the on-shell $\Sigma N$ potential
one can see that here the $\sigma$ exchange used in the J\"ulich $YN$
model A is clearly much stronger than what one obtains from
the correlated $\pi\pi$ and $K\anti{K}$ exchange. Once again the
parameterization by an effective $\sigma$ exchange provides an
excellent representation of the interaction strength.
\begin{table}[ht]
\caption{Vertex coupling constants used in the new $YN$ model that are
constrained by SU(6) symmetry and corresponding cutoff masses. The
assumed SU(6) symmetry fixes the $F/(F+D)$ ratios to
$\alpha_{ps}$=2/5, $\alpha_{v}^e$=1, $\alpha_{v}^m$=2/5 \cite{Reu}.
}
\label{coup1}
\begin{center}
\begin{tabular}{|c|ccc|}
\hline
Vertex & $g_{BB'm}/\sqrt{4\pi}$ & $f_{BB'm}/\sqrt{4\pi}$ & $\Lambda_{BB'm}$ (GeV) \\
\hline
$NN\pi$ & 3.795 & & 1.3 \\
$\Lambda\Sigma\pi$ & 2.629 & & 1.3 \\
$\Sigma\Sigma\pi$ & 3.036 & & 1.3 \\
& & & \\
$N\Lambda K$ & $-$3.944 & & 1.2 \\
$N\Sigma K$ & 0.759 & & 1.2 \\
& & & \\
$NN\omega$ & 3.317 & & 1.7 \\
$\Lambda\Lambda\omega$ & 2.211 & $-$2.796 & 1.4 \\
$\Sigma\Sigma\omega$ & 2.211 & 2.796 & 1.7 \\
& & & \\
$N\Lambda K^*$ & $-$1.588 & $-$5.175 & 1.2 \\
$N\Sigma K^*$ & $-$0.917 & 2.219 & 1.4 \\
\hline
\end{tabular} \end{center}
\end{table}
\section{Results and discussion}
\subsection{Coupling constants}
In the present $YN$ model we take into account exchange diagrams
involving the well-established lowest lying pseudoscalar and vector
meson SU(3) octets. Following the philosophy of the original J\"ulich
$YN$ potential \cite{Holz} the coupling constants in the pseudoscalar
sector are fixed by strict SU(6) symmetry. In any case, this is
also required for consistency with the model of correlated $\pi\pi$
and $K\bar K$ exchange. The cutoff masses of the
form factors belonging to the $NN$ vertices are taken over from the
full Bonn $NN$ potential. The cutoff masses at the strange vertices
are considered as open parameters though, in practice, their values
are kept as close as possible to those found for the $NN$ vertices,
cf. Table \ref{coup1}. Note that like in \cite{Holz} and in line with
the arguments brought forth in Ref. \cite{Reu} we neglect again
the contribution from $\eta$ meson exchange. Anyhow, in the full
Bonn $NN$ model the $\eta NN$ coupling constant was set to zero.
In addition phenomenological analyses \cite{Grein} and also
microscopic calculations, like those based on the topological chiral
soliton model (extended Skyrme model with vector mesons)
\cite{etaNN}, indicate that this coupling constant should be small. Thus,
the $\eta$ contribution would be completely unimportant anyway,
given the pseudoscalar nature of its coupling. For the same reason
the $\eta'$ contribution is likewise not considered.
In the vector meson sector we depart from the strategy of the original
J\"ulich $YN$ potential. As already mentioned above, first and most
importantly the contribution of the $\rho$ meson is no longer seen
as resulting from the exchange of a genuine particle that belongs to
the SU(3) vector meson octet but is identified with the strength
generated by a microscopic model of correlated $\pi\pi$ and $K\bar{K}$
in the vector-isovector channel. The effective coupling constants for
$\rho$ exchange in
the various $YN$ and $YY$ channels have been extracted and thoroughly
analysed in Ref. \cite{REUBER}. Thereby it was found that the result from
correlated exchange deviates significantly from those implied by SU(3)
symmetry -- even though SU(3) symmetry was imposed for the bare $\rho NN$
and $\rho YY$ couplings, cf. \cite{REUBER}.
In view of this it is questionable whether one should invoke SU(3) symmetry
for fixing the other coupling strengths of the vector-meson octet, i.e.
those of the $K^*$ meson and of the coupling of the $\omega$ meson to the hyperons.
But in absence of any better alternative we still follow this prescription
for the present model. As reference values we take here the $NN\rho$ coupling
constants of the full Bonn $NN$ potential \cite{MHE}, which were already used
for the old $YN$ model \cite{Holz,Reu}.
However, as far as the $YY\omega$ coupling constants are concerned now
we take into account the insight gained in Ref. \cite{Janssen1} that
the $\omega$ exchange in the full Bonn $NN$ potential represents not only the
genuine SU(3) $\omega$ but is also an effective parametrization of additional
short-range contributions from correlated $\pi-\rho$ exchange, say, that are not
included explicitly in that model. Therefore, in the Bonn $NN$ model
the required $NN\omega$ coupling constant is indeed much larger than what
follows from the SU(3) relations and this large coupling constant formed also
the basis for fixing the $YY\omega$ coupling constants of the old J\"ulich
$YN$ model \cite{Holz,Reu}, cf. the discussion in Sect. 2.2 of Ref. \cite{Reu}.
In the present model we adopt the smaller value
found in Ref. \cite{Janssen1} which is very close to the SU(3) value.
This is in line with results obtained from a dispersion-theoretical analysis
of the nucleon electromagnetic form factors - the inclusion of the $\pi-\rho$
continuum sizeably reduces the $\omega NN$ coupling, compare the values
found in \cite{MMD} with the ones in \cite{MMSvO}.
Assuming furthermore that the $\rho$ meson couples universally to the
isospin current -- which fixes the $F/(F+D)$ ratio $\alpha^e_V$ to 1 --
and ideal mixing for the $\phi$ and $\omega$ mesons then yields the
following relation for the $\omega$ coupling constants:
\begin{eqnarray}
g_{\Lambda\Lambda\omega}=g_{\Sigma\Sigma\omega} = {2\over 3} g_{NN\omega}, \ \ \
f_{\Lambda\Lambda\omega}={5\over 6} f_{NN\omega}
-{1\over 2} f_{NN\rho}, \ \ \
f_{\Sigma\Sigma\omega}={1\over 2} f_{NN\omega} +{1\over 2} f_{NN\rho}
\end{eqnarray}
For $f_{NN\omega}$ and $f_{NN\rho}$ we take over the values of
the full Bonn $NN$ potential. Since $f_{NN\omega}=0$ \cite{MHE}
it follows that $f_{\Lambda\Lambda\omega} = - f_{\Sigma\Sigma\omega}$.
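These relations can be checked in a few lines against the entries of
Table \ref{coup1}; in the sketch below (ours) the value of $f_{NN\rho}$ is
inferred from the tabulated $f_{\Sigma\Sigma\omega}$ rather than quoted
independently from Ref. \cite{MHE}.
\begin{verbatim}
# Consistency check of Table coup1 against the omega relations above;
# all couplings are divided by sqrt(4 pi).
g_NNomega = 3.317
f_NNomega = 0.0               # full Bonn NN potential, see text
f_NNrho   = 2.0*2.796         # inferred from f_SSomega = f_NNrho/2

g_YYomega = (2.0/3.0)*g_NNomega
f_LLomega = (5.0/6.0)*f_NNomega - 0.5*f_NNrho
f_SSomega = 0.5*f_NNomega + 0.5*f_NNrho

print(f"g_LLomega = g_SSomega = {g_YYomega:.3f}   (Table: 2.211)")
print(f"f_LLomega = {f_LLomega:+.3f}   (Table: -2.796)")
print(f"f_SSomega = {f_SSomega:+.3f}   (Table: +2.796)")
\end{verbatim}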
The short-range contributions from correlated $\pi-\rho$ exchange were
parametrized by an effective $\omega '$ exchange in Ref. \cite{Janssen1}
with a mass of $m_{\omega '}$ = 1120 MeV. We follow here the same strategy
but treat the coupling constants of the $\omega '$ to the strange baryons
as free parameters to be determined in a fit to the $YN$ data.
Like the $\rho$ also the contribution of the $\sigma$ meson is
computed from a microscopic model of correlated $\pi\pi$ and $K\bar{K}$
-- now from the scalar-isoscalar channel. The effective coupling constants
for $\sigma$ exchange in the various $YN$ channels have been discussed
in the previous section.
\begin{table}[ht]
\caption{Parameters (effective coupling strengths $G$, cutoff masses $\Lambda$)
used in the new $YN$ model for the effective $\omega '$, $a_0$ and $\kappa$ exchanges.
In the case of $\omega '$ only the vector-vector component is considered.
Cutoff masses in parentheses indicate that here a product of form factors of
monopole type (\ref{form}) is utilized instead of the standard dipole form,
cf. Eq. (\ref{formd}).
Numbers in square brackets denote corresponding values of the model where
$\kappa$ exchange is replaced by a contact term, cf. text, when different.
}
\label{coup2}
\begin{center}
\begin{tabular}{|ccccc|}
\hline
channel & exchange & mass (GeV) & $G_{BB'\to BB'}/(4\pi)$ & $\Lambda_{BB'm}$ (GeV) \\
\hline
$\Lambda N$ & $\omega '$ & 1.12 & 5.0 & 1.65 \\
& $\kappa$ & 1.0 & 6.0 [1.8] & 1.45 [1.5] \\
& & & & \\
$\Lambda N \to \Sigma N$
& $a_0$ & 0.983 & 2.0 & 2.0 (1.8) [2.0 (2.1)] \\
& $\kappa$ & 1.0 & 6.0 [1.9] & 1.45 (1.65) [1.5] \\
& & & & \\
$\Sigma N$ & $\omega '$ & 1.12 & 10.75 & 1.35 \\
& $a_0$ & 0.983 & 5.63 & 2.0 (1.45) \\
& $\kappa$ & 1.0 & 6.0 [2.4] & 1.65 [1.5] \\
\hline
\end{tabular} \end{center}
\end{table}
Besides replacing the conventional $\sigma$ and $\rho$ exchanges by
correlated $\pi\pi$ and $K\bar{K}$ exchange, there are in addition
some other new ingredients in the present $YN$ model.
First of all, we now take into account contributions from $a_0(980)$
exchange.
The $a_0$ meson is present in the original Bonn $NN$ potential
\cite{MHE}, and for consistency should also be included in the $YN$
model.
Secondly, we consider the exchange of a strange scalar meson, the
$\kappa$, with mass $\sim 1000$~MeV.
Let us emphasize, however, that like in case of the $\sigma$ meson
these particles are not viewed as being members of a scalar meson SU(3)
multiplet, but rather as representations
of strong meson-meson correlations in the scalar--isovector
($\pi\eta$--$K\bar K$) \cite{Janssen} and scalar--isospin-1/2 ($\pi K$)
channels \cite{Lohse}, respectively.
In principle, their contributions can also be evaluated along the lines
of Ref.~\cite{REUBER}, however, for simplicity in the present model they
are effectively parameterized by one-boson-exchange diagrams with the
appropriate quantum numbers assuming the coupling constants to be free
parameters. The parameters specifying those ingredients are summarized
in Table \ref{coup2}.
Thus we have the following scenario: The long- and intermediate-range part
of our new $YN$ interaction model is completely
determined by SU(6) constraints (for the pseudoscalar and to
some extent also for the vector mesons) and by correlated
$\pi\pi$ and $K\bar K$ exchange. The short-range part is viewed as
being also due to correlated meson-meson exchanges but in practice is
parametrized phenomenologically in terms of one-boson-exchange
contributions in specific spin-isospin channels. In particular,
no SU(3) relations are imposed on the short-range part. This
assumption is based on our observation that the contributions in the
$\rho$ exchange channel as they result from
correlated $\pi\pi$ and $K\bar{K}$ no longer fulfill
SU(3) relations, but it also acknowledges
the fact that at present there is no general agreement about which
states are the actual members of the lowest-lying scalar meson SU(3) multiplet.
A graphical representation of all meson-exchange contributions that are
included in the new $YN$ model is given in Fig. \ref{figyn}.
In recent investigations of the $NN$ interaction within the framework
of chiral perturbation theory \cite{Epelbaum}
only pionic degrees of freedom are taken into account and all short-range
physics is parametrized by contact terms. This is certainly also an
option that one should explore for the $YN$ system \cite{Korpa} in the future \cite{Henk}.
As a first step we consider here an alternative model where the
contributions of the $\kappa$(1000) meson - whose mass and even existence is
still under dispute \cite{Kappa} -- are substituted by a contact term.
In practice this means that we replace the product of the $\kappa$ coupling
constants and propagator, $G_{BB'\to BB'}/(m_\kappa^2-t)$,
by $G_{BB'\to BB'}/m_\kappa^2$ and readjust only the parameters
related to the $\kappa$ exchange (with one exception). Those parameters
can be found also in Table \ref{coup2}, in square brackets, for those
cases where they differ from the values of our regular model.
Results corresponding to the model with the contact term will also be
presented in the next section.
In the fitting procedure we only take into account data on total
cross sections (and energies near the corresponding thresholds)
for the channels
$\Lambda p$ \cite{Alex,Sechi,Kadyk}, $\Sigma^-p$ \cite{Eisele},
$\Sigma^-p \to \Lambda n$ \cite{Engel}, $\Sigma^-p \to \Sigma^0n$ \cite{Engel},
and $\Sigma^+p$ \cite{Eisele}.
Differential cross sections but also total cross sections at higher energies
\cite{Stephen,Kondo,Ahn} are therefore genuine predictions of our model.
As already mentioned above, the free parameters in our model consist of
the cut-off masses at the strange vertices and the coupling constants
of the $a_0$(980), $\kappa$(1000) and $\omega '$(1120) mesons.
When adjusting those parameters to the empirical data it turned
out that the results are not very sensitive to the cut-off masses in
the pseudo-scalar sector and we fixed them to be close to the cutoff mass
used at the $\pi NN$ vertex. There is also only a weak sensitivity to the
cut-off masses used for the correlated $\pi\pi$-$K\bar K$ contributions
in the $\sigma$ and $\rho$ channels. This is due to the chosen analytic
form of the form factors that practically does not change the strength of
the corresponding potentials as they result from the microscopic model
-- which is of course intended, cf. the discussion in Sect. III.
Besides the cut-off masses of the vector mesons we found that also
the parameters of the $a_0$(980) and $\kappa$(1000) mesons,
viewed here as effective parametrization of correlated $\pi\eta$ and
$\pi K$ exchange, have a sizeable influence. In fact, without the
contributions of the latter two mesons we would not have been able
to achieve a satisfactory description of the data. Note that those
two exchanges were not considered in the original J\"ulich model \cite{Holz}.
We should say that the values of the coupling strengths and cut-off masses for
those scalar mesons are strongly correlated and cannot be fixed independently
from a fit to the data. Thus, one should not attribute any physical
significance to the actual values of the coupling strengths or cut-off
masses that we found individually.
Finally we want to mention that the fit to the available $YN$ data did not
constrain the relative magnitude of the $^1S_0$ and $^3S_1$ partial waves
in the $\Lambda N$ system. Thus, as a further constraint, we required the
$^1S_0$ scattering length to be larger than the one for $^3S_1$ -- as
it seems to be necessary if one wants to achieve a bound
hypertriton \cite{Ueda}. A first application of the new $YN$ model in
three-body calculations confirmed that it yields indeed a bound
hypertriton state \cite{Nogga}.
\subsection{The scattering equation}
The original $YN$ model of the J\"ulich group was derived within the
framework of time ordered perturbation theory (TOPT) \cite{Holz}. In
this approach retardation effects from the meson-exchange diagrams
are retained (and those of baryon-exchange as well) and as
a consequence the interaction depends explicitly on the starting
energy. This is not convenient if one wants to apply the $YN$ model
in conventional few-body
\cite{Miya1,Miya2,Miya4,Akaishi,Nogga1,Nemura,Fujiwara3N,Hiyama}
or many-body \cite{Ramos,Tzeng,Fujii,Lenske} investigations.
Thus, in Ref. \cite{Reu} the
J\"ulich group presented energy-independent versions of their $YN$
model where the energy dependence was removed in such a way that
basically all other characteristics of the original model could be
kept. The detailed comparison of the TOPT model and its
energy-independent counterpart performed in Ref. \cite{Reu} made
clear that this goal was indeed achieved.
Since we are also interested in facilitating an application of our
new $YN$ model in future few- and many-body investigations we will
likewise present here an energy-independent interaction. This
implies that we do not use the (relativistic)
TOPT scattering equation of Ref. \cite{Holz} but instead solve
the nonrelativistic (coupled-channel) Lippmann-Schwinger equation
\begin{equation}
T_{i,j} = V_{i,j} + \sum_k V_{i,k} G_k T_{k,j}
\label{tmat}
\end{equation}
to obtain the scattering amplitude $T_{i,j}$. Here the indices
($i,j,k$) stand for the $\Lambda N$ and $\Sigma N$ channels
and the nonrelativistic Green's function $G_k$ is given
by
\begin{equation}
G_k = \left[ {q_k^2-{\bf q}'^2 \over {2\mu_k}} + i\varepsilon \right]^{-1} \ ,
\label{Green}
\end{equation}
where $\mu_k = M_YM_N/(M_Y+M_N)$ is the reduced mass and ${\bf q}'$ the
c.m. momentum in the intermediate $YN$ channel. $q_k = q_k(z)$ denotes the
on-shell momentum in the intermediate $YN$ state defined by
$z = \sqrt{M_Y^2 + q_k^2} + \sqrt{M_N^2 + q_k^2}$. The latter equation
guarantees that the $\Sigma N$ channel opens exactly at the physical
threshold. Note that $q_{\Sigma N}$ is imaginary for starting energies
below the $\Sigma N$ threshold ($z < M_\Sigma + M_N$).
Explicit expressions for the potential matrix elements $V_{i,j}$ for
the various exchange diagrams can be found in Ref. \cite{Holz}. The
dependence on the starting energy $z$ is removed via the prescriptions
given in Eq. (4.7) of Ref. \cite{Reu}.
Note that the potential matrix elements $V_{i,j}$ are derived by
assuming isospin symmetry. However, the Lippmann-Schwinger equation
(\ref{tmat}) is solved in particle space using the proper
physical masses of the baryons for the various $\Sigma N$ channels.
Furthermore, in the charged channels the Coulomb potential is taken
into account. Since we solve the Lippmann-Schwinger equation in
momentum space this is done by means of the Vincent-Phatak
method \cite{Holz,Phatak}.
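To illustrate the numerical treatment, the following self-contained Python
sketch (ours) solves the partial-wave Lippmann-Schwinger equation
(\ref{tmat}) by matrix inversion with the standard principal-value
subtraction in the Green's function (\ref{Green}). It is a single-channel
toy with an attractive Yukawa potential and illustrative parameters, not the
coupled-channel $YN$ interaction itself.
\begin{verbatim}
import numpy as np

# Single-channel S-wave toy: V(r) = lam*exp(-m*r)/r, units hbar = c = 1.
mu, lam, m = 2.41, -0.5, 2.0          # reduced mass, strength, range (fm^-1)

def V0(p, q):                         # S-wave momentum-space matrix element
    return lam/(2.0*np.pi*p*q)*np.log(((p + q)**2 + m**2)/((p - q)**2 + m**2))

def phase_shift(q0, n=64):
    x, w = np.polynomial.legendre.leggauss(n)
    k  = np.tan(np.pi/4.0*(1.0 + x))                  # map (-1,1) -> (0,inf)
    wk = np.pi/4.0*w/np.cos(np.pi/4.0*(1.0 + x))**2   # Jacobian weights
    ks = np.append(k, q0)                             # grid + on-shell point
    u = np.empty(n + 1)                               # principal-value weights
    u[:n] = 2.0*mu*wk*k**2/(q0**2 - k**2)
    u[n]  = -2.0*mu*q0**2*np.sum(wk/(q0**2 - k**2))   # subtraction term
    Vm = V0(ks[:, None], ks[None, :])
    K  = np.linalg.solve(np.eye(n + 1) - Vm*u[None, :], Vm[:, n])
    return np.degrees(np.arctan(-np.pi*mu*q0*K[n]))   # tan(delta) = -pi*mu*q0*K

for q0 in [0.1, 0.5, 1.0, 2.0]:
    print(f"q0 = {q0:4.2f} fm^-1 :  delta_0 = {phase_shift(q0):6.2f} deg")
\end{verbatim}
The coupled-channel case of Eq. (\ref{tmat}) proceeds in the same way, with
the momentum grid replicated for each channel and channel-dependent on-shell
momenta $q_k$ in the Green's function.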
\subsection{Hyperon-nucleon observables}
In Fig.~\ref{cross} we compare the integrated cross sections obtained from
the new $YN$ potential (solid curves) with the $YN \rightarrow Y'N$ scattering data.
Obviously, a good reproduction of the empirical data \cite{Alex,Sechi,Kadyk,Eisele,Engel}
is achieved. Also shown are results from the original J\"ulich $YN$ model~A
\cite{Holz} (dash-dotted curves).
The main qualitative differences between the two models appear in the
$\Lambda p \rightarrow \Lambda p$ channel, for which the J\"ulich model
\cite{Holz} (with standard $\sigma$ and $\rho$ exchange) predicts a broad
shoulder at $p_{lab} \approx$ 350 MeV/c.
This structure, which is not supported by the available experimental
evidence, is due to a bound state in the $^1S_0$ partial wave of the
$\Sigma N$ channel. It is not present in the new model anymore. (We should
say, however, that the new model has a bound state, too. But with a binding
energy of about 400 MeV below the $\Lambda N$ threshold it is located
completely outside of the physical region. One could speculate, of course,
that this bound state is a manifestation of the Pauli forbidden $(11)_s$
state at the quark level \cite{FujiwaraP}.)
Furthermore, the cusp structure at the opening of the $\Sigma N$
threshold is much less pronounced in the new model. In the old model
this structure was primarily caused by a large amplitude in the
tensor-coupled $^3S_1$--$^3D_1$ partial wave of the $\Lambda N$ --
$\Sigma N$ transition. This amplitude is now much smaller. As a
consequence also the transition cross section for
$\Sigma^- p \to \Lambda n$ is now somewhat smaller, though still in
line with the empirical information. In the $\Sigma^- p$ channel the
new model yields a stronger energy dependence of the reaction cross
section, as is favoured by
the available cross-section data. In the other two measured
reaction channels the agreement with the data is equally good, if not
better, for the new model.
Note that the $\Sigma^+p$ and $\Sigma^-p$ elastic cross
sections are not ``true'' total cross sections.
The cross sections that were measured are defined as~\cite{Eisele}
\begin{equation}
\sigma=\frac{2}{\cos\theta_{\rm max}-\cos\theta_{\rm min}}
\int_{\cos \theta_{\rm min}}^{\cos \theta_{\rm max}}
\frac{d\sigma(\theta)}{d\cos\theta}d\cos\theta,
\end{equation}
with typical values $-0.2$ to $-0.5$ for $\cos\theta_{\rm min}$ and
$0.3$ to $0.5$ for $\cos\theta_{\rm max}$. In order to stay as close
as possible to the plotted experimental data, the theoretical curves
in Figs.~\ref{diff}(c) and (d) have been calculated with
$\cos\theta_{\rm min}=-0.5$ and $\cos\theta_{\rm max}=0.5$.
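For orientation, a short sketch (ours) evaluates this angle-limited average
for a hypothetical differential cross section, expanded as a low-order
polynomial in $\cos\theta$ with purely illustrative coefficients.
\begin{verbatim}
# Angle-limited cross section sigma = 2/(cmax - cmin) * int dc dsigma/dc,
# for dsigma/dcos(theta) = a0 + a1*c + a2*c^2 (coefficients in mb, made up).
a = [12.0, 3.0, 5.0]
cmin, cmax = -0.5, 0.5        # limits used for the curves in Fig. (diff)

integral = sum(ai*(cmax**(i+1) - cmin**(i+1))/(i+1) for i, ai in enumerate(a))
sigma = 2.0/(cmax - cmin)*integral
print(f"angle-limited sigma = {sigma:.2f} mb")
\end{verbatim}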
Cross sections at somewhat higher energies are presented
in Fig.~\ref{cross2}. Note that the data shown in this figure have
not been taken into account in the fitting process and therefore the
results are genuine predictions of the model.
Also here the agreement with the data is satisfactory.
The differential $YN$ scattering cross sections presented in
Fig.~\ref{diff} are likewise genuine predictions of our $YN$ model.
We want to point out that the empirical information in those
figures comes from data
taken from a finite momentum interval, e.g. 160 $ < p_{Lab} < $ 180 MeV/c
for the $\Sigma^+p$ channel \cite{Alex}, whereas the
calculations were performed for the central value of that momentum
interval as it is given in the various plots.
Note also that the original $YN$ model of the J\"ulich group was
fitted to the data without including the Coulomb interaction (and
without taking into account the mass splitting between $\Sigma^-$,
$\Sigma^0$, and $\Sigma^+$).
Thus, the corresponding results presented in Fig.~\ref{diff} do not
show the strong forward peak caused by the Coulomb amplitude in the
charged channels.
Evidently, also the data on differential cross sections are
rather well reproduced by our new $YN$ model. In comparison to
the results of the original J\"ulich model one can say that the
angular dependence in the $\Sigma^- p$ channel is now much better
described and it seems to be more in line with the trend of the
angular dependence exhibited by the data in the
$\Sigma^- p \to \Lambda n$ channel too.
The dashed curves in Figs.~\ref{cross}, \ref{cross2} and
\ref{diff} are results from an alternative model where the
contributions from the disputed $\kappa$(1000) meson
have been replaced by a contact interaction. Obviously there is
practically no sensitivity to the concrete range of the contribution
in the scalar channel with isospin 1/2 -- besides that it has to be
of fairly short range. In this context we want to mention that we
could achieve a comparable description of the data even with a
$\kappa$ mass as low as 800 MeV \cite{Aitala}.
For exploring the differences between the original J\"ulich $YN$ model
and our new model in more detail we present in Figs.~\ref{pol},
\ref{pols} further observables where, however, no data are available.
Fig.~\ref{pol} contains differential cross sections, polarizations and the
depolarization parameter $D_{nn}$ (definition and explicit expressions
for those observables can be found in the appendix B of Ref.~\cite{Reu})
for the $\Lambda N$ channel. We present predictions at two energies,
one ($p_{lab}$ = 150 MeV/c) close to the $\Lambda N$ threshold and
one ($p_{lab}$ = 600 MeV/c) close to (but below) the $\Sigma N$ threshold.
The results at the higher energy reveal that the new model differs drastically
from the old one. The differential cross section in the new model is
strongly forward-peaked whereas that of the old model peaks in the forward
and backward directions. The polarization and $D_{nn}$ even have different
signs. The observables at the lower energy are still dominated by the
$S$ waves and therefore exhibit only minor differences. But one can see from
the differential cross section that the onset of higher partial waves
occurs earlier for the new $YN$ model.
Similarly striking differences are present also in the predictions for other
differential observables though we refrain from showing them here.
For the various $\Sigma N$ channels we present predictions for the polarization
and the depolarization parameter $D_{nn}$ at $p_{lab}$ = 500 MeV/c, cf.
Fig.~\ref{pols}. Also here one can see that, in general, there are large
differences between the results of the old and the new model.
\subsection{Low energy parameters and phase shifts}
For the computation of the low energy parameters and phase shifts we
omit the Coulomb interaction and ignore the mass differences between
the $\Sigma$'s and proton and neutron so that we can solve the
Lippmann-Schwinger equation in isospin basis. This allows us to
present also results for the $\Sigma N$ system in the $I = 1/2$
channel. The $YN$ S-wave low energy parameters are listed in
Table \ref{Effr} while phase shifts for selected partial waves
are shown in Figs.~\ref{phases} and \ref{phases1}.
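The $S$-wave parameters follow from the effective range expansion
$k\cot\delta(k) = -1/a + \frac{r}{2}\,k^2$. As a small illustration (ours),
two hypothetical low-energy phase-shift values suffice to solve for $a$
and $r$:
\begin{verbatim}
import numpy as np

# Effective range expansion: k*cot(delta) = -1/a + (r/2)*k^2.
# The two data points below are illustrative numbers only.
data = [(0.05, 7.2), (0.10, 13.8)]          # (k in fm^-1, delta in degrees)

A = np.array([[-1.0, 0.5*k**2] for k, d in data])
b = np.array([k/np.tan(np.radians(d)) for k, d in data])
inv_a, r = np.linalg.solve(A, b)            # unknowns: (1/a, r)
print(f"a = {1.0/inv_a:6.2f} fm,   r = {r:5.2f} fm")
\end{verbatim}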
\begin{table}[ht]
\caption{$YN$ low energy parameters in the $^1S_0$ and $^3S_1$ partial waves
derived from our new model (J04) together with the corresponding results of the
J\"ulich model A \protect\cite{Holz}. J04c refers to results of an alternative
model where $\kappa$ exchange is replaced by a contact term, cf. text.
}
\label{Effr}
\begin{center}
\begin{tabular}{|c|c|cccc|}
\hline
Channel & Model & $a_s(fm)$ & $r_s(fm)$ & $a_t(fm)$ & $r_t(fm)$ \\
\hline
$\Lambda N$ & J04 & $-$2.56 & 2.75 & $-$1.66 & 2.93 \\
& J04c & $-$2.66 & 2.67 & $-$1.57 & 3.08 \\
& A [1] & $-$1.56 & 1.43 & $-$1.59 & 3.16 \\
& & & & & \\
$\Sigma N (I=1/2)$ & J04 & 0.90$-i$0.13 & $-$4.38$-i$2.07 & $-$3.83$-i$3.01 & 2.79$-i$0.57 \\
& J04c & 0.90$-i$0.13 &$-$4.29$-i$2.05 & $-$3.63$-i$3.09 & 2.78$-i$0.60 \\
& A [1] & 1.42$-i$0.08 & $-$0.49$-i$0.27 & 2.47$-i$3.74 & 1.61$-i$0.64 \\
& & & & & \\
$\Sigma N (I=3/2)$ & J04 & $-$4.71 & 3.31 & 0.29 & $-$11.54 \\
& J04c & $-$4.58 & 3.32 & 0.28 & $-$11.63 \\
& A [1] & $-$2.26 & 5.22 & $-$0.76 & 0.79 \\
\hline
\end{tabular} \end{center}
\end{table}
{}From Table \ref{Effr} one can see that the scattering lengths in the
$^3S_1$ $\Lambda N$ partial wave ($a_t$) are of similar magnitude for
the old and new $YN$ models, but in the $^1S_0$ state ($a_s$) the new
model yields a significantly larger value. The stronger $^1S_0$
component of the new model is reflected in the larger $\Lambda p$
cross section near threshold, cf. Fig.~\ref{cross}, and it is
expected to provide sufficient strength in order to support a
bound hypertriton state \cite{Nogga}. Indeed recent $YN$ models like NSC97f of the
Nijmegen group \cite{NijV} or the Ehime model 00A \cite{Ehime},
that apparently lead to a bound hypertriton \cite{Miya3,Ehime}, predict
singlet scattering lengths that are very similar to that of our new model.
In this context we want to mention that the static version of the old
J\"ulich $YN$ model \cite{Reu} did not support a hypertriton bound
state \cite{Miya1}. However, in that model both the $^1S_0$ as well as
the $^3S_1$ $\Lambda N$ scattering lengths are considerably smaller \cite{Reu}
than in our new $YN$ model.
The scattering lengths and effective ranges for $\Sigma N$ with $I= 1/2$
are complex because this channel is coupled to the $\Lambda N$ system.
In the singlet case the scattering lengths are comparable for the two models
whereas in the triplet case they even have opposite signs.
We want to emphasize, however, that in both models the latter partial wave is
attractive. But in the original J\"ulich model the attraction is so strong
that there is a near-threshold quasibound state in the $\Sigma N$ channel
that causes the real part of $a_t$ to be positive - like in case of the
corresponding $NN$ partial wave and the deuteron. Let us mention that
practically the same situation occurs in the Nijmegen model NSC97f,
whose pole structure has been investigated and thoroughly discussed in
Ref. \cite{Yamamura}. As a consequence of the near-threshold pole both
these models yield a very pronounced cusp-like structure in the $\Lambda p$
cross section at the opening of the $\Sigma N$ channel, cf. Fig.~\ref{cross}
and Fig.~2 in Ref. \cite{NijV}, respectively. In our new $YN$ model, on the
other hand, the cusp at the $\Sigma N$ threshold is much less pronounced.
Note that the $^1S_0$ partial wave is attractive too. As already mentioned above,
in the original J\"ulich model there is a bound state in the $\Sigma N$ channel
-- as evidenced by the broad bump in the $\Lambda p$ cross section around
$p_{lab} \approx$ 350 MeV/c. And the new $YN$ model has also a bound state
which is located, however, around 400 MeV below the $\Lambda N$ threshold and
therefore completely outside of the physically relevant region.
Let us finally come to the $\Sigma N$ channel with $I= 3/2$. Here we see that
the singlet scattering length of the new model is about twice as large as the
one of the original J\"ulich model. Note that a comparably large singlet
scattering length is also predicted by all of the $YN$ models presented in
Ref. \cite{NijV}. The scattering lengths for $^3S_1$ are small in both cases,
but of opposite sign. Now, however, it is indeed so that our new $YN$ model
is repulsive in this partial wave whereas the old model is attractive. It is
interesting that basically all available $YN$ models predict rather small
values for the spin-triplet scattering length of the $\Sigma N$ $I= 3/2$
channel \cite{Holz,NijIII,NijIV,NijV,Fujiwara}, though there is no general
trend as far as the sign is concerned. We also observe an unnaturally large
value for the triplet effective range, which is clearly related to the strong
suppression of the corresponding scattering length. Such a scenario will
require special attention when this channel is considered in effective
field theory (for further discussion, see Sec.~\ref{sec:sum}).
Predictions for $\Lambda N$ and $\Sigma N$ phase shifts for selected
$S$- and $D$-waves are shown in Fig.~\ref{phases} and those for
$P$-waves can be found in Fig.~\ref{phases1}.
The $\Sigma N$ $S$-wave phase shifts reflect the features that we already
discussed in the context of the scattering lengths. For example one can see
that the phase shift for the $^3S_1$ $I=1/2$ state starts at $180^\circ$ for
the original J\"ulich model, as it is expected for a partial wave where
a bound state is present. For the $I=3/2$ state the corresponding phase
is positive, reflecting an attractive interaction, whereas the phase shift
resulting from the new $YN$ model is negative. Note that the phases
for $^1S_0$ and $I=1/2$ should both start at $180^\circ$ because, as mentioned
above, there is a bound state in both models.
The opening of the $\Sigma N$ channel at around $E_{Lab} \approx$ 170 MeV
is clearly reflected in the $^3S_1$ phase shift of the $\Lambda N$ system.
But its effect on the $^3D_1$ phase shift is even more striking:
for the old J\"ulich model, the phase goes through $90^\circ$.
In fact, the resonance-like behaviour in that partial wave is predominantly
responsible for the strong enhancement of the $\Lambda N$ cross section in the
vicinity of the $\Sigma N$ threshold, cf. Fig. \ref{cross}. In addition,
the transition amplitude $^3D_1 (\Lambda N) \leftrightarrow {}^3S_1 (\Sigma N)$
provides a significant contribution to the $\Sigma^-p \to \Lambda n$ cross
section. In the new model the $^3D_1$ phase shift of the $\Lambda N$ system
is much smaller. Accordingly, the cusp-like structure at the $\Sigma N$ threshold
is much less pronounced and the $\Sigma^-p \to \Lambda n$ cross
section is somewhat reduced in this model, as can be seen in Fig. \ref{cross}.
The predictions for the $P$ waves (Fig. \ref{phases1}) show a varying picture.
In the $\Lambda N$ system most of the phases are now attractive whereas they
are mostly repulsive for the old model. This concerns in particular the
$^1P_1$ amplitude, which is fairly large in the new model, but also the
$^3P_0$ partial wave.
In the $I=3/2$ channel of the $\Sigma N$ system the results of the two
models are qualitatively rather similar. To some extent this is also the
case for the $I=1/2$ channel though here the $^3P_1$ amplitude of the new
$YN$ model is significantly larger than the one of the old J\"ulich model.
Indeed the simultaneous enhancement in the $^3P_1$ ($\Sigma N$) and
$^1P_1$ ($\Lambda N$) phase shifts is caused by a stronger antisymmetric
spin-orbit force between the $\Lambda N$ and $\Sigma N$ channels
in the new model. The increase is primarily due to the $\rho$ exchange
contribution whose strength for the $\Lambda N \to \Sigma N$ transition,
fixed from correlated $\pi\pi - K\bar K$ exchange, is
about twice as large as what was used in the old J\"ulich model,
cf. Table 11 of Ref. \cite{REUBER}.
In this context let us mention that some other $YN$ models
exhibit a similarly strong coupling between those partial waves
and channels \cite{FujiwaraA}.
\section{Summary and outlook}
\label{sec:sum}
We have presented a meson-exchange model of the $YN$ interaction where
-- as the main new feature -- the contributions both in the
scalar-isoscalar ($\sigma$) and the vector-isovector ($\rho$)
channels are constrained by a microscopic model of correlated
$\pi\pi$ and $K\bar K$ exchange.
An essential part of baryon-baryon interactions is the strong
medium-range attraction, which in one-boson-exchange models is
parameterized by exchange of a fictitious scalar-isoscalar meson
with mass around 500 MeV.
In extended meson exchange models this part is naturally generated
by two-pion exchange contributions.
As well as uncorrelated two-pion exchange, correlated contributions
must be included in which the exchanged pions interact during their
exchange; these terms in fact provide the main contribution to the
intermediate-range interaction.
As kaon exchange is an essential part of hyperon-nucleon interactions
a simultaneous investigation of correlated $\pi\pi$ and $K\anti{K}$
exchanges is clearly necessary.
In Ref.~\cite{REUBER} the correlated $\pi\pi$ and $K\anti{K}$ exchange
contributions in various baryon-baryon channels have therefore been
investigated within a microscopic model for the transition amplitudes
of the baryon-antibaryon system ($B\anti{B'}$) into $\pi\pi$ and
$K\anti{K}$ for energies below the $B\anti{B'}$ threshold.
The correlations between the two mesons have been taken into account
by means of $\pi\pi-K\anti{K}$ amplitudes, determined in the field
theoretical framework of Refs.~\cite{Lohse,Janssen,Schutz}, which provide an
excellent description of empirical $\pi\pi$ data up to 1.3 GeV.
With the help of unitarity and dispersion-theoretical methods, the
baryon-baryon amplitudes for correlated $\pi\pi$ and $K\anti{K}$
exchange in the $J^P=0^+$ ($\sigma$) and $J^P=1^-$ ($\rho$)
$t$-channels have then been determined.
With this model it is possible to reliably take into
account correlated $\pi\pi$ and $K\anti{K}$ exchange in both the
$\sigma$ and $\rho$ channels for various baryon-baryon reactions.
Given the strong constraints on $\sigma$ as well as $\rho$ exchange
from correlated $\pi\pi$ exchange, a more sound microscopic model
for the $YN$ interaction can hence now be constructed.
Besides contributions from correlated $\pi\pi$ and $K\anti{K}$ exchange
the present model incorporates also the standard one-boson exchanges
of the lowest pseudoscalar and vector meson multiplets with coupling
constants fixed by SU(6) symmetry relations. Thus, in the
present model the long- and intermediate-range part of the $YN$
interaction is completely determined -- either by SU(6) constraints
or by correlated $\pi\pi$ and $K\bar K$ exchange.
In addition there are some short-ranged ingredients.
First of all, the contribution from the $a_0(980)$ meson is taken into
account. Secondly, we consider the exchange of a strange scalar meson, the
$\kappa$, with mass $\sim 1000$~MeV. (Note that these pieces
were not taken into account in the earlier $YN$ models of the J\"ulich
group \cite{Holz,Reu}.) These short-ranged contributions are also viewed as
being due to correlated meson-meson exchanges but in practice they are
parametrized phenomenologically in terms of one-boson-exchange
contributions in the corresponding spin-isospin channels. In particular,
no SU(3) relations are imposed on the short-range part. This
assumption is based on our observation that the contributions in the
$\rho$ exchange channel as they result from correlated $\pi\pi$ and
$K\bar{K}$ no longer fulfill SU(3) relations, but it also acknowledges
the fact that at present there is no general agreement about which
states are the actual members of the lowest-lying scalar meson SU(3) multiplet.
The new $YN$ model provides a rather satisfactory reproduction of the
available $YN$ data. It describes not only the integrated cross
sections for $\Lambda p$ and the various $\Sigma N$ channels but
also the few available data on differential cross sections, even
though the latter were not included in the fitting procedure.
We see that as an indication that the data are compatible with
the assumption of SU(6) symmetry for the pseudoscalar sector of
our $YN$ model.
As the main qualitative difference between the old $YN$ J\"ulich model
\cite{Holz} (with standard $\sigma$ and $\rho$ exchange)
we want to mention that the broad shoulder at $p_{lab} \approx$
350 MeV/c in the $\Lambda p \rightarrow \Lambda p$ channel,
predicted by that model but not seen in the experiments,
is no longer present in the new model. But, as a more detailed
comparison revealed, there are also striking differences between these
two models in the predictions for the individual partial waves.
For example, in the new model the triplet $S$ wave in the $I=3/2$
channel of the $\Sigma N$ system is repulsive
and some of the $P$-wave amplitudes are significantly larger.
Thus, it will be interesting to see the performance of the new
$YN$ interaction model in applications to few- and many-body
systems involving hyperons \cite{Nogga}.
This study also paves the way for a systematic investigation in the
framework of effective field theory, see \cite{Henk}. In such a framework,
pion- and kaon exchange supplemented by four-baryon contact interactions
(these encode the contributions from the exchange of heavier mesons not
linked to chiral symmetry) is considered to generate a potential based
on the power counting rules. It remains to be seen how well such a more
systematic approach can indeed describe the data and what conclusions can
be drawn about three-baryon forces that naturally arise in such a framework.
\section*{Acknowledgements}
We thank J.~Speth and W.~Melnitchouk for collaboration during the early stages
of this investigation. We also thank A. Nogga for a careful reading of our
manuscript. This research is part of the EU Integrated Infrastructure Initiative
Hadron Physics Project under contract number RII3-CT-2004-506078. The work was
supported in part by DFG through funds provided to the special research grant
TR-16 ``Subnuclear Structure of Matter''.
\section{Introduction}
In his seminal 1987 paper \cite{Polyakov:1987zb}, Polyakov provides a solution to the
two-dimensional induced gravity theory \cite{Polyakov:1981rd},
\begin{equation}
\label{polyeq1}
S = \frac{c}{96 \pi} \int d^2x \sqrt{-g} \, R \frac{1}{\nabla^2} R,
\end{equation}
by working in a light-cone gauge. The gauge choice puts the metric into the form
\begin{equation}
\label{polyeq2}
ds^2 = - dx^+ dx^- + F(x^+, x^-) (dx^+)^2.
\end{equation}
Polyakov shows that the quantum theory for the dynamical field $F(x^+, x^-)$ admits an $sl(2,
\mathbb{R})$ current algebra symmetry with level $k= c/6$. In this note, we present the
three-dimensional bulk theory that is dual to this two-dimensional theory.
\section{Chiral boundary conditions in $AdS_3$ gravity}
The action of three-dimensional gravity with negative cosmological
constant~\cite{Balasubramanian:1999re} is given by
\begin{eqnarray}
S = - \frac{1}{16 \pi G} \int d^3x \, \sqrt{-g} \left( R - \frac{2}{l^2} \right) - \frac{1}{8 \pi G} \int_{\partial {\cal M}} d^2 x \, \sqrt{-\gamma} \, \Theta + \frac{1}{8 \pi G} S_\text{ct} (\gamma_{\mu\nu}),
\end{eqnarray}
where $\gamma_{\mu\nu}$ is the induced metric and $\Theta$ is trace of the extrinsic curvature of the boundary. Varying the action yields
\begin{eqnarray}
\delta S = \int_{\partial {\cal M}} d^2x \sqrt{-\gamma} \, \frac{1}{2} T^{\mu\nu} \delta \gamma_{\mu\nu} \, ,
\end{eqnarray}
where
\begin{eqnarray}
T^{\mu\nu} = \frac{1}{8\pi G} \left[ \Theta^{\mu\nu} - \Theta \gamma^{\mu\nu} + \frac{2}{\sqrt{-\gamma}}\frac{\delta S_{ct}}{\delta \gamma_{\mu\nu}} \right].
\end{eqnarray}
The variational principle is made well-defined by imposing $\delta \gamma_{\mu\nu} = 0$ (Dirichlet) or
$T^{\mu\nu} = 0$ (Neumann) at the boundary (see~\cite{Compere:2008us} for a recent discussion).
Recently Comp\`{e}re, Song and Strominger (CSS)~\cite{Compere:2013aya, Compere:2013bya} and
Troessaert~\cite{Troessaert:2013fma} proposed new sets of boundary conditions for three-dimensional
gravity, which differ from the well-known Dirichlet-type Brown--Henneaux boundary
conditions~\cite{Brown:1986nw}.\footnote{In fact, the boundary conditions of
\cite{Troessaert:2013fma} subsume those of~\cite{Brown:1986nw}.} Before delving into specifics,
let us discuss the general strategy employed by~\cite{Compere:2013bya}. One begins by adding a term
of the type
\begin{eqnarray}
\label{css-term}
S' = -\frac{1}{8 \pi G} \int_{\partial {\cal M}} d^2x \, \sqrt{-\gamma} \, \frac{1}{2} {\cal T}^{\mu\nu} \gamma_{\mu\nu}
\end{eqnarray}
for a fixed ($\gamma_{\mu\nu}$-independent) symmetric boundary tensor ${\cal T}^{\mu\nu}$. The
variation of this term is
\begin{eqnarray}
\delta S' = - \frac{1}{8 \pi G} \int_{\partial {\cal M}} d^2x \, \sqrt{-\gamma} \tilde{\cal T}^{\mu\nu} \delta \gamma_{\mu\nu},
\end{eqnarray}
where $\tilde {\cal T}^{\mu\nu} = {\cal T}^{\mu\nu} - \frac{1}{2} ({\cal T}^{\alpha\beta} \gamma_{\alpha\beta}) \, \gamma^{\mu\nu} $. The variation of the total action then gives
\begin{eqnarray}
\label{totvar}
\delta S + \delta S' = \frac{1}{8 \pi G} \int_{\partial {\cal M}} d^2x \, \sqrt{-\gamma} (T^{\mu\nu} - \tilde{\cal T}^{\mu\nu}) \delta \gamma_{\mu\nu}.
\end{eqnarray}
Now the boundary conditions consistent with the variational principle depend on $\tilde {\cal
T}^{\mu\nu}$. Generically, this leads to ``mixed'' type boundary conditions. If
for a given class of boundary conditions some particular component of $T^{\alpha\beta}-\tilde{\cal
T}^{\alpha\beta}$ vanishes sufficiently fast in the boundary limit such that its contribution to the integrand in \eqref{totvar} vanishes, then the corresponding component of
$\gamma_{\alpha\beta}$ can be allowed to fluctuate. Since we want the
boundary metric to match~\eqref{polyeq2}, we would like Neumann boundary conditions for
$\gamma_{++}$. Therefore we choose ${\cal T}^{\mu\nu}$ such that the leading term of $T^{++}$ equals $\tilde {\cal T}^{++}$ in the boundary limit.
This condition has been imposed in \cite{Compere:2013bya}, with the addition of an extra boundary term \eqref{css-term} with\footnote{The induced metric $\gamma_{\mu\nu}$ differs from $g_{\mu\nu}^{(0)}$ of \cite{Compere:2013bya} by a factor of $r^2$. }
\begin{equation}\label{eq:css-T}
\mathcal{T}^{\mu\nu} = -\frac{1}{2r^4} N^2 l \delta^\mu_+\delta^\nu_+,
\end{equation}
and the following boundary conditions are imposed on the metric:
\begin{equation}\label{css}
\begin{gathered}
g_{rr} = \frac{l^2}{r^2} + {\cal O}(r^{-4}), ~~ g_{r \pm} = {\cal O} (r^{-3}),\\
g_{+-} = - \frac{r^2}{2} + {\cal O}(r^0) ,~~g_{++} = r^2 f(x^+) + {\cal O}(r^0), ~~ g_{--} = - \frac{l^2}{4} N^2 + {\cal O}(r^{-1}),
\end{gathered}
\end{equation}
where $f(x^+)$ is a dynamical field and $N^2$ is a fixed constant.\footnote{To relate to the notation
in~\cite{Compere:2013bya}, set $N^2 = -\frac{16 G\Delta}{l}$ and $f(x^+) = l^2 \partial_+
\bar{P}(x^+)$.} These boundary conditions give rise to an asymptotic symmetry algebra: a chiral
$U(1)$ current algebra with level determined by $N$. These also ensure that $T_{--}$
is held fixed in the variational problem, whereas $g_{++}$ is allowed to fluctuate as long as its
boundary value is independent of $x^-$.
In what follows, we show that~\eqref{css} are not the most general boundary conditions
consistent with the variational principle and the extra boundary term given by~\eqref{eq:css-T}. For this, we introduce a weaker set of consistent boundary conditions that enhance the asymptotic symmetry algebra to an
$sl(2, {\mathbb R})$ current algebra whose level is independent of $N$.
\subsection{New boundary conditions}
In the new boundary conditions, the class of allowed boundary metrics coincides with that
of~\eqref{polyeq2}. Since we want to allow $\gamma_{++}$ to fluctuate, we keep $T_{--}$
fixed in our asymptotically locally $AdS_3$ metrics. Therefore, we propose the following boundary conditions:
\begin{equation}\label{aps}
\begin{aligned}
g_{rr} &= \frac{l^2}{r^2} + {\cal O}(\tfrac{1}{r^4}), ~~ g_{r+} = {\cal O}(\tfrac{1}{r}), ~~ g_{r-} = {\cal O}(\tfrac{1}{r^3}), \\
g_{+-} &= - \frac{r^2}{2} + {\cal O}(r^0), ~~ g_{--} = - \frac{l^2 N^2}{4} + {\cal O}(\tfrac{1}{r}), \\
g_{++} &= r^2 F(x^+, x^-) + {\cal O} (r^0) ,
\end{aligned}
\end{equation}
where, as above, we take $F(x^+, x^-)$ to be a dynamical field and $N$ fixed. The crucial
difference between these boundary conditions and those in~\eqref{css} is the different fall-off
condition for $g_{r+}$, which allows the boundary component of $g_{++}$ to depend on $x^-$ as well. One must, of course, check the consistency of these conditions with the
equations of motion. This involves constructing the non-linear solution in an expansion in inverse
powers of $r$. Working to the first non-trivial order, one finds the following condition on $F(x^+,
x^-)$:
\begin{equation}
N^2 \, \partial_- F(x^+, x^-) + \partial_-^3 F(x^+, x^-) = 0,
\end{equation}
which forces $F(x^+, x^-)$ to take the form
\begin{equation}
F(x^+, x^-) = f(x^+) + g(x^+) e^{i N x^-} + \bar g(x^+) e^{-i N x^-}
\end{equation}
where $f(x^+)$ is a real function and $\bar g(x^+)$ is the complex conjugate of $g(x^+)$.
Let us note that this is directly analogous to the form of $F(x^+, x^-)$ derived
in~\cite{Polyakov:1987zb}. Throughout our discussion we think of $\phi = \frac{x^+ - x^-}{2}$ as
$2\pi$-periodic (and $\tau = \tfrac{x^++x^-}{2}$ as the time coordinate), and therefore we restrict our consideration to $N\in\mathbb{Z}$. Similarly, we
impose periodic boundary conditions on $f(x^+)$ and $g(x^+)$. If one takes the spatial part of the
boundary to be $\re$ instead of $S^1$, there are no such restrictions and one may even consider $N^2
< 0$ like in~\cite{Compere:2013bya}.
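As a quick cross-check (ours), the ansatz can be verified symbolically to
solve the constraint for arbitrary mode functions:
\begin{verbatim}
import sympy as sp

# Check that F = f + g*exp(i N x^-) + gbar*exp(-i N x^-) satisfies
# N^2 d_-F + d_-^3 F = 0; the x^+ dependence is a spectator here.
xm, N = sp.symbols('x_m N', real=True)
f, g, gbar = sp.symbols('f g gbar')
F = f + g*sp.exp(sp.I*N*xm) + gbar*sp.exp(-sp.I*N*xm)
print(sp.simplify(N**2*sp.diff(F, xm) + sp.diff(F, xm, 3)))   # prints 0
\end{verbatim}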
\subsection{The non-linear solution}
One can write a general non-linear solution of $AdS_3$ gravity in Fefferman--Graham
coordinates \cite{Skenderis:1999nb} as:
\begin{equation}
\label{nlsol1}
ds^2 = \frac{dr^2}{r^2} + r^2 \left[ g^{(0)}_{ab} + \frac{l^2}{r^2} \, g^{(2)}_{ab} + \frac{l^4}{r^4} g^{(4)}_{ab} \right] dx^a dx^b.
\end{equation}
The full non-linear solution with our boundary conditions is obtained when
\begin{equation}
\label{nlsol2}
\begin{aligned}
g^{(0)} _{++ } &= f(x^+) + g(x^+) \, e^{i N x^-} + \bar g (x^+) \, e^{-i N x^-}, ~~ g^{(0)}_{+-} = -\frac{1}{2}, ~~ g^{(0)}_{--} = 0, \\
g^{(2)}_{++} &= \kappa (x^+) + \frac{1}{2} N^2 \left[ g^2(x^+) \, e^{2i N x^-}
+ \bar g^2(x^+) \,e^{-2i N x^-} \right] \\
& ~~~~~~~~~~\,~+ \frac{i}{2} N \left[g'(x^+) e^{i N x^-} - \bar g'(x^+) e^{-i N x^-}\right] ,\\
g^{(2)}_{+-} &= \frac{1}{4} N^2 \left[ f(x^+) - g(x^+) \, e^{i N x^-} - \bar g (x^+) \, e^{-iN x^-}\right],
~~ g^{(2)}_{--} = -\frac{1}{4} N^2,\\
g^{(4)}_{ab} &= \frac{1}{4} g^{(2)}_{ac} g_{(0)}^{cd} g^{(2)}_{db} \, ,
\end{aligned}\end{equation}
where in the last line $g_{(0)}^{cd}$ denotes the inverse of $g^{(0)}_{cd}$. As above, demanding that the
solution respects the periodicity of the $\phi$-direction requires $N$ to be an integer and the functions
$f(x^+)$, $g(x^+)$ and $\kappa(x^+)$ to be periodic. This solution reduces to the one given in
\cite{Compere:2013bya} when $g(x^+) = \bar g(x^+) = 0$.
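For concreteness, the sketch below (ours) evaluates these coefficients at a
sample point, with arbitrary numerical values for the mode functions, taken
constant so that the $g'(x^+)$ terms drop out, and forms
$g^{(4)} = \frac{1}{4}\, g^{(2)} g_{(0)}^{-1} g^{(2)}$:
\begin{verbatim}
import numpy as np

# Evaluate g(0) and g(2) of the solution at a point and form
# g(4) = (1/4) g(2) g(0)^{-1} g(2).  All sample values are arbitrary.
N, xm = 1, 0.7
f, g, kappa = 0.2, 0.1 + 0.05j, 0.4        # constant mode functions
gb = np.conj(g)
e, eb = np.exp(1j*N*xm), np.exp(-1j*N*xm)

g0 = np.array([[f + g*e + gb*eb, -0.5],
               [-0.5,            0.0]])
g2 = np.array([[kappa + 0.5*N**2*(g**2*e**2 + gb**2*eb**2),
                0.25*N**2*(f - g*e - gb*eb)],
               [0.25*N**2*(f - g*e - gb*eb), -0.25*N**2]])
g4 = 0.25*g2 @ np.linalg.inv(g0) @ g2
print(np.round(g4.real, 4))     # imaginary parts vanish for gbar = conj(g)
\end{verbatim}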
As mentioned in the previous subsection one can take $N$ to be purely imaginary when the boundary spatial coordinate is not periodic. In this case too the non-linear solution \eqref{nlsol1} continues to be a valid solution with $g(x^+)$ and $\bar g(x^+)$ treated as two real and independent functions. However, we will not consider this case further here.
\section{Charges, algebra and central charges}
It is easy to see that vectors of the form
\begin{equation}\begin{aligned}
\xi^r &= -\frac{1}{2}\left[B'(x^+) +iN A(x^+) e^{iN x^-}
-iN \bar{A}(x^+)e^{-iN x^-} \right]r + \mathcal{O}(r^0)\\
\xi^+ &= B(x^+) - \frac{l^2N^2}{2r^2}\left[A(x^+)e^{iN x^-}
+ \bar{A}(x^+) e^{-iN x^-}\right] + \mathcal{O}(\tfrac{1}{r^3})\\
\xi^- &= A_0(x^+) + A(x^+)e^{iN x^-} + \bar{A}(x^+)e^{-iN x^-} + \mathcal{O}(\tfrac{1}{r})
\end{aligned}\end{equation}
satisfy the criteria of~\cite{Barnich:2001jy}, which allow us to construct corresponding asymptotic
charges. If, on the other hand, one demands that the asymptotic symmetry generators $\xi$ leave the
space of boundary conditions invariant, one finds the same vectors but with the first subleading terms
appearing at one higher order for each component. For either set of vectors, the Lie bracket
algebra closes to the same order as one has defined the vectors.
Here, $B(x^+)$ and $A_0(x^+)$ are real and $A(x^+)$ is complex; therefore, there are four real,
periodic functions of $x^+$ that specify this asymptotic vector.
%
We take the following basis for the modes of the vector fields:
\begin{equation}\begin{aligned}
L_n &= i e^{i \, n \, x^+} [ \partial_+ - \frac{i}{2} n \, r\partial_r ] + \cdots \\
T^{(0)}_n &= \tfrac{i}{N} e^{i \, n \, x^+} \partial_- + \cdots \\
T^{(+)}_n &= \tfrac{i}{N} e^{i (n \, x^+ + N \, x^-)} [ \partial_- - \frac{i}{2} N \, r\partial_r -\frac{N^2}{2r^2}\partial_+] + \cdots \\
T^{(-)}_n &= \tfrac{i}{N} e^{i (n \, x^+ - N \, x^-)} [ \partial_- + \frac{i}{2} N \, r\partial_r -\frac{N^2}{2r^2}\partial_+] + \cdots ,
\end{aligned}\end{equation}
which satisfy the Lie bracket algebra
\begin{equation}\begin{aligned}
\,[L_m, L_n] &= (m-n) \, L_{m+n}, &\qquad [L_m, T^{(a)}_n] &= - n \, T^{(a)}_{m+n}, \\
[T^{(0)}_m, T^{(\pm)}_n] &= \mp T^{(\pm)}_{m+n}, & [T^{(+)}_m, T^{(-)}_n] &= 2 \, T^{(0)}_{m+n}\, .
\end{aligned}\end{equation}
Thus, the classical asymptotic symmetry algebra is a Witt algebra and an $sl(2,{\mathbb R})$ current algebra.
We use the Brandt--Barnich--Comp\`{e}re (BBC) formulation \cite{Barnich:2001jy, Barnich:2007bf} for
computing the corresponding charges of our geometry. We find that the charges are integrable over
the solution space if $\delta N = 0$ with
\newpage
\begin{eqnarray}
\small
\delta \!\!\!/ Q_\xi &=& \frac{1}{8 \pi G} \delta \int d\phi \, \Big\{ B(x^+) \Big[ \kappa (x^+) + N^2 (\frac{1}{2} f^2(x^+)- g(x^+) \bar g(x^+) ) \cr
&&~~~~~~~~~~~~~~~~~~\,~~~~~~~~~~~~~~~~~~ +\frac{N^2}{2} (e^{i N x^-} g(x^+) + e^{-i N x^-} \bar g(x^+)) \cr
&&~~~~~~~~~~~~~~~~~~\,~~~~~~~~~~~~~~~~~~ + \frac{i}{2} N \, \partial_+[B (x^+)\, (e^{i N x^-} g(x^+) - e^{-i N x^-} \bar g(x^+))]\Big] \Big\} \cr
&& -\frac{1}{8\pi G} \delta \int d\phi \, N^2 \Big[ \frac{1}{2} A_0 (x^+) f(x^+) - (g(x^+) A(x^+) + \bar g(x^+) \bar A(x^+)) \Big]\, . \nonumber \\
\end{eqnarray}
These can be integrated trivially along a path in the solution space from the configuration $f(x^+) = g(x^+) = \kappa(x^+) = 0$ to general values of these fields, yielding the charges
\begin{eqnarray}
\label{aps-charges}
Q_B &=& \frac{1}{8 \pi G} \int_0^{2\pi} d\phi \, \Big[ B(x^+) \Big( \kappa (x^+) + \frac{N^2}{2}(f^2(x^+)- 2 \, g(x^+)\bar g(x^+)) \Big) \cr
&& ~~~~~~~~~~~~~~~~~~~~~~~~~\,~~~~~~~~~~~~ +\frac{1}{2}(\partial_+ - \partial_-)\partial_- [e^{i N x^-} g(x^+) + e^{-i N x^-} \bar g(x^+)]\Big] \cr
&=& \frac{1}{8 \pi G} \int_0^{2\pi} d\phi \, \Big[ B(x^+)[ \kappa (x^+) + \frac{N^2}{2}(f^2(x^+) - 2 \, g(x^+)\bar g(x^+))] \cr
&& ~~~~~~~~~~~~~~~~~~~~~\,~~~~~~~~~~~~~~~ +\frac{1}{32 \pi G} \partial_- [e^{i N x^-} g(x^+) + e^{-i N x^-} \bar g(x^+)]\Big|^{\phi = 2\pi}_{\phi = 0} \, ,\nonumber \\
\end{eqnarray}
\begin{eqnarray}
Q_A &=& -\frac{N^2}{8\pi G} \int_0^{2\pi} d\phi \, \Big[ \frac{1}{2} A_0 (x^+) f(x^+) - (g(x^+) A(x^+) + \bar g(x^+) \bar A(x^+)) \Big] \, .
\end{eqnarray}
The boundary term in \eqref{aps-charges} vanishes as we assumed $g(x^+)$ to be periodic and $N$ to be an integer. The algebra of these charges admits central charges. We find that the central term in the commutation relation between charges corresponding to two asymptotic symmetry vectors $\xi$ and $\tilde \xi$ is given by
\begin{multline}
(-i) \frac{l}{32 \pi G} \int_0^{2\pi} d\phi \, \Big[ B'(x^+) \tilde B''(x^+) - B(x^+) \tilde B'''(x^+) \\ + 2 N^2 \, A_0 (x^+) \tilde A'_0 (x^+) - 4 N^2 \Big(A (x^+) \bar{\tilde A}'(x^+) + \bar A(x^+) \tilde A'(x^+) \Big)\Big].
\end{multline}
These give rise to the following algebra for the charges\footnote{The bracket in \eqref{aps-algebra} is $i$ times the Dirac bracket.}
\begin{eqnarray}
\label{aps-algebra}
[L_m, L_n] &=& (m-n) \, L_{m+n} + \frac{c}{12} m^3 \, \delta_{m+n, 0} \, , \cr
[L_m, T^a_n] &=& - n \, T^a_{m+n} \, , \cr
[T^a_m, T^b_n] &=& {f^{ab}}_c T^c_{m+n} + \frac{k}{2} \eta^{ab} \, m \, \delta_{m+n, 0}
\end{eqnarray}
with
\begin{eqnarray}
c= \frac{3l}{2G}, ~~ k = \frac{c}{6}, ~~ {f^{0+}}_+ = -1,~~ {f^{0-}}_- = 1, ~~ {f^{+-}}_0 = 2, ~~ \eta^{00} = -1, ~~ \eta^{+-} = 2. \nonumber\\
\end{eqnarray}
This is precisely the $sl(2, {\mathbb R})$ current algebra found in \cite{Polyakov:1987zb}.
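For illustration (this check is ours and not part of the original derivation), one can verify numerically that the structure constants quoted above satisfy the Jacobi identity and that $\eta^{ab}$ is invariant under the adjoint action, the two conditions under which the level term in \eqref{aps-algebra} is compatible with the Jacobi identity of the full charge algebra. A minimal sketch in Python:
\begin{verbatim}
import numpy as np

idx = {'0': 0, '+': 1, '-': 2}
f = np.zeros((3, 3, 3))                     # structure constants f^{ab}_c
for a, b, c, v in [('0','+','+',-1.), ('0','-','-',1.), ('+','-','0',2.)]:
    f[idx[a], idx[b], idx[c]] = v
    f[idx[b], idx[a], idx[c]] = -v          # antisymmetry in the upper pair

eta = np.zeros((3, 3))                      # invariant form eta^{ab}
eta[0, 0] = -1.0
eta[1, 2] = eta[2, 1] = 2.0

# Jacobi identity: f^{ab}_d f^{dc}_e + cyclic permutations of (a,b,c) = 0
jac = (np.einsum('abd,dce->abce', f, f) + np.einsum('bcd,dae->abce', f, f)
       + np.einsum('cad,dbe->abce', f, f))
assert np.allclose(jac, 0.0)

# ad-invariance: Omega^{abc} = f^{ab}_d eta^{dc} is totally antisymmetric
omega = np.einsum('abd,dc->abc', f, eta)
assert np.allclose(omega, -omega.transpose(0, 2, 1))
print("sl(2,R) structure constants and level form are consistent")
\end{verbatim}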
\section{Conclusion}
In this note we have provided boundary conditions for 3-dimensional gravity with negative cosmological constant such that the algebra of asymptotic symmetries is an $sl(2, {\mathbb R})$ current algebra. In the process we showed that the boundary term proposed by CSS \cite{Compere:2013bya} is compatible with a more general set of boundary conditions, which enables our result.
It should be noted that our asymptotic symmetry algebra does contain the full isometry algebra of the global $AdS_3$ solution. This feature is similar to the Brown--Henneaux analysis \cite{Brown:1986nw}, though one does not demand that the asymptotic vector fields of interest be asymptotically Killing; instead, one uses the more general notion of asymptotic symmetries advocated by BBC \cite{Barnich:2001jy, Barnich:2007bf}. Using the BBC formulation, we computed the algebra of charges and found the level $k$ to be $c/6$, independent of the parameter $N$.
To understand the relation to 2-dimensional induced gravity of Polyakov in light-cone gauge \cite{Polyakov:1987zb} further, it will be interesting to see if the correlation functions of the boundary currents, and the effective action for the dynamical fields of the boundary can also be recovered from the gravity side. See \cite{Banados:2002ey} for a discussion on the latter issue. Of course, connections between 3-dimensional gravity with negative cosmological constant and Liouville theory, which arises as a different gauge-fixing of~\eqref{polyeq1}, are well-known (see e.g.~\cite{Carlip:2005zn}, and the recently proposed boundary conditions in~\cite{Troessaert:2013fma}).
It will be interesting to see how adding matter to $AdS_3$ gravity would generalize our analysis. The boundary conditions of \cite{Compere:2013bya} have been found to be related to string theory solutions of \cite{Azeyanagi:2012zd} with a warped $AdS_3$ factor. It will be interesting to explore whether the boundary conditions in \eqref{aps} also play a role in some string theory context.
The non-linear solution in (\ref{nlsol1}, \ref{nlsol2}) does not contain the conventional positive mass BTZ \cite{Banados:1992wn} black hole. The special case of vanishing charges is given by $f(x^+) = g(x^+) = \bar g(x^+) = \kappa(x^+) = 0$ which is simply an extremal BTZ but with negative mass (in global $AdS_3$ vacuum). The comments of CSS \cite{Compere:2013bya} about the possible existence of ergoregions and instabilities in their solution also apply to \eqref{nlsol1}. It will be important to understand these issues better.
Finally, it is intriguing that different ways of gauge-fixing the induced gravity \cite{Polyakov:1987zb} lead to different boundary conditions in the bulk and therefore apparently different holographic duals. It will be important to understand the class of physical theories one can obtain this way and how they are related to each other.
\section{Introduction}
Artificial intelligence is a broad research field that simulates human intelligence using machines programmed to perform human-like tasks. Presently, the study of artificial intelligence encompasses several branches, such as machine learning (ML), reinforcement learning and deep learning, the first being the most prominent one. ML has recently become a crucial tool for extracting useful information from the very rapidly increasing volume of available data, and is now widely used in numerous research areas including computer science, medicine, chemistry, biology and physics~\cite{jordan255}. \blue{In particular, supervised ML is an approach where a training data set is introduced to the ML algorithm so that it can learn to yield the desired outputs. When provided with a training data set that includes certain inputs and their correct outputs, the model can learn over time to provide accurate estimates of the output also for inputs that have not been used in the training process~\cite{scikit-learn}. In the case of a classification problem, these outputs can correspond to different classes, such as Markovian and non-Markovian quantum dynamics. On the other hand, a regression problem deals with outputs that correspond to real values, such as the outcomes of a measurement, where the aim is to make reliable projections about the desired measurement outcome.}
Recently, ML has had a remarkable impact in physics~\cite{Dunjko_2018, Mehta_2019, Carleo_2019}, for instance, in the fields of condensed matter physics~\cite{Ghiringhelli2015,Torlai2016,Carleo2017}, quantum phase transitions~\cite{Carrasquilla_2017, Ponte_2017, Liu_2019, Canabarro_2019}, and quantum information science~\cite{Torlai2018,Canabarro2019,Raban2020}.
Unlike ideal, isolated quantum systems that evolve unitarily in time, realistic quantum systems are in general open to interaction with an environment, which gives rise to non-unitary dynamics resulting in loss of coherence~\cite{BreuerPet,Rivas2012}. Understanding the physics of open quantum systems is of both fundamental and practical interest, since the development of quantum technologies relies on the presence of precious quantum resources such as coherence~\cite{Baumgratz2014,Streltsov2017}. One of the principal concepts in the study of open quantum systems is that of dynamical memory effects, which might arise throughout the time evolution of the open system and which define non-Markovian dynamics~\cite{BreuerPet}. Although, under special circumstances, the evolution of open systems can be treated under the Markovian approximation, ignoring the memory effects, non-Markovian behavior cannot be overlooked in many realistic settings. In fact, the theory of non-Markovianity in the dynamics of open quantum systems has been widely explored in the recent literature from various perspectives~\cite{Breuer2016,Li2018,Li2020x,Fanchini2013,Addis2016}, and numerous means of quantifying it have been proposed~\cite{Rivas2014}. In addition, the detection of memory effects in open system dynamics has also been achieved experimentally~\cite{Liu2011,Fanchini2014,Haseli2014,Li2020}. More recently, ML methods have begun to be employed to study non-Markovian quantum processes~\cite{Banchi2018,Shrapnel2018,Luchnikov2020,2004.11038}.
In this work, we present a computational approach based on ML techniques to determine the degree of non-Markovianity in the dynamics of open systems. The proposed approach requires prior knowledge only about the type of dominant decoherence process that our system of interest undergoes. In other words, we will assume that we know the underlying dynamical process, but we have no information about the characteristic model parameters defining the Markovian or non-Markovian nature of this process. Here, we consider two distinct well-established quantifiers of memory effects for our analysis, namely, the trace distance~\cite{Breuer2009} and the entanglement-based measures~\cite{Rivas2010}. Although capturing the signatures of non-Markovian behavior has been possible in some experiments in recent literature~\cite{Li2020}, accurate determination of the degree of non-Markovianity remains challenging for a wide variety of experimental setups, since in general it would require a large number of rounds of quantum state tomography to be successfully performed on the open system. Moreover, depending on the considered measure in an experiment, one would need to deal with the time evolution of a pair of different initial states or even introduce an ancillary system that needs to be protected from the destructive environmental effects. Consequently, the main motivation of our study is to simplify the experimental determination of the non-Markovianity quantifiers with the help of a ML algorithm. In particular, we show that a support vector machines (SVM) based model precisely assesses the degree of non-Markovianity of two paradigmatic quantum processes, i.e., phase damping (PD) and amplitude damping (AD), with only a single or at most two rounds of quantum state tomography. At the same time, our results provide a proof of principle that the non-Markovianity quantifiers can be precisely estimated with the assistance of ML techniques.
This manuscript is organized as follows. In Sec.~\ref{sec2}, we introduce the quantifiers of non-Markovianity considered in our work. Sec.~\ref{sec3} includes the open system models we consider in our analysis. In Sec.~\ref{sec4}, we briefly review the ML model we use in our analysis. We present our main results in Sec.~\ref{sec5} and we conclude in Sec.~\ref{sec6}. { Details of the SVM-based ML approach are discussed in the appendix.}
\section{Quantifying Non-Markovianity} \label{sec2}
{
\blue{In this section, we intend to elaborate on the characterization and quantification of non-Markovianity in the dynamics of open quantum systems. Before going into the details of the non-Markovianity measures and the notion of memory effects that we consider in our analysis, let us first discuss the fundamental and practical relevance of the non-Markovianity measures in quantifying the degree of memory effects in the dynamics of open systems.
To start with, memory effects are known to play an important role in certain significant quantum information protocols. For instance, considering an optical physical setup, it has been shown that in the case of mixed state teleportation under decoherence, increasing the amount of memory in the open system dynamics enhances the fidelity of the protocol, even allowing for perfect teleportation~\cite{Laine2014}. In a similar setting, it has also been experimentally demonstrated that superdense coding under noise can be efficiently performed due to the emergence of memory effects in the dynamics~\cite{Liu2016}, where the superdense coding capacity can actually be expressed as a function of the trace distance measure. Moreover, it has been shown that, in a realistic open system scenario, the amount of memory in the dynamics directly controls the lower bound of the entropic uncertainty relation~\cite{Karpat2015}, which is in turn related to applications such as witnessing entanglement and cryptographic security~\cite{Berta2010}. In addition, utilizing Landauer’s principle, it has been argued that the degree of memory effects determines the amount of work extracted by erasure under decoherence~\cite{Bylicka2016}. We also mention that a rather general framework has been introduced in Ref.~\cite{Bylicka2014}, where greater values of non-Markovianity have been shown to induce larger revivals of classical and quantum capacities, which would potentially improve error correction schemes. Besides, it has been very recently demonstrated that the emergence of spontaneous quantum synchronization between a pair of two-level systems (which has consequences such as the creation of robust quantum correlations between the pair~\cite{giorgi2012}) can be delayed and even completely prevented as a consequence of the increasing degree of non-Markovianity in the dynamics~\cite{Karpat2020}. Finally, we emphasize that non-Markovianity in the quantum regime is a multifaceted phenomenon, and different measures can be relevant in different physical problems.}
Despite the established definition of non-Markovianity in classical settings, non-Markovianity in the quantum regime is a rather delicate phenomenon~\cite{Vacchini2011}. Traditionally, a prototypical Markovian quantum process is defined based on the Lindblad type master equation, which gives rise to a semigroup of completely positive quantum dynamical maps~\cite{Lindblad1976,gorini1976}. A more general class of quantum processes satisfies the property of completely positive divisibility (CP-divisibility) in connection with the non-negativity of the decay rates in time dependent Lindblad master equations~\cite{Breuer2012}. Let us assume that we have a dynamical quantum process $\Lambda$, i.e., a completely positive trace preserving (CPTP) map, describing the time evolution of a quantum system. In recent literature, Markovian quantum dynamical maps are typically considered to be the ones which obey the decomposition law $\Lambda(t,0)=\Lambda(t,s)\Lambda(s,0)$ where, in addition to $\Lambda(t,0)$ and $\Lambda(s,0)$, the transformation $\Lambda(t,s)$ is also a CPTP map for all $s\leq t$. Such maps are known as CP-divisible transformations and are said to imply a memoryless evolution for the open system. Therefore, based on the violation of the decomposition relation (or equivalently the degree of violation of the CP-divisibility property), it becomes possible to define quantifiers to measure the degree of non-Markovianity in open system dynamics.
At this point, it is important to emphasize that most of the non-Markovianity quantifiers in the literature are actually witnesses for the breakdown of CP-divisibility rather than strict measures~\cite{Rivas2014}. In other words, even though these quantifiers consistently vanish when the CP-divisibility property is satisfied, they are not always guaranteed to capture its violation. However, we should recall that some of these non-Markovianity witnesses might still be considered as non-Markovianity measures (or measures for the degree of memory effects in the dynamics) in their own right, since they can be used for quantifying the backflow of information from the environment to the open system, which by itself can be used as a basis for the definition of non-Markovian dynamics in the quantum regime~\cite{Breuer2016}. This approach is also intuitive because in this way the future states of an open system can depend on its past states, due to the flow of information from the environment back to the open system during the time evolution~\cite{Breuer2009,Fanchini2014}.
Having briefly elaborated on what we mean by memory effects, we are in a position to discuss the non-Markovianity measures that we consider in our study. Let us first introduce the trace distance measure, which is constructed upon the distinguishability of an arbitrary pair of quantum states represented by the density operators $\rho_1$ and $\rho_2$. The trace distance between these two states can be written as $D(\rho_1,\rho_2)=\frac{1}{2} \rm{Tr}|\rho_1-\rho_2|$, with $|A|=\sqrt{A^\dagger A}$. Since a temporary increase of distinguishability, measured with the trace distance, throughout the open system dynamics can be interpreted as a backflow of information from the environment to the open system, memory effects are signaled when $dD/dt>0$. On the other hand, if the trace distance monotonically decreases or remains constant during the dynamics, that is, $dD/dt \leq 0$, the dynamics has no memory and is thus Markovian. Therefore, the degree of non-Markovianity can be measured by~\cite{Breuer2009}
\begin{equation}\label{NBreuer}
\mathcal{N}_{D}=\max_{\rho_1(0),\rho_2(0)}\int_{(dD(t)/dt)>0}\frac{dD(t)}{dt}dt
\end{equation}
where the optimization is performed over all possible pairs of initial states $\rho_1(0)$ and $\rho_2(0)$. As has been suggested in the recent literature~\cite{Addis2014}, in our calculations we assume that the optimal initial states are orthogonal~\cite{Wismannn2012} and given by the eigenstates of the Pauli operator along the $x$ direction. We recall that, since the trace distance is contractive under CPTP transformations, the distinguishability between $\rho_1$ and $\rho_2$ monotonically decreases for all CP-divisible dynamical maps at all times. However, as mentioned earlier, non-Markovianity based on the trace distance measure is not equivalent to the breakdown of the CP-divisibility property.
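To make the evaluation of Eq.~(\ref{NBreuer}) concrete, the following sketch (our illustration in Python; the map \texttt{channel(rho, t)} is a placeholder for whichever dynamics is under study) computes the trace distance of the evolved optimal pair and accumulates its positive increments on a discrete time grid:
\begin{verbatim}
import numpy as np

def trace_distance(rho1, rho2):
    # D = (1/2) Tr|rho1 - rho2|, via eigenvalues of the Hermitian difference
    return 0.5*np.abs(np.linalg.eigvalsh(rho1 - rho2)).sum()

def n_trace_distance(channel, times):
    # optimal initial pair: the +/- eigenstates of sigma_x
    plus  = 0.5*np.array([[1,  1], [ 1, 1]], dtype=complex)
    minus = 0.5*np.array([[1, -1], [-1, 1]], dtype=complex)
    D = np.array([trace_distance(channel(plus, t), channel(minus, t))
                  for t in times])
    return np.clip(np.diff(D), 0.0, None).sum()  # sum of positive increments
\end{verbatim}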
The second measure that we use in our study is based on the entanglement dynamics of a bipartite quantum state, given by our system of interest and an ancilla that is isolated from the effects of the environment. Aside from the interpretation of the information flow using distinguishability, this approach is linked to the information dynamics between the open quantum system and its environment through entropic quantities~\cite{Rivas2010,Fanchini2014}. Specifically, let us introduce an ancilla system $A$, which has the same dimension as the principal open system $B$. Considering that the subsystem $B$ undergoes decoherence and the ancilla $A$ trivially evolves, a monotonic decrease in entanglement of the bipartite system $AB$ implies that the dynamics is Markovian. However, any temporary increase in entanglement throughout the time evolution can be used to capture the memory effects in the open system dynamics. Thus, non-Markovianity can be quantified with
\begin{eqnarray}\label{NEntanglement}
\mathcal{N}_E&=&\max_{\rho_{AB}(0)}\int_{(dE(t)/dt)>0}\frac{dE(t)}{dt}dt
\end{eqnarray}
where $E$ denotes an entanglement measure and the optimization is carried out over all initial states of the bipartite system $\rho_{AB}(0)$. Since it has been demonstrated for a single-qubit open system and an ancilla that the optimal value of the measure is attained for Bell states~\cite{Neto2016}, we calculate it considering that the initial bipartite system $AB$ is in one of the Bell states. In fact, any entanglement measure can be used to evaluate this quantity; here we choose to focus on the concurrence~\cite{Wootters1997}. Finally, we note that since entanglement measures are monotones under local CPTP maps, the entanglement-based non-Markovianity measure vanishes for all CP-divisible processes, similarly to the trace distance measure.}
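The entanglement-based measure of Eq.~(\ref{NEntanglement}) can be sketched along the same lines, using Wootters' closed form for the concurrence; here the single-qubit channel acts on subsystem $B$ of an initial Bell state while the ancilla $A$ is left idle (again our illustration, relying on the linearity of the channel map):
\begin{verbatim}
import numpy as np

SYSY = np.kron(np.array([[0, -1j], [1j, 0]]), np.array([[0, -1j], [1j, 0]]))

def concurrence(rho):
    # Wootters' formula for two qubits
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ SYSY @ rho.conj() @ SYSY)))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def n_entanglement(channel, times):
    bell = np.zeros((4, 4), dtype=complex)           # |Phi+><Phi+|
    bell[np.ix_([0, 3], [0, 3])] = 0.5
    E = []
    for t in times:
        # identity on A, channel on B: evolve each 2x2 block of rho_AB
        blk = [[channel(bell[2*i:2*i+2, 2*j:2*j+2], t) for j in (0, 1)]
               for i in (0, 1)]
        E.append(concurrence(np.block(blk)))
    return np.clip(np.diff(np.array(E)), 0.0, None).sum()
\end{verbatim}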
\section{Open Quantum System Models} \label{sec3}
We now introduce the paradigmatic open quantum system models that we consider to study how well one can determine the degree of non-Markoviantity using ML techniques.
\subsection{Phase Damping}
Let us first consider a two-level quantum system (qubit) undergoing decoherence induced by colored dephasing noise as introduced in Ref.~\cite{daffer04}. Suppose that the time-evolution of the qubit is described by a master equation of the form
\begin{equation} \label{mem}
\dot{\rho}=K\mathcal{L}\rho,
\end{equation}
where $\mathcal{L}$ is a Lindblad superoperator and $\rho$ denotes the density operator of our system of interest. Here, the time-dependent integral operator $K$ acts on the open system as $K\phi=\int_0^t k(t-t')\phi(t')dt'$ with $k(t-t')$ being a kernel function governing the type of memory in the environment. A master equation of the form given in Eq.~(\ref{mem}) can arise, for instance, when one considers a time-dependent Hamiltonian
\begin{equation}
H(t)=\hbar\sum_{k=1}^3\Gamma_k(t)\sigma_k,
\end{equation}
where $\Gamma_k(t)$ are independent random variables possessing the statistics of a random telegraph signal, and $\sigma_k$ are the Pauli matrices in the $x, y$ and $z$ directions. The random variables can be expressed as $\Gamma_k(t)=\alpha_k n_k(t)$, where each $n_k(t)$ has a Poisson distribution with mean $t/2\tau_k$ and $\alpha_k$ is a coin-flip random variable taking the values $\pm \alpha_k$. While the $\alpha_k$ describe the coupling of the open system to the random noise, the flipping rates are given by $1/\tau_k$.
To obtain a solution for the density operator $\rho$ of the open system qubit, one can formally integrate the von Neumann equation $\dot{\rho}=-(i/\hbar)[H,\rho]$, which yields
\begin{equation}
\rho(t)=\rho(0)-i \int_0^t\sum_k \Gamma_k(s)[\sigma_k,\rho(s)]ds. \label{isol}
\end{equation}
Substituting Eq. (\ref{isol}) back into the von Neumann equation and evaluating the stochastic average, one gets
\begin{equation}
\dot{\rho}(t)=-\int_0^t\sum_k e^{-(t-t')/\tau_k}\alpha_k^2 [\sigma_k,[\sigma_k,\rho(t')]]dt', \label{sol}
\end{equation}
using the correlation functions of the random telegraph signals $\langle\Gamma_j(t)\Gamma_k(t')\rangle=\alpha_k^2\exp(-|t-t'|/\tau_k)\delta_{jk}$, which define the memory kernel. In Ref.~\cite{daffer04}, it has also been shown that under the condition that the noise acts only in a single direction, i.e., when two of the $\alpha_k$ vanish, the dynamics generated by Eq. (\ref{sol}) is completely positive. In fact, if $\alpha_3=1$ and $\alpha_1=\alpha_2=0$, then the open system undergoes decoherence induced by a colored dephasing noise. In this case, the Kraus operators describing the dynamics of the open system are given by
\begin{align}
M_1(\nu) &= \sqrt{[1+\Lambda(\nu)]/2}\mathbb{I}, \\
M_2(\nu) &= \sqrt{[1-\Lambda(\nu)]/2}\sigma_3,
\end{align}
where $\Lambda(\nu)=e^{-\nu}[\cos(\mu\nu)+\sin(\mu\nu)/\mu]$, $\mu=\sqrt{(4\tau)^2-1}$, $\nu=t/2\tau$ is the dimensionless time and $\mathbb{I}$ denotes the identity operator. Particularly, the dynamics of the open system can be expressed using the operator-sum representation as
\begin{equation}
\rho(\nu) = \sum_{i=1}^{2}M_{i}(\nu)\rho(0)M_{i}^{\dagger}(\nu).
\end{equation}
We note that the parameter $\tau$ controls the degree of memory effects responsible for the emergence of non-Markovianity: while $\tau<1/4$ gives a Markovian time evolution, $\tau>1/4$ implies non-Markovian dynamics for the open system, {according to both measures that we have introduced}. For further details about the physical relevance of the model considered in this part, interested readers may refer to Ref.~\cite{daffer04}.
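A direct transcription of this channel reads as follows (our sketch; the complex square root covers both regimes of $\tau$ at once, and $\tau=1/4$ itself would require the limit $\sin(\mu\nu)/\mu\rightarrow\nu$):
\begin{verbatim}
import numpy as np

def Lambda_pd(nu, tau):
    mu = np.sqrt(complex((4*tau)**2 - 1))    # purely imaginary for tau < 1/4
    return (np.exp(-nu)*(np.cos(mu*nu) + np.sin(mu*nu)/mu)).real

def pd_channel(rho, nu, tau):
    L = Lambda_pd(nu, tau)
    M1 = np.sqrt((1 + L)/2)*np.eye(2)
    M2 = np.sqrt((1 - L)/2)*np.diag([1.0, -1.0])
    return M1 @ rho @ M1.conj().T + M2 @ rho @ M2.conj().T
\end{verbatim}
For this channel the trace distance of the optimal pair of $\sigma_x$ eigenstates is simply $|\Lambda(\nu)|$, so the revivals of $|\Lambda|$ occurring for $\tau>1/4$ are exactly the episodes of information backflow registered by $\mathcal{N}_D$.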
\subsection{Amplitude Damping}
We will now consider a resonantly driven qubit under the influence of an AD channel, which is modelled as a bosonic reservoir at zero temperature~\cite{whalen2016,Haikka2010,Haikka2010-2,Shen2014,Huang2017}. The dynamics for this configuration is described by the Hamiltonian ($\hbar=1$)
\begin{align}
H &= \omega_{0}\sigma_{+}\sigma_{-} + \Omega(\sigma_{+} e^{-i\omega_L t} + \sigma_{-} e^{i\omega_L t}) \nonumber \\
&+ \sum\nolimits_{k} \omega_{k}a_{k}^{\dag}a_{k} + \sum\nolimits_{k} ( g_{k}^{\ast}\sigma_{+} a_{k}+g_{k}\sigma_{-} a_{k}^{\dag}) \label{Hamiltonian1},
\end{align}
where $\sigma_{+} =\sigma_{-}^\dagger= \ket{\rm e}\bra{\rm g}$, and $\ket{\rm e}$ ($\ket{\rm g}$) corresponds to the excited (ground) state of the qubit with transition frequency $\omega_0$. The external driving field strength and its frequency are denoted by $\Omega$ and $\omega_L = \omega_0$, respectively, while $a_{k}^{\dagger}$ ($a_{k}$) is the creation (annihilation) operator of the $k$-th reservoir mode with frequency $\omega_{k}$. Finally $g_{k}$ is the coupling strength between the qubit and the $k$-th mode. The dissipation kernel is given by
\begin{align}
f(t) &= \sum\nolimits_{k} \left\vert g_{k}\right\vert ^{2}e^{-i\left( \omega_{k}-\omega_{0}\right) t} \nonumber\\
&= \int\nolimits_{0}^{\infty}d\omega J\left( \omega\right) e^{-i\left( \omega-\omega_0\right)t},\label{ft}
\end{align}
with $J\left(\omega\right)$ being the spectral density of the reservoir. Without loss of generality, we assume the qubit resonantly couples to a reservoir with a Lorentzian spectral density \cite{BreuerPet,whalen2016,Haikka2010,Haikka2010-2,Shen2014,Huang2017,Bellomo2007}
\begin{equation}
J( \omega) =\left( \frac{\gamma_0}{2\pi}\right) \frac{\lambda^{2}}{\left( \omega-\omega_0\right) ^{2}+\lambda^{2}}, \label{spectraldensity}
\end{equation}
in which the spectral width (twice the coupling $\lambda$) is related to the correlation time of the reservoir, $\tau_{B}\approx1/\lambda$, whereas $\gamma_0$ is connected to the time scale on which the state of the system changes, $\tau_{R}\approx1/\gamma_0$~\cite{BreuerPet}. For this spectral density, and considering no external field, the open system dynamics is essentially Markovian within the weak coupling regime, which corresponds to $\tau_{R}>2\tau_{B}$ $\left( \lambda>2\gamma_0\right) $. By contrast, the dynamics exhibits non-Markovian features within the strong coupling regime where $\lambda<2\gamma_0$ { for both of the considered measures}.
When the spectral density is Lorentzian, the interaction of the qubit with its genuine environment can be exactly modeled by an equivalent `Markovian' description, in which the qubit itself is coupled to a damped harmonic oscillator (an auxiliary pseudomode described by the bosonic operators $b$ and $b^{\dagger}$), which is initially in the vacuum state. The relationship between the original environment variables and the pseudomode ones is well established, and the details can be found in Ref.~\cite{Garraway1997}; besides, it is worth emphasizing that the pseudomode is a mathematical construction and, strictly speaking, does not exist physically. Here, the system-pseudomode dynamics, described by the density operator $\varrho_t$, is given by the following master equation in a frame rotating with the driving field frequency~\cite{whalen2016}
\begin{equation} \label{mateqeff}
\dot{\varrho}_t = -i[\mathcal{H},\varrho_t] + {\mathcal{L}}_b \varrho_t,
\end{equation}
with
\begin{align}
&\mathcal{H} = \Omega(\sigma_{+} +\sigma_{-}) + \sqrt{\lambda \gamma_0/2}\,(\sigma_{+} b + b^\dagger \sigma_{-}), \label{Heff} \\
&{\mathcal{L}}_b \varrho_t = \lambda(2b \varrho_t b^\dagger - b^\dagger b \varrho_t - \varrho_t b^\dagger b) \label{Lb}.
\end{align}
The qubit dynamics is obtained by taking the partial trace over the harmonic oscillator degrees of freedom, i.e., $\rho_t = \Tr_{b}[\varrho_t]$. We remark that, to the best of our knowledge, Eq.~\eqref{mateqeff} does not have a closed-form solution for $\rho_t$ in the general case. However, when there is no external driving field, $\Omega=0$, the open system dynamics of the qubit is given by~\cite{BreuerPet,Bellomo2007}
\begin{equation}
\rho_t = \begin{pmatrix}
\rho_\text{ee}^{0} P_t \,\,\, & \,\,\, \rho_\text{eg}^{0} \sqrt{P_t} \\
\rho_\text{ge}^{0} \sqrt{P_t} \,\,\, & \,\,\, \rho_\text{gg}^{0}+\rho_\text{ee}^{0} (1-P_t)
\end{pmatrix}, \label{rhotamp}
\end{equation}
where $P_t = e^{-\lambda t}[\cos(dt/2)+(\lambda/d)\sin(dt/2)]^2$ with $d=\sqrt{2\gamma_0\lambda - \lambda^2}$, and $\rho_{ij}^{0}$ denotes the initial state elements.
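For later reference, a sketch of this solution in code (ours; the complex square root again covers both coupling regimes, with $\lambda=2\gamma_0$ requiring the limit $\sin(dt/2)/d\rightarrow t/2$):
\begin{verbatim}
import numpy as np

def P_amp(t, lam, gamma0=1.0):
    d = np.sqrt(complex(2*gamma0*lam - lam**2))  # imaginary for lam > 2*gamma0
    G = (np.exp(-lam*t/2)*(np.cos(d*t/2) + (lam/d)*np.sin(d*t/2))).real
    return G**2                                  # P_t = e^{-lam t}[...]^2

def ad_channel(rho, t, lam, gamma0=1.0):
    P = P_amp(t, lam, gamma0)
    return np.array([[rho[0, 0]*P,           rho[0, 1]*np.sqrt(P)],
                     [rho[1, 0]*np.sqrt(P),  rho[1, 1] + rho[0, 0]*(1 - P)]])
\end{verbatim}
Feeding this map into the measure sketches of Sec.~\ref{sec2} reproduces the qualitative statement above: the accumulated backflow vanishes for $\lambda>2\gamma_0$ and grows as the coupling becomes stronger.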
\section{Machine Learning} \label{sec4}
There are now a myriad of learning models available in the literature~\cite{scikit-learn}, each of which is suited to a particular class of problems. Since we will perform our calculations using SVM throughout this study, it is instructive to {briefly} introduce the main aspects of this computational approach. \blue{A more elaborate explanation of SVM, including a more illustrative and pedagogical example, is given in Appendix~\ref{app}.}
\subsection{Support Vector Machines}
One of the best understood ML models is the SVM~\cite{Vapnik_1995}. This model can be used for classification (SVC)~\cite{Burges_1998,Dietrich_1999,Risau_2000, Opper_2001} and regression (SVR)~\cite{Scholkopf_1998, Scholkpf_2002, Smola_2004, Drucker96}. Moreover, it has recently been extended to the quantum regime~\cite{Rebentrost_2014, Li_2015, Biamonte_2017, Havlicek_2019}. In general terms, SVC is a class of algorithms aiming to find a hyperplane that splits the dataset according to the different classes. Predicting the label of unknown data is then relatively easy, since it only depends on where the data samples fall with respect to the hyperplane. The choice of hyperplane is not unique, and thus SVC selects the maximum-margin one, i.e., it maximizes the distance between the hyperplane and some of the boundary training data, which are the samples that lie close to the edge of their class. These particular samples are known as support vectors (SVs). Since the SVs are a subset of the training data set, this model is suitable for situations where the number of training samples is small compared to the dimension of the feature vector. Moreover, once the model has fitted the training data set, it can be used as a decision function that predicts new samples without holding the training data set in memory. For a non-linearly-separable data set, it is possible to define a kernel function that maps the samples to a higher-dimensional space, where they are linearly separable. Although we have only provided an intuitive description of SVC, below we give a brief mathematical description of SVR, which will be our main tool in the rest of this manuscript.
SVR delivers the tools for finding a function $f(\textbf{x})$ that fits the training data set $\lbrace \textbf{x}_i,y_i \rbrace$, where $\textbf{x}_i\in\mathbb{R}^d$, and $y_i\in\mathbb{R}$ labels each sample. Note that $d$ stands for the dimension of the feature vector. For illustration, we focus on a linear function $f(\textbf{x})= \textbf{w}\cdot\textbf{x} + b$, with $\textbf{w}\in\mathbb{R}^d$ and $b\in\mathbb{R}$ being fitting parameters. For $\epsilon$-SVR \cite{Vapnik_1995}, deviations of $f(\textbf{x}_i)$ from the labeled data ($y_i$) must be smaller than $\epsilon$, i.e. $\vert f(\textbf{x}_i)-y_i\vert\leq\epsilon$. Moreover, the desired function must be as flat as possible, while allowing for some errors. Therefore, the optimization problem can be stated as \cite{Vapnik_1995,Smola_2004,scikit-learn}
\begin{eqnarray}
&\mbox{minimize} \hspace*{1cm} \frac{1}{2} \left\Vert \textbf{w}\right\Vert ^2 + C\sum_i \left( \xi_i + \xi_i^\ast \right)\label{min01} \\
&\mbox{subjected to} \hspace*{1cm} \left\lbrace \begin{array}{l}
y_i - \textbf{w}\cdot\textbf{x}_i -b \leq \epsilon + \xi_i \\
\textbf{w}\cdot\textbf{x}_i +b - y_i\leq \epsilon + \xi_i^\ast \\
\xi_i,\xi_i^\ast \geq 0
\end{array} \right. \label{min02}
\end{eqnarray}
where { $||\cdot ||^2$ stands for the squared Euclidean norm,} $\xi_i,\xi_i^\ast$ are real slack variables, and the constant $C>0$ sets the tolerance for deviations larger than $\epsilon$.
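As a minimal illustration (with purely synthetic data, not the data sets used below), the scikit-learn implementation of $\epsilon$-SVR, which solves the dual of Eqs.~(\ref{min01})--(\ref{min02}) with an optional kernel, can be exercised in a few lines:
\begin{verbatim}
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(seed=1)
X = rng.uniform(0.0, 3.0, size=(300, 1))             # toy feature vectors
y = np.sin(2.0*X[:, 0]) + 0.05*rng.normal(size=300)  # noisy toy targets

svr = SVR(kernel='rbf', C=100.0, epsilon=0.01)
svr.fit(X, y)
print(svr.support_.size, "support vectors out of", X.shape[0])
print(svr.predict(np.array([[1.5]])))                # prediction on new input
\end{verbatim}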
{
Before we start to present our main results using the considered SVM model, we would like to first define certain terms, which are commonly used in ML studies, for the readers who might be unfamiliar with the subject. In our work, the \textit{regressor} is an algorithm which basically estimates the relationship between independent input variables and a certain output variable. While these independent variables acting as input data are known as \textit{features}, the output of the regressor is said to be the \textit{target} value. When a training data set including features and their respective target values is introduced to the ML algorithm, it attempts to find patterns in this set to create a regressor. This is known as the process of training, during which the algorithm learns from the training data set. In other words, a learning algorithm such as SVM takes the training data and produces a regressor which can in turn give reliable predictions for the output values of independent inputs. In our study, the features will be expectation values of spin observables at certain times, and the target value will be the degree of non-Markovianity of the considered open quantum system dynamics. \blue{It is important to note that the target value cannot be evaluated as a simple function of the features since the expectation values of the observables are not explicitly connected to the degree of non-Markovianity. Consequently,} in Appendix~\ref{app}, we elaborate on how the SVM based ML algorithm functions by first providing a simple illustrative example and then discussing its mathematical details.}
\section{Main Results} \label{sec5}
We commence our analysis considering what we refer to as pure PD and AD channels, where a pure channel means that no external driving field is present. For each one of these models, in order to generate a database for the training process, we calculate the time evolution of the open system and use the aforementioned measures to quantify the degree of non-Markovianity as a function of the model parameters, i.e., $\lambda$ and $\tau$. We consider a wide range of parameter values that define the two processes. In particular, in the case of the AD channel, we consider the coupling parameter $\lambda/\gamma_0$ to be in the range $[0.1,3.0]$ with a step size of $10^{-3}$, which enables us to generate uniformly distributed training data with 2900 samples. On the other hand, for the PD channel, the parameter $\tau$ is varied in the range $[0.1,0.5]$ with a step size of $10^{-4}$, which results in uniformly distributed training data with 4000 samples. Hereafter, we denote the individual samples of these databases by $\lambda_n$ and $\tau_n$. It is worth noting that we actually create two independent regressors, one for each channel, but we discuss both of them in parallel because the procedure is identical.
Next, we calculate the expectation values $\mathcal{O}_x$, $\mathcal{O}_y$, and $\mathcal{O}_z$ at a fixed time $t^*$ in the dynamics where
\begin{equation}
\mathcal{O}_k = {\rm{Tr}}[\sigma_k\rho(t^*)],
\end{equation}
with $\sigma_k$ being the three Pauli spin operators in the $x$, $y$ and $z$ directions. We should emphasize that the expectation values $\mathcal{O}_k$ are calculated for all $\lambda_n$ and $\tau_n$ individually at each fixed time point $t^*$. Therefore, our database now contains, for each model parameter, the expectation values $\mathcal{O}_x(t^*)$, $\mathcal{O}_y(t^*)$, and $\mathcal{O}_z(t^*)$ as the \textit{features} and the degree of non-Markovianity $\mathcal{N}$ as our \textit{target} value. We note that the experimental determination of these expectation values can be realized with a single quantum state tomography performed at each time $t^*$. \blue{We should also emphasize that, to train the regressor by providing it with a data set consisting of the features and their target values, the degree of non-Markovianity is calculated numerically employing the definitions given in Eq.~(\ref{NBreuer}) and Eq.~(\ref{NEntanglement}).} To summarize, we introduce to our learner a set of features and their known respective targets. Our main objective will be to produce a regressor that is able to determine the degree of non-Markovianity, given a pure decoherence process (without external fields), using only the information contained in the expectation values {at a fixed time}.
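In code, one training row per model parameter can be assembled as in the sketch below, which reuses the \texttt{ad\_channel} and \texttt{n\_entanglement} functions from the earlier sketches; the initial state, the value of $t^*$, and the coarse grids are illustrative choices on our part:
\begin{verbatim}
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def features(rho):
    # (O_x, O_y, O_z): one round of state tomography at the fixed time t*
    return np.array([np.trace(p @ rho).real for p in PAULI])

plus = 0.5*np.array([[1, 1], [1, 1]], dtype=complex)  # illustrative rho(0)
t_star, times = 3.0, np.linspace(0.0, 40.0, 800)      # coarse grids for speed
rows = [(features(ad_channel(plus, t_star, lam)),
         n_entanglement(lambda r, t: ad_channel(r, t, lam), times))
        for lam in np.arange(0.1, 3.0, 0.01)]
\end{verbatim}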
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig1.png}
\caption{Dynamics of the expectation value $\mathcal{O}_x(t)$ for different coupling strengths {in case of the pure AD channel, that is,} for $\lambda=0.1$ {(blue crosses)}, $\lambda=0.5$ {(red dotted line)}, $\lambda=1.0$ {(yellow dot-dashed line)}, $\lambda=3.0$ {(purple circles)}, $\lambda=5.0$ {(green dashed line)}. The thick black solid line is the separation curve between Markovian and non-Markovian dynamics, i.e., $\lambda=2.0$. In the inset, we show how {the expectation value} $\mathcal{O}_x(1/\gamma_0)$ changes with $\lambda$.}
\label{fig1}
\end{figure}
We would like to first point out that, in the case of the pure channels, depending on the time $t^*$, each expectation value $\mathcal{O}_k(t)$ can have a unique correspondence with each $\lambda_n$ and $\tau_n$ for AD and PD, respectively. For illustrative purposes, we show in Fig.~\ref{fig1} the time evolution of $\mathcal{O}_x(t)$ for different values of $\lambda$ for the pure AD channel. It is straightforward to note that one can find an optimal time $t_c$ (for example, in this case, around $1/\gamma_0$) at which the curves corresponding to Markovian and non-Markovian dynamics are well separated, depending on whether they are above or below the thick solid line ($\lambda=2\gamma_0$). This suggests that a single state tomography, at a well-determined time $t_c$, is sufficient to estimate the degree of non-Markovianity. For example, if $t\gamma_0=1$, for each value of $\lambda$ we have a precise and distinct value of $\mathcal{O}_x(t)$. In the inset of Fig.~\ref{fig1} (assuming $t\gamma_0=1$) we show that there is an optimal region where, even for small variations in $\lambda$, the change in $\mathcal{O}_x(t_c)$ is significant. This is crucial to determine the best $t_c$ to be used in an experiment. Indeed, we need to choose a time $t_c$ that increases the accuracy of the ML algorithm but, at the same time, keeps the expectation values well spread out as a function of $\lambda$. For example, examining Fig.~\ref{fig1}, we see that one could choose $t\gamma_0=0.5$ but, in this case, a high-precision measurement is necessary since $\mathcal{O}_x(t)$ varies only slightly, i.e., from approximately $0.8$ to $1.0$ as $\lambda/\gamma_0$ ranges from $0.1$ to $3.0$. This imposes a trade-off between the experimental precision of the measurements and the accuracy of the ML algorithm.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{Fig2.png}
\caption{Comparison of the estimated ({orange} circles) and theoretical values ({blue solid line}) of the degree of non-Markovianity for pure decoherence channels. The plots (a) and (b) display the results for the trace distance $\mathcal{N}_D$ and the entanglement based $\mathcal{N}_E$ measures, respectively, in case of pure AD channel. On the other hand, the plots (c) and (d) show the outcomes of the same investigation in case of pure PD channel. {The estimated values are generated by our regressor using the input data, which has not been used in training, and the target values are ordered in decreasing order for better illustration.}}
\label{fig2}
\end{figure}
An important aspect of the application of ML algorithms is the concept of data normalization. Here, we also employ feature standardization, which rescales each feature in the dataset to have zero mean and unit variance. Such a treatment can in general speed up the convergence of the algorithm~\cite{ioffe2015} while increasing the accuracy of the method. Thus, for each set of observables, we calculate their mean value and variance, and transform the data as \begin{equation}
\tilde{\mathcal{O}_k^n} = (\mathcal {O}_k^n - u_k)/s_k,
\end{equation}
where $\mathcal {O}_k^n$ is a specific realization of $\mathcal{O}_k$ (that is, for a particular $\lambda_n$ or $\tau_n$), $u_k$ is the mean value of the expectation $\mathcal{O}_k$, and $s_k$ is the standard deviation of $\mathcal{O}_k$. We remark that this simple procedure can enhance the accuracy of the estimation by up to one order of magnitude.
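With scikit-learn this transformation is a one-liner; the essential detail is that $u_k$ and $s_k$ are estimated on the training set only and then reused, unchanged, on the test set (\texttt{X\_train} and \texttt{X\_test} refer to the $70\%/30\%$ split described below):
\begin{verbatim}
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()                    # implements (O_k - u_k)/s_k
X_train_std = scaler.fit_transform(X_train)  # u_k, s_k fitted on training data
X_test_std = scaler.transform(X_test)        # same u_k, s_k applied to test data
\end{verbatim}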
We now turn our attention to the results on the estimation of the degree of non-Markovianity in pure AD and PD channels, as generated by the regressor we trained. From this point on, out of the whole database we have produced, we will always keep $70\%$ of the data (randomly chosen) to train the SVR, and reserve the remaining $30\%$ of the data to test the performance of the regressor. Note that this is a standard procedure when working with SVR, but we should also remark that the choice of these percentages can be adjusted depending on the problem to improve the prediction accuracy. In Fig.~\ref{fig2}, we show the degree of non-Markovianity predicted with our SVR model (orange circles) and the theoretical values (blue solid line) in the case of pure decoherence channels, considering both the trace distance $\mathcal{N}_D$ and entanglement $\mathcal{N}_E$ based measures of non-Markovianity. Here, in the generation of the dataset, the expectation values $\mathcal{O}_k(t)$ are calculated at the fixed time $t_c=3/\gamma_0$ ($t_c=3$) for AD (PD). We also note that the theoretical data are arranged in decreasing order and we limit the number of the estimated non-Markovianity values in the figure merely for illustrative purposes. Specifically, whereas Fig.~\ref{fig2}a and Fig.~\ref{fig2}b respectively show our findings for $\mathcal{N}_D$ and $\mathcal{N}_E$ for the AD channel, Fig.~\ref{fig2}c and Fig.~\ref{fig2}d display the results of the same analysis for the PD channel. It then becomes clear that our ML algorithm can estimate the degree of non-Markovianity with very high precision. Indeed, the mean errors for the AD and PD channels are given by $7 \times 10^{-4}$ and $2 \times 10^{-4}$ for the trace distance measure, and $9 \times 10^{-4}$ and $9 \times 10^{-5}$ for the entanglement based measure, respectively. Therefore, for pure decoherence channels, a single tomography should be sufficient to accurately estimate the degree of memory effects.
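Putting the pieces together, the pipeline behind Fig.~\ref{fig2} can be sketched end-to-end as follows (hyperparameters are illustrative defaults rather than the tuned values; \texttt{rows} is the feature/target list assembled above):
\begin{verbatim}
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X = np.array([r[0] for r in rows])   # features: O_x, O_y, O_z at t_c
y = np.array([r[1] for r in rows])   # target: degree of non-Markovianity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
reg = SVR(kernel='rbf').fit(scaler.transform(X_tr), y_tr)
y_hat = reg.predict(scaler.transform(X_te))
print("mean error:", np.mean(np.abs(y_hat - y_te)))
\end{verbatim}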
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig3.png}
\caption{Comparison between the predicted ({orange} circles) and theoretical values ({blue solid line}) of the degree of non-Markovianity for the AD channel considering an additional external field, as measured by the entanglement based measure $\mathcal{N}_E$, for increasing values of the field strength $\Omega$ (in units of $\gamma_0$), {that is, in (a) $\Omega=0.01$, in (b) $\Omega=0.05$, in (c) $\Omega=0.09$, and in (d) $\Omega=0.20$.} Here, our regressor has been trained with the data generated for the pure AD channel.}\label{fig3}
\end{figure}
At this point, it is important to mention that the above results on the AD channel clearly depend on knowledge of the parameter $\gamma_0$, so that the timescale of $t_c$ can be reliably determined and our approach can be used in an experiment. If the parameter $\gamma_0$ is unknown in the considered setting, it has recently been shown in Ref.~\cite{2005.01144} that the noise spectrum of any environment surrounding a qubit can be accurately extracted by training a deep neural network (a long short-term memory network) with usual time-dynamics measurements on qubits, e.g., the two-pulse `Hahn' echo curves.
Motivated by the results we have obtained for pure decoherence channels, we would like to apply our computational approach to a natural extension of the studied problem; that is, we ask what the consequences of an external driving field acting on the open system would be. This problem is certainly more involved than pure decoherence, since the external field induces extra oscillations in the evolution of the expectation values $\mathcal{O}$, which could be mistaken for a signature of non-Markovianity by the regressor. In this part, we choose to limit our analysis to the non-Markovianity of the AD channel quantified through the entanglement based measure $\mathcal{N}_E$. We will now assume an external driving $\Omega\neq 0$ in Eq.~(\ref{Hamiltonian1}) and follow the procedure that we have used to obtain the results presented in Fig.~\ref{fig2}. In fact, our first question here is: given a regressor that is trained to work with the pure AD channel, how precisely is it able to estimate the degree of non-Markovianity in the presence of an external field? To answer this question we show in Fig.~\ref{fig3} the comparison between the estimated (by a regressor trained for the pure AD channel) and the theoretical non-Markovianity results when the external field is non-zero for the AD channel. In the plots displayed from Fig.~\ref{fig3}a to Fig.~\ref{fig3}d, we consider the external field strengths $\Omega/\gamma_0$ to be $0.01$, $0.05$, $0.09$, and $0.20$, in increasing order, which in turn result in mean errors of $1.6 \times 10^{-3}$, $2.2 \times 10^{-2}$, $6.4 \times 10^{-2}$, and $0.27$. As can be seen by comparing the predicted and theoretical non-Markovianity values, the results are satisfactory only for small perturbations, and as the driving strength increases, the regressor no longer works.
\begin{figure}[t]
\centering
\includegraphics[width=0.40\textwidth]{Fig4.png}
\caption{The degree of non-Markovianity quantified by $\mathcal{N}_E$ as a function of the coupling strength $\lambda$ (in units of $\gamma_0$) for different values of external field $\Omega$ (in units of $\gamma_0$) {for the AD channel.}}
\label{fig4}
\end{figure}
Our findings in Fig.~\ref{fig3} agree with what we expected, since the effects induced by the external driving can significantly alter the time evolution of the expectation values $\mathcal{O}_x(t)$, $\mathcal{O}_y(t)$, and $\mathcal{O}_z(t)$. It is also important to emphasize that revivals in the dynamics of the expectation values do not necessarily imply that the time evolution is non-Markovian. Actually, the external field $\Omega$ suppresses the memory effects in the open system dynamics despite the fact that it causes oscillations in the dynamics of the expectation values. Fig.~\ref{fig4} demonstrates this situation: as the field strength $\Omega$ increases, the non-Markovianity $\mathcal{N}_E$ decreases, tending to zero even for small values of $\Omega$. This behavior is the cause of the inaccuracy of the non-Markovianity estimated by the SVR algorithm in Fig.~\ref{fig3}.
In order to enhance the predictive power of our SVR-based ML algorithm, the natural solution is to train the regressor taking into account the existence of the external field $\Omega$. Thus, we now train our algorithm assuming that the coupling strength $\lambda/\gamma_0$ takes values in the range $[0.1,3.0]$, with a step size of $10^{-2}$, and additionally, we consider a set of values for the drive parameter $\Omega/\gamma_0$ (ranging from $0.01$ to $0.5$), which generates training data with $290$ samples for each $\Omega$. Here, $\Omega$ is sampled with a step size of $10^{-2}$ for $\Omega/\gamma_0$ between $0.01$ and $0.2$, and with a step size of $0.1$ between $0.2$ and $0.5$. The reason for this difference in the distribution of $\Omega$ is to obtain a balanced dataset, in which the number of Markovian samples is similar to that of non-Markovian ones. In Fig.~\ref{fig5}, we present the predictions of our regressor, now trained in the presence of the external field. In particular, Fig.~\ref{fig5}a and Fig.~\ref{fig5}c present a comparison of the theoretical and estimated values of the non-Markovianity measure $\mathcal{N}_E$ using the expectation values $\mathcal{O}_x(t_c)$, $\mathcal{O}_y(t_c)$, and $\mathcal{O}_z(t_c)$ at fixed times $t_c=3/\gamma_0$ and $t_c=5/\gamma_0$, respectively. Note that for each case, the experimental implementation requires a single state tomography performed at time $t_c$. As can be seen from these plots, we obtain a better result for $t_c=3/\gamma_0$ than for $t_c=5/\gamma_0$ (the mean errors for these two cases are $2.6 \times 10^{-3}$ and $1.3 \times 10^{-2}$, respectively). Next, in order to further improve the estimation efficiency of our SVR algorithm, we let our regressor have access to more information, which means that we train it using the values of $\mathcal{O}_x(t_c)$, $\mathcal{O}_y(t_c)$, and $\mathcal{O}_z(t_c)$ at two fixed times $t_{c_1}$ and $t_{c_2}$. The outcomes of our analysis in this case are shown in Fig.~\ref{fig5}b and Fig.~\ref{fig5}d. Particularly, Fig.~\ref{fig5}b includes the results of the comparison between the estimated and the theoretical values of the non-Markovianity for the AD channel with external drive when two state tomographies are performed at times $t_{c_1}=3/\gamma_0$ and $t_{c_2}=6/\gamma_0$. On the other hand, in Fig.~\ref{fig5}d, the outcomes of the same analysis are given when the measurement times are $t_{c_1}=5/\gamma_0$ and $t_{c_2}=10/\gamma_0$. Consequently, we see that two quantum state tomographies at fixed times separated by intervals of either $3/\gamma_0$ or $5/\gamma_0$ should be sufficient to precisely estimate the degree of non-Markovianity, with mean errors of $1.2 \times 10^{-3}$ and $1.3 \times 10^{-3}$, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig5.png}
\caption{Comparison between the estimated ({orange} circles) and theoretical values {(blue solid lines)} of the degree of non-Markovianity for the AD channel with external field, as measured by the entanglement based measure $\mathcal{N}_E$, where the regressor is trained taking into account the external field. While the plots in (a) and (c) are generated considering a single state tomography at a fixed time during the dynamics, the results in the plots (b) and (d) are obtained taking into account two tomographies {at two fixed times}.}
\label{fig5}
\end{figure}
\section{Conclusion} \label{sec6}
In summary, we have introduced an experimentally friendly approach, which utilizes ML techniques based on SVR, to estimate the degree of memory effects in the dynamics of open quantum systems. In particular, we have first considered the trace distance and entanglement based measures of non-Markovianity and demonstrated that, in the case of pure AD and PD channels, a single quantum state tomography should be sufficient to estimate the value of the non-Markovianity measures very precisely. Next, we have focused on the AD channel, now also taking into account an external drive on the open system. We demonstrated that even though the regressor trained with pure AD data can estimate the degree of non-Markovianity relatively well for small values of the external drive strength, as the drive parameter increases, our method no longer works due to the extra oscillations induced on the expectation values $\mathcal{O}$ by the external drive. We have then shown that once our regressor is trained with the data provided by the AD channel dynamics including the external drive, it becomes once again possible to precisely estimate the degree of non-Markovianity with at most two rounds of state tomography.
\section{Acknowledgements}
F. F. F. acknowledges support from Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP), project number 2019/05445-7. G. K. is supported by the BAGEP Award of the Science Academy, the TUBA-GEBIP Award of the Turkish Academy of Sciences, and also by the Technological Research Council of Turkey (TUBITAK) under Grant No. 117F317. A.N. acknowledges support from Universidad Mayor through the Postdoctoral fellowship. R.C. acknowledges support from Fondecyt Iniciaci\'on No. 11180143.
\section{The problem: regular magnetic field resists strong shocks}
Magnetic fields in the interstellar medium of the Milky Way and other disc galaxies are, to a first approximation, frozen-in to the interstellar gas: put another way, the magnetic Reynolds number of the medium is sufficiently high
that advection dominates diffusion in the magnetic induction equation. One would therefore expect that strong velocity shear, such as that which occurs in barred galaxies, and large-scale shocks, like the density wave induced shocks along spiral arms, would lead to a strengthening of the magnetic field when the large-scale magnetic field is approximately perpendicular to the shear or parallel to the shock, as is commonly observed in barred (\cite{Beck:02}) and normal spiral galaxies (\cite{Beck:96}).
Surprisingly, observations of the barred galaxies NGC 1097 and NGC 1365 (\cite{Beck:05}) show that the polarized radio emission, tracing the \emph{regular} magnetic field, is hardly affected by the velocity shear of about $200 \ensuremath{\,\mathrm{km}\,\mathrm{s}^{-1}}\ensuremath{\,\mathrm{kpc}}^{-1}$, increasing by a factor of $1$--$7$, whereas theoretically one would expect an increase by a factor of about $60$. It is important to note that similar calculations are able to accurately predict the observed increase in the \emph{total} radio emission, as a result of compression and shear of the turbulent magnetic field in the bar. In normal spiral galaxies a similar, but less pronounced, discrepancy is observed. For example, in M51 the neutral gas density increases on average by a factor of $4$ in the spiral arms, which should produce an increase by a factor of at least $16$ in polarized emission if the magnetic field were frozen-in and depolarization remained constant, whereas on average there is no observed increase in polarized emission in the spiral arms (Fletcher et al. in prep.).
\section{The solution: dense clouds detach from the regular magnetic field}
We suggest that the weaker-than-expected increase in regular magnetic field strength in regions of strong shocks and shear is due to the multi-phase nature of the ISM. The regions in bars and spiral arms where the ISM density increases are also the regions where dense clouds of molecular gas ($n\gtrsim 100\ensuremath{\,\mathrm{cm}^{-3}}$) are rapidly formed from diffuse interstellar gas ($n\lesssim 1\ensuremath{\,\mathrm{cm}^{-3}}$). As clouds collapse in the turbulent flow, they rotate faster, winding up the magnetic field lines that thread the cloud. The clouds can then become detached from the background magnetic field in the diffuse ISM by flux expulsion (\cite{Weiss:66}) via ambipolar diffusion (\cite{Mestel:84}) or magnetic reconnection. The cartoon in Figure~\ref{fig} shows the basic principle.
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{decoupling1}
\includegraphics[width=0.32\textwidth]{decoupling2}
\includegraphics[width=0.32\textwidth]{decoupling3}
\caption{Cartoon showing three stages in the condensation of a dense cloud in a diffuse medium threaded by a large-scale magnetic field. As the cloud collapses its rotation velocity increases.}
\label{fig}
\end{figure}
The number of rotations a cloud makes can be bracketed by the following limits. Assuming angular momentum conservation in a turbulent flow, with an initial density of $1\ensuremath{\,\mathrm{cm}^{-3}}$ and a proto-cloud of radius $r=10\ensuremath{\,\mathrm{pc}}$ rotating at $v=v_0(r/l_0)^{1/3}\approx 5\ensuremath{\,\mathrm{km}\,\mathrm{s}^{-1}}$, where $v_0=10\ensuremath{\,\mathrm{km}\,\mathrm{s}^{-1}}$ and $l_0=100\ensuremath{\,\mathrm{pc}}$ are typical values for the largest turbulent eddies, a collapsing cloud will rotate about $40$ times in a typical lifetime of $3$-$5\ensuremath{\,\mathrm{Myr}}$ or $100$ times in a formation time of $12$-$20\ensuremath{\,\mathrm{Myr}}$. On the other hand, observations of \emph{magnetically braked} mature clouds suggest about $2$ rotations in the formation time (\cite{Bodenheimer:95}). These estimates suggest that sufficient rotations for magnetic reconnection to occur are possible, and the model is worth examining in more detail.
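These bracketing numbers are easy to reproduce with a back-of-envelope script (ours; the final cloud density, to which the result is most sensitive, is an explicit assumption here). Reaching the quoted tens to a hundred rotations requires contraction to roughly molecular densities of $10^{2}$--$10^{3}\ensuremath{\,\mathrm{cm}^{-3}}$:
\begin{verbatim}
import numpy as np

PC_KM, MYR_S = 3.086e13, 3.156e13       # parsec in km, Myr in seconds

r0 = 10.0*PC_KM                         # proto-cloud radius [km]
v0 = 10.0*(10.0/100.0)**(1.0/3.0)       # v = v0 (r/l0)^(1/3) ~ 4.6 km/s
omega0 = v0/r0                          # initial angular velocity [rad/s]

for n in (1e2, 1e3):                    # assumed final densities, n0 = 1 cm^-3
    omega = omega0*n**(2.0/3.0)         # L conserved: omega ~ r^-2 ~ n^(2/3)
    for t in (4.0, 16.0):               # lifetime vs. formation time [Myr]
        print(f"n={n:6.0f} cm^-3, t={t:4.1f} Myr:"
              f" ~{omega*t*MYR_S/(2.0*np.pi):4.0f} rotations")
\end{verbatim}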
We have carried out numerical simulations, in 2D and 3D, of clouds forming via the thermal instability in the diffuse, magnetized ISM. We measure the fraction of magnetic flux threading gas at different densities once the instability has fully developed (after approximately $60\ensuremath{\,\mathrm{Myr}}$). The simulations show that magnetic flux is detached from the forming clouds, with the rate of detachment dependent on the magnetic diffusivity. Furthermore, when the perturbations that trigger cloud formation have non-zero rotation, the critical density at which flux detachment becomes important scales inversely with the rotation rate.
If the dense clouds become detached from the regular magnetic field in the diffuse ISM, then the mass-to-flux ratio will decrease for the diffuse gas. Thus the regular magnetic field becomes more important in the dynamics of the diffuse gas and eventually will be strong enough to resist shearing and shocks in the diffuse ISM. This dynamical importance will continue until the dense clouds are dissipated (for example by star formation) and their remaining gas reloads the regular magnetic field in the diffuse ISM.
\section{Introduction\label{sec:Introduction}}
Theoretical model atmospheres are needed in order to interpret stellar
fluxes and derive individual characteristics of stars, like stellar
parameters and chemical abundances. In recent decades, successive
improvements of the often used one-dimensional (1D) hydrostatic atmosphere
models have confirmed their predictive capabilities \citep[see, e.g., ][]{Gustafsson:2008p3814}
but also highlighted their limitations. In fact, these 1D models make
use of several simplifications in favor of computational ease, the
most prominent one being the treatment of convection with the mixing-length
theory \citep[MLT,][]{BohmVitense:1958p4822,Henyey:1965p15592}. The
latter entails several free parameters, in particular the free mixing-length
parameter, $\alpha_{\mathrm{MLT}}$, which is a priori unknown, hence
normally calibrated for the Sun by observations and assumed constant
for all stars. Moreover, the calculation of synthetic spectral absorption
lines in 1D requires the additional calibration of micro- and macro-turbulence
parameters ($\xi_{\mathrm{turb}}$ and $\chi_{\mathrm{turb}}$, respectively)
in order to properly account for the contribution of non-thermal convective
and turbulent motions to the broadening of spectral line profiles.
Most of the limitations of 1D modeling of convection can be overcome
only by performing time-dependent, three dimensional (3D), radiative-hydrodynamical
(RHD) calculations \citep[see][and references therein]{Nordlund:2009p4109}.
The goal of 3D simulations is to provide realistic ab initio models
where stellar surface convection emerges self-consistently from first
principles. Compared to 1D models, such 3D RHD models are able, for
the Sun in particular, to predict additional observable features of
stars associated with stellar surface velocity fields and temperature
and density inhomogeneities, e.g. surface granulation pattern, line
asymmetries, and center-to-limb variation \citep[CLV; see, e.g.,][]{Asplund:2000p20866,Pereira:2013arXiv1304}.
To systematically study such properties of stars with a realistic
approach, we computed a large grid of 3D models using the \textsc{Stagger}-code,
covering a wide range in stellar parameters%
\footnote{In the following, we always refer to stellar \emph{atmospheric} parameters.%
} ($T_{\mathrm{eff}}$, $\log g$, and $\left[\mathrm{Fe}/\mathrm{H}\right]$) for late-type (spectral type FGK)
stars \citep[see][hereafter Paper I]{Magic:2013}.
It is advantageous to reduce the relatively large amount of data from
the full 3D atmospheric models to temporally and spatially averaged
(hereafter $\left\langle \mathrm{3D}\right\rangle$) representations. However, this reduction comes
at the expense of physical self-consistency \citep[see][]{Atroshchenko:1994p14010}.
Nonetheless, in this way one can deal with more manageable atmospheric
data structures compared to the otherwise enormous amount of information
associated with the full 3D models. These mean $\left\langle \mathrm{3D}\right\rangle$ stratifications
are usually compared with classical 1D hydrostatic atmosphere models.
\citet{Nordlund:2001p6371} point out that the large-amplitude fluctuations
in the superadiabatic region%
\footnote{The SAR can be located with the superadiabatic gradient, e.g., with
$\nabla_{\mathrm{sad}}>0.1\max\left[\nabla_{\mathrm{sad}}\right]$ one obtains typically a range of
$-0.5\lesssim\log\tau_{\mathrm{Ross}}\lesssim4.0$.%
} (SAR) lead to deviations from hydrostatic equilibrium. Furthermore,
the 3D data sets incorporate quantities that emerge from the hydrodynamics
and are associated with convection itself, such as self-consistent velocity
fields and turbulent pressure, for which there are no physically consistent
counterparts in the case of 1D hydrostatic models.
The definition of the $\left\langle \mathrm{3D}\right\rangle$ stratifications is neither unambiguous
nor unique, but depends largely on the choice of reference depth scale.
When dealing with the analysis of the atmospheric layers above the
optical surface, monochromatic or Rosseland optical depth scales are
usually considered the appropriate choice since these are the natural
reference depth scales that are used to describe radiative transfer
processes in the photosphere. On the other hand, the optical depth
loses its usefulness somewhat in the very deep optically thick layers
below the optical surface, since here the mean free path of photons
becomes very short and the radiative transfer insignificant. Therefore,
other reference scales are best suited to describing the main properties
of the stellar stratification. Also, the bimodal and highly asymmetric
distribution of bulk upflows and of downflows in the convective zone
complicates the definition of a meaningful unique average value, particularly
near the surface, at the transition between convectively unstable
and stable regions.
\citet{Uitenbroek:2011p10448} investigated the application of $\left\langle 3\mathrm{D}\right\rangle $
models to spectral line formation. They computed and compared continuum
and atomic line intensities and their respective CLV from $\left\langle 3\mathrm{D}\right\rangle $
and 3D models. They conclude that a mean $\left\langle \mathrm{3D}\right\rangle$ stratification is
insufficient to represent the full 3D atmosphere model in the light
of spectral analysis. As reasons for the latter they list the non-linearity
of the Planck function, formation of molecules, and the asymmetry
of convective motions.
The present work constitutes the second paper in the \textsc{Stagger}-grid
series. Here, we want to explore the following key question: which
averaging method leads to the closest $\left\langle 3\mathrm{D}\right\rangle $
representation of a full 3D data set in the light of spectral line
formation calculations? Therefore, we investigate spectral line absorption
features by probing the latter with fictitious $\ion{Fe}{i}$ and $\ion{Fe}{ii}$
lines with different strengths and excitation potentials for different
stellar parameters.
\section{Averaging 3D models\label{sec:Methods}}
\label{sub:stagger-code}The 3D models that form the basis of the
present work were computed with the \textsc{Stagger-}code. For a general
description of our grid of 3D models, we refer the reader to Paper
I. In short, the \textsc{Stagger-}code solves the time-dependent,
3D hydrodynamical equations coupled with realistic non-gray radiative
transfer. We utilize an updated version of the realistic state-of-the-art
equation of state (EOS) by \citet{Mihalas:1988p20892}. Continuum
and sampled line opacity are taken primarily from the MARCS package
\citep[see also references in Paper I]{Gustafsson:2008p3814}. The
radiative transfer is solved for nine angles along long characteristics
with a slightly modified version of the \citet{Feautrier:1964p21596}
method. The opacity-binning method with 12 opacity bins is applied
to all \textsc{Stagger}-grid models to reduce the computational burden
while still accounting for non-gray radiative transfer under the assumption
of local thermodynamic equilibrium (LTE); in particular, the effects
of scattering are neglected \citep[see][]{Nordlund:1982p6697,Skartlien:2000p9857,Collet:2011p6147}.
Our simulations are of the so-called \textit{box-in-a-star} type,
and they cover only a small representative volume of stellar surface
that typically includes about ten granules horizontally and spans
about 14 pressure scale heights vertically. The numerical resolution
of the Cartesian grid is $240^{3}$. It features a non-equidistant
vertical axis in order to enhance resolution in the layers with the
steepest temperature gradients. The vertical boundaries are open,
while the horizontal ones are periodic.
\subsection{Computing temporal and horizontal averages\label{sub:aAveraging}}
We computed various temporal and horizontal averages for a large number
of physical quantities of interest. For the spatial (horizontal) averages,
we computed $\left\langle \mathrm{3D}\right\rangle$ stratifications by considering four different
reference depth scales and averaging the various physical quantities
on layers of constant
\begin{itemize}
\item geometrical height, $z$;
\item column mass density, $m=\int\rho\, dz$;
\item Rosseland optical depth, $\tau_{\mathrm{Ross}}=\int(\rho\kappa_{\mathrm{Ross}})\, dz$;
\item optical depth at 500 nm, $\tau_{500}=\int(\rho\kappa_{500})\, dz$,
\end{itemize}
(hereafter denoted by $\hav_z$, $\hav_{m}$, $\hav_{\mathrm{Ross}}$, and $\hav_{\mathrm{500}}$,
respectively), where $\rho$ is the gas density, and $\kappa_{\mathrm{Ross}}$
and $\kappa_{500}$ are the Rosseland mean opacity%
\footnote{Including both line and continuum opacity.%
} and opacity at 500 nm, respectively, both defined as cross-sections
per unit mass.
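For illustration, these scales can be accumulated column by column; the following Python sketch (hypothetical array names, a simple first-order quadrature, and the integration constant at the top of the box neglected) shows the principle:
\begin{verbatim}
import numpy as np

def depth_scales(z, rho, kap_ross, kap_500):
    """Sketch: alternative reference depth scales for one column.

    z (cm, increasing inwards), rho (g/cm^3), and the two opacities
    (cm^2/g) are hypothetical 1D arrays on the geometrical grid."""
    dz = np.gradient(z)
    m        = np.cumsum(rho * dz)             # column mass density
    tau_ross = np.cumsum(rho * kap_ross * dz)  # Rosseland optical depth
    tau_500  = np.cumsum(rho * kap_500 * dz)   # optical depth at 500 nm
    return m, tau_ross, tau_500
\end{verbatim}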
The geometrical averages $\left\langle 3\mathrm{D}\right\rangle _{z}$
are easily taken directly from the output of the \textsc{Stagger}-code,
since the numerical mesh of this code is Eulerian in nature. For the
three other (Lagrangian-like) averages, the original data sets have
to be remapped to their respective new reference depth scale by individually
interpolating each column of each 3D simulation snapshot (see Sect. \ref{sub:Interpolation-new-reference-scale}).
Furthermore, we also considered four additional averages:
\begin{itemize}
\item flux-weighted average temperature, $\langle T^{4}\rangle$;
\item average brightness temperature at 500 nm, $\langle T_{\mathrm{rad}}\rangle$;
\item logarithmic average, $\left\langle \mathrm{3D}\right\rangle_{\log}$; and
\item enforced-hydrostatic-equilibrium average, $\hav_{\mathrm{HSE}}$.
\end{itemize}
We determine the flux-weighted temperature stratification $\langle T^{4}\rangle$
by evaluating the spatial averages of $T^{4}$, motivated by the Stefan-Boltzmann
law for wavelength-integrated radiative flux. The brightness temperature
average $T_{\mathrm{rad}}$ is computed using the expression $B_{500}^{-1}\left(\left\langle B_{500}(T)\right\rangle \right)$,
where $B_{500}$ and $B_{500}^{-1}$ denote the Planck function at
500 nm and its inverse, respectively (see also Sect. \ref{sub:Temperature}).
The depth-dependent $\langle T_{\mathrm{rad}}\rangle$ thus needs
to be interpreted as the equivalent brightness temperature corresponding
to the average black-body emission at 500 nm from each layer. For
$\left\langle \mathrm{3D}\right\rangle_{\log}$ we define spatial averages of a given 3D variable $X$
as $\exp\left(\left\langle \log{X}\right\rangle \right)$. Finally,
since the $\left\langle \mathrm{3D}\right\rangle$ models do not in general fulfill the hydrostatic
equilibrium condition (see App. \ref{app:hse_stratification}), for
the $\hav_{\mathrm{HSE}}$ averages we \emph{enforce} hydrostatic equilibrium
by adjusting the density, and with it the thermodynamic pressure
$p_{\mathrm{th}}$ consistently with the EOS, until hydrostatic equilibrium
is attained. We emphasize that the proper enforcement of hydrostatic
equilibrium requires that one considers both the thermodynamic $p_{\mathrm{th}}$
and turbulent $p_{\mathrm{turb}}$ contributions to total pressure
$p_{\mathrm{tot}}$: the gas pressure in the atmosphere is in fact
significantly reduced because of the structural support provided by
turbulent pressure. Then, a new geometrical depth $z$ is computed
(see Eq. \ref{eq:hse}).
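The two radiation-oriented temperature averages can be illustrated with the short Python sketch below (illustrative only; the Planck prefactor is omitted since it cancels in $B_{500}^{-1}\left(\left\langle B_{500}\right\rangle \right)$, and the layer array is hypothetical):
\begin{verbatim}
import numpy as np

C = 2.8776e4   # h*c / (k_B * 500 nm) in Kelvin

def planck_500(T):
    # Planck function at 500 nm up to a T-independent prefactor
    return 1.0 / (np.exp(C / T) - 1.0)

def inv_planck_500(B):
    return C / np.log(1.0 + 1.0 / B)

def special_T_averages(T_layer):
    """T_layer: hypothetical 2D array on one layer of constant depth."""
    T4   = np.mean(T_layer ** 4) ** 0.25                 # <T^4>-average
    Trad = inv_planck_500(np.mean(planck_500(T_layer)))  # <T_rad>
    return T4, Trad
\end{verbatim}
Both averages give more weight to the hotter parts of a layer, which is why they tend to lie above the plain $T$-average (see Sect. \ref{sub:Temperature}).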
Classical hydrostatic 1D models of stellar atmospheres are often defined
and computed on an optical depth scale, since this allows the numerical
resolution to be easily adjusted where it is most needed to achieve
the highest accuracy in the solution of the radiative transfer equation
in the atmospheric layers, both during the modeling itself and during
line-formation calculations. Therefore, especially for radiative transfer-oriented
applications, these 1D models can be compared most naturally with
averages of corresponding 3D models on constant optical depth, $\hav_{\mathrm{Ross}}$
or $\hav_{\mathrm{500}}$. In Paper I, in particular, we adopted $\hav_{\mathrm{Ross}}$ as our
standard averaging choice. One of the main reasons we chose $\hav_{\mathrm{Ross}}$
over $\hav_{\mathrm{500}}$ is that during the scaling of the simulations and the
construction of the initial snapshots, the top physical boundary of
essentially all models reached up to $\left\langle \log\tau_{\mathrm{Ross}}\right\rangle _{\mathrm{top}}\approx-6.0$
(see Paper I). In contrast, the vertical extent of the simulations
in terms of optical depth at $500$~nm varies depending on stellar
parameters ($\log g$ in particular) owing to the concomitant variations
in opacity at $500$~nm as a function of temperature and density.
Therefore, the $\hav_{\mathrm{500}}$ models in general require a careful extrapolation
at the top to be extended up to $\log\tau_{500}{\approx}-6.0$ (see
Sect. \ref{sub:Extrapolation-at-the-top}).
While $\hav_{\mathrm{Ross}}$ or $\hav_{\mathrm{500}}$ represent natural reference depth scales
for the mean photospheric stratification, $\hav_z$ or $\hav_{m}$ is
better suited to describing the average physical conditions below
the stellar surface; e.g., only the geometrical averages fulfill conservation
of momentum and energy (see App. \ref{app:hse_stratification}).
In late-type stellar atmospheres, the continuum opacity $\kappa_{\lambda}$
in the optical is dominated by the $\mathrm{H}^{-}$ bound-free absorption
that is sensitive to temperature ($\sim T^{10}$). Therefore, even
small fluctuations in $T$ will result in large variations in $\kappa_{\lambda}$,
which in turn will lead to a high degree of spatial corrugation of
layers at constant optical depth \citep[see][]{Stein:1998p3801}.
Furthermore, owing to such highly non-linear behavior of the $\mathrm{H}^{-}$
opacity, temperature fluctuations around the average will be reduced
by interpolation to layers of constant optical depth (see Sect. \ref{sub:Contrast}).
We note briefly that only the geometrical averages $\hav_z$, sampled
over a sufficient time length, preserve the conservation properties
of the hydrodynamical equations, such as hydrostatic equilibrium and
conservation of energy. Furthermore, since the various $\left\langle \mathrm{3D}\right\rangle$ models
differ significantly among the averaging methods, it is important
to choose the averaging method carefully, depending on the particular
intended application of the $\left\langle \mathrm{3D}\right\rangle$ models.
\subsection{Basic averaging procedure\label{sub:Averaging-procedure}}
We proceeded with the following steps in order to obtain the $\left\langle \mathrm{3D}\right\rangle$
models:
\begin{enumerate}
\item Retrieval of 3D variables of interest;
\item Interpolation to new reference depth scale;
\item Computation of horizontal averages and statistics;
\item Extrapolation of horizontal averages, if necessary;
\item Computation of temporal averages.
\end{enumerate}
In the case of the geometrical averages $\hav_z$, steps 2 and 4 are unnecessary
and are therefore skipped. Owing to the generally non-linear response
of the various physical quantities as a function of basic independent
variables and the EOS, the interpolation to a new reference depth
scale should be performed after retrieving the variables. In particular,
because of these non-linearities, we caution against the derivation
of thermodynamic variables via the EOS by utilizing averaged independent
variables interpolated to the new reference depth scale, since the
spatial averaging will inevitably break the physical self-consistency
present in the full original 3D data (see Sect. \ref{sub:Interpolation-new-reference-scale}
and Appendix \ref{app:Deviations-from-EOS}).
At the vertical boundaries of our simulation box are so-called \textit{ghost
zones}, consisting of five layers each at the top and at the bottom. Their
sole purpose is to numerically define the boundary conditions at both
vertical ends. They do not contain physically meaningful values, so
we excluded them before the averaging procedure.
To speed up the calculations without noticeably degrading the statistical
properties, when computing the averages we considered only every fourth
column of the 3D data cubes in both horizontal directions ($x$ and
$y$), which means that the initial $N_{x}N_{y}=240^{2}$ columns
are reduced down to $60^{2}$. The vertical extent of the columns
is unchanged with $N_{z}=230$ (geometrical) or $101$ (all other
reference depth scales). Tests ensured that this horizontal reduction
does not influence the horizontal averages owing to the still large
sample of vertical columns considered and the multiple snapshots included
in the temporal averaging.
For step 3, we used an arithmetic mean to compute the average values
of variable $X$ for snapshot $t$ at each horizontal layer $z$:
\begin{equation}
\left\langle X\right\rangle _{z,t}=\frac{1}{N_{x}N_{y}}\sum_{x=1}^{N_{x}}\sum_{y=1}^{N_{y}}X_{xyz,t}\label{eq:spatial}
\end{equation}
with $N_{x}$ and $N_{y}$ the number of horizontal elements. For
exponentially varying variables like density and pressure, we also
computed logarithmic averages, i.e., replacing $X_{xyz}$ with $\log X_{xyz}$
in Eq. \ref{eq:spatial}, denoting the models with $\left\langle \mathrm{3D}\right\rangle_{\log}$.
In the final step 5, temporal averages are evaluated with
\begin{equation}
\left\langle X\right\rangle _{z}=\frac{1}{N_{t}}\sum_{t=1}^{N_{t}}\left\langle X\right\rangle _{z,t}\label{eq:temporal}
\end{equation}
with $N_{t}\approx100-150$ being the total number of snapshots considered
for each simulation, which corresponds typically to about two turnover
times. In the present work, the combined temporal and spatial averages
of variable $X$ are always denoted with $\left\langle X\right\rangle _{\tilde{z}}$,
where $\tilde{z}$ is the considered reference depth scale.
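A minimal sketch of this averaging step in Python (a hypothetical array of shape $(N_{t},N_{z},N_{y},N_{x})$, already remapped to the chosen reference depth scale) reads:
\begin{verbatim}
import numpy as np

def mean_3d(X, log=False):
    """Combined temporal and horizontal average, as defined above.

    Since every snapshot and layer holds the same number of columns,
    averaging over (t, y, x) at once equals the mean of the per-
    snapshot horizontal means; returns a 1D depth profile."""
    if log:
        # logarithmic average for exponentially varying quantities
        return np.exp(np.log(X).mean(axis=(0, 2, 3)))
    return X.mean(axis=(0, 2, 3))
\end{verbatim}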
Since the 3D structures display a great plethora of details, for each
relevant 3D variable we also determine a number of additional statistical
properties (standard deviation $\sigma$, root mean square, minimum-maximum
range, and histograms of the distribution of values) at each horizontal
layer, which are presented and discussed in Sect. \ref{sec:Statistical-properties}.
As for the spatial averages, the standard deviation and the root mean
square are evaluated in step $3$ for each layer $z$ using the same
basic expression as in Eq.~\ref{eq:spatial} and, if necessary, doubly
extrapolated at the top as in steps 2 and 4 (see Sect. \ref{sub:Extrapolation-at-the-top}).
Finally, their temporal averages are computed in step 5.
We determined histograms of the distribution of values separately,
and we use temporal averages of the depth-dependent extrema of variable
$X$, $\left\langle \min X\right\rangle _{z}$ and $\left\langle \max X\right\rangle _{z}$
to define a depth-dependent range $r_{z}=\left[\left\langle \min X\right\rangle _{z},\left\langle \max X\right\rangle _{z}\right]$
for the histograms. For the 3D variable $X$ at time $t$, we determined
a set of 1D histograms, $p_{r,z,t}\left(X\right)$, for each individual
layer $z$. The depth-dependent range $r_{z}$ is resolved with $N_{r}=200$
equidistant points; temporal averages $p_{r,z}\left(X\right)$ of
the histograms are computed using a subset of $N_{t}=20$ equidistant
snapshots (see Sect. \ref{sub:Histograms} for details).
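The construction of these depth-dependent histograms can be sketched as follows (Python; illustrative only, with a hypothetical input array):
\begin{verbatim}
import numpy as np

def layer_histograms(X, n_bins=200):
    """Depth-dependent histograms of X, temporally averaged.

    X: hypothetical array (Nt, Nz, Ny, Nx); the bin range of each
    layer follows the temporally averaged extrema, as described."""
    nt, nz = X.shape[:2]
    lo = X.min(axis=(2, 3)).mean(axis=0)   # <min X>_z
    hi = X.max(axis=(2, 3)).mean(axis=0)   # <max X>_z
    p = np.zeros((nz, n_bins))
    for z in range(nz):
        for t in range(nt):
            h, _ = np.histogram(X[t, z], bins=n_bins,
                                range=(lo[z], hi[z]))
            p[z] += h
    return p / nt                          # temporal average
\end{verbatim}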
Finally, we also computed averages and associated statistical properties
separately for up- and downflows, which we differentiate based on
the sign of the vertical component of the velocity. Of course, when
computing such averages and statistics, one has to account for the
correct filling factor in either case, i.e. for the number of elements
$N_{x,y}$ belonging to up- or downflows, respectively (Sect. \ref{sub:Up-and-downflows}).
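A minimal sketch for one horizontal layer (Python; the sign convention adopted for upflows is an assumption and may be reversed in practice) reads:
\begin{verbatim}
import numpy as np

def updown_stats(X, vz):
    """Up/downflow averages of X on one horizontal layer.

    X, vz: hypothetical 2D arrays; vz < 0 is taken as an upflow."""
    up = vz < 0.0
    f_up = up.mean()                # filling factor of the upflows
    X_up, X_dn = X[up].mean(), X[~up].mean()
    # relative difference between up- and downflows, delta X_up,dn
    delta_updn = (X_up - X_dn) / X.mean()
    return X_up, X_dn, f_up, delta_updn
\end{verbatim}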
\subsection{Interpolation to the new reference depth scale\label{sub:Interpolation-new-reference-scale}}
To interpolate to the new reference depth scale (hereafter denoted
as $\tilde{z}$) in step 2, we defined a new equidistant logarithmic
reference optical depth scale, $\tilde{z}=\log\tilde{\tau}$, from $-5.0,\dots,+5.0$
in steps of $0.1$ for both optical depth scales $\tau_{\mathrm{Ross}}$ and $\tau_{500}$.
In the case of averaging based on the column-mass density scale $m$,
we used the column-mass density $\tilde{m}$ normalized to the mean
value of $m$ at the optical surface, i.e. $\tilde{z}=\log(\tilde{m})=\log(m/\left\langle m\right\rangle _{\mathrm{surf}})$
for the new reference depth scale, where $\left\langle m\right\rangle _{\mathrm{surf}}$
was determined at $\left\langle \tau_{\mathrm{Ross}}=0\right\rangle $;
here we considered a fixed range from $-3.0,\dots,+2.0$ in steps of 0.05
for all simulations. We remapped all variables, $X$, column-wise
from the original geometrical depth scale to the new reference depth
scale, namely $X_{xy}\left(z\right)\rightarrow\tilde{X}_{xy}\left(\tilde{z}\right)$.
We use linear interpolation, since quadratic interpolation introduced
numerical artifacts in some $\left\langle \mathrm{3D}\right\rangle$ models.
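As an illustration, the column-wise remapping can be sketched as follows (Python; hypothetical array names, with $X$ and $\log\tau$ given on the geometrical grid of one snapshot):
\begin{verbatim}
import numpy as np

log_tau_new = np.arange(-5.0, 5.0 + 0.05, 0.1)   # 101 target points

def remap_column(X_col, log_tau_col):
    """Linear remapping of one column onto the fixed reference scale.

    np.interp requires log_tau_col to increase monotonically, which
    it does with geometrical depth."""
    return np.interp(log_tau_new, log_tau_col, X_col)

# usage sketch, remapping every column of a snapshot:
# for j in range(ny):
#     for i in range(nx):
#         X_new[:, j, i] = remap_column(X[:, j, i], log_tau[:, j, i])
\end{verbatim}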
We note that owing to the remapping to a new reference depth scale,
points at a constant optical depth or column-mass density will end
up probing and spanning a range of geometrical depths, implying that
the averages (and statistical properties) with respect to the new
reference depth scale will be qualitatively and quantitatively different
from plain horizontal averages on constant geometrical depth (see
App. \ref{app:Remarks-on-averages}).
\subsection{Extrapolation at the top\label{sub:Extrapolation-at-the-top}}
The vast majority of \textsc{Stagger}-grid models are sufficiently
extended vertically, in particular at the top, to embrace the full
range of $\log\tilde{\tau}$ with $\left[-5.0,+5.0\right]$. The condition $\left\langle \log\tau_{\mathrm{Ross}}\right\rangle _{\mathrm{top}}\leq-6.0$
is fulfilled for all but a few models. More specifically,
surfaces of constant optical depth can become quite corrugated at
the top for some giant models and fall outside the physical domain
of the simulations; that is, one can occasionally have $\log\tau_{\mathrm{Ross}}^{\mathrm{top}}>-5.0$
for a limited number of columns. These particular columns are therefore
linearly extrapolated to $\log\tau_{\mathrm{Ross}}=-5.0$ to allow the calculation of average
quantities in the desired range of optical depths. Exponentially varying
quantities like density, pressure, and opacities are extrapolated by considering
their logarithmic values. The extrapolation is needed only for a few
giant models ($\log g\leq2.5$), and the concerned columns are usually
only a small fraction ($\lesssim0.3\%$). Therefore, we regard these
extrapolations as negligible in the case of the optical depth scale
$\tau_{\mathrm{Ross}}$.
For the optical depth scale $\tau_{500}$, the situation is slightly
different. The mean optical depth at 500~nm at the top $\left\langle \log\tau_{500}\right\rangle _{\mathrm{top}}$
deviates increasingly towards giant models from $\left\langle \log\tau_{\mathrm{Ross}}\right\rangle _{\mathrm{top}}$,
so that $\left\langle \log\tau_{500}\right\rangle _{\mathrm{top}}>-5.0$.
Therefore, the necessary extrapolation at the top is considerable,
in particular for giant models.
We note that careless column-wise extrapolation at the top can lead
to a largely uncertain and erroneous stratification, which would have
a negative impact on spectral line formation. For instance, a wrong
density stratification at the top can dramatically affect the ionization
balance. To limit these extrapolation errors, we first restrict the
column-wise extrapolation to the region $\log\tilde{\tau}_{\mathrm{500}}\geq\log\tilde{\tau}_{\mathrm{top}}$
where the value $\log\tilde{\tau}_{\mathrm{top}}>-5.0$ is chosen so that no
more than $20\%$ of the columns would require extrapolation up to
that level. We then compute the horizontal averages (step 3) and,
after that, linearly extrapolate the $\left\langle \mathrm{3D}\right\rangle$ models a second time
to the original $\log\tilde{\tau}_{\mathrm{top}}=-5.0$ for each time snapshot.
This particular extrapolation procedure produces more plausible stratifications
since the horizontal $\left\langle \mathrm{3D}\right\rangle$ averages exhibit a smooth and monotonic
behavior with depth at the top compared to individual columns of the
3D data set.
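Schematically, the second stage of this procedure can be sketched as follows (Python; illustrative only, with the trusted level $\log\tilde{\tau}_{\mathrm{top}}$ assumed to come out of the first, $20\,\%$-criterion stage, which is not shown):
\begin{verbatim}
import numpy as np

def extend_top(avg, grid, log_tau_top):
    """Linearly extend a horizontal average up to the nominal top.

    avg: horizontal average on the equidistant grid (log tau),
    trusted only for grid >= log_tau_top. Exponentially varying
    quantities (density, pressure, opacities) should be passed as
    their logarithms."""
    ok = grid >= log_tau_top
    x0, x1 = grid[ok][0], grid[ok][1]   # uppermost trusted points
    y0, y1 = avg[ok][0], avg[ok][1]
    out = np.array(avg, dtype=float)
    out[~ok] = y0 + (y1 - y0) / (x1 - x0) * (grid[~ok] - x0)
    return out
\end{verbatim}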
Test calculations of data sets from the solar simulation, which were
truncated at the top, revealed the reliability of this \textit{double
extrapolation} approach, since for the temperature stratifications
we find the maximum error around $1\,\%$ at the top ($\log\tilde{\tau}_{\mathrm{top}}=-5.0$).
Nonetheless, we favor the use of averages on mean Rosseland optical
depth, i.e. $\hav_{\mathrm{Ross}}$ rather than $\hav_{\mathrm{500}}$, since these averages are
not plagued by such extrapolation uncertainties. For the extrapolated
models on $\tau_{500}$, we kept track of the extent of the applied
extrapolation; in fact, only a few models with the lowest gravities
($\log g=1.5$ and $2.0$) exhibit a noteworthy extrapolation ($\log\tilde{\tau}_{\mathrm{top}}\simeq-4.3$ and $-4.8$,
respectively). The $\hav_{\mathrm{500}}$ averages can therefore be truncated afterwards
to the extrapolation-free regime at the top.
\section{Comparison of the averaging methods\label{sec:Comparison-of-averages}}
In the following, we systematically compare the different types of
averaging procedures explained in Sect. \ref{sec:Methods} over a
broad range of stellar parameters relative to Rosseland optical depth,
i.e. $\left\langle \mathrm{3D}\right\rangle_{\tilde{z}}-\hav_{\mathrm{Ross}}$. For the sake of clarity, we illustrate
the properties of average stratifications only for a representative
selection of \textsc{Stagger}-grid models comprising dwarfs and giants
($\log g=4.5$ and $2.0$) at solar and subsolar metallicity ($\left[\mathrm{Fe}/\mathrm{H}\right]=0.0$
and $-3.0$). Besides the most important thermodynamic state variables,
temperature and density, we also investigate averages of electron
number density, an important quantity for, e.g., calculations of ionization
balance and spectral line formation.
Owing to the lack of a unique common global depth scale that is invariant
between different averaging methods, we display their results jointly
on the averaged Rosseland optical depth scale, $\left\langle \tau_{\mathrm{Ross}}\right\rangle $,
in order to enable a direct comparison.
\subsection{Temperature\label{sub:Temperature}}
\begin{figure*}
\includegraphics[width=88mm]{fig/diff_tt}\includegraphics[width=88mm]{fig/diff_rho}
\caption{Relative differences in the temperature (left) and density (right
panel) stratification vs. the (averaged) Rosseland optical depth for
various stellar parameters. The differences are taken with respect to the averages on the Rosseland
optical depth scale, i.e. $\left\langle \mathrm{3D}\right\rangle_{\tilde{z}}-\hav_{\mathrm{Ross}}$. \emph{Orange/brown
dashed lines}: averages on layers of constant geometrical height $\hav_z$;
\emph{orange/brown dotted lines}: averages on layers of constant column
mass density $\hav_{m}$; \emph{orange/brown solid lines}: 1D MLT models.
\emph{Blue solid lines}: flux-weighted $T^{4}$-stratifications; \emph{blue
dashed lines}: brightness temperatures $T_{\mathrm{rad}}$ averaged
on surfaces of constant Rosseland optical depth (left panel). \emph{Green
solid lines}: logarithmic density averages $\hav_{\mathrm{Ross}}^{\mathrm{log}}$;
\emph{green dashed lines}: hydrostatic averages $\hav_{\mathrm{Ross}}^{\mathrm{HSE}}$
(right panel). We always compare cooler and hotter effective temperatures,
which are distinguished by dark and bright colors, respectively. We
note that the cool metal-poor dwarfs exhibit very small differences
and are therefore indistinguishable. Note the differences in the $y$-axes.}
\label{fig:temp}\label{fig:density}
\end{figure*}
We find that the temperature stratifications of the two optical reference
depth scales, $\hav_{\mathrm{Ross}}$ and $\hav_{\mathrm{500}}$, are similar; therefore, we refrain
from showing these. Only at the top of the metal-poor stars do the
$\hav_{\mathrm{500}}$-averages appear cooler ($\sim5\,\%$, i.e., by $\gtrsim250\,\mathrm{K}$
at $T_{\mathrm{eff}}=6000\,\mathrm{K}$). On the other hand, the geometrical
$\hav_z$ and column mass density $\hav_{m}$ averages deviate distinctively
from the $\hav_{\mathrm{Ross}}$-stratification (see Fig. \ref{fig:temp}). In the
regime $1.0<\log\tau_{\mathrm{Ross}}<3.0$, both $\hav_z$ and $\hav_{m}$ are cooler by
$\sim5-10\,\%$. At the surface ($\tau_{\mathrm{Ross}}=0$), the geometrical
averages deviate considerably, while the $\hav_{m}$-averages are closer
to the optical depth scale (see Fig. \ref{fig:temp}). In the deeper
layers below the superadiabatic regime (SAR), the various averaging
methods are practically indistinguishable. In the upper atmosphere
the differences are smaller at higher $\left[\mathrm{Fe}/\mathrm{H}\right]$ due to the relatively low
horizontal contrast, but they increase significantly for lower metallicity.
The averages $\hav_z$ and $\hav_{m}$ are marginally cooler than $\left\langle \mathrm{3D}\right\rangle_{\mathrm{Ross}}$
by $\sim1-2\,\%$ at solar metallicity. In the metal-poor case $\left[\mathrm{Fe}/\mathrm{H}\right]=-3.0$,
the temperature stratifications are distinctively cooler, which will
certainly influence the line formation calculations with $\left\langle \mathrm{3D}\right\rangle$ stratifications.
Furthermore, the differences increase with higher $T_{\mathrm{eff}}$ and lower
$\log g$.
As mentioned earlier, in the atmospheres of late-type stars, minor
temperature fluctuations are amplified disproportionately into large
variations in the line and continuum opacity $\kappa_{\lambda}$ owing
to the strong $T$-sensitivity of the $\mathrm{H}^{-}$-opacity ($\kappa_{\lambda}{\propto}T^{10}$,
see \citealt{Stein:1998p3801}). Therefore, surfaces of constant optical
depth appear strongly corrugated in terms of the range of geometrical
heights that they span. The transformation to layers of constant optical
depth will naturally even out these corrugated surfaces and, at the
same time, smooth the temperature fluctuations, since the latter are
the source of the former (see App. \ref{app:Reversed-granulation}).
Therefore, the temperature fluctuations are noticeably smaller on layers of constant optical
depth than on layers of constant geometrical depth, which is portrayed
in the temperature contrast and histograms (see also Figs. \ref{fig:tt_cont}
and \ref{fig:tt_hist}). The SAR exhibits large-amplitude fluctuations
as a result of the release of thermal and ionization energy at the
photospheric transition, which are the reason for the observed enhanced
differences between the averaging methods (see Sect. \ref{sub:Contrast}).
\citet{Steffen:2002p18843} found that flux-weighted temperature averages,
$T^{4}$, taken on constant Rosseland optical depth from their 2D simulations,
provide a beneficial mean $\left\langle T\right\rangle $-representation for the Sun. The
idea behind this approach is that the $T^{4}$-averages render radiation-oriented
$T$-stratifications, therefore resulting in 1D line profiles that
are closer to the multidimensional ones \citep[see also][]{Steffen:1995p14024}.
To allow for a similar comparison for our models, we computed such
average $T^{4}$-stratifications. In Fig. \ref{fig:temp}, the $T_{\mathrm{Ross}}^{4}$-stratifications
generally appear hotter at the top and in the SAR compared to the
simple $T$-stratification. Averages taken at the fourth power will
weight higher values more, which leads to hotter average temperatures.
This could lead to pronounced differences for molecular lines that
form high up in the atmosphere. At solar metallicity, the $T^{4}$-stratifications
at the top are fairly similar to the plain $T$-averages ($\sim1-2\,\%$)
in agreement with the findings of \citet{Steffen:2002p18843}. This
is different at lower metallicity ($\left[\mathrm{Fe}/\mathrm{H}\right]=-3.0$), namely the $T^{4}$-averages
are clearly higher by $\sim5-10\,\%$. At higher $T_{\mathrm{eff}}$ and lower
$\log g$, the temperature differences are greater, in particular for
the metal-poor giants, owing to the enhanced temperature fluctuations
(see Sect. \ref{sub:Contrast}).
Under the assumption of local thermodynamic equilibrium (LTE) and
neglecting the effects of scattering, the source function is given
by the Planck function, $S_{\lambda}=B_{\lambda}\left(T\right)$.
Within this approximation, we can thus consider the brightness temperature
average $T_{\mathrm{rad}}$ defined earlier in Sect. \ref{sub:aAveraging}
as a good representation of the mean temperature stratification from
the point of view of the radiative emission properties: brighter parts
in each depth layer are given more weight with this averaging method.
The differences between the average $T_{\mathrm{rad}}$ at $500\,\mathrm{nm}$
and average $T$-stratifications are displayed in Fig. \ref{fig:temp}.
Their variations with stellar parameters are very similar to those
of the $T^{4}$-averages, but slightly more pronounced; in particular,
the metal-poor giants exhibit stratifications hotter by up to $\sim20\,\%$
at the top.
\subsection{Density\label{sub:Density}}
In Fig. \ref{fig:density}, we also illustrate the results of averaging
in the case of the density stratifications. In the deeper interior,
the different $\left\langle \mathrm{3D}\right\rangle$ models converge toward the same density stratification.
In the SAR, below the optical surface at $\log\tau_{\mathrm{Ross}}\gtrsim0.0$, the
geometrical averages $\hav_z$ are smaller than the $\hav_{\mathrm{Ross}}$ averages
by up to $\sim30\,\%$, while at the top these are much denser by
up to $\sim40\,\%$. The differences increase towards higher $T_{\mathrm{eff}}$
and lower $\log g$. We find a different behavior in the metal-poor
dwarfs, which turn lower towards the top after the initial increase
($\sim10\,\%$). The density stratifications averaged on column mass
density $\hav_{m}$ are larger in the SAR and, in the upper layers, closer
to $\hav_{\mathrm{Ross}}$. However, we find that at lower metallicity the $\left\langle \rho\right\rangle _{m}$
values are smaller by up to $\sim30\,\%$. We note that thermal pressure
qualitatively shows the same characteristics as the density.
The shape of the density distribution is symmetric and narrow on layers
of constant column mass density, thanks to the exponential stratification
of the atmosphere and to the additional damping of density fluctuations
on the column mass scale (see Fig. \ref{fig:rho_hist}). As a result,
the $\hav_{m}$ averages feature the narrowest contrast and density
ranges, which, on the contrary, are usually greatest for geometrical
averages $\hav_z$; for the $\hav_{\mathrm{Ross}}$ averages, these are noticeably
reduced due to the mapping onto the optical reference depth scale
(Fig. \ref{fig:rho_cont}). Overall, the density fluctuations at the
top of the $\hav_{\mathrm{Ross}}$ stratifications are similarly small as those
of $\hav_{m}$, around $\sim20\,\%$; however, for metal-poor dwarfs they
reach up to $\sim80\,\%$ (see Fig. \ref{fig:tt_cont}). As shown
in Sect. \ref{sub:Histograms}, we find that the corrugation of the
layers of constant optical depth in the upper part of 3D model stellar
atmospheres at lower metallicity increases considerably towards higher
$T_{\mathrm{eff}}$ because of an enhanced $T$-contrast by the so-called reversed
granulation \citep[see][]{Rutten:2004p16166}. This in turn broadens
the density distribution during the remapping to the optical depth
scale, shifting the mean density value and leading to the observed
deviations between $\left\langle \rho\right\rangle _{\mathrm{Ross}}$
and $\left\langle \rho\right\rangle _{m}$ at lower metallicity (see
App. \ref{app:Reversed-granulation}), which will affect the $\left\langle \mathrm{3D}\right\rangle$
line formation calculations.
The highly stratified structure of stellar atmospheres features an
exponential decrease of density with height. Linear density averages will therefore
tend to give more weight to higher density values, leading to a systematic
overestimation of the mean densities. For this reason we consider
the logarithmic averages $\left\langle \rho\right\rangle _{\mathrm{log}}$,
which we compare to the linear ones in Fig. \ref{fig:density}. As
expected, we find the logarithmic $\rho$-averages are smaller than
the linear ones, with the difference between the two increasing with
higher $T_{\mathrm{eff}}$ and lower $\log g$ by up to $\sim30\,\%$. The mean
densities in the upper layers are lower by $\sim10\,\%$ and $\sim40\,\%$
at solar and low metallicity, respectively. For quantities that vary
more moderately (e.g., temperature) the differences between logarithmic
and linear averaging are rather small.
The transformation to constant optical depth and the subsequent averaging
will change the physical self-consistency as shown in App. \ref{app:hse_stratification}.
To rectify this, we followed the recommendation of \citet{Uitenbroek:2011p10448}
and also computed $\rho$-stratifications, which are enforced to be
in hydrostatic equilibrium, $\left\langle \rho\right\rangle _{\mathrm{HSE}}$
(Fig. \ref{fig:density}). These deviate significantly from the plain
$\left\langle \rho\right\rangle $-stratifications, in particular
at the top. We note, however, that owing to their dynamic nature
and the effects of convective flows and turbulent pressure,
the 3D models themselves are, strictly speaking, not in hydrostatic
equilibrium at any one time.
In Fig. \ref{fig:temp} (both panels), we also compare the 1D MLT
models with the $\hav_{\mathrm{Ross}}$ stratifications. The 1D models in general
show qualitatively similar behavior to the geometrical averages. The
metal-poor 1D models are distinctively hotter, since these enforce
radiative equilibrium in the upper layers.
\subsection{Electron number density\label{sub:Electron-number-density}}
\begin{figure*}
\includegraphics[width=88mm]{fig/ov_uyrms}\includegraphics[width=88mm]{fig/ov_nel}
\caption{Root mean square (rms) of the vertical velocity $v_{z,\mathrm{rms}}$ (left) and
mean electron number density $n_{\mathrm{el}}$ vs. optical depth
(right panel). \emph{Dashed lines}: $\hav_z$ averages; \emph{dotted
lines}: $\hav_{m}$; \emph{solid lines}: $\hav_{\mathrm{Ross}}$.}
\label{fig:velocity}\label{fig:electron-density}
\end{figure*}
We find large differences among the various averages of the electron
number density, $n_{\mathrm{el}}$, which we show in Fig. \ref{fig:electron-density}
(right panel). In the SAR the geometrical averages $\left\langle n_{\mathrm{el}}\right\rangle _{z}$
are distinctively larger than the averages on surfaces of constant
Rosseland optical depth $\left\langle n_{\mathrm{el}}\right\rangle _{\mathrm{Ross}}$,
while the column mass density averages $\left\langle n_{\mathrm{el}}\right\rangle _{m}$
are found in between the two. The deviations increase considerably for higher $T_{\mathrm{eff}}$
and lower $\log g$, while at lower $T_{\mathrm{eff}}$ the differences
are significantly smaller. We show in App. \ref{app:Reversed-granulation}
that the interpolation to a new reference depth scale changes the
statistical properties by redistributing properties from different
heights, so the resulting mean horizontal average will look different
depending on the reference depth scale. This effect seems to be most
pronounced in the case of electron density.
To determine the ionization fraction in spectral line calculations,
the electron number density is either already provided by the model
atmosphere or looked up from an EOS using the independent thermodynamic
variables (typically $(T,p)$ or $(T,\rho)$). The latter has to be
done carefully in the case of the $\left\langle \mathrm{3D}\right\rangle$ models, since, besides potential
differences in the EOS compared to the one used for calculating the
model atmosphere, electron densities derived from the EOS based on
averaged independent variables, $n_{\mathrm{el}}^{\mathrm{EOS}}=n_{\mathrm{el}}\left(\left\langle T\right\rangle ,\left\langle p\right\rangle \right)$,
can deviate significantly from the more physically consistent averaged
$\left\langle n_{\mathrm{el}}\right\rangle $ (see App. \ref{app:Deviations-from-EOS}).
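This is a simple consequence of the non-linearity of the ionization balance. The following toy example (Python; a hydrogen-only Saha equation with order-of-magnitude constants, purely illustrative and not our actual EOS) shows that $\left\langle n_{\mathrm{el}}\left(T\right)\right\rangle $ and $n_{\mathrm{el}}\left(\left\langle T\right\rangle \right)$ can differ by almost an order of magnitude for plausible temperature fluctuations:
\begin{verbatim}
import numpy as np

def ion_fraction(T, n_tot=1.0e14):
    """Toy hydrogen-only Saha ionization fraction (cgs, statistical
    weights of order unity); deliberately simplistic."""
    A = 2.4e15 * T ** 1.5 * np.exp(-1.578e5 / T)   # cm^-3
    a = A / n_tot
    # solve x^2 / (1 - x) = a for the ionization fraction x
    return 0.5 * (-a + np.sqrt(a * a + 4.0 * a))

T_patches = np.array([4500.0, 6500.0])   # two patches of one layer
print(ion_fraction(T_patches.mean()))    # x(<T>)  ~ 2e-3
print(ion_fraction(T_patches).mean())    # <x(T)>  ~ 1e-2
# The hot patch dominates: <n_el> exceeds n_el(<T>) markedly.
\end{verbatim}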
\subsection{Vertical velocity\label{sub:Vertical-velocity}}
It is worthwhile to compare how the vertical velocity, $v_{z,\mathrm{rms}}$,
changes with the respective averaging methods. For comparison, we
show in Fig. \ref{fig:velocity} (left panel) the rms of the vertical
velocity. In the upper layers, we find the $v_{z,\mathrm{rms}}$ on geometrical
averages to be higher compared to other averages, while it is lower
in the deeper layers. On the optical depth scale, the peak in $v_{z,\mathrm{rms}}$ below
the surface is fairly symmetric and slightly higher, while for averages
on geometrical height and column mass density the peaks are flatter,
more skewed, and located at slightly higher layers.
For lower $T_{\mathrm{eff}}$ and higher $\log g$,
the differences diminish progressively, so that for the coolest models
the differences are small. The differences in the velocity arise as
well due to the redistribution of velocity during the mapping to the
new reference depth scale (see App. \ref{app:Reversed-granulation}).
\section{Statistical properties\label{sec:Statistical-properties}}
To explore the origins of the differences among the various average
$\left\langle \mathrm{3D}\right\rangle$ structures and the resulting ramifications for line formation
calculations, we discuss here the statistical properties of the temperature,
density, and velocity stratifications. Since the statistical properties
of $\hav_{\mathrm{500}}$ and $\hav_{\mathrm{Ross}}$ are fairly similar, we focus only on the
latter.
\subsection{Contrast\label{sub:Contrast}}
\begin{figure}
\includegraphics[width=88mm]{fig/ov_cont_tt_rho}
\caption{Temperature (top) and density (bottom) contrasts vs. averaged Rosseland
optical depth\emph{. Dashed lines}: $\hav_z$ averages; \emph{dotted
lines}: $\hav_{m}$; \emph{solid lines}: $\hav_{\mathrm{Ross}}$.}
\label{fig:tt_cont}\label{fig:rho_cont}\label{fig:tt_ext}\label{fig:rho_ext}
\end{figure}
The 3D RHD models usually exhibit a broad range of values at a given
height thanks to the fluctuations arising from the convective motions.
The amplitude of these fluctuations can be quantified using the root-mean-square
of the relative deviation from the mean,
\begin{equation}
\delta X_{\mathrm{rms}}=\sqrt{\sum_{i=1}^{N}\left(X_{i}-\bar{X}\right)^{2}/\left(N\bar{X}^{2}\right)},\label{eq:contrast}
\end{equation}
which we refer to as the \emph{contrast} ($\bar{X}$ is the mean value
of $X$). It is equal to the normalized standard deviation; i.e.,
$\delta X_{\mathrm{rms}}=\sigma_{X}/\bar{X}$.
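In code, the contrast of one layer amounts to a single line (a Python sketch with a hypothetical 2D input array):
\begin{verbatim}
import numpy as np

def contrast(X):
    """delta X_rms of one layer: normalized standard deviation.

    np.std divides by N (not N - 1), matching the definition of
    the contrast given above."""
    return X.std() / X.mean()
\end{verbatim}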
The translation to another reference depth scale changes the statistical
properties as variables are remapped, which in turn is reflected in
changes in contrast. Among the various averaging methods, geometric
averages $\hav_z$ typically feature the highest contrast. We also
find that the level of fluctuations generally increases with increasing
$T_{\mathrm{eff}}$ and decreasing $\log g$. The highest contrast typically prevails
in simulations with the highest $T_{\mathrm{eff}}$ and located in the vicinity
of the maximum superadiabatic gradient, $\nabla_{\mathrm{sad}}^{\mathrm{peak}}$, and maximum rms-velocity,
$v_{z,\mathrm{rms}}^{\mathrm{peak}}$. These arise from the photospheric transition from convective
to radiative energy transport, and the resulting overturning of the
entropy-depleted plasma. At the top of the convection zone, the fluctuations
reach a minimum, and they decrease towards the bottom of the model
atmosphere.
In the top and bottom panels of Fig. \ref{fig:tt_cont}, we show the temperature
and density contrasts, $\delta T_{\mathrm{rms}}$ and $\delta\rho_{\mathrm{rms}}$,
respectively. In the case of the optical depth $\hav_{\mathrm{Ross}}$, the temperature
contrast is significantly reduced compared to the other reference
depth scales ($\delta T_{\mathrm{rms}}^{\mathrm{peak}}$ reduced by
a factor of $\sim3$), while the density contrast is slightly enhanced
($\delta\rho_{\mathrm{rms}}^{\mathrm{peak}}\sim20-60\,\%$ compared
to $10-50\,\%$). For averages on column mass density $\hav_{m}$, $\delta\rho_{\mathrm{rms}}$
is lower, in particular in the upper layers, and $\delta T_{\mathrm{rms}}$
is slightly smaller compared to the $\hav_z$ case. Fluctuations of
variables that correlate with the new reference depth scale will be
reduced during the transformation. As the translation to layers of
constant optical depth partly evens out the corrugated $\tau$-isosurface,
fluctuations of the opacity $\kappa_{\lambda}$ will be reduced, since
the dominant $\mathrm{H}^{-}$ opacity is very sensitive to temperature.
Therefore, the temperature fluctuations are also smoothed out. Layers
of constant column mass density will similarly suppress density variations
(see App. \ref{app:Reversed-granulation}). At the top, $\delta\rho_{\mathrm{rms}}$
is very similar between $\hav_{m}$ and $\hav_{\mathrm{Ross}}$ in the case of
solar metallicity ($\delta\rho_{\mathrm{rms}}^{\mathrm{top}}\sim40\,\%$);
however, at lower metallicity, $\left[\mathrm{Fe}/\mathrm{H}\right]=-3.0$, we find considerable
disparity with $\delta\rho_{\mathrm{rms}}^{\mathrm{top}}\sim80\,\%$.
The thermal stratification in the upper atmosphere is determined by
adiabatic cooling due to mechanical expansion and by radiative heating
due to spectral line re-absorption \citep{Asplund:1999p11771,Collet:2007p5617}.
In metal-poor stars, radiative reheating in upper layers is significantly
reduced owing to the weakness of spectral line features, while the
mechanical expansion cooling term is virtually unaffected. The reversed
granulation takes place at increasingly lower geometrical height with
higher $T_{\mathrm{eff}}$ and lower $\log g$, causing the distribution of the
thermodynamic variables to become increasingly broader and more skewed
(see Sect. \ref{sub:Histograms}). This is the reason for the enhancement
in $\delta T_{\mathrm{rms}}$ and $\delta\rho_{\mathrm{rms}}$ towards
the top boundary in metal-poor simulations in Fig. \ref{fig:tt_cont}.
Replicating the results of full 3D line formation calculations in
low-metallicity stars with $\left\langle \mathrm{3D}\right\rangle$ models is therefore challenging,
since the averages have to correctly account for such temperature
and density fluctuations. Interestingly, the temperature contrast
saturates at $6500\,\mathrm{K}$, similar to the saturation of the
intensity contrast shown in our previous work (see Fig. 10 in Paper
I).
The strength of spectral lines is sensitive to temperature, and the
remapping to constant optical depth decreases $\delta T_{\mathrm{rms}}$,
making $\left\langle T\right\rangle $ closer to $\left\langle T\right\rangle _{\mathrm{rad}}$.
However, the transformation to layers of constant optical depth exhibits
the side effect of redistributing the other variables, too, in particular
the gas density; $\delta \rho_{\mathrm{rms}}$ is thus much higher than for averages on column
mass density, due to the additional influence of opacity on the depth
scale (see Sect. \ref{sub:aAveraging}). This in turn will likely
affect the line formation calculations with the different $\left\langle \mathrm{3D}\right\rangle$
models.
The strong contrast in the upper part of the convection zone ($\log\tau_{\mathrm{Ross}}\ge0$)
is induced by the large amplitude fluctuations owing to the radiative
energy losses at the photosphere and the asymmetry of the up- and
downflows, which we discuss further in Sect. \ref{sub:Up-and-downflows}.
An interesting aspect is that the contrast in thermodynamic variables
is very similar to the rms of the vertical velocity (Fig. \ref{fig:velocity}),
which is indicative of the correlation between the mass flux and the
fluctuations in the thermodynamic variables. Namely, vertical velocity
is generated by the density contrast $\delta\rho$ via the buoyancy
force, $f_{B}=-g\delta\rho$, which results from an imbalance of pressure
and gravity terms in the hydrodynamical equation for conservation
of momentum (see Paper I) in the highly stratified atmosphere. Lighter
fluid elements ($\delta\rho<0$) experience positive buoyancy and
thus upward acceleration, while denser elements ($\delta\rho>0$)
experience negative buoyancy and are pulled downward. Buoyancy forces
will vanish eventually, when the density of the up- or downflowing
element levels with the surrounding gas.
The entropy contrast $\delta s_{\mathrm{rms}}$ (not shown here)
qualitatively shows a very similar dependence on stellar parameters
and reference depth scale as $\delta T_{\mathrm{rms}}$. Both are
very similar in optical depth, while for the averages $\hav_z$ and
$\hav_{m}$ the overall amplitude is a factor $\sim2$ smaller. In Paper
I, we showed that the convective energy flux depends on the entropy
jump, density, and vertical velocity. Interestingly, here we also
find additional \emph{scaling relations} concerning the peak contrast
in entropy, $\delta s_{\mathrm{rms}}^{\mathrm{peak}}$, and density,
$\delta\rho_{\mathrm{rms}}^{\mathrm{peak}}$, with the vertical peak
velocity $v_{z,\mathrm{rms}}^{\mathrm{peak}}$. This can be interpreted
as convective driving, where the radiative losses generate large fluctuations
in the entropy, temperature, and density.
For the different averaging methods, the variations in the minimum-maximum
range for the temperature and density are qualitatively very similar
to those of the contrast (albeit with amplitudes larger by a factor of $\sim5$-$8$);
therefore, we refrain from discussing these explicitly.
\subsection{Upflows and downflows\label{sub:Up-and-downflows}}
\begin{figure}
\includegraphics[width=88mm]{fig/ov_updn_tt_rho}
\caption{Similar to Fig.~\ref{fig:tt_cont}, but showing the relative difference
between averages in up- and downflows, $\delta T_{\mathrm{up,dn}}$
and $\delta\rho_{\mathrm{up,dn}}$.}
\label{fig:tt_updn}\label{fig:rho_updn}
\end{figure}
The properties of the convective motions in stellar atmospheres are
highly asymmetric in up- and downflows. The upflows overshoot into
the photosphere, leading to non-thermal Doppler shifts imprinted on
spectral line features. We first compute the mean values of various
variables separately for up- and downflows based on the sign of the
velocity at a given height. We then determine the relative difference
between up- and downflows with $\delta X_{\mathrm{up,dn}}=(X_{\mathrm{up}}-X_{\mathrm{dn}})/\bar{X}$
(Fig. \ref{fig:tt_updn}). As expected, the buoyant upflows are hotter
and lighter compared to the subsiding downflows. Furthermore, the
asymmetries are especially pronounced in the convection zone below
the optical surface. Above the photosphere, the convective motions
decay quickly, and the asymmetries in $\delta T_{\mathrm{up,dn}}$
and $\delta\rho_{\mathrm{up,dn}}$ are distinctively smaller. The
remaining asymmetries at the top stem from the reversed granulation.
The convective flows in granules, slow and almost laminar, radiate
away their energy and overturn into the intergranular lanes characterized
by cool, dense, narrow turbulent downdrafts. The subsequent large-amplitude
fluctuations in the thermodynamical properties are caused by the turbulent
mixing of the downflows with the upflows, typically located in the
intergranular lane below the optical surface in the SAR. These regions
are arranged in tubelike structures around the granules and can be
identified by their excess vorticity. It is remarkable that,
across all stellar parameters, the filling factor of the up- and downflow
in the convection zone remains almost constant, with $f_{\mathrm{up}}\sim2/3$
and $f_{\mathrm{dn}}\sim1/3$, respectively (see Paper I).
The quantity $\delta T_{\mathrm{up,dn}}$ is reduced, and $\delta\rho_{\mathrm{up,dn}}$
is enhanced, on the optical reference depth scale $\hav_{\mathrm{Ross}}$ compared
to the other averages, while the column mass density averages show a
smaller asymmetry in density. This behavior, similar to what we discussed earlier for
the temperature and density contrasts, is not entirely surprising,
since the fluctuations are caused by the presence of the up- and downflows
(see also App. \ref{app:Reversed-granulation}).
\subsection{Histograms\label{sub:Histograms}}
\begin{figure}
\includegraphics[width=88mm]{fig/ov_tt_rho_to3}
\caption{Histogram of the temperature (top) and density (bottom) vs. optical
depth for the TO simulation ($T_{\mathrm{eff}}=6500\,\mathrm{K}/\log g=4.0$)
with solar and sub-solar metallicity ($\left[\mathrm{Fe}/\mathrm{H}\right]=-3.0$). Additionally,
the histogram of a single layer ($\log\tau_{\mathrm{Ross}}=-4.0$) is indicated for
the whole layer (black) and separated into up- and downflows (blue and
red, respectively). \emph{Dashed lines}: $\hav_z$ averages; \emph{dotted
lines}: $\hav_{m}$; \emph{solid lines}: $\hav_{\mathrm{Ross}}$; \emph{blue solid
lines}: 1D MLT models.}
\label{fig:hist_to_ov}
\end{figure}
\begin{figure*}
\includegraphics[width=88mm]{fig/ov_hist_tt}\includegraphics[width=88mm]{fig/ov_hist_rho}
\caption{Histograms of the temperature (left) and density (right panel) distributions
taken at $\left\langle \log\tau_{\mathrm{Ross}}\right\rangle =-4.0$. We show the histograms
averaged on constant geometrical height (top), column mass density
(middle), and Rosseland optical depth (bottom). The surface gravity
of displayed models is $\log g=4.5$ and the metallicity is solar (dashed
lines) and subsolar with $\left[\mathrm{Fe}/\mathrm{H}\right]=-3.0$ (solid lines). The mean values
are indicated by filled and open circles for $\left[\mathrm{Fe}/\mathrm{H}\right]=-3.0$ and 0.0,
respectively.}
\label{fig:tt_hist}\label{fig:rho_hist}\label{fig:temperature-distribution}\label{fig:density-distribution}
\end{figure*}
In Fig. \ref{fig:hist_to_ov}, we show temporally averaged histograms
of the temperature, $p\left(T\right)$, and density distributions,
$p\left(\rho\right)$, for the TO simulation with two different $\left[\mathrm{Fe}/\mathrm{H}\right]$,
evaluated on layers of constant Rosseland optical depth, in order
to highlight the differences in the statistical properties. The histogram
of the metal-poor case differs substantially in upper layers from
the solar one. Furthermore, in Fig. \ref{fig:tt_hist}, we show $p\left(T\right)$
and $p\left(\rho\right)$ in the upper layers ($\left\langle \log\tau_{\mathrm{Ross}}\right\rangle =-4.0$)
for dwarf models with different $T_{\mathrm{eff}}$ and $\left[\mathrm{Fe}/\mathrm{H}\right]$. In both cases
we compare the distributions on constant geometrical height $z$,
constant column mass density $m$ and constant Rosseland optical depth
$\tau_{\mathrm{Ross}}$.
At solar metallicity (Fig. \ref{fig:tt_hist}), the temperature distributions
are very narrow and symmetric. With increasing $T_{\mathrm{eff}}$, the average
$T$ is as expected higher and the width of the distribution broadens
slightly. The mean values are very similar between the different $\left\langle \mathrm{3D}\right\rangle$
methods and in principle indistinguishable, which also agrees with
Fig. \ref{fig:temp}. Furthermore, the mean values are located very
close to the mode.
At $\left[\mathrm{Fe}/\mathrm{H}\right]=-3.0$, the temperature distributions change considerably.
While at cooler $T_{\mathrm{eff}}$ the shape is very narrow and symmetric, for
$T_{\mathrm{eff}}\ge5500\,\mathrm{K}$ we find a distinct broadening of the $T$-distribution
on the geometrical reference depth scale $\hav_z$, characterized by a
long tail at high $T$ and a decreasing peak at lower $T$ (see Figs.
\ref{fig:hist_to_ov} and \ref{fig:tt_hist}). In the column mass
density averages $\hav_{m}$ the temperature peak is slightly more pronounced
at higher $T_{\mathrm{eff}}$, while the high-$T$ tail is slightly reduced.
The situation is rather different for the averages on Rosseland optical
depth $\hav_{\mathrm{Ross}}$, where we find that the temperature peak drops faster
towards higher $T_{\mathrm{eff}}$, and at $7000\,\mathrm{K}$ the $T$-distribution
is almost unimodal. The mean values disagree at
higher $T_{\mathrm{eff}}$ between the different reference depth scales.
The density distributions behave differently depending on the reference
depth scale. On $\hav_z$ the histograms are in general slightly skewed
with a fat tail towards lower $\rho$ for all metallicities (Figs.
\ref{fig:hist_to_ov} and \ref{fig:rho_hist}). The density distributions
for the averages on column mass density are very symmetric and narrow
for both solar and low metallicities. At solar metallicity, the density
histograms on constant optical depth are narrower and higher than
the geometrical analogs, but skewed in contrast to $\hav_{m}$. In the
metal-poor case, $\left\langle p\left(\rho\right)\right\rangle _{\mathrm{Ross}}$
becomes very narrow and symmetric at lower $T_{\mathrm{eff}}$, but towards higher
$T_{\mathrm{eff}}$ we find the $\rho$-distribution to also be broader. The
mean density stratification varies considerably among the different
averaging methods.
As mentioned above, adiabatic cooling due to mechanical expansion
and radiative reheating are competing with each other in the upper
photosphere and contribute to the phenomenon of reversed granulation.
At lower metallicity, the reversed granulation is enhanced, so that
surfaces of constant optical depth become increasingly corrugated towards higher
$T_{\mathrm{eff}}$, which in turn amplifies the differences in statistical
properties during the translation from the geometrical to the optical
depth scale. This leads to the systematic broadening
of the statistical distributions that we encounter at lower metallicity.
\section{Spectral line formation: $\left\langle \mathrm{3D}\right\rangle$ and $3\mathrm{D}$ LTE calculations\label{sec:Spectral-line-formation}}
\begin{figure*}[t]
\includegraphics[width=176mm]{fig/fake_lines1}
\caption{Overview of the $\left\langle \mathrm{3D}\right\rangle-3\mathrm{D}$ line formation differences, given
as abundance displacement $\Delta\log\varepsilon$ vs. equivalent width $W_\lambda$, for
the $\ion{Fe}{i}$ and $\ion{Fe}{ii}$ fictitious spectral lines with the excitation
potentials $\chi_{\mathrm{exc}}=1.0$ and $4.0\,\mathrm{eV}$ including the Sun,
TO, RG and dwarf simulation (from top to bottom). The averages on
layers of constant geometric height $\hav_z$ (black dashed), constant
column mass density $\hav_{m}$ (black dotted), constant Rosseland optical
depth $\hav_{\mathrm{Ross}}$ (black solid) and at 500 nm $\hav_{\mathrm{500}}$ (orange dashed
triple-dotted lines) are indicated. Furthermore, we show 1D models
(red solid), $T_{\mathrm{rad}}^{\mathrm{Ross}}$-averages (blue dashed)
and $\hav_{\mathrm{Ross}}^{\mathrm{HSE}}$ (green dashed lines). The microturbulence
of $\xi_{\mathrm{turb}}=1.0\,\mathrm{km}/\mathrm{s}$ has been used throughout.
Notice the different ordinates.}
\label{fig:fakelines1}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=176mm]{fig/fake_lines2} \caption{Similar to Fig. \ref{fig:fakelines1} but showing an overview of the
abundance corrections for metal-poor models, with larger ranges for
the $y$-scales.}
\label{fig:fakelines2}
\end{figure*}
To explore the differences between the line formation based on $\left\langle \mathrm{3D}\right\rangle$
and full 3D models, we have chosen a set of representative models
consisting of a main-sequence (MS) star ($T_{\mathrm{eff}}/\log g=5777$~K/$4.44$),
a turn-off (TO) star ($6500$/$4.0$), a red-giant (RG) star ($4500$/$2.0$),
and a dwarf ($4500/5.0$). For all these models, we considered metal-poor
analogs with $\left[\mathrm{Fe}/\mathrm{H}\right]=-3.0$ besides the solar metallicity.
\subsection{3D line formation calculations\label{sub:SCATE}}
We used the 3D radiative transfer code \textsc{Scate} \citep{Hayek:2011p8560}
to calculate full 3D synthetic spectral line disk-center intensity
and flux profiles with 3D \textsc{Stagger} model atmospheres. \textsc{Scate}
assumes local thermodynamic equilibrium (LTE). Furthermore, in the
present work, we also neglected the effects of scattering; i.e. we
approximated the source function with the Planck function, $S_{\lambda}=B_{\lambda}$.
We caution that LTE is in general a poor approximation, especially
for $\ion{Fe}{i}$ spectral line formation calculations at low $\left[\mathrm{Fe}/\mathrm{H}\right]$ \citep[e.g.][]{Bergemann:2012p20128},
which should be kept in mind for analyzing the LTE-based abundance
corrections presented here. For the sake of consistency, we used the
same EOS \citep{Mihalas:1988p20892} and continuum opacity data \citep[from the MARCS package; see][]{Gustafsson:2008p3814}
as in the 3D \textsc{Stagger} simulations.
To reduce the computational costs for line formation calculations,
we consider a subset of $N_{t}=20$ temporally equidistant snapshots
-- the same as used for the temporal $\left\langle \mathrm{3D}\right\rangle$ averages -- sampling
the entire time spans of the individual 3D simulation sequences. Additionally,
we reduce the horizontal spatial resolution from $N_{x}N_{y}=240^{2}$
to $60^{2}$ by considering only every fourth column in each horizontal
direction. Test calculations carried out at full resolution show that
differences are negligible for all practical purposes \citep[see][]{Asplund:2000p20875}.
Concerning the vertical direction, while we did not subsample the
number of depth points, we considered only those layers with $\min(\log\tau_{\mathrm{Ross}}){\leq}3.0$.
The resulting disk-center intensity and flux profiles are spatially
and temporally averaged, and then normalized with the respective continuum
intensity or flux.
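As a minimal illustration of this reduction (a sketch with a random stand-in array, not an actual \textsc{Stagger} data product; the depth axis is omitted for brevity), the subsampling can be written as:
\begin{verbatim}
# Hedged sketch of the snapshot and column subsampling described
# above; 'cube' is a placeholder with axes (time, x, y).
import numpy as np

cube = np.random.rand(100, 240, 240)          # placeholder sequence

# N_t = 20 temporally equidistant snapshots
idx = np.linspace(0, cube.shape[0] - 1, 20).astype(int)
snaps = cube[idx]

# every fourth column in each horizontal direction: 240^2 -> 60^2
reduced = snaps[:, ::4, ::4]
print(reduced.shape)                          # (20, 60, 60)
\end{verbatim}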
To systematically illustrate the differences between $\left\langle \mathrm{3D}\right\rangle$ and 3D
line formation, we computed \emph{fictitious} atomic lines
for neutral and singly ionized iron, $\ion{Fe}{i}$ and $\ion{Fe}{ii}$, for the
selected \textsc{Stagger}-grid models and metallicities. All lines
are defined at the same wavelength, $\lambda=500\,\mathrm{nm}$, and
we considered two lower-level excitation potentials, $\chi_{\mathrm{exc}}=1.0$ and
$4.0\,\mathrm{eV}$. Furthermore, we varied the oscillator strength,
$\log gf$, in order to cover a range of line strengths, from weak
to partly saturated lines, with equivalent widths from $W_\lambda=5$ to
$80\,\mathrm{m\mathring{A}}$. We assumed an iron abundance of $\log\epsilon_{\mathrm{Fe}}=7.51$
\citep{Asplund:2009p3308} and $\log\epsilon_{\mathrm{Fe}}=4.51$,
for the solar metallicity and $\left[\mathrm{Fe}/\mathrm{H}\right]=-3.0$ case, respectively.
The spectral line calculations with $\left\langle \mathrm{3D}\right\rangle$ models were also performed
with \textsc{Scate}, to guarantee a consistent comparison. \textsc{Scate}
employs atmospheric structures on geometrical height and computes
the optical depth, $\tau_{\lambda}$, for the individual line. Therefore,
we provide the geometrical height by integrating $dz=d\left\langle \tau_{\lambda}\right\rangle /\left\langle \kappa_{\lambda}\right\rangle $,
which is of course unnecessary for $\hav_z$. Furthermore, tests revealed
that including just an averaged velocity, e.g. $\left|\vec{v}\right|/3$,
is insufficient to reproduce the influence of the 3D velocity field
on the line shape. Analyzing the influence of the velocity field on
the line formation surpasses the scope of the present work; therefore,
we will explore this aspect in a separate study. In this paper, for
the calculations with $\left\langle \mathrm{3D}\right\rangle$ models we neglected the information
about the actual velocity field and instead assumed a fixed microturbulence
of $\xi_{\mathrm{turb}}=1.0\,\mathrm{km}/\mathrm{s}$ for all considered stellar
parameters.
Since the line formation calculations with $\left\langle \mathrm{3D}\right\rangle$ models are obviously
much faster, we use the $\hav_{\mathrm{Ross}}$ averages first to estimate the $\log gf$
range, which would result in the designated range in $W_\lambda$. We then
consider ten equidistant $\log gf$ values within that range for the
$\left\langle \mathrm{3D}\right\rangle$ and full 3D models. Finally, we interpolate the curves of
growth ($\log gf$ vs. $W_\lambda$) using a spline interpolation and retrieve
the $\Delta\log gf$ difference between $\left\langle \mathrm{3D}\right\rangle$ and $3\mathrm{D}$ synthetic
lines at a given equivalent width; i.e., $\Delta\log gf=\left\langle \mathrm{3D}\right\rangle-3\mathrm{D}$.
For trace elements, changes in line strength due to $\Delta\log gf$
are equivalent to changes due to abundance variations $\Delta\log\varepsilon$; hence,
the $\Delta\log gf$ differences can be interpreted as $\left\langle \mathrm{3D}\right\rangle-3\mathrm{D}$
abundance corrections. With four fictitious lines and four representative
models with two metallicities, we covered 32 cases in total.
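The curve-of-growth inversion just described can be sketched as follows; this is only a schematic illustration under stated assumptions (the $\log gf$ range and the equivalent widths are placeholder values standing in for the \textsc{Scate} results, not production data):
\begin{verbatim}
# Sketch of the <3D>-3D correction: spline-interpolate both curves
# of growth (log gf vs. equivalent width W) and take the difference
# in log gf at a fixed W; for trace elements this difference equals
# the abundance correction. All input arrays are placeholders.
import numpy as np
from scipy.interpolate import CubicSpline

loggf = np.linspace(-6.5, -4.5, 10)      # assumed log gf range
ew_avg = np.geomspace(5.0, 80.0, 10)     # W from a <3D> model [mA]
ew_3d = np.geomspace(6.0, 90.0, 10)      # W from the full 3D model [mA]

def delta_loggf(w_ref):
    gf_avg = CubicSpline(ew_avg, loggf)  # inverted curve of growth
    gf_3d = CubicSpline(ew_3d, loggf)
    return float(gf_avg(w_ref) - gf_3d(w_ref))

print(delta_loggf(20.0))                 # correction at W = 20 mA
\end{verbatim}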
\label{sub:Consequences-line-formation}Full 3D line profiles are
marked by line shifts and asymmetries owing to the non-thermal Doppler
broadening introduced by the up- and downflows of the convective motions,
which are present in the photosphere due to overshooting \citep{Asplund:2000p20875}.
In 3D RHD modeling, the velocity field emerges naturally from first
principles. The buoyant hot rising plasma in the granules blue-shifts
the line, while the fast downdrafts introduce a redshift. Besides
the convective motions, another source of line broadening are the
inhomogeneities in the independent thermodynamic variables, $\rho$
and $T$. The ascending granules are hotter and less dense than the
downdrafts (see Fig. \ref{fig:tt_updn}). The velocities and inhomogeneities
prevailing at formation height of the individual lines will lead to
line shifts and asymmetries. The $\left\langle \mathrm{3D}\right\rangle$-based lines are symmetric
without any shifts; however, we can still compare the equivalent widths
of lines from calculations based on full 3D models and on the different
average stratifications.
We probed different formation heights with the parameters of our fictitious
lines. The $\ion{Fe}{ii}$ lines form deeper in the atmosphere, closer to
the continuum forming layers, while the $\ion{Fe}{i}$ lines are more sensitive
to the intermediate heights of the atmosphere. Spectral lines with
lower (higher) excitation potential form at smaller (larger) optical
depths. We showed in Sect. \ref{sec:Comparison-of-averages} that
the metal-poor model stellar atmospheres exhibit rather different
temperature stratifications at the top depending on the averaging method;
consequently, the metal-poor models should show the largest differences
among the $\left\langle \mathrm{3D}\right\rangle$ models.
\subsection{Comparison of $\left\langle \mathrm{3D}\right\rangle$ and $3\mathrm{D}$ line formation\label{sub:Comparison-line-formation}}
We show an overview of the differences between the $\left\langle \mathrm{3D}\right\rangle$ and the
full 3D calculations in Figs. \ref{fig:fakelines1} and \ref{fig:fakelines2}.
The first noticeable observation is the systematic trend in the form
of a slope towards higher line strength, which is due to the fixed
value of the microturbulence, $\xi_{\mathrm{turb}}=1\,\mathrm{km}/\mathrm{s}$,
in the $\left\langle \mathrm{3D}\right\rangle$ models. An increasing slope with line strength indicates
an underestimation of $\xi_{\mathrm{turb}}$, in particular for the TO and RG (see
panels 5 to 12 in Fig. \ref{fig:fakelines1} and 21 to 28 in Fig. \ref{fig:fakelines2}).
By contrast, in cool dwarfs, the adopted $\xi_{\mathrm{turb}}$ seems to be overestimated.
These findings agree with comparisons of 1D models with observations
\citep[e.g., ][]{Edvardsson:1993A&Ap275,Bensby:2009A&Ap499}. We tested
this by applying a number of $\xi_{\mathrm{turb}}$ values%
\footnote{We find a reduction of the slope in the curve-of-growth with $\xi_{\mathrm{turb}}=0.5,\,1.5,\,2.0\,\mathrm{km}/\mathrm{s}$
for the dwarf, RG, and TO models, respectively (while a fine-tuning
could flatten it completely). %
}, which showed that a fine-tuning can rectify the present slope. However,
for the sake of clarity, we prefer to limit the already large number
of stellar and line parameters to just a single $\xi_{\mathrm{turb}}$. The calibration
of the microturbulence will be the subject of a separate study.
Weak lines are insensitive to $\xi_{\mathrm{turb}}$, yet they show variations
in strength, which can be attributed to differences in the mean $\left\langle \mathrm{3D}\right\rangle$
stratifications of temperature and density. Interestingly, when one
compares this regime between the different averages in Fig. \ref{fig:fakelines1},
the averages on column mass density are often the closest to the full
3D spectral lines and in this respect often perform better than the
averages on constant Rosseland optical depth. The stratification on
constant optical depth at 500 nm always shows spectral line features
slightly closer to the full 3D case compared to the Rosseland optical
depth. However, this is because we chose our fictitious iron lines
at $500\,\mathrm{nm}$, which leads to an inherent advantage of $\hav_{\mathrm{500}}$
over $\hav_{\mathrm{Ross}}$. The geometrical averages show large deviations in the
case of the TO and RG star at solar metallicity (see panels 5 to 12).
The differences in the metal-poor case (Fig. \ref{fig:fakelines2})
are clearly greater than in the solar metallicity models (Fig. \ref{fig:fakelines1}).
It is obvious that $\left\langle \mathrm{3D}\right\rangle$ models at low $\left[\mathrm{Fe}/\mathrm{H}\right]$ struggle to reproduce
the 3D case properly, in particular $\ion{Fe}{i}$ lines with small excitation
potential, and the differences are particularly pronounced for the
hotter metal-poor TO stars (panel 21). This is in accordance with
our findings from Sects. \ref{sec:Comparison-of-averages} and \ref{sec:Statistical-properties}
at low metallicity and high $T_{\mathrm{eff}}$: the differences in the statistical
properties among the various $\left\langle \mathrm{3D}\right\rangle$ averages increase at low $\left[\mathrm{Fe}/\mathrm{H}\right]$.
In particular, the widths of the temperature and density distributions
become broader at lower metallicity (Fig. \ref{fig:tt_hist}), and
their mean values become increasingly less well defined in their statistical
representation. The reason for the broadening is the enhanced contrast
of the reversed granulation due to the reduced radiative re-heating
with weak spectral line features at low metallicity (see App.~\ref{app:Reversed-granulation}).
\begin{figure*}
\includegraphics[width=88mm]{fig/fake_lines_ov}\includegraphics[width=88mm]{fig/fake_lines_ov_clv}
\caption{In the left panel, the mean $\Delta\log\varepsilon$ (evaluated between
$W_\lambda=5$ and $20\,\mathrm{m}\mathring{A}$) is shown for the $\ion{Fe}{i}$ and $\ion{Fe}{ii}$ lines
with $\chi_{\mathrm{exc}}=1.0$ and $4.0\,\mathrm{eV}$ for the different selected
models. In the right panel, the relative $\left\langle \mathrm{3D}\right\rangle-3\mathrm{D}$ difference
of the continuum intensity, $\delta I_{\mu}$, vs. the $\mu$ angle is
displayed. Both panels include the solar metallicity (top) and the
metal-poor (bottom) case, and the averages $\hav_z$ (black dashed),
$\hav_{m}$ (black dotted), $\hav_{\mathrm{Ross}}$ (black solid), $\hav_{\mathrm{500}}$ (orange
dashed triple-dotted), $T_{\mathrm{rad}}^{\mathrm{Ross}}$-averages
(blue dashed), and 1D models (red solid lines).}
\label{fig:ov_fakelines}\label{fig:ov_clv}
\end{figure*}
To facilitate an overall comparison between the different averages
with respect to line formation, we show in Fig. \ref{fig:ov_fakelines}
(left) the mean abundance deviations for weak lines that are determined
between $W_\lambda=5-20\,\mathrm{m}\mathring{A}$. For the model representing
the Sun, the differences between $\left\langle \mathrm{3D}\right\rangle$ and 3D are in general small:
$\lesssim0.1\,\mathrm{dex}$. For the TO stars at solar $\left[\mathrm{Fe}/\mathrm{H}\right]$, the
differences are considerably larger: $\lesssim0.2\,\mathrm{dex}$.
We find the largest deviations for $\ion{Fe}{i}$ lines with small excitation
potential $\chi_{\mathrm{exc}}=1.0\,\mathrm{eV}$, which are the most temperature
sensitive; in particular the geometrical averages exhibit strong differences.
At lower metallicity, the differences increase in particular for the
TO and RG models to $\lesssim0.4\,\mathrm{dex}$, and the $\left\langle \mathrm{3D}\right\rangle$ averages
on optical depth show the largest deviation for the metal-poor TO star.
In general the deviations become smaller at higher $\chi_{\mathrm{exc}}$ and for
$\ion{Fe}{ii}$ lines. The dwarfs show very small differences compared to
the full 3D case. These models exhibit the lowest velocities and temperature
contrast with the mean stratifications closely resembling the 1D models
based on the same EOS and opacities.
The averages on column mass density $\hav_{m}$ typically exhibit the
best agreement with the predictions of the full 3D model, in particular
at low metallicity. The geometrical averages $\hav_z$ exhibit large
deviations \citep[in agreement with ][]{Uitenbroek:2011p10448}, especially
for the TO stars. Comparing the temperature
and density in Fig.~\ref{fig:temp}, one can deduce that the
models with cooler stratifications are closer to the full 3D line
strength. Both models averaged on constant optical depth, $\hav_{\mathrm{Ross}}$
and $\hav_{\mathrm{500}}$, lead to systematically larger deviations from the full
3D line formation calculations than those obtained with $\hav_{m}$
models, in particular for low excitation $\ion{Fe}{i}$ for the metal-poor
TO star.
The resulting spectral line features with the logarithmic averages
$\left\langle \mathrm{3D}\right\rangle_{\log}$ are similar to those from plain $\hav_{\mathrm{Ross}}$ (we therefore refrain
from showing them separately), while averages enforcing hydrostatic equilibrium,
$\hav_{\mathrm{HSE}}$, clearly fail to closely reproduce the results from 3D
line formation \citep[similar to][]{Uitenbroek:2011p10448} and lead
to rather large errors in the line formation, in particular for the
metal-poor TO model (Fig. \ref{fig:fakelines2}). Furthermore, both
the flux-weighted and brightness-temperature averages, $T^{4}$ and
$T_{\mathrm{rad}}$, are in general very close to the plain average,
but often slightly less accurate, which is a somewhat surprising result
(see $T_{\mathrm{rad}}$ in Fig. \ref{fig:ov_fakelines}).
Another meaningful test of the performance of the different averages
is the comparison of the deviation of the center-to-limb
variation (CLV) of the continuum intensity. In Fig. \ref{fig:ov_clv},
we show the differences of the continuum intensity, $\delta I_{\mu}=(I_{\mu}^{\left\langle \mathrm{3D}\right\rangle}-I_{\mu}^{3\mathrm{D}})/I_{\mu}^{3\mathrm{D}}$,
i.e. between the $\left\langle \mathrm{3D}\right\rangle$ and full 3D models. We find in general that
the $\left\langle \mathrm{3D}\right\rangle$ models overestimate the continuum intensity at disk center
($\mu=1$), while towards the limb ($\mu=0.2$) the $\left\langle \mathrm{3D}\right\rangle$ models often
underestimate it. The deviations of the different averages
are similar to the above findings with the comparison of the curve
of growth. The disk-center intensities of the 3D RHD models are matched
best by the averages on column mass density $\hav_{m}$, whereas the
geometrical averages $\hav_z$ display the largest discrepancies, in
particular for the RG model at solar metallicity with an overestimation
by $\sim60\,\%$. The results for the averages on optical depth are
once again midway between the two other kinds of averages. An interesting
aspect is that the brightness-temperature averages $T_{\mathrm{rad}}$
fail to render the continuum intensities exactly, which has to be
interpreted as a consequence of the non-linearity of the Planck function.
Our findings are qualitatively similar to those by \citet{Uitenbroek:2011p10448}.
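To make the preceding non-linearity argument explicit: since the Planck function is non-linear in $T$, averaging and evaluating it do not commute, i.e., in general
\[
\left\langle B_{\lambda}\left(T\right)\right\rangle \neq B_{\lambda}\!\left(\left\langle T\right\rangle \right),
\]
so that even an average constructed from the radiation temperature cannot be expected to reproduce the emergent continuum intensities exactly at all $\mu$ angles.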
\subsection{Cautionary remarks\label{sub:Cautionary-remarks}}
We remind the reader that LTE is often a very poor assumption at low
$\left[\mathrm{Fe}/\mathrm{H}\right]$ \citep[e.g.][]{Asplund:2005p7792} and thus that the abundance
differences presented in Figs. \ref{fig:fakelines1} and \ref{fig:fakelines2}
should not be added indiscriminately to results from standard 1D LTE
abundance analyses. In LTE, the difference between 3D and 1D models
can be very substantial for metal-poor stars for especially low excitation
and minority species like $\ion{Fe}{i}$ \citep[e.g.,][]{Asplund:1999p11771,Collet:2007p5617},
but those same lines also tend to be sensitive to departures from
LTE \citep[e.g.,][]{Bergemann:2012p20128,Lind:2012p427} in 1D and
$\left\langle \mathrm{3D}\right\rangle$ models, mainly due to overionization and overexcitation in
the presence of a hotter radiation field than the local kinetic temperature
(i.e., $J_{\lambda}>B_{\lambda}$). Although not explored for elements other
than Li, one would expect that the very cool upper atmospheric layers,
and hence steep temperature gradients, in metal-poor 3D models compared
with classical 1D models make these even more prone to substantial non-LTE
effects \citep[e.g.,][]{Asplund:2003p7793,Sbordone:2010p10214}. In
particular, neutral species of relatively low ionization energy, such
as $\ion{Fe}{i}$, typically suffer from significant positive NLTE abundance
corrections due to overionization \citep[e.g.,][]{Asplund:2005p7792,Bergemann:2012p20128,Lind:2012p427}
with low-excitation lines being especially prone. For low-excitation
$\ion{Fe}{i}$ lines, one would therefore expect the 3D NLTE line strengths
to be more similar to the 1D case than the 3D LTE results due to the
positive NLTE corrections, partly compensating for the negative 3D
LTE corrections. We therefore caution the reader that the 3D LTE abundance
corrections presented here (3D LTE - 1D LTE) for $\ion{Fe}{i}$ lines are
likely to be too negative compared to the NLTE case (3D NLTE - 1D
NLTE). As a corollary, it is inappropriate to apply a 1D NLTE abundance
correction to a 3D LTE-inferred abundance when the latter is very
significant, as is often the case at low $\left[\mathrm{Fe}/\mathrm{H}\right]$.
\subsection{Comparison with 1D models\label{sub:1D-models}}
In Paper I we compared the $\hav_{\mathrm{Ross}}$ stratifications with 1D models
computed with the same EOS and opacity as used in the \textsc{Stagger}-code,
in order to quantify the differences arising solely from 1D modeling
based on MLT. The line formation calculations with 1D models perform
quite well at solar metallicity, with the exception of the cool dwarf
models (Fig. \ref{fig:fakelines1}). However, in the metal-poor case,
the lines based on the 1D models obviously do not correctly reproduce
the full 3D lines, since they overestimate the $T$-stratifications due to
the enforcement of radiative equilibrium in the upper atmosphere (Fig.
\ref{fig:fakelines2}). This is particularly distinct for low-excitation
neutral iron lines as previously found by \citet{Asplund:1999p11771}
and \citet{Collet:2007p5617}. \citet{Kucinskas:2012p23943} present
similar findings for a solar-metallicity RG simulation as well, namely
that neutral iron lines based on 1D MLT models are slightly closer
to the full 3D lines compared to the $\left\langle \mathrm{3D}\right\rangle$ lines.
We note that in our 1D models the turbulent pressure is neglected,
and the mixing length is fixed at $\alpha_{\mathrm{MLT}}=1.5$,
both choices that will influence the stratification significantly.
Since their effect is strongest in the convection zone below the optical
surface and the line formation region, the influence in terms of abundance
is likely small; in fact, \citet{Kucinskas:2012p23943} only found
a very small effect $<0.02\,\mathrm{dex}$ for the reduction in $\alpha_{\mathrm{MLT}}$
from 1.5 to 1.2. However, for metal-poor giants the influence can
be greater for lines with very high excitation potential.
\section{Conclusions\label{sec:Conclusions}}
We have investigated in detail the properties of different methods
for computing temporal and horizontal average stratifications from
3D RHD \textsc{Stagger}-grid simulations of stellar surface convection.
The choice of the reference depth is critical, as comparisons of the
various $\left\langle \mathrm{3D}\right\rangle$ averages demonstrated. We find in general that the temperature
stratifications of the $\hav_z$ and $\hav_{m}$ are hotter close to
the continuum forming layers and cooler in the upper layers compared
to averages on surfaces of constant optical depth $\hav_{\mathrm{Ross}}$ and $\hav_{\mathrm{500}}$,
while the density shows differences in the opposite sense. The flux-weighted
temperature average and brightness temperature average are distinctively
hotter than the plain averages, both close to the optical surface
and in the upper atmosphere, since the Planck function and the fourth
power weight the higher temperatures more strongly. Averages obtained from
the logarithmic values lead to lower temperature and density distributions
by giving more weight to the lower values in the distribution. These
characteristics increase with higher $T_{\mathrm{eff}}$, lower $\log g$ and
especially with lower $\left[\mathrm{Fe}/\mathrm{H}\right]$.
The statistical properties change depending on the reference depth
scale, since the transformation to the new depth scale will inevitably
imply a remapping of the values from different heights. The translation
to layers of constant optical depth will smooth out temperature fluctuations
as a byproduct: the temperature is in fact the main source of spatial
corrugation of the surfaces of constant optical depth due to the strong
temperature sensitivity of the dominant $\mathrm{H}^{-}$ continuum
opacity source. Therefore, the temperature contrast and extrema are
distinctively reduced, in particular in the superadiabatic region.
However, this also has the side effect of enhancing both the contrast
and minimum-maximum range of the density. The concomitant remapping
of properties from deeper or higher layers during the transformation
to the new reference depth scale will in turn change the average values.
Furthermore, we examined the effects of reversed granulation in the
upper layers of metal-poor stars, namely the lowering of temperatures
above the granules in metal-poor 3D models compared to classical 1D
models. We found that the contribution of radiative reheating due
to weak spectral line absorption features relative to cooling due
to mechanical expansion in the upper atmospheric layers is reduced
towards higher $T_{\mathrm{eff}}$. On the other hand, the temperature in the
regions immediately above the intergranular lanes is primarily controlled
by mechanical expansion or compression and does not appear to be affected
by the reduced metallicity. The two combined effects result in an
enhanced contrast in the reversed granulation. This in turn leads
to an increase in the corrugation of the surfaces of constant optical
depth, which implies that the averages on constant optical depth are
sampling values from a very wide range in geometrical height, thereby
affecting the statistical properties such as mean value and contrast.
The comparison of $\ion{Fe}{i}$ and $\ion{Fe}{ii}$ lines calculated in full 3D and different
$\left\langle \mathrm{3D}\right\rangle$ atmosphere models reveals the surprising result that the averages
on column mass density $\hav_{m}$ typically provide the best representation
of the 3D model with respect to the line formation. The commonly preferred
averages on layers of constant optical depth $\hav_{\mathrm{Ross}}$ or $\hav_{\mathrm{500}}$
in general perform worse. We traced the underperformance of the
$\left\langle \mathrm{3D}\right\rangle_{\tau}$ models in reproducing the predictions of 3D RHD
to the optical depth, $d\tau_{\lambda}=\rho\kappa_{\lambda}dz$, which
contains the additional non-linearity of opacity $\kappa_{\lambda}$,
in contrast to the column mass density, $dm=\rho dz$; therefore,
the statistical properties, in particular, the mean value, are more
prone to distinctive temperature fluctuations present in the superadiabatic
region and the upper layers, where the reversed granulation takes
place. The differences between the lines calculated with the $\left\langle \mathrm{3D}\right\rangle_{\tau}$
models and the full 3D RHD models are significant, in particular,
for metal-poor simulations due to the enhanced reversed granulation
in the upper layers. We find that the neutral $\ion{Fe}{i}$ lines with low
excitation potential feature the largest differences between the mean
$\left\langle \mathrm{3D}\right\rangle$ and full 3D line calculations. The 1D MLT models perform quite
well at solar metallicity; however, for metal-poor models the mismatch
is evident. Therefore, we caution against using 1D models for metal-poor
stars, since this will lead to systematic errors in the spectral analysis.
For spectral line formation calculations with $\left\langle \mathrm{3D}\right\rangle$ models from
the \textsc{Stagger}-grid, we recommend using averages obtained on
layers of constant column mass density, $\hav_{m}$, since these provide
the closest match to the spectral line strengths obtained with the
full 3D RHD models. Furthermore, we advise strongly against using
geometrical averages $\hav_z$ for spectral line formation calculations.
For purposes of improving stellar structures and asteroseismology,
the $\hav_z$ models are, however, useful, since these averages alone
fulfill hydrostatic equilibrium, and therefore comparisons with
helioseismological observations show better agreement.
It is obvious that the temporally and spatially averaged models are
incapable of substituting the full 3D atmospheric structure. The reduction
due to the averaging will unavoidably lead to a loss of required
information. A promising intermediate approach could be the so-called
``1.5D'' approximation. This approach emulates
atmospheric inhomogeneities, which are probed by the traversing radiation,
by considering a series of perturbed 1D stratifications for spectral
synthesis \citep[e.g., see][]{Ayres:2006p21937}. In the spirit of
the latter, one could utilize the temporally averaged histograms for
an improved spectral line synthesis, since these contain additional
information on the statistical distribution of the 3D simulations.
\begin{acknowledgements}
We acknowledge access to computing facilities at the Rechenzentrum
Garching (RZG) of the Max Planck Society and at the Australian National
Computational Infrastructure (NCI) where the simulations were carried
out. Remo Collet is the recipient of an Australian Research Council
Discovery Early Career Researcher Award (project number DE120102940).
We thank Tiago Pereira for his contribution.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Non-Gaussian and in general non-classical states of light enjoy an increasing
interest in the quantum information community. They are indispensable building
blocks for entanglement generation, distillation and broadcasting, for quantum
information relays and networks in the realm of quantum information processing
based on infinite-dimensional optical systems and atomic ensembles
\cite{Cerf-book,Ill06}. As has been shown, it is impossible to perform optical
quantum computation with linear optics only, and a non-linear interaction is
needed to perform a universal quantum computation \cite{Llo99}. Hence it is
unavoidable to investigate the nonlinearities of atomic systems or alternative
solutions for giant nonlinearities and efficient nonlinear coupling, such as
measurement-induced nonlinearities or other non-Gaussian operations. In
addition, the experimental difficulty associated with the ``on-line'' nonlinear
operations can be circumvented by using off-line resources: highly
non-Gaussian states prepared separately from the actual computation and fed
into the computation circuits when necessary to replace the nonlinear gates
\cite{Gho07,Ral03}. Continuous-variable quantum repeaters, quantum relays and a
number of other computation and communication tasks rely on non-Gaussian
operations or non-Gaussian states, states described by a non-Gaussian Wigner
function.
One example of a non-linear coupling is the cross-Kerr effect, in which
the refractive index experienced by one mode of the electromagnetic field is
controlled by the intensity of another. In this paper, we propose an
experiment that employs the cross-Kerr effect to create highly non-classical
non-Gaussian states of light via interaction of two coherent beams in an atomic
medium exhibiting electromagnetically-induced transparency \cite{Sch96},
subsequent measurement on one beam and feed-forward on the other. Such an
experiment, on the one hand, responds to the need for an efficient source of
non-Gaussian states; on the other hand, it can be seen as a test bench for the
strong non-linear coupling using an EIT based system. In our case, the first
evidence of the large non-linear phase shift in combination with enough
coherence to generate and preserve quantum features can be obtained using
merely a direct photodetection to verify the photon-number squeezing in one of
the output modes. This would pave the way for further applications of such
non-linear systems: Cross-Kerr nonlinear interaction provides a basis for
several proposals of quantum information protocols or their elements, such as
non-demolition photon number detection~\cite{Bea05}, C-NOT gate~\cite{Nem07},
or continuous-variable entanglement concentration
\cite{Fiu03,Men06}.
The first basic ingredient of our scheme is the nonlinear coupling based on the
third-order nonlinearity, an optical cross-Kerr effect \cite{She84}. Such
Kerr-type interaction of the two initially independent coherent beams $a$ and
$b$ entangles them producing a continuous-variable state of modes $a$ and $b$
with quantum correlations between photon number in one mode and phase of the
other. A subsequent measurement done on the mode $b$ generates a conditional
squeezed state of the mode $a$. This is a photon-number squeezed state
described for the first time in \cite{Kit86}. The mechanism behind it is a
re-shaping of the quantum uncertainty due to the Kerr effect and the influence
of a local measurement on the system of two spatially-separated entangled
modes. A particular feature of the states squeezed using Kerr nonlinearity is
their non-Gaussian character in contrast to the squeezed states produced, e.g.,
in the parametric process based on the nonlinearity of the second order
\cite{Ill06}. The corresponding Wigner function has a specific crescent shape
and exhibits negativity in some regions of the phase space in the form of the
decaying fringes, resembling a section of the Wigner function of the Fock
states. Indeed, there is a strong connection between the states emerging in
this scheme and the Fock states: the generation of the non-classical state in
our scheme can be understood as a superposition effect between different
photon-number or Fock states. Hence the resultant Wigner function can be viewed
as a result of interference between the Fock-state Wigner functions with different
photon numbers $n$. We will provide a more detailed discussion of the mechanism
producing this particular state in Sec. 3 and 4 of the paper.
Another inherent feature of our protocol is the controlled displacement. We are dealing with continuous-variable quantum systems. This means that although in each single run of the proposed experiment (corresponding to a single measurement result on mode $b$) a squeezed state is created, the detection of squeezing will not be possible because it requires statistical processing of an ensemble of measurement results. Over a number of subsequent runs, the overall squeezing vanishes, as we will then obtain a mixture of quantum uncertainties with different mean photon numbers, and the overall state will become fuzzy. The way out is provided by the quantum feed-forward: the state in mode $a$ is displaced in each run with a displacement amplitude determined by the corresponding measurement outcome in mode $b$. This procedure merges all the differently displaced crescent-shaped uncertainties into a single crescent, which exhibits amplitude (or photon-number) squeezing.
The non-classical non-Gaussian states of this type have not yet been
demonstrated experimentally although the Kerr effect, e.g. in optical nonlinear
fibres, was exploited extensively for squeezing and entanglement generation
(see e.g. \cite{Sil01,Hee03}). The challenging point is to produce the strong
enough nonlinear coupling to generate sensible non-Gaussian features. All the
experiments performed so far were working in the regime of weak nonlinearity
restricting themselves to the first stages of quantum state evolution, which
can be well approximated by the Gaussian Wigner function of the same type as
the one describing squeezing in parametric second-order nonlinear
processes. The most promising candidate for observing the large cross-Kerr
effect is the four-level atomic medium exhibiting electromagnetically induced
transparency (EIT)~\cite{Sch96}. Under certain conditions, the cross-phase
modulation in such media can reach values by many orders of magnitude larger
than it is possible in optical fibres and the phase shift between two photons
can reach values of $10^{-3}$ for hot atoms in a vapour cell or even $10^{-1}$
for cold atoms in a magneto-optical trap~\cite{Wan06}. The experimental
feasibility of achieving a large cross-Kerr interaction for continuous-wave
fields has been demonstrated using cold atoms in magneto-optical
trap~\cite{Kan03}. Recently, a suitable method was suggested for
group-velocity matched pulses and room temperature atomic gas
cell~\cite{Wan06}. The form of the cross-Kerr interaction in such a four-level
atomic system in the N-configuration can be derived in a simple
way~\cite{Sin07}. Essentially, the strong cross-Kerr interaction is caused by
the ac-Stark shift produced by the third perturbing field to the dark-state of
the lambda subsystem.
The paper is organised as follows. In Sec.~\ref{scheme} we present the
experimental scheme. Sec.~\ref{state-psi} discusses the non-Gaussian
properties of the state obtained in a single run of the experiment. In
Sec.~\ref{displacement} we show how a controlled displacement on this output
state can reproduce the same highly non-Gaussian state in multiple runs, generating a constant output state with high accuracy. In Sec.~\ref{scaling} we discuss the scaling of the non-Gaussian effects with
experimental parameters and conclude in Sec.~\ref{conclusion}.
\section{Experimental scheme}
\label{scheme}
Consider the following experimental scheme sketched in Figure~\ref{setup}: two
coherent states with amplitudes $\alpha$ and $\beta$ occupying modes $a$ and
$b$, respectively, interact in a cross-Kerr medium and mode $b$ is then subject
to a measurement of the $\hat x$ quadrature via homodyne detection. Based on the
measurement outcome $x$, a displacement operation is performed on mode $a$. In this section we derive the form of the output state $\rho$; its properties will be discussed in the following sections.
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{figure1.eps}
\caption{Proposed experimental setup for generating crescent states with
negative Wigner function: two coherent states interact in a cross-Kerr
medium, one of them is subject to measurement of the field quadrature, and
displacement is performed on the other one depending on the measurement
outcome.}
\label{setup}
\end{center}
\end{figure}
The Hamiltonian of the cross-Kerr interaction is
\begin{equation}
\hat H=-\gamma'\hat n_a\hat n_b,
\end{equation}
where $\gamma'$ expresses the strength of the interaction and $\hat n_a=\hat
a^\dagger\hat a$, $\hat n_b=\hat b^\dagger\hat b$ are the photon number
operators of the two modes. We introduce the phase shift between two photons
as $\gamma=\gamma't$, where $t$ is the interaction time, which (setting $\hbar=1$) gives the
evolution operator $\hat U=\exp({\rm i}\gamma\hat n_a\hat n_b)$. If the two modes
are originally in coherent states $\ket\alpha_a$ and $\ket\beta_b$, then after
the cross-Kerr interaction their state will be
\begin{eqnarray}\nonumber
\ket{\Psi}_{ab} = \exp({\rm i}\gamma\hat n_a\hat n_b)\ket\alpha_a\ket\beta_b
&=\mbox e^{-|\alpha|^2/2}\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}
\exp({\rm i}\gamma\hat n_a\hat n_b)\ket n_a\ket\beta_b \\
&=\mbox e^{-|\alpha|^2/2}\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}
\ket n_a \,\ket{\beta\mbox e^{{\rm i}\gamma n}}_b
\label{superposition}\end{eqnarray}
This is an entangled state where the photon number $n_a$ in mode $a$ is
correlated with the amplitude of the coherent state in mode $b$ through a phase shift dependent on $n_a$ (and vice versa).
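As an illustration, the entangled state (\ref{superposition}) can be constructed numerically in a truncated Fock basis. The following is a minimal sketch; the truncation dimension and the parameter values are our illustrative choices, not prescriptions:
\begin{verbatim}
# Minimal sketch of the entangled state above: two coherent states
# coupled by the cross-Kerr unitary exp(i*gamma*n_a*n_b) in a
# truncated Fock basis.
import numpy as np
from scipy.special import gammaln

N = 80                                   # Fock-space truncation
alpha, beta, gamma = 6.0, 6.0j, 0.06     # |alpha*beta|*gamma ~ 2.2

def coherent(amp, dim):
    n = np.arange(dim)
    logmag = n * np.log(abs(amp)) - 0.5 * gammaln(n + 1)
    return np.exp(logmag - abs(amp)**2 / 2 + 1j * n * np.angle(amp))

n_a = np.arange(N)[:, None]
n_b = np.arange(N)[None, :]
psi = np.outer(coherent(alpha, N), coherent(beta, N))
psi = np.exp(1j * gamma * n_a * n_b) * psi   # entangled state
print((abs(psi)**2).sum())                   # ~1 if N is large enough
\end{verbatim}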
In the next step the quadrature $\hat x=(\hat b+\hat b^\dagger)/\sqrt2$ of mode
$b$ is measured via homodyne detection with the outcome $x$, which results in
the following (unnormalized) state of mode $a$:
\begin{equation}
\hspace*{-22mm} \ket{\psi(x)}_a={}_b\bra{x} \Psi\rangle_{ab}
=\frac{\mbox e^{-|\alpha|^2/2}}{\sqrt[4]\pi}
\sum_{n=0}^\infty \frac{\alpha^n
\exp[-(x-\sqrt2\beta_n)^2/2+{\rm i}\sqrt2\beta'_nx-{\rm i}\beta_n\beta'_n]}
{\sqrt{n!}} \,\, \ket n_a
\label{entangled}
\end{equation}
where we have denoted $\beta_n={\rm Re}\,(\beta\mbox e^{{\rm i}\gamma n})$,
$\beta'_n={\rm Im}\,(\beta\mbox e^{{\rm i}\gamma n})$ and used the expression for a
coherent state in $x$-representation. The square of the norm of
$\ket{\psi(x)}_a$ is the probability density $P(x)$ that the particular
measurement outcome $x$ occurs.
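For concreteness, the Fock coefficients in (\ref{entangled}) can be evaluated directly; the sketch below uses the phase convention adopted later in Sec.~\ref{state-psi} and a truncation of our own choosing:
\begin{verbatim}
# Sketch evaluating the unnormalized Fock coefficients of |psi(x)>
# after a homodyne outcome x on mode b. N is an illustrative cutoff.
import numpy as np
from scipy.special import gammaln

N, gamma, alpha = 80, 0.06, 6.0
beta = 1j * 6.0 * np.exp(-1j * gamma * alpha**2)

def psi_x(x):
    n = np.arange(N)
    b = beta * np.exp(1j * gamma * n)
    logamp = n * np.log(alpha) - 0.5 * gammaln(n + 1) - alpha**2 / 2
    return (np.exp(logamp) / np.pi**0.25
            * np.exp(-(x - np.sqrt(2) * b.real)**2 / 2
                     + 1j * np.sqrt(2) * b.imag * x
                     - 1j * b.real * b.imag))

print((abs(psi_x(0.0))**2).sum())   # = P(x), the outcome density
\end{verbatim}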
As the last step, a displacement in the phase space is applied to mode $a$. The
resulting state of mode $a$ is then
\begin{equation}
\ket{\Phi(x)}=\hat D[d(x)]\ket{\psi(x)}=\mbox e^{d(x)\hat
a^\dagger-d^*(x)\hat a}\ket{\psi(x)},
\label{Phi}\end{equation}
where $d(x)$ is the displacement parameter that depends on the measurement
outcome $x$.
When the experiment is performed repeatedly, then the output state is averaged
to
\begin{equation}\label{rho}
\hat\rho=\int_{\mathbb R} \ket{\Phi(x)} \bra{\Phi(x)}\,{\rm d} x,
\end{equation}
which is properly normalized.
The state $\ket{\Phi(x)}$ in general depends on the measurement outcome
$x$. However, we will show that in a certain regime this dependence can be
quite weak.
This way, by running the experiment many times (and obtaining different values
of $x$), one can still get almost a pure state $\hat\rho$ at the output mode
$a$ that has highly non-classical properties.
\section{Properties of the state $\ket{\psi(x)}$}
\label{state-psi}
Before describing the averaged output state $\hat\rho$, in this section we will
focus on the state $\ket{\psi(x)}$ of mode $a$ after the $\hat x$ measurement
on mode $b$ has been performed. Consider the specific situation of
$\beta={\rm i}|\beta|\mbox e^{-{\rm i}\gamma|\alpha|^2}$, i.e., when the phase of the
mode $b$ is set to $\pi/2-\gamma|\alpha|^2$. We also put a constraint on
$|\alpha|$, such that $|\alpha|\le10$ and the product $|\alpha\beta|\gamma$ is
of order of unity. The reasons for these assumptions will be explained later
in this section. Taking into account that $\gamma$ is several orders of
magnitude smaller than unity, it also holds that $\gamma|\alpha|\ll1$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=5cm, angle=270]{figure2.eps}
\caption{Photon number probability distribution of the coherent state
$\ket\alpha$ (a, black curve) compared with the normalized distributions of
the states $\ket{\psi(x)}$. The parameters are $|\alpha|=6$,
$\gamma|\beta|=0.36$ and $x=-4$ (b, green curve), $x=0$ (c, blue curve) and
$x=4$ (d, red curve). The photon number distribution in the states
$\ket{\psi(x)}$ is about $4\times$ squeezed compared to the Poissonian
statistics of the coherent state $\ket\alpha$.} \label{fock}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=65mm]{figure3.eps}
\caption{Illustration in the phase space of how the photon number squeezing of
the state $\ket{\psi(x)}$ emerges: the red circles represent the uncertainty
regions of coherent states $\ket{\mbox e^{{\rm i}\gamma n}\beta}_b$ in
(\ref{superposition}) entangled with the photon numbers $n$ in mode $a$
($\overline n\approx|\alpha|^2$ denotes the most likely photon number). The
circles are distributed along the green arch centred at the origin of the
phase space, having the radius $|\beta|$ and stretching to the angle of
approximately $\gamma|\alpha|$ to both sides of the imaginary axis. If then
the quadrature $\hat x$ is measured on mode $b$ with a particular outcome $x$
(blue line represents the quadrature eigenstate $\ket x$), then only
those states are chosen from the superposition (\ref{entangled}) that have a
non-negligible overlap with $\ket x$. If the length of the arch,
$2\gamma|\alpha\beta|$, is larger than the spatial extension of the uncertainty
region of about unity, then only a relatively small number of circles have a
significant overlap with $\ket x$ and so the photon number variance in the
state $\ket{\psi(x)}$ is reduced significantly compared to the state
$\ket\alpha$.
The effect is the strongest when the arch is, very loosely speaking,
perpendicular to the line representing $\ket x$; this gives the phase condition
on $\beta$.}
\label{oblouk}
\end{center}
\end{figure}
In this particular case, for evaluating the real and imaginary parts of
$\beta\mbox e^{{\rm i}\gamma n}$ to calculate $\beta_n$ and $\beta'_n$ in
(\ref{entangled}), we use the expansion of the exponential function as
follows:
\begin{equation}\label{approx}
\beta\mbox e^{{\rm i}\gamma n}={\rm i}|\beta|\mbox e^{{\rm i}\gamma(n-|\alpha|^2)}
\approx{\rm i}|\beta|-\gamma|\beta|(n-|\alpha|^2).
\end{equation}
We truncated the series after the second term since $\gamma(n-|\alpha|^2)\ll1$
for all $n$ for which the probability of having $n$ photons in the state
$\ket\alpha$ is non-negligible. Indeed, the photon number distribution of the
state $\ket\alpha$ is Poissonian with both mean and variance equal to
$|\alpha|^2$, hence $(n-|\alpha|^2)$ is of order of $|\alpha|$ and therefore
$\gamma(n-|\alpha|^2)\ll1$ holds in agreement with our assumption
$\gamma|\alpha|\ll1$.
\begin{figure}
\begin{center}
\includegraphics[width=9cm, angle=0]{figure4a.eps}
\includegraphics[width=9cm, angle=0]{figure4b.eps}
\caption{Two different views of the Wigner function $W(u)$ of the output state
$\ket{\psi(x)}$ for $|\alpha|=6$, $\gamma|\beta|=0.36$ and $x=0$. The
product $|\alpha\beta\gamma|$ is equal to 2.16 and the phase of $\alpha$ is
set to $-\gamma|\beta|^2$ to make the phase of the output state zero. The
crescent shape and negative values of the Wigner function are clearly
visible.}
\label{wig1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=9cm, angle=0]{figure5a.eps}
\includegraphics[width=9cm, angle=0]{figure5b.eps}
\caption{Two different views of the Wigner function $W(u)$ of the output state
$\ket{\psi(x)}$ for $|\alpha|=30$, $\gamma|\beta|=0.066$ and $x=0$ such that
$|\alpha\beta\gamma|= 1.98$. The state is strongly squeezed and is close to a
Gaussian state, hence the negativity of the Wigner function is weak.}
\label{wig3}
\end{center}
\end{figure}
From (\ref{approx}) we get $\beta_n=-|\beta|\gamma(n-|\alpha|^2)$ and
$\beta'_n=|\beta|$. Substituting this into (\ref{entangled}) we arrive at
\begin{equation}
\hspace*{-20mm} \ket{\psi(x)} =
\frac{\mbox e^{{\rm i}\phi-|\alpha|^2/2}}{\sqrt[4]\pi}
\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}\,
\exp\left[{\rm i} \gamma|\beta|^2n-\gamma^2|\beta|^2\left(n-|\alpha|^2
+\frac x{\sqrt2|\beta|\gamma}\right)^2\right] \,\ket n,
\label{psix}
\end{equation}
where $\phi=\sqrt2\,x|\beta|+\gamma|\alpha\beta|^2$ is an irrelevant global
phase factor. The important term here is the exponent, which provides the
insight into the main physical effects due the cross-Kerr interaction. The
first term in the exponent shows that the phase of the state $\ket{\psi(x)}$ is
increased by $\gamma|\beta|^2$ with respect to the phase of the original state
$\ket\alpha$. The second, Gaussian term causes the photon number distribution
in $\ket{\psi(x)}$ to be altered with respect to the original Poissonian
distribution of the state $\ket\alpha$: If $2|\alpha\beta|\gamma>1$, then this
term becomes more important than the factor $\alpha^n/\sqrt{n!}$. Moreover, as
the product $2|\alpha\beta|\gamma$ grows above unity, the photon number
distribution quickly gets dominated by the Gaussian term. In this situation the
mean photon number is $|\alpha|^2-x/(\sqrt2|\beta|\gamma)$ compared to
$|\alpha|^2$ for the coherent state $\ket\alpha$ and the variance of the photon
number distribution is $1/(2\gamma|\beta|)^2$ compared to $|\alpha|^2$ for
$\ket\alpha$. This means that the state $\ket{\psi(x)}$ is squeezed in the
photon number by the factor of approximately $2|\alpha\beta|\gamma$, as
illustrated in Figure~\ref{fock} for $|\alpha\beta|\gamma\approx 2.16$ and
different values of the measurement result $x$. Thus if
$2|\alpha\beta|\gamma>1$, then the output state will exhibit a strongly
sub-Poissonian photon statistics. Figure~\ref{oblouk} gives a visualisation of
the above argumentation. Equation~(\ref{psix}) and Figure~\ref{oblouk} also
provide an argument for the particular choice of the phase of the mode $b$,
$\beta={\rm i}|\beta|\mbox e^{-{\rm i}\gamma|\alpha|^2}$. This way we have demonstrated
the photon number variance reduction -- photon number squeezing in the output
state $\ket{\psi(x)}$ obtained in a single run of the experiment. A
consequence of this reduction are several non-classical phenomena: negativity
and crescent shape of the Wigner function, as well as amplitude squeezing.
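These estimates can be checked numerically from the Gaussian envelope in (\ref{psix}); the following consistency sketch uses the parameters of Figure~\ref{fock}, with a photon-number cutoff that is an illustrative choice of ours:
\begin{verbatim}
# Consistency sketch: photon-number mean and variance of the
# conditional state |psi(x)> versus the analytic estimates above.
import numpy as np
from scipy.special import gammaln

alpha, gb, x = 6.0, 0.36, 0.0        # |alpha|, gamma*|beta|, outcome
n = np.arange(200)
logp = (2 * n * np.log(alpha) - gammaln(n + 1) - alpha**2
        - 2 * gb**2 * (n - alpha**2 + x / (np.sqrt(2) * gb))**2)
p = np.exp(logp - logp.max())
p /= p.sum()

mean = (n * p).sum()
var = ((n - mean)**2 * p).sum()
print(mean, alpha**2 - x / (np.sqrt(2) * gb))  # ~36 vs 36 at x = 0
print(var, 1 / (2 * gb)**2)            # ~1.8 vs ~1.9 (Poissonian: 36)
\end{verbatim}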
Figures~\ref{wig1} and \ref{wig3} show the Wigner function
corresponding to the output state $\ket{\psi(x)}$ of (\ref{psix}) for
different values of $|\alpha|$ and $\gamma|\beta|$. The above mentioned
features of the Wigner function are clearly visible especially in
Figure~\ref{wig1}.
Comparison of Figures~\ref{wig1} and \ref{wig3} provides a clear insight into the
assumptions made in the beginning of this section. For large values of
$|\alpha|$ (say over 10) one gets a state that is close to a squeezed Gaussian
state and the features we are looking for, namely the negativity and
crescent-shape of the Wigner function, are suppressed (Figure~\ref{wig3}). We
also assumed that the product $|\alpha\beta|\gamma$ is of order of unity
because otherwise the photon number squeezing is small, the state
$\ket{\psi(x)}$ is close to a coherent state and hence does not have strong
non-Gaussian and non-classical properties. $|\alpha\beta|\gamma>1$ requires a
large cross-Kerr nonlinearity and/or a large amplitude $\beta$. The former we
anticipate to be feasible with the four-level EIT based system mentioned in the
Introduction. The Wigner function in Figure~\ref{wig1} is plotted for the
parameters satisfying all the discussed restrictions, with
$|\alpha\beta|\gamma\approx 2.16$ as in Figure~\ref{fock}. Such features as the
negativity and crescent-shape are highly pronounced, confirming the strongly
non-Gaussian character of the output state.
\section{Effect of displacement in the phase space}
\label{displacement}
As we have seen, the output state $\ket{\psi(x)}$ depends on the value $x$
obtained by the measurement. Specifically, for different $x$, there will be
different mean photon numbers in the output state of the mode $a$. To prepare an
identical crescent state repeatedly, one strategy would be to post-select the
output at some sufficiently narrow interval of $x$, which would, however,
reduce the success rate significantly. Another option, however, is to correct
for the change of the mean photon number caused by the $x$-measurement. It
turns out that this correction can be achieved with a good accuracy by
performing a phase-space displacement operation $\hat D[d(x)]$
with displacement parameter $d$ depending on $x$. In this way one can
obtain a state $\ket{\Phi(x)}$ almost independent of the measurement outcome
$x$, which makes $\hat\rho$ in (\ref{rho}), Figure~\ref{setup} nearly a pure state.
To find the displacement magnitude $|d(x)|$ that would compensate for the
change of the mean photon number, we recall that for coherent states, the mean
photon number depends quadratically on the amplitude. The
state $\ket{\psi(x)}$ has the mean photon number
$|\alpha|^2-x/(\sqrt2|\beta|\gamma)$ and therefore its
mean amplitude $|\alpha'|$ is equal to the square root of this number. In
order to revert to the original amplitude $|\alpha|$, we need to displace by $|\alpha|
-|\alpha'|$. The direction of the displacement in the phase space is given by
the phase of the state $\ket{\psi(x)}$, which is, as seen from
(\ref{psix}), equal to $\gamma|\beta|^2+\arg(\alpha)$. This way we arrive
at the displacement parameter
\begin{equation}
d(x)=\left(|\alpha|-\sqrt{|\alpha|^2-\frac{x}{\sqrt
2\,\gamma|\beta|}}\right) \,\mbox e^{{\rm i}[\gamma|\beta|^2+\arg(\alpha)]} .
\label{delta}
\end{equation}
To see that the state $\ket{\Phi(x)}$ in (\ref{Phi}) is really almost
independent of the measurement outcome $x$, we investigate the normalized
scalar product
\begin{equation}
F(x)\equiv\frac{\langle\Phi(0)\ket{\Phi(x)}}
{\sqrt{\langle\Phi(0)\ket{\Phi(0)}\langle\Phi(x)\ket{\Phi(x)}}}
\label{}
\end{equation}
that expresses the overlap of the displaced output state corresponding to an
arbitrary $x$ and the one corresponding to $x=0$. Figure~\ref{scalar_product}
shows the function $F(x)$ and the probability density
$P(x)=\langle\psi(x)\ket{\psi(x)}$ of the quadrature for a few values of
$|\alpha|$ and $\gamma|\beta|$. It can be seen that in the regions of $x$ where
the probability $P(x)$ is non-negligible, the function $F(x)$ has values close
to unity, and this is a typical behaviour. This means that the state
$\ket{\Phi(x)}$ is close to $\ket{\Phi(0)}$ for all $x$ that are likely to be
found in the quadrature measurement. Therefore $\hat\rho$, which is the mixture
of these states [see (\ref{rho})], will be almost a pure state. Numerical
simulations confirm this and the purity ${\cal P}={\rm Tr}\,\hat\rho^2$ has
values of approximately 0.95 for the states corresponding to parameters
$\alpha,\beta$ and $\gamma$ in Figures~\ref{wig1} and \ref{wig3}.
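The overlap $F(x)$ can be reproduced numerically; in the hedged sketch below (the truncation, parameter values, and $\arg(\alpha)=0$ are our illustrative choices) the conditional state of (\ref{entangled}) is displaced by (\ref{delta}) using a truncated-Fock displacement operator:
\begin{verbatim}
# Sketch of the feed-forward check: build the conditional state
# |psi(x)>, displace it by d(x), and evaluate |F(x)|.
import numpy as np
from scipy.linalg import expm
from scipy.special import gammaln

N, gamma, amod, bmod = 90, 0.06, 6.0, 6.0
beta = 1j * bmod * np.exp(-1j * gamma * amod**2)

def psi(x):                                  # unnormalized |psi(x)>
    n = np.arange(N)
    b = beta * np.exp(1j * gamma * n)
    logamp = n * np.log(amod) - 0.5 * gammaln(n + 1) - amod**2 / 2
    return (np.exp(logamp) / np.pi**0.25
            * np.exp(-(x - np.sqrt(2) * b.real)**2 / 2
                     + 1j * np.sqrt(2) * b.imag * x
                     - 1j * b.real * b.imag))

a = np.diag(np.sqrt(np.arange(1, N)), 1)     # annihilation operator

def phi(x):                                  # displaced state
    d = ((amod - np.sqrt(amod**2 - x / (np.sqrt(2) * gamma * bmod)))
         * np.exp(1j * gamma * bmod**2))     # displacement d(x)
    return expm(d * a.conj().T - np.conj(d) * a) @ psi(x)

p0 = phi(0.0)
for x in (-4.0, 0.0, 4.0):
    px = phi(x)
    print(x, abs(np.vdot(p0, px))
          / np.sqrt(np.vdot(p0, p0).real * np.vdot(px, px).real))
\end{verbatim}
For the parameters above, the printed overlaps should remain close to unity across the range of $x$ where $P(x)$ is appreciable, in line with Figure~\ref{scalar_product}.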
\begin{figure}
\begin{center}
\includegraphics[width=40mm,angle=270]{figure6a.eps}
\includegraphics[width=40mm,angle=270]{figure6b.eps}
\caption{The normalized scalar product $F(x)$ (solid red line) and the
probability density $P(x)$ (dashed green line) as a function of $x$ for (a)
$|\alpha|=6, \gamma|\beta|=0.36$ and (b) $|\alpha|=9,
\gamma|\beta|=0.2$. Since the function $F(x)$ is much broader than $P(x)$, the
fidelity $F(x)$ is almost unity for all values $x$ likely to be found in the
measurement, and therefore the state $\ket{\Phi(x)}$ is almost independent of
$x$.}
\label{scalar_product}
\end{center}
\end{figure}
Experimentally, the desired controlled displacement can be realized by mixing the
state $\ket{\psi(x)}$ with a coherent state $\ket{d/t}$ on a beam splitter
with a very low transmissivity $t$. If $|t|\ll1$, then one obtains almost
exactly the state $\hat D(d)\ket{\psi(x)}$ at one beam splitter output
port~\cite{Fur98}. The amplitude $d/t$ can be varied electronically
according to the measured value $x$ and (\ref{delta}) using, e.g., the
scheme depicted in Figure~\ref{displ_exper}.
\begin{figure}
\begin{center}
\includegraphics[width=65mm]{figure7.eps}
\caption{A possible way of realizing the displacement operation: a vertically
polarised coherent beam is directed into a Pockels cell (PC) with the optical
axis rotated by $45$ degrees and controlled by the outcome $x$ of quadrature
measurement. The output beam then impinges onto a polarising beam splitter
(PBS), which yields a polarised coherent beam with amplitude depending on
$x$. This beam is then mixed with the state $\ket{\psi(x)}$ on a highly
reflective beam splitter (BS). The displaced state $\ket{\Phi(x)}$ is found
at one output. }
\label{displ_exper}
\end{center}
\end{figure}
\section{Scaling of the non-classical effects with $|\alpha|,|\beta|$ and
$\gamma$}
\label{scaling}
In what follows, we discuss the scaling of the effects described above for the
output state $\ket{\Phi(0)}=\ket{\psi(0)}$, as we have shown that the state
$\ket{\Phi(x)}$ is almost independent of $x$. An inspection of formulas
(\ref{psix}) and (\ref{delta}) shows that, apart from phase factors, the form
of the output states depends on $|\alpha|$ and the product $\gamma|\beta|$.
Consider first the effect of varying $\gamma|\beta|$ while keeping $|\alpha|$ constant.
For $\gamma|\beta|=0$, the output state reduces to the
original state $\ket\alpha$. For very small $\gamma|\beta|$ for which
$\gamma|\alpha\beta|\ll 1$, the state $\ket{\psi(0)}$ is close to a weakly
squeezed coherent state with amplitude $\alpha\exp({\rm i}\gamma|\beta|^2)$, so it
is almost Gaussian. As $\gamma|\beta|$ increases, $\ket{\psi(0)}$ starts to deviate from
a Gaussian state, negative regions of the Wigner functions get larger, and the
crescent shape of $\ket{\psi(0)}$ emerges, as well as the photon number squeezing. When
$\gamma|\beta|$ gets larger than unity, $\ket{\psi(x)}$ is almost a Fock state
$\ket n$ with photon number $n$ depending on the measured value of $x$, and it
is no longer true that $\ket{\Phi(x)}$ is almost independent of $x$.
In this situation our scheme (without the displacement operation) can be used
as a conditional source of Fock states.
The effect of varying $\alpha$ while keeping the product $\gamma|\alpha\beta|$
constant is different. The phase of $\alpha$ influences just the phase of
$\ket{\psi(0)}$. If $|\alpha|$ is small, the crescent shape of the Wigner
function is strongly pronounced, as well as its negativity, because the state is squeezed in
photon number and is not far from a Fock state. Close to the origin of the
phase space the circles corresponding to a fixed photon number are more curved
than those farther from the origin, and hence the Wigner function corresponding
to smaller $|\alpha|$ shows a stronger crescent shape than the one
corresponding to larger $|\alpha|$. In contrast, for larger $|\alpha|$ the state is
closer to a Gaussian state whose Wigner function is positive. Hence the
negativity of the Wigner function is more pronounced in the case of smaller
$|\alpha|$, and this case is more appealing experimentally. Figures~\ref{wig1}--\ref{wig3} illustrate this behaviour.
\section{Conclusion}
\label{conclusion}
In conclusion, we have suggested a feasible scheme to generate highly
non-Gaussian states exhibiting crescent-shaped Wigner function with negative
regions. We envisage the application of such states in quantum information
processing using infinitely-dimensional, continuous-variable quantum
systems. The non-Gaussian operations and non-Gaussian states attract currently
an increasing attention in this context (see e.g. \cite{Gho07,NonGaus} and
references therein) and there is a high demand in the community for their
experimentally viable implementation. In our scheme, the initial states as
well as the detection (Figure~\ref{setup}), feed-forward
(Figure~\ref{displ_exper}) and evaluation steps belong to the standard toolbox
of quantum optics and are readily available in the laboratory. The first
verification of quantum features can be performed with direct photodetection
alone, demonstrating the photon-number squeezing of Fig.~\ref{fock}. The
negativity of the Wigner function and its crescent shape can be visualised
using quantum state tomography \cite{ulf}, a more involved but established
procedure \cite{Lvo07,Bre97}. For the first demonstration of the effect using
the tomographic measurement of the Wigner function of the state $\hat\rho$, the
displacement operation of Figure~\ref{displ_exper} can be simplified by
replacing it with an electronic one.
The challenging part of the proposed scheme is the strong nonlinear coupling
it requires. Such a coupling is itself an important building block in quantum
information processing and has recently attracted substantial interest from
both the theoretical and experimental sides. A feasible nonlinear coupling
device would have a profound impact on the development of quantum
communication and computation protocols. So far, there are only a few, rather
involved, implementations demonstrating this effect \cite{Kan03,Roo07}. For
the realization of our scheme, we suggest the nonlinear optical cross-Kerr
effect in an EIT-based four-level atomic system, which has recently been
studied in detail both theoretically \cite{Sch96,Wan06,Sin07,Bea04} and
experimentally \cite{Kan03}. Furthermore, owing to the experimental
availability of the other elements and the particular simplicity of the first
verification by direct photodetection, the suggested scheme can serve as a
test bench for strong nonlinear couplings that preserve quantum effects.
\section*{Acknowledgments}
We thank Friedrich K\"onig and Chris Kuklewicz for valuable discussions. We
gratefully acknowledge the financial support of the EU project COVAQIAL
(FP6-511004) under STREP and of the Leverhulme Trust.
\section{Introduction}\label{sec:intro}
Consider an $N$-node multi-hop network, where each node $i$ observes a convex function $f_i$, and all the $N$ nodes wish to determine an optimal consensus $x^*$, which minimizes the sum of the $f_i$'s:
\begin{align}
x^*\in\operatornamewithlimits{arg\,min}_x\sum_{i=1}^Nf_i(x).\label{eq:x*=argminsumf}
\end{align}
Since each node $i$ knows only its own $f_i$, the nodes cannot individually compute the optimal consensus $x^*$ and, thus, must collaborate to do so. This problem of achieving unconstrained, separable, convex consensus optimization has many applications in multi-agent systems and wired/\linebreak[0]wireless/\linebreak[0]social networks, some examples of which can be found in \cite{SonSH05, Rabbat04}.
The current literature offers a large body of work on distributed consensus (see \cite{Olfati-Saber07} for a survey), including a line of research that focuses on solving problem \eqref{eq:x*=argminsumf} for an optimal consensus $x^*$ \cite{Nedic01, Nedic01b, Nedic01c, Rabbat04, Rabbat05, SonSH05, Johansson07, Nedic07, Ram07, Johansson08, Lobel08, Nedic08, Nedic09, Ram09, Ram09b, Ram10}. This line of work has resulted in a family of discrete-time subgradient algorithms, including the {\em incremental} subgradient algorithms \cite{Nedic01, Nedic01b, Nedic01c, Rabbat04, Rabbat05, SonSH05, Johansson07, Ram07, Ram09}, whereby an estimate of $x^*$ is passed around the network, and the {\em non-incremental} ones \cite{Nedic07, Johansson08, Lobel08, Nedic08, Nedic09, Ram09b, Ram10}, whereby each node maintains an estimate of $x^*$ and updates it iteratively by exchanging information with neighbors.
Although the aforementioned subgradient algorithms are capable of solving problem \eqref{eq:x*=argminsumf} under fairly weak assumptions, they suffer from one or more of the following limitations:
\begin{enumerate}
\renewcommand{\theenumi}{L\arabic{enumi}}\itemsep-\parsep
\item {\em Stepsizes}: The algorithms require selection of stepsizes, which may be constant, diminishing, or dynamic. In general, constant stepsizes ensure only convergence to neighborhoods of $x^*$, rather than to $x^*$ itself. Moreover, they present an inevitable trade-off: larger stepsizes tend to yield larger convergence neighborhoods, while smaller ones tend to yield slower convergence. In contrast, diminishing stepsizes typically ensure asymptotic convergence. However, the convergence may be very slow, since the stepsizes may diminish too quickly. Finally, dynamic stepsizes allow shaping of the convergence behavior \cite{Nedic01, Nedic01c}. Unfortunately, their dynamics depend on global information that is often costly to obtain. Hence, selecting appropriate stepsizes is not a trivial task, and inappropriate choices can cause poor performance.
\item {\em Hamiltonian cycle}: Many incremental subgradient algorithms \cite{Nedic01, Nedic01b, Nedic01c, Rabbat04, Rabbat05, SonSH05, Ram07, Ram09} require the nodes to construct and maintain a Hamiltonian cycle (i.e., a closed path that visits every node exactly once) or a pseudo one (i.e., that allows multiple visits), which may be very difficult to carry out, especially in a decentralized, leaderless fashion.
\item {\em Multi-hop transmissions}: Some incremental subgradient algorithms \cite{Nedic01, Nedic01b, Nedic01c} require the node that has the latest estimate of $x^*$ to pass it on to a randomly and equiprobably chosen node in the network. This implies that every node must be aware of all the nodes in the network, and the algorithms must run alongside a routing protocol that enables such passing, which may not always be the case. The fact that the chosen node is typically multiple hops away also implies that these algorithms are communication-inefficient, requiring many transmissions (up to the network diameter) just to complete a single iteration.
\item {\em Lack of asymptotic convergence}: A variety of convergence properties have been established for the subgradient algorithms in \cite{Nedic01, Nedic01b, Nedic01c, Rabbat04, Rabbat05, SonSH05, Johansson07, Nedic07, Ram07, Johansson08, Lobel08, Nedic08, Nedic09, Ram09, Ram09b, Ram10}, including error bounds, convergence in expectations, convergence in limit inferiors, convergence rates, etc. In contrast, relatively few asymptotic convergence results have been reported, except for the subgradient algorithms with diminishing or dynamic stepsizes in \cite{Nedic01, Nedic01b, Nedic01c, Ram07, Ram09, Ram09b, Ram10}.
\end{enumerate}
Limitations~L1--L4 facing the subgradient algorithms raise the question of whether it is possible to devise algorithms, which require neither the notion of a stepsize, the construction of a (pseudo-)Hamiltonian cycle, nor the use of a routing protocol for multi-hop transmissions, and yet guarantee asymptotic convergence, bypassing~L1--L4. In this paper, we show that, for the {\em one-dimensional} case and with a few mild assumptions, such algorithms can be constructed. Specifically, instead of letting the network be directed, we assume that it is undirected, with possibly a time-varying topology unknown to any of the nodes. In addition, instead of letting each $f_i$ in \eqref{eq:x*=argminsumf} be convex but not necessarily differentiable, we assume that it is strictly convex, continuously differentiable, and has a minimizer. Based on these assumptions, we develop two gossip-style, distributed asynchronous iterative algorithms, referred to as {\em Pairwise Equalizing} (PE) and {\em Pairwise Bisectioning} (PB), which not only solve problem \eqref{eq:x*=argminsumf} and circumvent limitations~L1--L4, but also are rather easy to implement---although computationally they are more demanding than the subgradient algorithms.
As will be shown in the paper, PE and PB exhibit a number of notable features. First, they produce switched, nonlinear, networked dynamical systems whose state evolves along an invariant manifold whenever nodes gossip with each other. The switched systems are proved, using Lyapunov stability theory, to be asymptotically convergent, as long as the gossiping pattern is sufficiently rich. In particular, we show that the first-order convexity condition can be used to form a common Lyapunov function, as well as to characterize drops in its value after every gossip. Second, PE and PB do not belong to the family of subgradient algorithms as they utilize fundamentally different, non-gradient-based update rules that involve no stepsize. These update rules are synthesized from two simple ideas---{\em conservation} and {\em dissipation}---which are somewhat similar to how Pairwise Averaging \cite{Tsitsiklis84} was conceived back in the 1980s. Indeed, we show that PE reduces to Pairwise Averaging \cite{Tsitsiklis84} and Randomized Gossip Algorithm \cite{Boyd06} when problem \eqref{eq:x*=argminsumf} specializes to an averaging problem. Finally, PE requires one-time sharing of the $f_i$'s between gossiping nodes, which may be costly or impermissible in some applications. This requirement is eliminated by PB at the expense of more communications per iteration.
\section{Problem Formulation}\label{sec:probform}
Consider a multi-hop network consisting of $N\ge2$ nodes, connected by bidirectional links in a time-varying topology. The network is modeled as an undirected graph $\mathcal{G}(k)=(\mathcal{V},\mathcal{E}(k))$, where $k\in\mathbb{N}=\{0,1,2,\ldots\}$ denotes time, $\mathcal{V}=\{1,2,\ldots,N\}$ represents the set of $N$ nodes, and $\mathcal{E}(k)\subset\{\{i,j\}:i,j\in\mathcal{V},i\neq j\}$ represents the nonempty set of links at time $k$. Any two nodes $i,j\in\mathcal{V}$ are one-hop neighbors and can communicate at time $k\in\mathbb{N}$ if and only if $\{i,j\}\in\mathcal{E}(k)$.
Suppose, at time $k=0$, each node $i\in\mathcal{V}$ observes a function $f_i:\mathcal{X}\rightarrow\mathbb{R}$, which maps a nonempty open interval $\mathcal{X}\subset\mathbb{R}$ to $\mathbb{R}$, and which satisfies the following assumption:
\begin{assumption}\label{asm:fi}
For each $i\in\mathcal{V}$, the function $f_i$ is strictly convex, continuously differentiable, and has a minimizer $x_i^*\in\mathcal{X}$.
\end{assumption}
Suppose, upon observing the $f_i$'s, all the $N$ nodes wish to solve the following unconstrained, separable, convex optimization problem:
\begin{align}
\min_{x\in\mathcal{X}}F(x),\label{eq:minF}
\end{align}
where the function $F:\mathcal{X}\rightarrow\mathbb{R}$ is defined as $F(x)=\sum_{i\in\mathcal{V}}f_i(x)$. Clearly, $F$ is strictly convex and continuously differentiable. To show that $F$ has a unique minimizer in $\mathcal{X}$ so that problem \eqref{eq:minF} is well-posed, let $f_i':\mathcal{X}\rightarrow\mathbb{R}$ and $F':\mathcal{X}\rightarrow\mathbb{R}$ denote the derivatives of $f_i$ and $F$, respectively, and consider the following lemma and proposition:
\begin{lemma}\label{lem:exisuniqz}
Let $g_i:\mathcal{X}\rightarrow\mathbb{R}$ be a strictly increasing and continuous function and $z_i\in\mathcal{X}$ for $i=1,2,\ldots,n$. Then, there exists a unique $z\in\mathcal{X}$ such that $\sum_{i=1}^ng_i(z)=\sum_{i=1}^ng_i(z_i)$. Moreover, $z\in[\min_{i\in\{1,2,\ldots,n\}}z_i,\max_{i\in\{1,2,\ldots,n\}}z_i]$.
\end{lemma}
\begin{proof}
Since $g_i$ is strictly increasing and continuous $\forall i\in\{1,2,\ldots,n\}$, so is $\sum_{i=1}^ng_i:\mathcal{X}\rightarrow\mathbb{R}$. Thus, $\sum_{i=1}^ng_i(\min_{j\in\{1,2,\ldots,n\}}z_j)\le\sum_{i=1}^ng_i(z_i)\le\sum_{i=1}^ng_i(\max_{j\in\{1,2,\ldots,n\}}z_j)$. It follows from the Intermediate Value Theorem that there exists a unique $z\in\mathcal{X}$ such that $\sum_{i=1}^ng_i(z)=\sum_{i=1}^ng_i(z_i)$, and that $z\in[\min_{i\in\{1,2,\ldots,n\}}z_i,\max_{i\in\{1,2,\ldots,n\}}z_i]$.
\end{proof}
\begin{proposition}\label{pro:exisuniqx*}
With Assumption~\ref{asm:fi}, there exists a unique $x^*\in\mathcal{X}$, which satisfies $F'(x^*)=0$, minimizes $F$ over $\mathcal{X}$, and solves problem \eqref{eq:minF}, i.e., $x^*=\operatornamewithlimits{arg\,min}_{x\in\mathcal{X}}F(x)$.
\end{proposition}
\begin{proof}
By Assumption~\ref{asm:fi}, for every $i\in\mathcal{V}$, $f_i'$ is strictly increasing and continuous. By Lemma~\ref{lem:exisuniqz}, there exists a unique $x^*\in\mathcal{X}$ such that $\sum_{i\in\mathcal{V}}f_i'(x^*)=\sum_{i\in\mathcal{V}}f_i'(x_i^*)$. Since $F'=\sum_{i\in\mathcal{V}}f_i'$ and $f_i'(x_i^*)=0$ $\forall i\in\mathcal{V}$, $F'(x^*)=0$. Since $F$ is strictly convex, $x^*$ minimizes $F$ over $\mathcal{X}$, solving \eqref{eq:minF}.
\end{proof}
Given the above, the goal is to construct a distributed asynchronous iterative algorithm free of limitations~L1--L4, with which each node can asymptotically determine the unknown optimizer $x^*$.
\section{Pairwise Equalizing}\label{sec:PE}
In this section, we develop a gossip algorithm having the aforementioned features.
Suppose, at time $k=0$, each node $i\in\mathcal{V}$ creates a state variable $\hat{x}_i\in\mathcal{X}$ in its local memory, which represents its estimate of $x^*$. Also suppose, at each subsequent time $k\in\mathbb{P}=\{1,2,\ldots\}$, an iteration, called {\em iteration $k$}, takes place. Let $\hat{x}_i(0)$ represent the initial value of $\hat{x}_i$, and $\hat{x}_i(k)$ its value upon completing each iteration $k\in\mathbb{P}$. With this setup, the goal may be stated as
\begin{align}
\lim_{k\rightarrow\infty}\hat{x}_i(k)=x^*,\quad\forall i\in\mathcal{V}.\label{eq:limxh=x*}
\end{align}
To design an algorithm that guarantees \eqref{eq:limxh=x*}, consider a {\em conservation condition}
\begin{align}
\sum_{i\in\mathcal{V}}f_i'(\hat{x}_i(k))=0,\quad\forall k\in\mathbb{N},\label{eq:sumf'xh=0}
\end{align}
which says that the $\hat{x}_i(k)$'s evolve in a way that the sum of the derivatives $f_i'$'s, evaluated at the $\hat{x}_i(k)$'s, is always conserved at zero. Moreover, consider a {\em dissipation condition}
\begin{align}
\lim_{k\rightarrow\infty}\hat{x}_i(k)=\tilde{x},\quad\forall i\in\mathcal{V},\;\text{for some}\;\tilde{x}\in\mathcal{X},\label{eq:limxh=xt}
\end{align}
which says that the $\hat{x}_i(k)$'s gradually dissipate their differences and asymptotically achieve some arbitrary consensus $\tilde{x}\in\mathcal{X}$. Note that if \eqref{eq:sumf'xh=0} is met, then $\lim_{k\rightarrow\infty}\sum_{i\in\mathcal{V}}f_i'(\hat{x}_i(k))=\lim_{k\rightarrow\infty}0=0$. If, in addition, \eqref{eq:limxh=xt} is met, then due to the continuity of every $f_i'$, $\sum_{i\in\mathcal{V}}\lim_{k\rightarrow\infty}f_i'(\hat{x}_i(k))=\sum_{i\in\mathcal{V}}f_i'(\lim_{k\rightarrow\infty}\hat{x}_i(k))=\sum_{i\in\mathcal{V}}f_i'(\tilde{x})=F'(\tilde{x})$. Because $\lim_{k\rightarrow\infty}f_i'(\hat{x}_i(k))$ exists for every $i\in\mathcal{V}$, $\lim_{k\rightarrow\infty}\sum_{i\in\mathcal{V}}f_i'(\hat{x}_i(k))=\sum_{i\in\mathcal{V}}\lim_{k\rightarrow\infty}f_i'(\hat{x}_i(k))$. Combining the above, we obtain $F'(\tilde{x})=0$. From Proposition~\ref{pro:exisuniqx*}, we see that the arbitrary consensus $\tilde{x}$ must be the unknown optimizer $x^*$, i.e., $\tilde{x}=x^*$, so that \eqref{eq:limxh=x*} holds. Therefore, to design an algorithm that ensures \eqref{eq:limxh=x*}, in which $x^*$ appears explicitly, it suffices to make the algorithm satisfy both the conservation and dissipation conditions \eqref{eq:sumf'xh=0} and \eqref{eq:limxh=xt}, in which $x^*$ is only implicitly encoded.
To this end, observe that \eqref{eq:sumf'xh=0} holds if and only if the $\hat{x}_i(0)$'s are such that $\sum_{i\in\mathcal{V}}f_i'(\hat{x}_i(0))=0$, and the $\hat{x}_i(k)$'s are related to the $\hat{x}_i(k-1)$'s through
\begin{align}
\sum_{i\in\mathcal{V}}f_i'(\hat{x}_i(k))=\sum_{i\in\mathcal{V}}f_i'(\hat{x}_i(k-1)),\quad\forall k\in\mathbb{P}.\label{eq:sumf'xh=sumf'xh}
\end{align}
To satisfy $\sum_{i\in\mathcal{V}}f_i'(\hat{x}_i(0))=0$, it suffices that each node $i\in\mathcal{V}$ computes $x_i^*$ on its own and sets
\begin{align}
\hat{x}_i(0)=x_i^*,\quad\forall i\in\mathcal{V},\label{eq:xh0=x*}
\end{align}
since $f_i'(x_i^*)=0$. To satisfy \eqref{eq:sumf'xh=sumf'xh}, consider a gossip algorithm, whereby at each iteration $k\in\mathbb{P}$, a pair $u(k)=\{u_1(k),u_2(k)\}\in\mathcal{E}(k)$ of one-hop neighbors $u_1(k)$ and $u_2(k)$ gossip and update their $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$, while the rest of the $N$ nodes stay idle, i.e.,
\begin{align}
\hat{x}_i(k)=\hat{x}_i(k-1),\quad\forall k\in\mathbb{P},\;\forall i\in\mathcal{V}-u(k).\label{eq:xh=xh/u}
\end{align}
With \eqref{eq:xh=xh/u}, equation \eqref{eq:sumf'xh=sumf'xh} simplifies to
\begin{align}
f_{u_1(k)}'(\hat{x}_{u_1(k)}(k))+f_{u_2(k)}'(\hat{x}_{u_2(k)}(k))=f_{u_1(k)}'(\hat{x}_{u_1(k)}(k-1))+f_{u_2(k)}'(\hat{x}_{u_2(k)}(k-1)),\quad\forall k\in\mathbb{P}.\label{eq:f'xhf'xh=f'xhf'xh}
\end{align}
Hence, all that is needed for \eqref{eq:sumf'xh=sumf'xh} to hold is a gossip between nodes $u_1(k)$ and $u_2(k)$ to share their $f_{u_1(k)}$, $f_{u_2(k)}$, $\hat{x}_{u_1(k)}(k-1)$, and/or $\hat{x}_{u_2(k)}(k-1)$, followed by a joint update of their $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$, which ensures \eqref{eq:f'xhf'xh=f'xhf'xh}.
Obviously, \eqref{eq:f'xhf'xh=f'xhf'xh} alone does not uniquely determine $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$. This suggests that the available degree of freedom may be used to account for the dissipation condition \eqref{eq:limxh=xt}. Unlike the conservation condition \eqref{eq:sumf'xh=0}, however, \eqref{eq:limxh=xt} is about where the $\hat{x}_i(k)$'s should approach as $k\rightarrow\infty$, which nodes $u_1(k)$ and $u_2(k)$ cannot guarantee themselves since they are only responsible for two of the $N$ $\hat{x}_i(k)$'s. Nevertheless, given that all the $N$ $\hat{x}_i(k)$'s should approach the {\em same} limit, nodes $u_1(k)$ and $u_2(k)$ can help make this happen by imposing an {\em equalizing condition}
\begin{align}
\hat{x}_{u_1(k)}(k)=\hat{x}_{u_2(k)}(k),\quad\forall k\in\mathbb{P}.\label{eq:xh=xh}
\end{align}
With \eqref{eq:xh=xh} added, there are now two equations with two variables, providing nodes $u_1(k)$ and $u_2(k)$ a chance to uniquely determine $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$ from \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh=xh}.
The following proposition asserts that \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh=xh} always have a unique solution, so that the evolution of the $\hat{x}_i(k)$'s is well-defined:
\begin{proposition}\label{pro:xhwelldef}
With Assumption~\ref{asm:fi} and \eqref{eq:xh0=x*}--\eqref{eq:xh=xh}, $\hat{x}_i(k)$ $\forall k\in\mathbb{N}$ $\forall i\in\mathcal{V}$ are well-defined, i.e., unambiguous and in $\mathcal{X}$. Moreover, $[\min\limits_{i\in\mathcal{V}}\hat{x}_i(k),\max\limits_{i\in\mathcal{V}}\hat{x}_i(k)]\subset[\min\limits_{i\in\mathcal{V}}\hat{x}_i(k-1),\max\limits_{i\in\mathcal{V}}\hat{x}_i(k-1)]$ $\forall k\in\mathbb{P}$.
\end{proposition}
\begin{proof}
By induction on $k\in\mathbb{N}$. By Assumption~\ref{asm:fi} and \eqref{eq:xh0=x*}, $\hat{x}_i(0)$ $\forall i\in\mathcal{V}$ are unambiguous and in $\mathcal{X}$. Next, let $k\in\mathbb{P}$ and suppose $\hat{x}_i(k-1)$ $\forall i\in\mathcal{V}$ are unambiguous and in $\mathcal{X}$. We show that so are $\hat{x}_i(k)$ $\forall i\in\mathcal{V}$. From \eqref{eq:xh=xh/u}, $\hat{x}_i(k)$ $\forall i\in\mathcal{V}-u(k)$ are unambiguous and in $\mathcal{X}$. To show that so are $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$, we show that \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh=xh} have a unique solution $(\hat{x}_{u_1(k)}(k),\hat{x}_{u_2(k)}(k))\in\mathcal{X}^2$. By Lemma~\ref{lem:exisuniqz}, there is a unique $z\in\mathcal{X}$ such that
\begin{align}
f_{u_1(k)}'(z)+f_{u_2(k)}'(z)=f_{u_1(k)}'(\hat{x}_{u_1(k)}(k-1))+f_{u_2(k)}'(\hat{x}_{u_2(k)}(k-1)),\label{eq:f'zf'z=f'xhf'xh}
\end{align}
which satisfies $z\in[\min_{i\in u(k)}\hat{x}_i(k-1),\max_{i\in u(k)}\hat{x}_i(k-1)]$. Setting $\hat{x}_{u_1(k)}(k)=\hat{x}_{u_2(k)}(k)=z$, we see that $(\hat{x}_{u_1(k)}(k),\hat{x}_{u_2(k)}(k))$ is a solution to \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh=xh}, confirming the existence. Now let $(a_1,a_2)\in\mathcal{X}^2$ and $(b_1,b_2)\in\mathcal{X}^2$ be two solutions of \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh=xh}. Then, due to \eqref{eq:xh=xh}, \eqref{eq:f'xhf'xh=f'xhf'xh}, and Lemma~\ref{lem:exisuniqz}, we have $a_1=a_2=b_1=b_2$, confirming the uniqueness. Therefore, $\hat{x}_i(k)$ $\forall i\in\mathcal{V}$ are well-defined as desired. Finally, the second statement follows from \eqref{eq:xh=xh/u} and the fact that $\hat{x}_{u_1(k)}(k)=\hat{x}_{u_2(k)}(k)\in[\min_{i\in u(k)}\hat{x}_i(k-1),\max_{i\in u(k)}\hat{x}_i(k-1)]$ $\forall k\in\mathbb{P}$.
\end{proof}
Proposition~\ref{pro:xhwelldef} calls for a few remarks. First, the interval $[\min_{i\in\mathcal{V}}\hat{x}_i(k),\max_{i\in\mathcal{V}}\hat{x}_i(k)]$ can only shrink or remain unchanged over time $k$. While this does not guarantee the dissipation condition \eqref{eq:limxh=xt}, it shows that the $\hat{x}_i(k)$'s are ``trying'' to converge and are, at the very least, bounded even if $\mathcal{X}$ is not. Second, the proofs of Proposition~\ref{pro:xhwelldef} and Lemma~\ref{lem:exisuniqz} suggest a simple, practical procedure for nodes $u_1(k)$ and $u_2(k)$ to solve \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh=xh} for $(\hat{x}_{u_1(k)}(k),\hat{x}_{u_2(k)}(k))$: apply a numerical {\em root-finding method}, such as the {\em bisection method} with initial bracket $[\min_{i\in u(k)}\hat{x}_i(k-1),\max_{i\in u(k)}\hat{x}_i(k-1)]$, to solve \eqref{eq:f'zf'z=f'xhf'xh} for the unique $z$ and then set $\hat{x}_{u_1(k)}(k)=\hat{x}_{u_2(k)}(k)=z$. Finally, since \eqref{eq:f'zf'z=f'xhf'xh} always has a unique solution $z$, we can eliminate $z$ and write
\begin{align}
\hat{x}_{u_1(k)}(k)=\hat{x}_{u_2(k)}(k)=(f_{u_1(k)}'+f_{u_2(k)}')^{-1}(f_{u_1(k)}'(\hat{x}_{u_1(k)}(k-1))+f_{u_2(k)}'(\hat{x}_{u_2(k)}(k-1))),\quad\forall k\in\mathbb{P},\label{eq:xh=xh=f'f'invf'xhf'xh}
\end{align}
where $(f_i'+f_j')^{-1}:(f_i'+f_j')(\mathcal{X})\rightarrow\mathcal{X}$ denotes the inverse of the injective function $f_i'+f_j'$ with its codomain restricted to its range.
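For concreteness, a minimal sketch of this root-finding step (in Python; the
function handles, tolerance, and variable names are ours, purely for
illustration) is:
\begin{verbatim}
def pe_update(fp_i, fp_j, xi, xj, tol=1e-12):
    # Solve f_i'(z) + f_j'(z) = f_i'(xi) + f_j'(xj) for the unique z in
    # [min(xi, xj), max(xi, xj)] by bisection; the left-hand side is
    # strictly increasing in z, and the bracket endpoints give opposite
    # signs by the intermediate value argument in the text.
    target = fp_i(xi) + fp_j(xj)
    lo, hi = min(xi, xj), max(xi, xj)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fp_i(mid) + fp_j(mid) >= target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
\end{verbatim}
Once the gossiping nodes have shared the required information, each can run
this computation locally and set $\hat{x}_{u_1(k)}(k)=\hat{x}_{u_2(k)}(k)$ to
the returned value.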
Expressions \eqref{eq:xh0=x*}, \eqref{eq:xh=xh/u}, and \eqref{eq:xh=xh=f'f'invf'xhf'xh} collectively define a gossip-style, distributed asynchronous iterative algorithm that yields a switched, nonlinear, networked dynamical system
\begin{align}
\hat{x}_i(k)=\begin{cases}(\sum_{j\in u(k)}f_j')^{-1}(\sum_{j\in u(k)}f_j'(\hat{x}_j(k-1))), & \text{if $i\in u(k)$},\\ \hat{x}_i(k-1), & \text{otherwise},\end{cases}\quad\forall k\in\mathbb{P},\;\forall i\in\mathcal{V},\label{eq:xh=sumf'invsumf'xhifiinuxh}
\end{align}
with initial condition \eqref{eq:xh0=x*}, and with $(u(k))_{k=1}^\infty$ representing the sequence of gossiping nodes that trigger the switchings. As this algorithm ensures the conservation condition \eqref{eq:sumf'xh=0}, the state trajectory $(\hat{x}_1(k),\hat{x}_2(k),\ldots,\hat{x}_N(k))$ must remain on an $(N-1)$-dimensional manifold $\mathcal{M}=\{(x_1,x_2,\ldots,x_N)\in\mathcal{X}^N:\sum_{i\in\mathcal{V}}f_i'(x_i)=0\}\subset\mathcal{X}^N\subset\mathbb{R}^N$, making $\mathcal{M}$ an invariant set. Given that the algorithm involves repeated, pairwise equalizing of the $\hat{x}_i(k)$'s, we refer to it as {\em Pairwise Equalizing} (PE). PE may be expressed in a compact algorithmic form as follows:
\begin{algorithm}[Pairwise Equalizing]\label{alg:PE}
\begin{algorithminit}{}
\item Each node $i\in\mathcal{V}$ computes $x_i^*\in\mathcal{X}$, creates a variable $\hat{x}_i\in\mathcal{X}$, and sets $\hat{x}_i\leftarrow x_i^*$.
\end{algorithminit}
\begin{algorithmoper}{At each iteration:}
\item A node with one or more one-hop neighbors, say, node $i$, initiates the iteration and selects a one-hop neighbor, say, node $j$, to gossip. Nodes $i$ and $j$ select one of two ways to gossip by labeling themselves as either nodes $a$ and $b$, or nodes $b$ and $a$, respectively, where $\{a,b\}=\{i,j\}$. If node $b$ does not know $f_a$, node $a$ transmits $f_a$ to node $b$. Node $a$ transmits $\hat{x}_a$ to node $b$. Node $b$ sets $\hat{x}_b\leftarrow(f_a'+f_b')^{-1}(f_a'(\hat{x}_a)+f_b'(\hat{x}_b))$ and transmits $\hat{x}_b$ to node $a$. Node $a$ sets $\hat{x}_a\leftarrow\hat{x}_b$.
\end{algorithmoper}
\end{algorithm}
Due to space limitations, we omit remarks concerning the execution of Algorithm~\ref{alg:PE} and refer the reader to an earlier, conference version of this paper \cite{LuJ10}.
Notice that PE does not rely on a stepsize parameter to execute, nor does it require the construction of a (pseudo-)Hamiltonian cycle or the concurrent use of a routing protocol for multi-hop transmissions. Indeed, all it essentially needs is that every node is capable of applying a root-finding method, maintaining a list of its one-hop neighbors, and remembering the functions it learns along the way. Therefore, PE overcomes limitations~L1--L3, while being rather easy to implement---although computationally it is more demanding than the subgradient algorithms.
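To illustrate the complete algorithm, the following self-contained simulation
(our illustration, not part of the algorithm specification: the ring topology,
the choice $f_i(x)=\cosh(x-y_i)$ with $x_i^*=y_i$, and the uniformly random
gossiping pairs are all arbitrary) runs PE and checks that the estimates reach
consensus while the conservation condition holds:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 8
y = rng.normal(size=N)                       # x_i* = y_i
fp = [lambda x, yi=yi: np.sinh(x - yi) for yi in y]

def pe_update(fp_i, fp_j, xi, xj, tol=1e-12):
    target = fp_i(xi) + fp_j(xj)
    lo, hi = min(xi, xj), max(xi, xj)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if fp_i(mid) + fp_j(mid) >= target else (mid, hi)
    return 0.5 * (lo + hi)

xhat = y.copy()                              # xhat_i(0) = x_i*
edges = [(i, (i + 1) % N) for i in range(N)] # ring; every edge gossips
for k in range(2000):                        # infinitely often a.s.
    i, j = edges[rng.integers(len(edges))]   # gossiping pair u(k)
    xhat[i] = xhat[j] = pe_update(fp[i], fp[j], xhat[i], xhat[j])

print(xhat.max() - xhat.min())               # spread of estimates: ~0
print(sum(f(xhat[0]) for f in fp))           # sum_i f_i'(xhat_i): ~0
\end{verbatim}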
To show that PE asymptotically converges and, thus, circumvents~L4, let $\mathbf{x}^*=(x^*,x^*,\ldots,x^*)$ and $\mathbf{x}(k)=(\hat{x}_1(k),\hat{x}_2(k),\ldots,\hat{x}_N(k))$. Then, from Propositions~\ref{pro:exisuniqx*} and~\ref{pro:xhwelldef}, $\mathbf{x}^*\in\mathcal{X}^N$ and $\mathbf{x}(k)\in\mathcal{X}^N$ $\forall k\in\mathbb{N}$. In addition, due to \eqref{eq:xh=sumf'invsumf'xhifiinuxh}, if $\mathbf{x}(k)=\mathbf{x}^*$ for some $k\in\mathbb{N}$, then $\mathbf{x}(\ell)=\mathbf{x}^*$ $\forall\ell>k$. Hence, $\mathbf{x}^*$ is an equilibrium point of the system \eqref{eq:xh=sumf'invsumf'xhifiinuxh}. To show that $\lim_{k\rightarrow\infty}\mathbf{x}(k)=\mathbf{x}^*$, i.e., \eqref{eq:limxh=x*} holds, we seek to construct a Lyapunov function. To this end, recall that for any strictly convex and differentiable function $f:\mathcal{X}\rightarrow\mathbb{R}$, the first-order convexity condition says that
\begin{align}
f(y)\ge f(x)+f'(x)(y-x),\quad\forall x,y\in\mathcal{X},\label{eq:fy>=fxf'xyx}
\end{align}
where the equality holds if and only if $x=y$. This suggests the following Lyapunov function candidate $V:\mathcal{X}^N\subset\mathbb{R}^N\rightarrow\mathbb{R}$, which exploits the convexity of the $f_i$'s:
\begin{align}
V(\mathbf{x}(k))=\sum_{i\in\mathcal{V}}\left[f_i(x^*)-f_i(\hat{x}_i(k))-f_i'(\hat{x}_i(k))(x^*-\hat{x}_i(k))\right].\label{eq:V=sumfx*fxhf'xhx*xh}
\end{align}
Notice that $V$ in \eqref{eq:V=sumfx*fxhf'xhx*xh} is well-defined. Moreover, due to Assumption~\ref{asm:fi} and \eqref{eq:fy>=fxf'xyx}, $V$ is continuous and positive definite with respect to $\mathbf{x}^*$, i.e., $V(\mathbf{x}(k))\ge0$ $\forall\mathbf{x}(k)\in\mathcal{X}^N$, where the equality holds if and only if $\mathbf{x}(k)=\mathbf{x}^*$. Therefore, to prove \eqref{eq:limxh=x*}, it suffices to show that
\begin{align}
\lim_{k\rightarrow\infty}V(\mathbf{x}(k))=0.\label{eq:limV=0}
\end{align}
The following lemma represents the first step toward establishing \eqref{eq:limV=0}:
\begin{lemma}\label{lem:PEVnonincr}
Consider the use of PE described in Algorithm~\ref{alg:PE}. Suppose Assumption~\ref{asm:fi} holds. Then, for any given $(u(k))_{k=1}^\infty$, $(V(\mathbf{x}(k)))_{k=0}^\infty$ is non-increasing and satisfies
\begin{align}
V(\mathbf{x}(k))-V(\mathbf{x}(k-1))=-\sum_{i\in u(k)}\big[f_i(\hat{x}_i(k))-f_i(\hat{x}_i(k-1))-f_i'(\hat{x}_i(k-1))(\hat{x}_i(k)&-\hat{x}_i(k-1))\big],\nonumber\\
&\quad\forall k\in\mathbb{P}.\label{eq:VV=sumfxhfxhf'xhxhxh}
\end{align}
\end{lemma}
\begin{proof}
Let $(u(k))_{k=1}^\infty$ be given. Then, from \eqref{eq:V=sumfx*fxhf'xhx*xh} and \eqref{eq:xh=sumf'invsumf'xhifiinuxh}, we have $V(\mathbf{x}(k))-V(\mathbf{x}(k-1))=-\sum_{i\in u(k)}f_i(\hat{x}_i(k))-f_i(\hat{x}_i(k-1))+f_i'(\hat{x}_i(k))x^*-f_i'(\hat{x}_i(k-1))x^*-f_i'(\hat{x}_i(k))\hat{x}_i(k)+f_i'(\hat{x}_i(k-1))\hat{x}_i(k-1)$ $\forall k\in\mathbb{P}$. Due to \eqref{eq:xh=sumf'invsumf'xhifiinuxh}, $-\sum_{i\in u(k)}f_i'(\hat{x}_i(k))x^*$ cancels $\sum_{i\in u(k)}f_i'(\hat{x}_i(k-1))x^*$, while $\sum_{i\in u(k)}f_i'(\hat{x}_i(k))\hat{x}_i(k)$ becomes $\sum_{i\in u(k)}f_i'(\hat{x}_i(k-1))\hat{x}_i(k)$. This proves \eqref{eq:VV=sumfxhfxhf'xhxhxh}. Note that the right-hand side of \eqref{eq:VV=sumfxhfxhf'xhxhxh} is nonpositive due to \eqref{eq:fy>=fxf'xyx}. Hence, $(V(\mathbf{x}(k)))_{k=0}^\infty$ is non-increasing.
\end{proof}
Lemma~\ref{lem:PEVnonincr} has several implications. First, upon completing each iteration $k\in\mathbb{P}$ by {\em any} two nodes $u_1(k)$ and $u_2(k)$, the value of $V$ must either decrease or, at worst, stay the same, where the latter occurs if and only if $\hat{x}_{u_1(k)}(k-1)=\hat{x}_{u_2(k)}(k-1)$. Second, since $(V(\mathbf{x}(k)))_{k=0}^\infty$ is non-increasing irrespective of $(u(k))_{k=1}^\infty$, $V$ in \eqref{eq:V=sumfx*fxhf'xhx*xh} may be regarded as a {\em common} Lyapunov function for the nonlinear switched system \eqref{eq:xh=sumf'invsumf'xhifiinuxh}, which has as many as $\frac{N(N-1)}{2}$ different dynamics, corresponding to the $\frac{N(N-1)}{2}$ possible gossiping pairs. Finally, the first-order convexity condition \eqref{eq:fy>=fxf'xyx} can be used not only to form the common Lyapunov function $V$, but also to characterize drops in its value in \eqref{eq:VV=sumfxhfxhf'xhxhxh} after every gossip. This is akin to how quadratic functions may be used to form a common Lyapunov function $V(k)=x^T(k)Px(k)$ for a linear switched system $x(k+1)=A(k)x(k)$, $A(k)\in\{A_1,A_2,\ldots,A_M\}$, as well as to characterize drops in $V(k)$ via $V(k+1)-V(k)=x^T(k)(A_i^TPA_i-P)x(k)=-x^T(k)Q_ix(k)$. Indeed, as we will show later, when problem \eqref{eq:minF} specializes to an averaging problem, where the nonlinear switched system \eqref{eq:xh=sumf'invsumf'xhifiinuxh} becomes linear, both $V$ and its drop become quadratic functions.
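The drop characterization in \eqref{eq:VV=sumfxhfxhf'xhxhxh} can be checked
numerically. In the following sketch (our illustration, with
$f_i(x)=\cosh(x-y_i)$ and $N=3$), the direct change
$V(\mathbf{x}(1))-V(\mathbf{x}(0))$ after one gossip matches the right-hand
side of \eqref{eq:VV=sumfxhfxhf'xhxhxh} and is nonpositive:
\begin{verbatim}
import numpy as np

y  = np.array([-1.0, 0.5, 2.0])              # x_i* = y_i
f  = lambda x, yi: np.cosh(x - yi)
fp = lambda x, yi: np.sinh(x - yi)

def bisect(g, lo, hi, it=200):               # root of an increasing g
    for _ in range(it):
        m = 0.5 * (lo + hi)
        lo, hi = (lo, m) if g(m) >= 0 else (m, hi)
    return 0.5 * (lo + hi)

xstar = bisect(lambda z: np.sinh(z - y).sum(), y.min(), y.max())

def V(x):                                    # the Lyapunov function above
    return sum(f(xstar, yi) - f(xi, yi) - fp(xi, yi) * (xstar - xi)
               for xi, yi in zip(x, y))

x0 = y.copy()                                # xhat_i(0) = x_i*
z  = bisect(lambda m: fp(m, y[0]) + fp(m, y[1])
            - fp(x0[0], y[0]) - fp(x0[1], y[1]), min(x0[:2]), max(x0[:2]))
x1 = x0.copy(); x1[0] = x1[1] = z            # the first two nodes gossip

lhs = V(x1) - V(x0)
rhs = -sum(f(x1[i], y[i]) - f(x0[i], y[i])
           - fp(x0[i], y[i]) * (x1[i] - x0[i]) for i in (0, 1))
print(lhs, rhs)                              # equal (numerically) and <= 0
\end{verbatim}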
As $(V(\mathbf{x}(k)))_{k=0}^\infty$ is nonnegative and non-increasing, $\lim_{k\rightarrow\infty}V(\mathbf{x}(k))$ exists and is nonnegative. This, however, is insufficient for us to conclude that $\lim_{k\rightarrow\infty}V(\mathbf{x}(k))=0$, since, for some pathological gossiping patterns, $\lim_{k\rightarrow\infty}V(\mathbf{x}(k))$ can be positive (see \cite{LuJ10} for examples). Thus, some restrictions must be imposed on the gossiping pattern, in order to establish \eqref{eq:limV=0}. To this end, let $\mathcal{E}_\infty=\{\{i,j\}:u(k)=\{i,j\}\;\text{for infinitely many}\;k\in\mathbb{P}\}$, so that a link $\{i,j\}$ is in $\mathcal{E}_\infty$ if and only if nodes $i$ and $j$ gossip with each other infinitely often. Then, we may state the following restriction on the gossiping pattern, which was first adopted in \cite{Tsitsiklis84} and is not difficult to satisfy in practice \cite{LuJ10}:
\begin{assumption}\label{asm:PEnetconn}
The sequence $(u(k))_{k=1}^\infty$ is such that the graph $(\mathcal{V},\mathcal{E}_\infty)$ is connected.
\end{assumption}
The following theorem says that, under Assumption~\ref{asm:PEnetconn} on the gossiping pattern, PE ensures asymptotic convergence of all the $\hat{x}_i(k)$'s to $x^*$, circumventing limitation~L4:
\begin{theorem}\label{thm:PEasymconv}
Consider the use of PE described in Algorithm~\ref{alg:PE}. Suppose Assumptions~\ref{asm:fi} and~\ref{asm:PEnetconn} hold. Then, \eqref{eq:limV=0} and \eqref{eq:limxh=x*} hold.
\end{theorem}
\begin{proof}
See Appendix~\ref{ssec:proofthmPEasymconv}.
\end{proof}
Finally, we point out that the above results may be viewed as a natural generalization of some known results in distributed averaging. Consider a special case where each node $i\in\mathcal{V}$ observes not an arbitrary function $f_i$, but a quadratic one of the form $f_i(x)=\frac{1}{2}(x-y_i)^2+c_i$ with domain $\mathcal{X}=\mathbb{R}$ and parameters $y_i,c_i\in\mathbb{R}$. In this case, finding the unknown optimizer $x^*$ amounts to calculating the network-wide average $\frac{1}{N}\sum_{i\in\mathcal{V}}y_i$ of the node ``observations'' $y_i$'s, so that the convex optimization problem \eqref{eq:minF} becomes an averaging problem. In addition, initializing the node estimates $\hat{x}_i(0)$'s simply means setting them to the $y_i$'s, and equalizing $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$ simply means averaging them, so that PE reduces to Pairwise Averaging \cite{Tsitsiklis84} and Randomized Gossip Algorithm \cite{Boyd06}. Moreover, the invariant manifold $\mathcal{M}$ becomes the invariant hyperplane $\mathcal{M}=\{(x_1,x_2,\ldots,x_N)\in\mathbb{R}^N:\sum_{i\in\mathcal{V}}x_i=\sum_{i\in\mathcal{V}}y_i\}$ in distributed averaging. Furthermore, both the common Lyapunov function $V$ in \eqref{eq:V=sumfx*fxhf'xhx*xh} and its drop in \eqref{eq:VV=sumfxhfxhf'xhxhxh} take a quadratic form: $V(\mathbf{x}(k))=\frac{1}{2}(\mathbf{x}(k)-\mathbf{x}^*)^T(\mathbf{x}(k)-\mathbf{x}^*)$ and $V(\mathbf{x}(k))-V(\mathbf{x}(k-1))=-\frac{1}{2}\mathbf{x}^T(k-1)Q_{u(k)}\mathbf{x}(k-1)$ $\forall k\in\mathbb{P}$, where $Q_{\{i,j\}}\in\mathbb{R}^{N\times N}$ is a symmetric positive semidefinite matrix whose $ii$ and $jj$ entries are $\frac{1}{2}$, $ij$ and $ji$ entries are $-\frac{1}{2}$, and all other entries are zero. Therefore, the first-order-convexity-condition-based Lyapunov function \eqref{eq:V=sumfx*fxhf'xhx*xh} generalizes the quadratic Lyapunov function in distributed averaging.
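To see the reduction explicitly, note that here $f_i'(x)=x-y_i$, so \eqref{eq:f'zf'z=f'xhf'xh} becomes
\begin{align*}
2z-y_{u_1(k)}-y_{u_2(k)}=\hat{x}_{u_1(k)}(k-1)+\hat{x}_{u_2(k)}(k-1)-y_{u_1(k)}-y_{u_2(k)},
\end{align*}
in which the $y_i$'s cancel, so that the update \eqref{eq:xh=xh=f'f'invf'xhf'xh} reduces to $\hat{x}_{u_1(k)}(k)=\hat{x}_{u_2(k)}(k)=\frac{1}{2}\bigl(\hat{x}_{u_1(k)}(k-1)+\hat{x}_{u_2(k)}(k-1)\bigr)$, i.e., to pairwise averaging.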
\section{Pairwise Bisectioning}\label{sec:PB}
Although PE solves problem \eqref{eq:minF} and bypasses~L1--L4, it requires one-time, one-way sharing of the $f_i$'s between gossiping nodes, which may be costly for certain $f_i$'s, or impermissible for security and privacy reasons. In this section, we develop another gossip algorithm that eliminates this requirement at the expense of more real-number transmissions per iteration.
Note that PE can be traced back to four defining equations \eqref{eq:xh0=x*}--\eqref{eq:xh=xh}, and that its drawback of having to share the $f_i$'s stems from having to solve \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh=xh}. To overcome this drawback, consider a gossip algorithm satisfying \eqref{eq:xh0=x*}--\eqref{eq:f'xhf'xh=f'xhf'xh} and a new condition but not \eqref{eq:xh=xh}. Assuming, without loss of generality, that $\hat{x}_{u_1(k)}(k-1)\le\hat{x}_{u_2(k)}(k-1)$ $\forall k\in\mathbb{P}$, this new condition can be stated as
\begin{align}
\hat{x}_{u_1(k)}(k-1)\le\hat{x}_{u_1(k)}(k)\le\hat{x}_{u_2(k)}(k)\le\hat{x}_{u_2(k)}(k-1),\quad\forall k\in\mathbb{P}.\label{eq:xh<=xh<=xh<=xh}
\end{align}
Termed the {\em approaching condition}, \eqref{eq:xh<=xh<=xh<=xh} says that at each iteration $k\in\mathbb{P}$, nodes $u_1(k)$ and $u_2(k)$ force $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$ to approach each other while preserving their order. Observe that the approaching condition \eqref{eq:xh<=xh<=xh<=xh} includes the equalizing condition \eqref{eq:xh=xh} as a special case. Furthermore, unlike \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh=xh}, \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh<=xh<=xh<=xh} do not uniquely determine $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$. Rather, they allow $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$ to increase gradually from $\hat{x}_{u_1(k)}(k-1)$ and decrease accordingly from $\hat{x}_{u_2(k)}(k-1)$, respectively, until the two become equal.
The following lemma characterizes the impact of the non-uniqueness on the value of $V$:
\begin{lemma}\label{lem:PBVnonincr}
Consider \eqref{eq:xh0=x*}--\eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh<=xh<=xh<=xh}. Suppose Assumption~\ref{asm:fi} holds. Then, for any given $(u(k))_{k=1}^\infty$, $(V(\mathbf{x}(k)))_{k=0}^\infty$ is non-increasing. Moreover, for any given $k\in\mathbb{P}$ and $\mathbf{x}(k-1)\in\mathcal{X}^N$, $V(\mathbf{x}(k))$ strictly increases with $\hat{x}_{u_2(k)}(k)-\hat{x}_{u_1(k)}(k)$ over $[0,\hat{x}_{u_2(k)}(k-1)-\hat{x}_{u_1(k)}(k-1)]$.
\end{lemma}
\begin{proof}
Let $(u(k))_{k=1}^\infty$ be given. Then, from \eqref{eq:V=sumfx*fxhf'xhx*xh}, \eqref{eq:xh=xh/u}, and \eqref{eq:f'xhf'xh=f'xhf'xh}, we have $V(\mathbf{x}(k))-V(\mathbf{x}(k-1))=-\sum_{i\in u(k)}f_i(\hat{x}_i(k))-f_i(\hat{x}_i(k-1))-f_i'(\hat{x}_i(k-1))(\hat{x}_i(k)-\hat{x}_i(k-1))+(f_i'(\hat{x}_i(k-1))-f_i'(\hat{x}_i(k)))\hat{x}_i(k)$ $\forall k\in\mathbb{P}$. Due to \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh<=xh<=xh<=xh}, $\sum_{i\in u(k)}(f_i'(\hat{x}_i(k-1))-f_i'(\hat{x}_i(k)))\hat{x}_i(k)=(f_{u_1(k)}'(\hat{x}_{u_1(k)}(k-1))-f_{u_1(k)}'(\hat{x}_{u_1(k)}(k)))(\hat{x}_{u_1(k)}(k)-\hat{x}_{u_2(k)}(k))\ge0$. This, along with \eqref{eq:fy>=fxf'xyx}, implies $V(\mathbf{x}(k))-V(\mathbf{x}(k-1))\le0$ $\forall k\in\mathbb{P}$. Now let $k\in\mathbb{P}$ and $\mathbf{x}(k-1)\in\mathcal{X}^N$ be given. By Lemma~\ref{lem:exisuniqz}, there exists a unique $x_{\text{eq}}\in\mathcal{X}$ such that $\sum_{i\in u(k)}f_i'(x_{\text{eq}})=\sum_{i\in u(k)}f_i'(\hat{x}_i(k))$. Also, $x_{\text{eq}}\in[\hat{x}_{u_1(k)}(k),\hat{x}_{u_2(k)}(k)]$. Let $\mathbf{x}_{\text{eq}}\in\mathcal{X}^N$ be such that its $i$th entry is $x_{\text{eq}}$ if $i\in u(k)$ and $\hat{x}_i(k-1)$ otherwise. Then, it follows from \eqref{eq:V=sumfx*fxhf'xhx*xh}, \eqref{eq:xh=xh/u}, and \eqref{eq:fy>=fxf'xyx} that $V(\mathbf{x}(k))-V(\mathbf{x}_{\text{eq}})=\sum_{i\in u(k)}f_i(x_{\text{eq}})-f_i(\hat{x}_i(k))-f_i'(\hat{x}_i(k))(x_{\text{eq}}-\hat{x}_i(k))\ge0$. Because $f_i(y)-f_i(x)-f_i'(x)(y-x)$ strictly increases with $|y-x|$ for each fixed $y\in\mathcal{X}$ $\forall i\in\mathcal{V}$ and because of \eqref{eq:f'xhf'xh=f'xhf'xh} and \eqref{eq:xh<=xh<=xh<=xh}, the second claim is true.
\end{proof}
Lemma~\ref{lem:PBVnonincr} says that the value of $V$ can never increase. In addition, the closer $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$ get, the more the value of $V$ drops, and the drop is maximized when $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$ are equalized. These observations suggest that it may be possible to design an algorithm that only forces $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$ to approach each other (as opposed to becoming equal), at the cost of a smaller drop in the value of $V$, but with the benefit of not having to share the $f_i$'s. The following algorithm, referred to as {\em Pairwise Bisectioning} (PB), shows that this is indeed the case and utilizes a bisection step that allows $\hat{x}_{u_1(k)}(k)$ and $\hat{x}_{u_2(k)}(k)$ to get arbitrarily close:
\begin{algorithm}[Pairwise Bisectioning]\label{alg:PB}
\begin{algorithminit}{}
\item Each node $i\in\mathcal{V}$ computes $x_i^*\in\mathcal{X}$, creates variables $\hat{x}_i,a_i,b_i\in\mathcal{X}$, and sets $\hat{x}_i\leftarrow x_i^*$.
\end{algorithminit}
\begin{algorithmoper}{At each iteration:}
\item A node with one or more one-hop neighbors, say, node $i$, initiates the iteration and selects a one-hop neighbor, say, node $j$, to gossip. Node $i$ transmits $\hat{x}_i$ to node $j$. Node $j$ sets $a_j\leftarrow\min\{\hat{x}_i,\hat{x}_j\}$ and $b_j\leftarrow\max\{\hat{x}_i,\hat{x}_j\}$ and transmits $\hat{x}_j$ to node $i$. Node $i$ sets $a_i\leftarrow\min\{\hat{x}_i,\hat{x}_j\}$ and $b_i\leftarrow\max\{\hat{x}_i,\hat{x}_j\}$. Nodes $i$ and $j$ select the number of bisection rounds $R\in\mathbb{P}$.
\item Repeat the following $R$ times: Node $j$ transmits $f_j'(\frac{a_j+b_j}{2})-f_j'(\hat{x}_j)$ to node $i$. Node $i$ tests if $f_j'(\frac{a_j+b_j}{2})-f_j'(\hat{x}_j)+f_i'(\frac{a_i+b_i}{2})-f_i'(\hat{x}_i)\ge0$. If so, node $i$ sets $b_i\leftarrow\frac{a_i+b_i}{2}$ and transmits LEFT to node $j$, and node $j$ sets $b_j\leftarrow\frac{a_j+b_j}{2}$. Otherwise, node $i$ sets $a_i\leftarrow\frac{a_i+b_i}{2}$ and transmits RIGHT to node $j$, and node $j$ sets $a_j\leftarrow\frac{a_j+b_j}{2}$. End repeat.
\item Node $j$ transmits $f_j'(c_j)-f_j'(\hat{x}_j)$ to node $i$, where $c_j=\bigl\{\begin{smallmatrix}a_j & \text{if $\hat{x}_j\le a_j$}\\ b_j & \text{if $\hat{x}_j\ge b_j$}\end{smallmatrix}$. Node $i$ tests if $\Bigl(f_j'(c_j)-f_j'(\hat{x}_j)+f_i'(c_i)-f_i'(\hat{x}_i)\Bigr)(\hat{x}_i-\frac{a_i+b_i}{2})\ge0$, where $c_i=\bigl\{\begin{smallmatrix}a_i & \text{if $\hat{x}_i\le a_i$}\\ b_i & \text{if $\hat{x}_i\ge b_i$}\end{smallmatrix}$. If so, node $i$ sets $\hat{x}_i\leftarrow(f_i')^{-1}(f_i'(\hat{x}_i)-f_j'(c_j)+f_j'(\hat{x}_j))$ and node $j$ sets $\hat{x}_j\leftarrow c_j$. Otherwise, node $i$ transmits $f_i'(c_i)-f_i'(\hat{x}_i)$ to node $j$ and sets $\hat{x}_i\leftarrow c_i$, and node $j$ sets $\hat{x}_j\leftarrow(f_j')^{-1}(f_j'(\hat{x}_j)-f_i'(c_i)+f_i'(\hat{x}_i))$.
\end{algorithmoper}
\end{algorithm}
Notice that Step~1 of PB is identical to that of PE except that each node $i\in\mathcal{V}$ creates two additional variables, $a_i$ and $b_i$, which are used in Step~2 to represent the initial bracket $[a_i,b_i]=[a_j,b_j]=[\min\{\hat{x}_i,\hat{x}_j\},\max\{\hat{x}_i,\hat{x}_j\}]$ for bisection purposes. Step~3 describes execution of the bisection method, where $R\in\mathbb{P}$ denotes the number of bisection rounds, which may be different for each iteration (e.g., a large $R$ may be advisable when $\hat{x}_i$ and $\hat{x}_j$ are very different). Observe that upon completing Step~3, $x_{\text{eq}}\in[a_i,b_i]=[a_j,b_j]\subset[\min\{\hat{x}_i,\hat{x}_j\},\max\{\hat{x}_i,\hat{x}_j\}]$ and $b_i-a_i=b_j-a_j=\frac{1}{2^R}|\hat{x}_j-\hat{x}_i|$, where $x_{\text{eq}}$ denotes the equalized value of $\hat{x}_i$ and $\hat{x}_j$ if PE were used. Moreover, upon completing Step~4, $x_{\text{eq}}\in[\min\{\hat{x}_i,\hat{x}_j\},\max\{\hat{x}_i,\hat{x}_j\}]\subset[a_i,b_i]=[a_j,b_j]$, where $\hat{x}_i$ and $\hat{x}_j$ here represent new values. Therefore, upon completing each iteration $k\in\mathbb{P}$,
\begin{align}
|\hat{x}_{u_1(k)}(k)-\hat{x}_{u_2(k)}(k)|\le\frac{1}{2^R}|\hat{x}_{u_1(k)}(k-1)-\hat{x}_{u_2(k)}(k-1)|,\quad\forall k\in\mathbb{P}.\label{eq:|xhxh|<=12R|xhxh|}
\end{align}
Finally, note that unlike PE which requires two real-number transmissions per iteration, PB requires as many as $3+R$ or $4+R$. However, it allows the nodes to never share their $f_i$'s.
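The bracketing in Steps~2 and~3 admits a compact description. The following
sketch (Python; the function handles and the choice of $R$ are ours, for
illustration) computes the bracket that both nodes maintain, using only the
derivative differences that are actually transmitted:
\begin{verbatim}
def pb_bracket(fp_i, fp_j, xi, xj, R):
    # Steps 2-3 of PB: after R bisection rounds, [a, b] contains the
    # would-be equalized value x_eq and b - a = |xi - xj| / 2**R, while
    # the nodes never reveal f_i or f_j themselves.
    a, b = min(xi, xj), max(xi, xj)
    base_i, base_j = fp_i(xi), fp_j(xj)
    for _ in range(R):
        m = 0.5 * (a + b)
        # node j transmits fp_j(m) - fp_j(xj); node i adds its own term
        if (fp_j(m) - base_j) + (fp_i(m) - base_i) >= 0.0:
            b = m                            # "LEFT"
        else:
            a = m                            # "RIGHT"
    return a, b
\end{verbatim}
Step~4 then moves $\hat{x}_i$ and $\hat{x}_j$ into this bracket while enforcing
\eqref{eq:f'xhf'xh=f'xhf'xh} exactly, which is what yields
\eqref{eq:|xhxh|<=12R|xhxh|}.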
The following theorem establishes the asymptotic convergence of PB under Assumption~\ref{asm:PEnetconn}:
\begin{theorem}\label{thm:PBasymconv}
Consider the use of PB described in Algorithm~\ref{alg:PB}. Suppose Assumptions~\ref{asm:fi} and~\ref{asm:PEnetconn} hold. Then, \eqref{eq:limV=0} and \eqref{eq:limxh=x*} hold.
\end{theorem}
\begin{proof}
See Appendix~\ref{ssec:proofthmPBasymconv}.
\end{proof}
As follows from the above, PB represents an alternative to PE that is useful when nodes are either unable or unwilling to share their $f_i$'s. Although not pursued here, it is straightforward to see that PE and PB may be combined, so that equalizing is used when one of the gossiping nodes can send the other its $f_i$, and approaching is used when none of them can.
\section{Conclusion}\label{sec:concl}
In this paper, based on the ideas of conservation and dissipation, we have developed PE and PB, two non-gradient-based gossip algorithms that enable nodes to cooperatively solve a class of convex optimization problems over networks. Using Lyapunov stability theory and the convexity structure, we have shown that PE and PB are asymptotically convergent, provided that the gossiping pattern is sufficiently rich. We have also discussed several salient features of PE and PB, including their comparison with the subgradient algorithms and their connection with distributed averaging.
\section{Introduction}
One of the most studied problems in extremal combinatorics is the so-called Tur\'an problem originated in the work of Tur\'an \cite{T1941} (for a recent survey see \cite{FS2013}). A basic problem of this sort asks for the maximum possible number of edges $ex(F,G)$ in a subgraph $G'$ of a given graph $G$ that does not contain $F$ as a subgraph.
Much less attention has been paid to the vertex version of this problem, which can be formalized as follows: what is the maximum size $ex_v(F,G)$ of a subset $U$ of vertices of a given graph $G$ such that $G[U]$ does not contain $F$ as a subgraph?
We will consider Tur\'an-type problems for the \textit{$n$-dimensional hypercube} $Q_n$, the graph with vertex set $V_n=\{0,1\}^n$, whose vertices correspond to subsets of an $n$-element set, with edges between vertices that differ in exactly one coordinate.
Edge-Tur\'an problems in the hypercube have attracted a lot of attention. This research was initiated by Erd\H{o}s \cite{erdos1984}, who conjectured $ex(C_4,Q_n)=(1+o(1))n 2^{n-1}$, i.e., any subgraph of $Q_n$ having significantly more than half of the edges of $Q_n$ must contain a copy of $C_4$. This problem is still unsolved. Conlon \cite{conlon} showed, extending earlier results due to Chung \cite{Chu92} and F\"uredi and \"Ozkahya \cite{FurOzk09, FurOzk11}, that $ex(C_{2k},Q_n)=o(n2^n)$ for $k\neq 2,3,5$.
Concerning the vertex Tur\'an problem in the hypercube $Q_n$, it is obvious that we can take half of the vertices of $Q_n$ such that they induce no edges.
Kostochka \cite{K} and later, independently, Johnson and Entringer \cite{JE} showed
$ex_v(C_4,Q_n)=\max_j\{\sum_{i\not\equiv j~\mod 3}\binom{n}{i}\}$. Johnson and Talbot \cite{JT} proved a local stability version of this result. Chung, F\"uredi, Graham, Seymour \cite{CFGS} proved that if $U$ contains more than $2^{n-1}$ vertices, then there is a vertex of degree at least $\frac{1}{2}\log n - \frac{1}{2}\log \log n +\frac{1}{2}$ in $Q_n[U]$. This shows that for any star $S_k$ with $k$ fixed, we have $ex_v(S_k,Q_n)=2^{n-1}$ for large enough $n$. Alon, Krech, and Szab\'o \cite{AKS} investigated the function $ex_v(Q_d,Q_n)$.
\smallskip
Let us note that there is a simple connection between the edge and the vertex Tur\'an problems in the hypercube.
\begin{proposition}
$ex_v(F,Q_n)\le 2^{n-1}+\frac{ex(F,Q_n)}{n}$.
\end{proposition}
\begin{proof} If a subset $U$ of vertices of $Q_n$ has more than $2^{n-1}+\frac{ex(F,Q_n)}{n}$ elements, then, for each of the $n$ edge directions of $Q_n$, at least $|U|-2^{n-1}>\frac{ex(F,Q_n)}{n}$ of the $2^{n-1}$ vertex pairs forming the edges in that direction lie entirely in $U$. Hence $Q_n[U]$ contains more than $\frac{ex(F,Q_n)}{n}$ edges in every direction, thus more than $ex(F,Q_n)$ edges altogether, and therefore a copy of $F$.
\end{proof}
This observation implies that for every tree $T$, we have $ex_v(T,Q_n)=\left(\frac{1}{2}+\mathcal{O}\left(\frac{1}{n}\right)\right)2^n$, using the well-known result from Tur\'an theory which states that $ex(n,T)=\mathcal{O}(n)$ for every tree $T$ (and so $ex(T,Q_n) = \mathcal{O}(2^n)$). Also, together with Conlon's result on the cycles mentioned earlier, we obtain $ex_v(C_k,Q_n)=\left(\frac{1}{2}+o(1)\right)2^n$ for $k\neq 2,3,5$.
\bigskip
In this paper, we consider an oriented version of this problem. There is a natural orientation of the edges of the hypercube. An edge $uv$ means that $u$ and $v$ differ in only one coordinate; if $u$ contains $1$ and $v$ contains $0$ in this coordinate, then we direct the edge from $v$ to $u$. We denote the hypercube $Q_n$ with this orientation by $\overrightarrow{Q_n}$. With this orientation it is natural to forbid oriented subgraphs. We will denote by $ex_v(\overrightarrow{F}, \overrightarrow{Q_n})$ the maximum number of vertices that an $\overrightarrow{F}$-free subgraph of $\overrightarrow{Q_n}$ can have. As vertices of the hypercube correspond to sets, instead of working with subsets of the vertices of $\overrightarrow{Q}_n$ we will consider families $\cG\subseteq 2^{[n]}$ of sets. We will say that $\cG\subseteq 2^{[n]}$ is $\overrightarrow{F}$-free if for the corresponding subset $U$ of vertices of $\overrightarrow{Q}_n$ the induced subgraph $\overrightarrow{Q}_n[U]$ is $\overrightarrow{F}$-free.
For example, there is only one orientation of $C_4$ that embeds into the hypercube; we will denote it by $\overrightarrow{C_4}$. Hence we have $ex_v(\overrightarrow{C_4}, \overrightarrow{Q_n})=ex_v(C_4,Q_n)$, which is known exactly due to the above-mentioned result of Kostochka and of Johnson and Entringer.
However, there are three different orientations of $P_3$, according to how many edges go towards the middle vertex: $\overrightarrow{V_2}$ denotes the orientation with a source (i.e., $\overrightarrow{V_2}$ is the path $abc$ such that the edge $ab$ is directed from $b$ to $a$ and the edge $bc$ is directed from $b$ to $c$).
The directed path $\overrightarrow{P_k}$ is a path on $k$ vertices $v_1,\dots,v_k$ with edges going from $v_i$ to $v_{i+1}$ for every $i<k$. The \emph{height} of a directed graph is the number of vertices of a longest directed path in it.
If we consider the hypercube as the Boolean poset, then each edge of the hypercube goes between a set $A$ and a set $A\cup \{x\}$ for some $x\not\in A$. Then in $\overrightarrow{Q_n}$ the corresponding directed edge goes from $A$ to $A\cup \{x\}$. A directed acyclic graph $\overrightarrow{F}$ can be considered as a poset $F$; we will say that $F$ is the \textit{poset of} $\overrightarrow{F}$. The poset corresponding to a directed tree is said to be a \textit{tree poset}. Forbidding copies of a poset in a family of sets in this order-preserving sense has an extensive literature (see \cite{griggs2016progress} for a survey on the theory of forbidden subposets). We say $\cP\subset 2^{[n]}$ is a \textit{copy} of $P$ if there exists a bijection $f:P\rightarrow \cP$ such that $p<p'$ implies $f(p)\subset f(p')$. We say that $\cF\subset 2^{[n]}$ is \textit{P-free}, if there is no $\cP \subset \cF$ that is a copy of $P$. Observe that if $P$ is the poset of the directed acyclic graph $\overrightarrow{F}$, then any $P$-free family is $\overrightarrow{F}$-free.
The oriented version of the vertex Tur\'an problem in the hypercube corresponds to the following variant of the forbidden subposet problem. We say $\cP\subset 2^{[n]}$ is a \emph{cover-preserving copy} of $P$ if there exists a bijection $f:P\rightarrow \cP$ such that if $p$ covers $p'$ in $P$, then $f(p)$ covers $f(p')$ in the Boolean poset. Thus it is not surprising that we can use techniques and results from the theory of forbidden subposet problems in our setting.
In this paper, we consider vertex Tur\'an problems for directed trees. Our main result determines the asymptotic value of the vertex Tur\'an number $ex_v(\overrightarrow{T},\overrightarrow{Q_n})$ for any directed tree $\overrightarrow{T}$.
\begin{theorem}\label{dirtree}
For any directed tree $\overrightarrow{T}$ of height $h$, we have
$$ex_v(\overrightarrow{T},\overrightarrow{Q_n})=\left(\frac{h-1}{h}+o(1)\right)2^n.$$
\end{theorem}
Below we obtain the exact value of the vertex Tur\'an number for some special directed trees (namely $\overrightarrow{V_2}$ and $\overrightarrow{P_k}$).
\begin{theorem}\label{V}
$$ex_v(\overrightarrow{V_2},\overrightarrow{Q_n})=2^{n-1}+1.$$
\end{theorem}
It would be natural to consider the following generalization of $\overrightarrow{V_2}$: let $\overrightarrow{V_r}$ denote the star with $r$ leaves, with all edges oriented towards the leaves. Note that if one takes the elements of the $r$ highest levels of the Boolean poset and every second level below them, then the corresponding family in $\overrightarrow{Q_n}$ is $\overrightarrow{V_r}$-free. Computing the size of this family shows $ex_v(\overrightarrow{V_r},\overrightarrow{Q_n})\ge 2^{n-1}+\Omega(n^{r-2})$. We conjecture that $ex_v(\overrightarrow{V_r},\overrightarrow{Q_n})=2^{n-1}+\Theta(n^{r-2})$ holds for every $r\ge 3$.
\begin{theorem}\label{path}
For any pair $k,n$ of integers with $k\le n$ we have
$$ex_v(\overrightarrow{P_k},\overrightarrow{Q_n})=\max_{j\in [k]}\left\{\sum_{i\not\equiv j ~\text{mod}\ k}\binom{n}{i}\right\}.$$
\end{theorem}
\section{Proofs}
\subsection{Proof of Theorem \ref{dirtree}}
We follow the lines of a proof of Bukh \cite{Buk09} that shows that if $T$ is a tree poset with $h(T)=k$ and $\cF\subseteq 2^{[n]}$ is a $T$-free family of sets, then $|\cF|\le (k-1+O(\frac{1}{n}))\binom{n}{\lfloor n/2\rfloor}$ holds.
The proof of this theorem consists of several lemmas. Some of them we will state and use in their original form, some others we will state and prove in a slightly altered way so that we can apply them in our setting. First we need several definitions. For a family $\cF\subseteq 2^{[n]}$, its \textit{Lubell-function} $$\lambda_n(\cF)=\sum_{F\in \cF}\frac{1}{\binom{n}{|F|}}=\frac{1}{n!}\sum_{F\in \cF}|F|!(n-|F|)!$$ is the average number of sets in $\cF$ that a maximal chain $\cC$ in $2^{[n]}$ contains. A poset $P$ is called \textit{saturated} if all its maximal chains have length $h(P)$. For any poset $T$ its \textit{opposite poset} $T'$ consists of the same elements as $T$ with $t\le_{T'} t'$ if and only if $t'\le_T t$. For a family $\cF\subseteq 2^{[n]}$ of sets, its complement family is $\overline{\cF}=\{[n]\setminus F:F\in\cF\}$. Clearly, $\cF$ contains a copy of $P$ if and only if $\overline{\cF}$ contains a copy of $P'$ and $\lambda_n(\cF)=\lambda_n(\overline{\cF})$.
\begin{lemma}[Bukh \cite{Buk09}]\label{inducedsaturated}
Every tree poset $T$ is an induced subposet of a saturated tree poset $T'$ with $h(T)=h(T')$.
\end{lemma}
An \emph{interval} in a poset $P$ is a set of the form $[x,y] = \{z \in P : x \le z \le y\}.$
\begin{lemma}[Bukh \cite{Buk09}]\label{saturatedgrow}
If $T$ is a saturated tree poset that is not a chain, then there exists $t \in T$ that is a leaf in the Hasse diagram $H(T)$ of $T$, and there exists an interval $I\subset T$ containing $t$ such that $|I|<h(T)$ holds, and $T\setminus I$ is a saturated tree poset with $h(T)=h(T\setminus I)$.
\end{lemma}
From now on we fix a tree poset $T$ and we denote its height by $k$. We say that a chain in $2^{[n]}$ is \textit{fat} if it contains $k$ members of $\cF$.
\begin{lemma}\label{bukh1}
If $\cF\subseteq \bigcup_{j=i}^{i+k-1} \binom{[n]}{j}$ is a family with $\lambda_n(\cF)\ge k-1+\varepsilon$, then there are at least $(\varepsilon/k)n!$ fat chains.
\end{lemma}
\begin{proof}
Let $C_i$ denote the number of maximal chains that contain exactly $i$ sets from $\cF$. As $\cF\subseteq \bigcup_{j=i}^{i+k-1} \binom{[n]}{j}$, we have $C_i=0$ for all $i>k$. Then
counting the number of pairs $(F, \cC)$ with $\cC$ being a maximal chain and $F \in \cF \cap \cC$,
in two different ways, we obtain
\[
\sum_{i=0}^n iC_i=\lambda_n(\cF)\,n!\ge (k-1+\varepsilon)n!.
\]
This, and $\sum_iC_i=n!$ imply
\[
kC_k=\sum_{i\ge k}iC_i\ge \sum_{i=0}^niC_i-(k-1)\sum_{i<k}C_i\ge \varepsilon n!.
\]
Therefore the number of fat chains in $\cF$ is $C_k\ge (\varepsilon/k)n!$.
\end{proof}
\begin{lemma}\label{bukhmain}
Let $T$ be a saturated tree poset of height $k$. Suppose $\cF\subseteq \cup_{j=i}^{i+k-1} \binom{[n]}{j}$ is a family with $n/4\le i \le 3n/4$. Moreover, suppose $\cL$ is a
family of
fat chains with
$|\cL| >\frac{4\binom{|T|+1}{2}}{n}n!$.
Then there is a copy of $T$ in $\cF$ that contains only sets that are contained in some fat chain in $\cL$.
\end{lemma}
\begin{proof}
We proceed by induction on $|T|$. If $T$ is a chain, then the $k$ sets in any element of $\cL$
form a copy of $T$. In particular, it gives the base case of the induction. So suppose $T$ is not a chain. Then applying Lemma \ref{saturatedgrow}, there exists a leaf $t$ in $T$ and interval $I \subseteq T$ containing $t$ such that $h(T\setminus I)=k$ and $T\setminus I$ is a saturated tree poset. Our aim is to use induction to obtain a copy of $T\setminus I$ in $\cF$ that can be extended to a copy of $T$. Finding a copy of $T\setminus I$ is immediate, but in order to be able to extend it, we need a copy satisfying some additional properties, described later.
By passing to the opposite poset $T'$ of $T$ and considering $\overline{\cF}$, we may assume that $t$ is a minimal element of $T$. There exists a maximal chain $C$ in $T$ that contains $I$, and we have $|C|=k$ as $T$ is saturated. Then $s:=|C\setminus I|=k-|I|\ge 1$.
We need several definitions. Let $F_1\supset F_2\supset \dots \supset F_s$ be a chain with $|F_j|=i+k-j$ for $j=1,\dots,s$. Then this chain is
a \textit{bottleneck} if there exists a family $\cS\subset \cF$ with $|\cS|<|T|$ such that for every fat chain $F_1 \supset F_2\supset \dots\supset F_s\supset F_{s+1}\supset\dots\supset F_k$ in $\cL$ we have $\cS\cap \{F_{s+1},\dots, F_k\}\neq \emptyset$. Such an $\cS$ is a \textit{witness} to the fact that $F_1,\dots, F_s$ is a bottleneck (and we assume all sets of the witness are contained in $F_s$). We say that a fat chain is \textit{bad} if its top $s$ sets form a bottleneck. A fat chain is \textit{good} if it is not bad. Observe that if there is a copy $\cF_{T\setminus I}$ of $T\setminus I$ consisting of sets of good fat chains, then we can extend $\cF_{T\setminus I}$ to a copy of $T$. Indeed, as the sets $F'_1,\dots,F'_s$ representing $C\setminus I$ in $\cF_{T\setminus I}$ do not form a bottleneck and $|\cF_{T\setminus I}|<|T|$, there must be a good fat chain $F'_1\supset \dots\supset F'_s\supset F'_{s+1}\supset \dots\supset F'_k$ such that $F'_{s+1},\dots,F'_k\notin \cF_{T\setminus I}$, therefore $\cF_{T\setminus I}\cup \{F'_{s+1},\dots, F'_k\}$ is a copy of $T$. Therefore all we need to prove is that there are enough good fat chains to obtain a copy of $T\setminus I$ by induction.
Let us bound the number of bad fat chains. If $|\cC\cap \cF|<s$, then $\cC$ cannot be bad. We partition maximal chains in $2^{[n]}$ according to their $s$th largest set $F_{s}$ from $\cF$. As the top $s$ sets must form a bottleneck, there is a witness $\cS$ to this fact. This means that if $\cC$ is bad, then $\cC$ must meet $\cS$, whose elements are all contained in $F_{s}$. But as $|\cS|<|T|$ and all sets of $2^{F_{s}}\cap \cF$ have size between $n/4$ and $3n/4$, the proportion of those chains that do meet $\cS$ is at most $4|T|/n$ (any proper non-empty subset of $F_{s}$ is contained in at most a $1/|F_{s}|$ proportion of the maximal chains going through $F_{s}$). This holds independently of the choice of $F_{s}$, thus the number of bad fat chains is at most $\frac{4|T|}{n}n!$.
So the number of good fat chains is at least
\[
|\cL|-\frac{4|T|}{n}n! \ge \frac{4(\binom{|T|+1}{2}-|T|)}{n}n!=\frac{4\binom{|T|}{2}}{n}n!.
\]
As $|T\setminus I|<|T|$, the induction hypothesis implies the existence of a copy of $T\setminus I$ among the
sets contained in good fat chains, as required.
\end{proof}
\vskip 0.3truecm
The next lemma essentially states that if a $T$-free family is contained in the union of $k$ consecutive levels, then its size is asymptotically at most $k-1$ times the size of the largest of these levels. Formally, let $b(i)=b_{k,n}(i)=\max\{\binom{n}{j}:i\le j \le i+k-1\}$. So if $i\le n/2-k+1$, then $b(i)=\binom{n}{i+k-1}$, if $i\ge n/2$, then $b(i)=\binom{n}{i}$, while if $n/2-k+1<i<n/2$, then $b(i)=\binom{n}{\lfloor n/2\rfloor}$.
\begin{lemma}\label{bukhuj} If $T$ is a tree poset of height $k$, then there exists $n_0$ such that for $n>n_0$ and $n/4\le i\le 3n/4-k$, any $\cF\subset \bigcup_{j=i}^{i+k-1} \binom{[n]}{j}$ of size at least $\left(k-1+\frac{4k|T|^2}{n}\right)b(i)$ contains a copy of $T$.
\end{lemma}
\begin{proof} By Lemma \ref{inducedsaturated} we may suppose that $T$ is a saturated tree poset. Assume $\cF\subseteq \bigcup_{j=i}^{i+k-1} \binom{[n]}{j}$ is a $T$-free family that contains at least $\left(k-1+\frac{4k|T|^2}{n}\right)b(i)$ sets. As every member of $\cF$ lies on one of the $k$ levels $i,\dots,i+k-1$, each of size at most $b(i)$, this implies $\lambda_n(\cF)\ge k-1+\frac{4k|T|^2}{n}$.
Let $\varepsilon=4k|T|^2/n$. Then Lemma \ref{bukh1} yields at least $(\varepsilon/k)n!=\frac{4|T|^2}{n}n!\ge \frac{4\binom{|T|+1}{2}}{n}n!$ fat chains, so we can apply Lemma \ref{bukhmain} with $k=h(T)$ to obtain a copy of $T$ in $\cF$, contradicting the $T$-free property of $\cF$.
\end{proof}
With Lemma \ref{bukhuj} in hand, we can now prove Theorem \ref{dirtree}. Let us consider a $\overrightarrow{T}$-free family $\cF$. Let $T$ be the poset of $\overrightarrow{T}$ and let $T^*$ be the saturated poset containing $T$ with $h(T)=h(T^*)=k$, as guaranteed by Lemma \ref{inducedsaturated}. For any integer $0\le i \le n-k+1$, let $\cF_i=\{F\in \cF: i\le |F|\le i+k-1\}$. Observe that the $\overrightarrow{T}$-free property of $\cF$ implies that $\cF_i$ is $T^*$-free for every $i$. Note that every $F\in \cF$ belongs to exactly $k$ families $\cF_i$ unless $|F|<k-1$ or $|F|>n-k+1$. It is well-known that $\left|\binom{[n]}{\le n/4}\cup \binom{[n]}{\ge 3n/4}\right|=o\left(\frac{1}{n}2^{n}\right)$, therefore using Lemma \ref{bukhuj} we obtain
\[
k|\cF|-o\left(\frac{1}{n}2^n\right)\le \sum_{i=n/4}^{3n/4}|\cF_i|\le \left(k-1+\frac{k4|T|^2}{n}\right)\sum_{i=n/4}^{3n/4}b(i)\le \left(k-1+\frac{k4|T|^2}{n}\right)\left(2^n+k\binom{n}{\lfloor n/2\rfloor}\right).
\]
After rearranging, we get $|\cF|\le\left(\frac{k-1}{k}+o(1)\right)2^n$.
\subsection{Proof of Theorem \ref{V}}
To prove the lower bound, we show a $\overrightarrow{V_2}$-free family in $\overrightarrow{Q_n}$ of size $2^{n-1}+1$. Simply take every second level in the hypercube starting from the $(n-1)$st level and also take the vertex corresponding to $[n]$.
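
For small $n$, both the size of this family and its $\overrightarrow{V_2}$-freeness can be confirmed by brute force. The following sketch (our illustration, not part of the proof; all names are ours) treats a copy of $\overrightarrow{V_2}$ as a set together with two of its covers:
\begin{verbatim}
# Sketch (illustration only): verify that taking every second level from
# level n-1 downwards, plus the set [n], gives a V_2-free family of size
# 2^(n-1) + 1 in the directed hypercube.
from itertools import combinations

def construction(n):
    family = {frozenset(range(n))}                 # the vertex [n]
    for size in range(n - 1, -1, -2):              # levels n-1, n-3, ...
        family.update(frozenset(c) for c in combinations(range(n), size))
    return family

def contains_v2(family, n):
    # a copy of V_2: some F with two covers F + {x}, F + {y} in the family
    return any(
        sum((f | {x}) in family for x in set(range(n)) - f) >= 2
        for f in family)

for n in range(2, 10):
    fam = construction(n)
    assert len(fam) == 2 ** (n - 1) + 1 and not contains_v2(fam, n)
\end{verbatim}
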
We prove the upper bound by induction on $n$ (it is easy to see the base case $n=2$). We will need the following simple claim.
\begin{claim}\label{vclaim}
If $\cF\subset 2^{[n]}$ is a maximal $\overrightarrow{V_2}$-free family, then $\cF$ contains the set $[n]$ and at least one set of size $n-1$.
\end{claim}
\begin{proof}[Proof of Claim]
First note that $[n]$ can be added to any $\overrightarrow{V_2}$-free family as there is only one subset of $[n]$ of size $n$. Also, if $\cF$ does not contain any set of size $n-1$, then one such set $S$ can be added to $\cF$. Indeed, if we add $S$, no copy of $\overrightarrow{V_2}$ having sets of size $n-1$ and $n$ will be created because $[n]$ is the only set of size $n$ in $\cF\cup \{S\}$. Furthermore, no copy of $\overrightarrow{V_2}$ having sets of size $n-2$ and $n-1$ will be created as $S$ is the only set of size $n-1$ in $\cF\cup \{S\}$.
\end{proof}
Now we are ready to prove Theorem \ref{V}. Let $\cF\subset 2^{[n]}$ be a $\overrightarrow{V_2}$-free family. For some $x\in [n]$, define
$$\cF_x^-=\{F~|~F\in\cF,~x\not\in F\}~~~\text{and}~~~\cF_x^+=\{F\backslash\{x\}~|~F\in\cF,~x\in F\}.$$
Then $\cF_x^-,\cF_x^+\subset 2^{[n]\backslash\{x\}}$ and they are also $\overrightarrow{V_2}$-free. By induction, we have
$$|\cF|=|\cF_x^-|+|\cF_x^+|\le 2^{n-2}+1+2^{n-2}+1=2^{n-1}+2.$$
Assume that $|\cF|=2^{n-1}+2.$ Then $|\cF_x^-|=|\cF_x^+|=2^{n-2}+1$ must hold for all $x\in [n]$. In particular, $\cF_x^-$ is a $\overrightarrow{V_2}$-free family of maximum size in $2^{[n]\backslash\{x\}}$ and is therefore maximal, so Claim \ref{vclaim} implies that $[n]\backslash\{x\}$ and at least one set of size $n-2$ are in $\cF$. This holds for all $x\in [n]$, so all sets of size $n-1$, and at least one set of size $n-2$, are in $\cF$. However, the size-$(n-2)$ set together with its two covers of size $n-1$ would form a forbidden $\overrightarrow{V_2}$ in $\cF$, contradicting our original assumption on $\cF$. This proves that $|\cF|\le 2^{n-1}+1$.
\subsection{Proof of Theorem \ref{path}}
Let $U$ be a set of vertices in $Q_n$ such that the subgraph of $Q_n$ induced by $U$ (i.e., $Q_n[U]$) is $\overrightarrow{P_k}$-free. Let $\cF \subset 2^{[n]}$ be a family of subsets corresponding to $U$.
First, we will introduce a weight function. For every $F\in\cF$, let $w(F)=\binom{n}{|F|}$. For a maximal chain $\cC$, let $w(\cC)=\sum_{F\in \cC\cap \cF}w(F)$ denote the weight of $\cC$. Let $\bC_n$ denote the set of all maximal chains in $2^{[n]}$. Then
$$\frac{1}{n!}\sum_{\cC\in\bC_n} w(\cC)=\frac{1}{n!}\sum_{\cC\in\bC_n}\sum_{F\in \cC\cap \cF}w(F)=\frac{1}{n!}\sum_{F\in \cF} |F|!(n-|F|)!w(F)=|\cF|.$$
This means that the average weight of the maximal chains equals the size of $\cF$. Consequently, any upper bound that is valid for the weight of every chain is also an upper bound on $|\cF|$.
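
This double-counting identity is easy to confirm numerically; the sketch below (ours, purely illustrative) enumerates all $n!$ maximal chains for a small $n$ and a random family and checks that the total weight is $|\cF|\cdot n!$:
\begin{verbatim}
# Sketch: check that summing w(C) over all n! maximal chains gives
# |F| * n!, where w(F) = binom(n, |F|).
from itertools import combinations, permutations
from math import comb, factorial
import random

n = 5
all_sets = [frozenset(c) for k in range(n + 1)
            for c in combinations(range(n), k)]
family = set(random.sample(all_sets, 12))

total = 0
for perm in permutations(range(n)):    # each permutation = a maximal chain
    chain = [frozenset(perm[:k]) for k in range(n + 1)]
    total += sum(comb(n, len(f)) for f in chain if f in family)

assert total == len(family) * factorial(n)
\end{verbatim}
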
Our assumption that there is no $\overrightarrow{P_k}$ means that no $k$ consecutive members of a chain all belong to $\cF$. For a given chain $\cC$, let $a_1, a_2,\dots, a_t$ denote the sizes of those elements of $\cC$ that are not in $\cF$. Then $0\le a_1<a_2<\dots <a_t\le n$, $a_1\le k-1$, $n-k+1\le a_t$ and $a_{i+1}-a_{i}\le k$ for all $i=1,2,\dots, t-1$. The weight of the chain $\cC$ is
$$w(\cC)=2^n-\sum_{i=1}^t \binom{n}{a_i}.$$
We claim that this is maximized when the numbers $\{a_1, a_2,\dots a_t\}$ are all the numbers between 0 and $n$ that give the same residue when divided by $k$.
Assume that $w(\cC)$ is maximized by a set $\{a_1, a_2,\dots, a_t\}$ not of this form. Then there is an index $i$ such that $a_{i+1}-a_{i}<k$.
If $a_i\le \frac{n}{2}$, then we can decrease each of the numbers $a_1, a_2,\dots, a_i$ by 1. (If $a_1$ becomes $-1$, then we simply remove that number.) The resulting set of numbers still satisfies the conditions and $w(\cC)$ increases, a contradiction. Otherwise $a_{i+1}> \frac{n}{2}$ must hold, and we can similarly increase each of the numbers $a_{i+1}, a_{i+2},\dots, a_t$ by 1 (removing $a_t$ if it becomes $n+1$) to achieve the same result. We proved that
$$w(\cC)\le 2^n-\min_{j\in [k]}\sum_{i\equiv j ~\text{mod}\ k}\binom{n}{i} =\max_{j\in [k]}\left\{\sum_{i\not\equiv j ~\text{mod}\ k}\binom{n}{i}\right\}$$
holds for any full chain $\cC$. Therefore the same upper bound holds for $|\cF|$ as well.
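
The resulting bound is straightforward to evaluate for concrete parameters; the following helper (an illustration of ours, not used in the proof) computes it directly:
\begin{verbatim}
# Sketch: evaluate max_j sum_{i != j (mod k)} binom(n, i), i.e. the
# upper bound derived above, for concrete values of n and k.
from math import comb

def path_bound(n, k):
    return max(sum(comb(n, i) for i in range(n + 1) if i % k != j)
               for j in range(k))

# for k = 2 both residue classes have total weight 2^(n-1), so the
# bound specializes to 2^(n-1)
print(path_bound(20, 2), path_bound(20, 3))
\end{verbatim}
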
\subsection*{Acknowledgement}
Research of D. Gerbner was supported by the J\'anos Bolyai Research Fellowship of the Hungarian Academy of Sciences and the National Research, Development and Innovation Office -- NKFIH under the grant K 116769.
Research of A. Methuku was supported by the Hungarian Academy of Sciences and the National Research, Development and Innovation Office -- NKFIH under the grant K 116769.
Research of D.T. Nagy was supported by the \'{U}NKP-17-3 New National Excellence Program of the Ministry of Human Capacities and by the National Research, Development and Innovation Office -- NKFIH under the grant K 116769.
Research of B. Patk\'os was supported by the National Research, Development and Innovation Office -- NKFIH under the grants SNN 116095 and K 116769.
Research of M. Vizer was supported by the National Research, Development and Innovation Office -- NKFIH under the grant SNN 116095.
All of the authors were funded by the Taiwanese-Hungarian Mobility Program of the Hungarian Academy of Sciences.
\label{app:cumsum-extraction}
\vspace{1cm}
\begin{theorem}
\label{theorem:multi-cumsum}
Let $\mathfrak{X}, C \in \mathbb{R}^{N_1 \times N_2 \times \cdots \times N_M}$ with $\mathfrak{X}$ being an $M^\text{th}$-order data tensor and $C$ being a tensor of cumulative sums given by
\begin{equation}
C(k_1, k_2, \dots, k_M) \coloneqq
\sum_{i_1 = 1}^{k_1} \sum_{i_2 = 1}^{k_2} \cdots \sum_{i_M = 1}^{k_M} \mathfrak{X}(i_1, i_2, \dots, i_M) .
\end{equation}
The sum over all elements of $\mathfrak{X}$ in the range $\left( j_0^{(1)}, j_1^{(1)} \right] \times \left( j_0^{(2)}, j_1^{(2)} \right] \times \cdots \times \left( j_0^{(M)}, j_1^{(M)} \right]$ can then be reconstructed from $C$ with $2^M - 1$ additions/subtractions according to
\begin{equation}
\begin{split}
& \sum_{i_1 = j_0^{(1)} + 1}^{j_1^{(1)}}
\sum_{i_2 = j_0^{(2)} + 1}^{j_1^{(2)}} \cdots
\sum_{i_M = j_0^{(M)} + 1}^{j_1^{(M)}} \mathfrak{X}(i_1, i_2, \dots, i_M) \\
= & \sum_{(i_1, i_2, \dots, i_M) \in \{0,1\}^M} (-1)^{M - \left( \sum_{m=1}^{M} i_m \right)}
\cdot C \left( j_{i_1}^{(1)}, j_{i_2}^{(2)}, \dots, j_{i_M}^{(M)} \right) .
\end{split}
\label{eq:multi-cumsum-recons}
\end{equation}
\end{theorem}
\begin{proof}
For the basic case of $M = 1$ it can easily be seen that
\[
\sum_{i=j_0+1}^{j_1} \mathfrak{X}(i)
= \sum_{i=1}^{j_1} \mathfrak{X}(i) - \sum_{i=1}^{j_0} \mathfrak{X}(i)
= C(j_1) - C(j_0)
= \sum_{i \in \{0,1\}} (-1)^{1-i} \cdot C(j_i) .
\]
Now assume that Theorem \ref{theorem:multi-cumsum} holds for $1 \le M' < M$. Applying it for $M-1$ gives
\[
\begin{split}
& \sum_{i_1 = j_0^{(1)} + 1}^{j_1^{(1)}}
\sum_{i_2 = j_0^{(2)} + 1}^{j_1^{(2)}} \cdots
\sum_{i_M = j_0^{(M)} + 1}^{j_1^{(M)}} \mathfrak{X}(i_1, i_2, \dots, i_M) \\
= & \sum_{i_M = j_0^{(M)}+1}^{j_1^{(M)}} \left(
\sum_{(i_1, i_2, \dots, i_{M-1}) \in \{0,1\}^{M-1}}
(-1)^{M - 1 - \left( \sum_{m=1}^{M-1} i_m \right)}
\cdot C \left(
j_{i_1}^{(1)}, j_{i_2}^{(2)}, \dots, j_{i_{M-1}}^{(M-1)}, i_M
\right)
\right) \\
= & \sum_{(i_1, i_2, \dots, i_{M-1}) \in \{0,1\}^{M-1}} \left(
(-1)^{M - 1 - \left( \sum_{m=1}^{M-1} i_m \right)}
\cdot \sum_{i_M = j_0^{(M)}+1}^{j_1^{(M)}} C \left(
j_{i_1}^{(1)}, j_{i_2}^{(2)}, \dots, j_{i_{M-1}}^{(M-1)}, i_M
\right)
\right) .
\end{split}
\]
Since the first $M-1$ indices of $C \left( j_{i_1}^{(1)}, j_{i_2}^{(2)}, \dots, j_{i_{M-1}}^{(M-1)}, i_M \right)$ are fixed in the scope of the inner sum and only the last index varies, the basic case for $M=1$ can be applied to that inner sum expression, transforming the right-hand side of the equation to
\[
\begin{split}
& \sum_{(i_1, \dots, i_{M-1}) \in \{0,1\}^{M-1}} \left(
(-1)^{M - 1 - \left( \sum_{m=1}^{M-1} i_m \right)}
\cdot \sum_{i_M \in \{0,1\}} (-1)^{1-i_M} \cdot C \left(
j_{i_1}^{(1)}, \dots, j_{i_M}^{(M)}
\right)
\right) \\
= & \sum_{(i_1, i_2, \dots, i_M) \in \{0,1\}^M} (-1)^{M - \left( \sum_{m=1}^{M} i_m \right)}
\cdot C \left( j_{i_1}^{(1)}, j_{i_2}^{(2)}, \dots, j_{i_M}^{(M)} \right) .
\end{split}
\]
\end{proof}
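
The statement can also be validated numerically. The sketch below (ours, for illustration) builds $C$ with one cumulative sum per mode and recovers a range sum via $2^M$ signed lookups, where a 1-based index of $0$ denotes an empty prefix and contributes nothing:
\begin{verbatim}
# Sketch: numerical check of the theorem for M = 3 with NumPy.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
C = X
for axis in range(X.ndim):            # cumulative sums along every mode
    C = np.cumsum(C, axis=axis)

def range_sum(C, lo, hi):
    """Sum of X over the 1-based half-open ranges (lo[m], hi[m]]."""
    M = C.ndim
    total = 0.0
    for bits in product((0, 1), repeat=M):
        idx = tuple(hi[m] if bits[m] else lo[m] for m in range(M))
        if 0 in idx:                  # C with a zero index is an empty sum
            continue
        total += (-1) ** (M - sum(bits)) * C[tuple(j - 1 for j in idx)]
    return total

lo, hi = (1, 0, 2), (3, 4, 6)
assert np.isclose(range_sum(C, lo, hi), X[1:3, 0:4, 2:6].sum())
\end{verbatim}
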
\clearpage
\section{North Sea Storm Detections}
\label{app:coastdat-detections}
Each heatmap shows the state of the three variables at the middle of one of the top 5 detected time frames. The static red box marks the spatial subset of the data used for the experiment described in \cref{sec:exp-storms}.
Heatmaps are best viewed in color.
Animated heatmaps for more detections can be found on our web page: \url{http://www.inf-cv.uni-jena.de/libmaxdiv_applications.html}. \\
\noindent\includegraphics[width=\linewidth]{coastdat-top5} \\
\clearpage
{
\noindent
\fontsize{10pt}{11pt}\selectfont
\begin{tabular}{c|lll}
\toprule
\# & Timeframe & Score & Historical Storm \\
\midrule
1 & 1962-02-16 05:00 -- 1962-02-18 06:00 & 831.635 & Hamburg-Flut (Feb 16-17) \\
2 & 1990-12-11 22:00 -- 1990-12-13 10:00 & 824.840 & Storm 1990/Dec (Dec 12) \\
3 & 1965-02-13 01:00 -- 1965-02-15 23:00 & 797.781 & \\
4 & 2000-01-29 12:00 -- 2000-01-31 02:00 & 796.528 & Cyclone Kerstin (Jan 29-31) \\
5 & 1981-11-23 19:00 -- 1981-11-25 22:00 & 745.951 & North Frisian Flood (Nov 24) \\
6 & 1989-02-13 11:00 -- 1989-02-16 10:00 & 714.684 & Storm 1989/Feb (Feb 14) \\
7 & 1988-02-28 04:00 -- 1988-03-02 03:00 & 710.495 & \\
8 & 1973-12-12 16:00 -- 1973-12-15 15:00 & 673.886 & Storm 1973/Dec (2) (Dec 13-15) \\
9 & 1998-12-26 00:00 -- 1998-12-28 05:00 & 658.592 & Cyclone Stephen (Dec 26-27) \\
10 & 1984-01-02 15:00 -- 1984-01-05 14:00 & 658.306 & \\
11 & 1977-11-13 00:00 -- 1977-11-15 23:00 & 592.562 & \\
12 & 1980-02-26 00:00 -- 1980-02-28 23:00 & 573.913 & \\
13 & 1999-02-04 15:00 -- 1999-02-07 14:00 & 572.603 & Storm 1999/Feb (Feb 05) \\
14 & 2006-10-31 07:00 -- 2006-11-01 21:00 & 560.806 & Cyclone Britta (Oct 31 - Nov 01) \\
15 & 1995-01-09 21:00 -- 1995-01-12 20:00 & 554.777 & \\
16 & 1983-01-17 20:00 -- 1983-01-20 14:00 & 545.856 & Storm 1983/Jan (Jan 17-20) \\
17 & 1991-10-17 00:00 -- 1991-10-19 23:00 & 537.879 & \\
18 & 1996-11-05 07:00 -- 1996-11-07 05:00 & 519.837 & Storm 1996/Nov (Nov 05-07) \\
19 & 1976-01-20 10:00 -- 1976-01-23 02:00 & 508.532 & Storm 1976/Jan (2) (Jan 21) \\
20 & 1993-01-24 02:00 -- 1993-01-27 01:00 & 506.250 & Storm 1993/Jan (2) (Jan 22-25) \\
21 & 1973-11-19 05:00 -- 1973-11-20 16:00 & 494.595 & Storm 1973/Nov (3) (Nov 19-20) \\
22 & 1992-12-23 15:00 -- 1992-12-26 14:00 & 491.438 & \\
23 & 1977-12-29 22:00 -- 1977-12-31 17:00 & 489.287 & \\
24 & 2004-02-07 07:00 -- 2004-02-09 23:00 & 485.346 & Cyclone Ursula (Feb 07-08) \\
25 & 1984-01-12 01:00 -- 1984-01-15 00:00 & 485.231 & Storm 1984/Jan (Jan 14) \\
26 & 1991-01-04 19:00 -- 1991-01-07 18:00 & 471.642 & Storm Undine (Jan 02-09) \\
27 & 1973-11-11 03:00 -- 1973-11-14 02:00 & 471.398 & Storm 1973/Nov (1) (Nov 13-14) \\
28 & 2004-11-17 09:00 -- 2004-11-20 08:00 & 460.155 & \\
29 & 1973-12-04 09:00 -- 1973-12-07 08:00 & 445.639 & Storm 1973/Dec (1) (Dec 06-07) \\
30 & 1961-03-26 13:00 -- 1961-03-29 01:00 & 423.340 & \\
31 & 1994-01-28 06:00 -- 1994-01-31 05:00 & 420.520 & Cyclone Lore (Jan 27-28) \\
32 & 1980-04-19 03:00 -- 1980-04-21 18:00 & 420.186 & \\
33 & 1999-12-23 18:00 -- 1999-12-26 05:00 & 418.482 & Cyclone Lothar (Dec 25-26) \\
34 & 1988-10-02 07:00 -- 1988-10-05 02:00 & 412.905 & \\
35 & 1970-10-19 02:00 -- 1970-10-22 01:00 & 411.187 & \\
36 & 2007-01-10 20:00 -- 2007-01-13 18:00 & 407.993 & Cyclone Franz (Jan 11) \\
37 & 2007-11-07 11:00 -- 2007-11-10 10:00 & 402.426 & Cyclone Tilo (Nov 06-11) \\
38 & 1990-09-19 07:00 -- 1990-09-22 06:00 & 397.632 & \\
39 & 1993-02-19 00:00 -- 1993-02-21 23:00 & 387.598 & Storm 1993/Feb (Feb 20-21) \\
40 & 1998-10-24 11:00 -- 1998-10-27 08:00 & 382.914 & Cyclone Xylia (Oct 27-28) \\
41 & 2003-12-12 15:00 -- 2003-12-15 14:00 & 377.435 & Cyclone Fritz (Dec 13-15) \\
42 & 1991-12-23 06:00 -- 1991-12-26 03:00 & 374.911 & \\
43 & 2002-02-19 17:00 -- 2002-02-22 16:00 & 374.026 & Storm 2002/Feb (1) (Feb 21-23) \\
44 & 1997-03-08 12:00 -- 1997-03-11 11:00 & 371.758 & \\
45 & 1959-02-17 02:00 -- 1959-02-20 01:00 & 369.272 & \\
46 & 1974-12-11 06:00 -- 1974-12-14 05:00 & 365.459 & \\
47 & 1994-12-06 02:00 -- 1994-12-09 01:00 & 358.443 & \\
48 & 2001-12-20 08:00 -- 2001-12-23 07:00 & 357.280 & \\
49 & 1992-11-18 14:00 -- 1992-11-21 02:00 & 343.076 & \\
50 & 2006-12-29 22:00 -- 2007-01-01 21:00 & 340.920 & Cyclone Karla (Dec 30-31) \\
\bottomrule
\end{tabular}
}
\clearpage
\section{Low Pressure Area Detections}
\label{app:slp-detections}
Each row shows the heatmap of Sea Level Pressure at the beginning, the middle, and the end of the detection. The red box marks the detected area. Details on this experiment can be found in \cref{sec:exp-slp}.
Heatmaps are best viewed in color. An animated video showing all detections can be found on our web-page: \url{http://www.inf-cv.uni-jena.de/libmaxdiv_applications.html}.
{
\noindent
\centering
\def2{2}
\begin{tabular}{ @{}>{\centering\scriptsize}m{0.04\linewidth} @{} >{\centering}m{0.28\linewidth} @{\hspace{0.03\linewidth}} >{\centering}m{0.28\linewidth} @{\hspace{0.03\linewidth}} >{\centering\arraybackslash}m{0.28\linewidth} @{\hspace{0.03\linewidth}} }
& \textbf{Start} & \textbf{Middle} & \textbf{End} \\
\rotatebox{90}{1996-01-06 -- 1996-01-15} & \includegraphics[width=\linewidth]{slp/slp_det_1_start} & \includegraphics[width=\linewidth]{slp/slp_det_1_middle} & \includegraphics[width=\linewidth]{slp/slp_det_1_end} \\
\rotatebox{90}{1990-01-28 -- 1990-02-06} & \includegraphics[width=\linewidth]{slp/slp_det_2_start} & \includegraphics[width=\linewidth]{slp/slp_det_2_middle} & \includegraphics[width=\linewidth]{slp/slp_det_2_end} \\
\rotatebox{90}{1989-12-22 -- 1989-12-31} & \includegraphics[width=\linewidth]{slp/slp_det_3_start} & \includegraphics[width=\linewidth]{slp/slp_det_3_middle} & \includegraphics[width=\linewidth]{slp/slp_det_3_end} \\
\rotatebox{90}{2009-01-18 -- 2009-01-27} & \includegraphics[width=\linewidth]{slp/slp_det_4_start} & \includegraphics[width=\linewidth]{slp/slp_det_4_middle} & \includegraphics[width=\linewidth]{slp/slp_det_4_end} \\
\end{tabular}\\
\hspace{0.055\linewidth}
\includegraphics[width=.91\linewidth]{slp/colorbar}
}
\vspace{.5cm}
{
\noindent
\scriptsize
\begin{tabular}{c|llll}
\toprule
\# & Timeframe & Location & Score & Historical Storm \\
\midrule
1 & 1996-01-06 -- 1996-01-15 & 40.0\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, -2.5\textdegree\ E & 5940.691 & \\
2 & 1990-01-28 -- 1990-02-06 & 47.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N,\hphantom{-} 7.5\textdegree\ E & 5551.022 & Storm Herta (Feb 01-05) \\
3 & 1989-12-22 -- 1989-12-31 & 45.0\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, -2.5\textdegree\ E & 5198.513 & \\
4 & 2009-01-18 -- 2009-01-27 & 47.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 4959.829 & Cyclone Joris (Jan 23) \\
5 & 1982-12-14 -- 1982-12-23 & 50.0\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 4811.575 & \\
6 & 1990-12-25 -- 1991-01-03 & 52.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 4703.993 & Storm Undine (Jan 02-09) \\
7 & 1974-01-03 -- 1974-01-12 & 47.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, -5.0\textdegree\ E & 4594.737 & \\
8 & 1986-12-08 -- 1986-12-17 & 47.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, -2.5\textdegree\ E & 4417.568 & Storm 1986/Dec (Dec 14-15) \\
9 & 1997-12-30 -- 1998-01-08 & 50.0\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 10.0\textdegree\ E & 4377.532 & Cyclone Fanny (Jan 03-05) \\
10 & 1995-01-26 -- 1995-02-04 & 47.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 4376.735 & \\
11 & 2006-12-03 -- 2006-12-12 & 47.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 4306.923 & \\
12 & 1997-02-18 -- 1997-02-27 & 52.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 4249.087 & \\
13 & 1958-01-04 -- 1958-01-13 & 50.0\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 4206.594 & \\
14 & 1978-12-06 -- 1978-12-15 & 45.0\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N,\hphantom{1} 2.5\textdegree\ E & 4151.843 & \\
15 & 1976-12-01 -- 1976-12-10 & 47.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 4139.642 & \\
16 & 1971-01-18 -- 1971-01-27 & 45.0\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 4030.477 & \\
17 & 1992-11-29 -- 1992-12-08 & 47.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 3962.119 & \\
18 & 1994-01-27 -- 1994-02-05 & 50.0\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 3933.832 & Cyclone Lore (Jan 27-28) \\
19 & 2007-12-02 -- 2007-12-11 & 47.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 3931.694 & Cyclone Fridtjof (Dec 02-03) \\
20 & 1959-12-18 -- 1959-12-27 & 47.5\textdegree\ N, -52.5\textdegree\ E -- 65.0\textdegree\ N, 15.0\textdegree\ E & 3910.999 & \\
\bottomrule
\end{tabular}
}
\section{Top 10 Anomalous Paragraphs in the Book Genesis}
\label{app:genesis-detections}
Each detected word sequence is shown with some \textcolor{grey}{context} colored in \textcolor{grey}{grey} before and after the detection. See \cref{sec:exp-nlp} for details on this experiment.
The text has been taken from the ``genesis'' corpus included in the \textit{Natural Language Toolkit (NLTK)} for Python and is not free of noise.
\subsection*{Detection \#1: words 3218 -- 3613 (Score: 56462.266)}
\fpar{%
\textcolor{grey}{and called their name Adam , in the day when they were created . And Adam lived an hundred and thirty years , and begat a son in his own likeness , and after his image ; and called his name Se And the days of Adam after he had}
begotten Seth were eight hundred yea and he begat sons and daughters : And all the days that Adam lived were nine hundred and thirty yea and he died . And Seth lived an hundred and five years , and begat Enos : And Seth lived after he begat Enos eight hundred and seven years , and begat sons and daughte And all the days of Seth were nine hundred and twelve years : and he died . And Enos lived ninety years , and begat Cainan : And Enos lived after he begat Cainan eight hundred and fifteen years , and begat sons and daughte And all the days of Enos were nine hundred and five years : and he died . And Cainan lived seventy years and begat Mahalaleel : And Cainan lived after he begat Mahalaleel eight hundred and forty years , and begat sons and daughte And all the days of Cainan were nine hundred and ten years : and he died . And Mahalaleel lived sixty and five years , and begat Jared : And Mahalaleel lived after he begat Jared eight hundred and thirty years , and begat sons and daughte And all the days of Mahalaleel were eight hundred ninety and five yea and he died . And Jared lived an hundred sixty and two years , and he begat Eno And Jared lived after he begat Enoch eight hundred years , and begat sons and daughte And all the days of Jared were nine hundred sixty and two yea and he died . And Enoch lived sixty and five years , and begat Methuselah : And Enoch walked with God after he begat Methuselah three hundred years , and begat sons and daughte And all the days of Enoch were three hundred sixty and five yea And Enoch walked with God : and he was not ; for God took him . And Methuselah lived an hundred eighty and seven years , and begat Lamech . And Methuselah lived after he begat Lamech seven hundred eighty and two years , and begat sons and daughte And all the days of Methuselah were nine hundred sixty and nine yea and he died . And Lamech lived an hundred eighty and two years , and begat a s And he called his name Noah , saying , This same
\textcolor{grey}{shall comfort us concerning our work and toil of our hands , because of the ground which the LORD hath cursed .}
}
\subsection*{Detection \#2: words 30098 -- 30568 (Score: 41058.093)}
\fpar{%
\textcolor{grey}{his house , and his cattle , and all his beasts , and all his substance , which he had got in the land of Canaan ; and went into the country from the face of his brother Jacob . For their riches were more than that they might dwell}
together ; and the land wherein they were strangers could not bear them because of their cattle . Thus dwelt Esau in mount Seir : Esau is Edom . And these are the generations of Esau the father of the Edomites in mount Se These are the names of Esau ' s sons ; Eliphaz the son of Adah the wife of Esau , Reuel the son of Bashemath the wife of Esau . And the sons of Eliphaz were Teman , Omar , Zepho , and Gatam , and Kenaz . And Timna was concubine to Eliphaz Esau ' s son ; and she bare to Eliphaz Amal these were the sons of Adah Esau ' s wife . And these are the sons of Reuel ; Nahath , and Zerah , Shammah , and Mizz these were the sons of Bashemath Esau ' s wife . And these were the sons of Aholibamah , the daughter of Anah the daughter of Zibeon , Esau ' s wife and she bare to Esau Jeush , and Jaalam , and Korah . These were dukes of the sons of Esau : the sons of Eliphaz the firstborn son of Esau ; duke Teman , duke Omar , duke Zepho , duke Kenaz , Duke Korah , duke Gatam , and duke Amalek : these are the dukes that came of Eliphaz in the land of Edom ; these were the sons of Adah . And these are the sons of Reuel Esau ' s son ; duke Nahath , duke Zerah , duke Shammah , duke Mizz these are the dukes that came of Reuel in the land of Edom ; these are the sons of Bashemath Esau ' s wife . And these are the sons of Aholibamah Esau ' s wife ; duke Jeush , duke Jaalam , duke Kor these were the dukes that came of Aholibamah the daughter of Anah , Esau ' s wife . These are the sons of Esau , who is Edom , and these are their dukes . These are the sons of Seir the Horite , who inhabited the land ; Lotan , and Shobal , and Zibeon , and Anah , And Dishon , and Ezer , and Dishan : these are the dukes of the Horites , the children of Seir in the land of Edom . And the children of Lotan were Hori and Hemam ; and Lotan ' s sister was Timna . And the children of Shobal were these ; Alvan , and Manahath , and Ebal , Shepho , and Onam . And these are the children of Zibeon ; both Ajah , and Anah : this was that Anah that found the mules in the wilderness , as he fed the asses of
\textcolor{grey}{Zibeon his father . And the children of Anah were these ; Dishon , and Aholibamah the daughter of Anah . And these are the children of Dishon ; Hemdan , and Eshban , and Ithran , and Cheran .}
}
\subsection*{Detection \#3: words 7347 -- 7684 (Score: 39679.642)}
\fpar{%
\textcolor{grey}{may not understand one another ' s speech . So the LORD scattered them abroad from thence upon the face of all the ear and they left off to build the city . Therefore is the name of it called Babel ; because the LORD did there confound the language}
of all the ear and from thence did the LORD scatter them abroad upon the face of all the earth . These are the generations of Shem : Shem was an hundred years old , and begat Arphaxad two years after the flo And Shem lived after he begat Arphaxad five hundred years , and begat sons and daughters . And Arphaxad lived five and thirty years , and begat Salah : And Arphaxad lived after he begat Salah four hundred and three years , and begat sons and daughters . And Salah lived thirty years , and begat Eber : And Salah lived after he begat Eber four hundred and three years , and begat sons and daughters . And Eber lived four and thirty years , and begat Peleg : And Eber lived after he begat Peleg four hundred and thirty years , and begat sons and daughters . And Peleg lived thirty years , and begat Reu : And Peleg lived after he begat Reu two hundred and nine years , and begat sons and daughters . And Reu lived two and thirty years , and begat Serug : And Reu lived after he begat Serug two hundred and seven years , and begat sons and daughters . And Serug lived thirty years , and begat Nahor : And Serug lived after he begat Nahor two hundred years , and begat sons and daughters . And Nahor lived nine and twenty years , and begat Terah : And Nahor lived after he begat Terah an hundred and nineteen years , and begat sons and daughters . And Terah lived seventy years , and begat Abram , Nahor , and Haran . Now these are the generations of Terah : Terah begat Abram , Nahor , and Haran ; and Haran begat Lot . And Haran died before his father Terah in the land of his nativity , in Ur of the Chaldees . And Abram and Nahor took them wives : the name of
\textcolor{grey}{Abram ' s wife was Sarai ; and the name of Nahor ' s wife , Milcah , the daughter of Haran , the father of Milcah , and the father of Iscah . But Sarai was barren ; she had no child .}
}
\subsection*{Detection \#4: words 30585 -- 30993 (Score: 28796.840)}
\fpar{%
\textcolor{grey}{And these are the children of Zibeon ; both Ajah , and Anah : this was that Anah that found the mules in the wilderness , as he fed the asses of Zibeon his father . And the children of Anah were these ; Dishon , and Aholibamah the}
daughter of Anah . And these are the children of Dishon ; Hemdan , and Eshban , and Ithran , and Cheran . The children of Ezer are these ; Bilhan , and Zaavan , and Akan . The children of Dishan are these ; Uz , and Aran . These are the dukes that came of the Horites ; duke Lotan , duke Shobal , duke Zibeon , duke Anah , Duke Dishon , duke Ezer , duke Dishan : these are the dukes that came of Hori , among their dukes in the land of Seir . And these are the kings that reigned in the land of Edom , before there reigned any king over the children of Israel . And Bela the son of Beor reigned in Edom : and the name of his city was Dinhabah . And Bela died , and Jobab the son of Zerah of Bozrah reigned in his stead . And Jobab died , and Husham of the land of Temani reigned in his stead . And Husham died , and Hadad the son of Bedad , who smote Midian in the field of Moab , reigned in his ste and the name of his city was Avith . And Hadad died , and Samlah of Masrekah reigned in his stead . And Samlah died , and Saul of Rehoboth by the river reigned in his stead . And Saul died , and Baalhanan the son of Achbor reigned in his stead . And Baalhanan the son of Achbor died , and Hadar reigned in his ste and the name of his city was Pau ; and his wife ' s name was Mehetabel , the daughter of Matred , the daughter of Mezahab . And these are the names of the dukes that came of Esau , according to their families , after their places , by their names ; duke Timnah , duke Alvah , duke Jetheth , Duke Aholibamah , duke Elah , duke Pinon , Duke Kenaz , duke Teman , duke Mibzar , Duke Magdiel , duke Iram : these be the dukes of Edom , according to their habitations in the land of their possessi he is Esau the father of the Edomites . And Jacob dwelt in the land wherein his father was a stranger , in the land of Canaan . These are the generations of Jacob . Joseph , being
\textcolor{grey}{seventeen years old , was feeding the flock with his brethren ; and the lad was with the sons of Bilhah , and with the sons of Zilpah , his father ' s wiv and Joseph brought unto his father their evil report .}
}
\subsection*{Detection \#5: words 40473 -- 40821 (Score: 28436.096)}
\fpar{%
\textcolor{grey}{into Egypt , Jacob , and all his seed with h His sons , and his sons ' sons with him , his daughters , and his sons ' daughters , and all his seed brought he with him into Egypt . And these are the names of the children}
of Israel , which came into Egypt , Jacob and his so Reuben , Jacob ' s firstborn . And the sons of Reuben ; Hanoch , and Phallu , and Hezron , and Carmi . And the sons of Simeon ; Jemuel , and Jamin , and Ohad , and Jachin , and Zohar , and Shaul the son of a Canaanitish woman . And the sons of Levi ; Gershon , Kohath , and Merari . And the sons of Judah ; Er , and Onan , and Shelah , and Pharez , and Zar but Er and Onan died in the land of Canaan . And the sons of Pharez were Hezron and Hamul . And the sons of Issachar ; Tola , and Phuvah , and Job , and Shimron . And the sons of Zebulun ; Sered , and Elon , and Jahleel . These be the sons of Leah , which she bare unto Jacob in Padanaram , with his daughter Din all the souls of his sons and his daughters were thirty and three . And the sons of Gad ; Ziphion , and Haggi , Shuni , and Ezbon , Eri , and Arodi , and Areli . And the sons of Asher ; Jimnah , and Ishuah , and Isui , and Beriah , and Serah their sist and the sons of Beriah ; Heber , and Malchiel . These are the sons of Zilpah , whom Laban gave to Leah his daughter , and these she bare unto Jacob , even sixteen souls . The sons of Rachel Jacob ' s wife ; Joseph , and Benjamin . And unto Joseph in the land of Egypt were born Manasseh and Ephraim , which Asenath the daughter of Potipherah priest of On bare unto him . And the sons of Benjamin were Belah , and Becher , and Ashbel , Gera , and Naaman , Ehi , and Rosh , Muppim , and Huppim , and Ard . These are the sons of Rachel , which were born to
\textcolor{grey}{Jacob : all the souls were fourteen . And the sons of Dan ; Hushim . And the sons of Naphtali ; Jahzeel , and Guni , and Jezer , and Shillem . These are the sons of Bilhah , which Laban gave unto Rachel his daughter , and she}
}
\subsection*{Detection \#6: words 12299 -- 12486 (Score: 25531.722)}
\fpar{%
\textcolor{grey}{And Abraham answered and said , Behold now , I have taken upon me to speak unto the LORD , which am but dust and ash Peradventure there shall lack five of the fifty righteous : wilt thou destroy all the city}
for lack of five ? And he said , If I find there forty and five , I will not destroy it . And he spake unto him yet again , and said , Peradventure there shall be forty found there . And he said , I will not do it for forty ' s sake . And he said unto him , Oh let not the LORD be angry , and I will spe Peradventure there shall thirty be found there . And he said , I will not do it , if I find thirty there . And he said , Behold now , I have taken upon me to speak unto the LO Peradventure there shall be twenty found there . And he said , I will not destroy it for twenty ' s sake . And he said , Oh let not the LORD be angry , and I will speak yet but this on Peradventure ten shall be found there . And he said , I will not destroy it for ten ' s sake . And the LORD went his way
\textcolor{grey}{, as soon as he had left communing with Abrah and Abraham returned unto his place . And there came two angels to Sodom at even ; and Lot sat in the gate of Sod and Lot seeing them rose up}
}
\subsection*{Detection \#7: words 9069 -- 9287 (Score: 25480.242)}
\fpar{%
\textcolor{grey}{Twelve years they served Chedorlaomer , and in the thirteenth year they rebelled . And in the fourteenth year came Chedorlaomer , and the kings that were}
with him , and smote the Rephaims in Ashteroth Karnaim , and the Zuzims in Ham , and the Emins in Shaveh Kiriathaim , And the Horites in their mount Seir , unto Elparan , which is by the wilderness . And they returned , and came to Enmishpat , which is Kadesh , and smote all the country of the Amalekites , and also the Amorites , that dwelt in Hazezontamar . And there went out the king of Sodom , and the king of Gomorrah , and the king of Admah , and the king of Zeboiim , and the king of Bela ( the same is Zoar ;) and they joined battle with them in the vale of Siddim ; With Chedorlaomer the king of Elam , and with Tidal king of nations , and Amraphel king of Shinar , and Arioch king of Ellasar ; four kings with five . And the vale of Siddim was full of slimepits ; and the kings of Sodom and Gomorrah fled , and fell there ; and they that remained fled to the mountain . And they took all the goods of Sodom and Gomorrah , and all their victuals , and went their way . And they took Lot , Abram ' s brother ' s
\textcolor{grey}{son , who dwelt in Sodom , and his goods , and departed . And there came one that had escaped , and told Abram the Hebrew ; for he dwelt in the plain of Mamre the Amorite , brother of Eshcol , and brother of An and these were}
}
\subsection*{Detection \#8: words 6811 -- 7104 (Score: 24522.926)}
\fpar{%
\textcolor{grey}{And the Arvadite , and the Zemarite , and the Hamathite : and afterward were the families of the Canaanites spread abroad . And the}
border of the Canaanites was from Sidon , as thou comest to Gerar , unto Gaza ; as thou goest , unto Sodom , and Gomorrah , and Admah , and Zeboim , even unto Lasha . These are the sons of Ham , after their families , after their tongues , in their countries , and in their nations . Unto Shem also , the father of all the children of Eber , the brother of Japheth the elder , even to him were children born . The children of Shem ; Elam , and Asshur , and Arphaxad , and Lud , and Aram . And the children of Aram ; Uz , and Hul , and Gether , and Mash . And Arphaxad begat Salah ; and Salah begat Eber . And unto Eber were born two sons : the name of one was Peleg ; for in his days was the earth divided ; and his brother ' s name was Joktan . And Joktan begat Almodad , and Sheleph , and Hazarmaveth , and Jerah , And Hadoram , and Uzal , and Diklah , And Obal , and Abimael , and Sheba , And Ophir , and Havilah , and Jobab : all these were the sons of Joktan . And their dwelling was from Mesha , as thou goest unto Sephar a mount of the east . These are the sons of Shem , after their families , after their tongues , in their lands , after their nations . These are the families of the sons of Noah , after their generations , in their natio and by these were the nations divided in the earth after the flood . And the whole earth was
\textcolor{grey}{of one language , and of one speech . And it came to pass , as they journeyed from the east , that they found a plain in the land of Shinar ; and they dwelt there .}
}
\subsection*{Detection \#9: words 11352 -- 11512 (Score: 21242.191)}
\fpar{%
\textcolor{grey}{and I will make him a great nation . But my covenant will I establish with Isaac , which Sarah shall bear unto thee at this set time in the next year . And he left off talking with}
him , and God went up from Abraham . And Abraham took Ishmael his son , and all that were born in his house , and all that were bought with his money , every male among the men of Abraham ' s house ; and circumcised the flesh of their foreskin in the selfsame day , as God had said unto him . And Abraham was ninety years old and nine , when he was circumcised in the flesh of his foreskin . And Ishmael his son was thirteen years old , when he was circumcised in the flesh of his foreskin . In the selfsame day was Abraham circumcised , and Ishmael his son . And all the men of his house , born in the house , and bought with money of the stranger , were circumcised with him . And the LORD appeared unto him in the plains of Mamre : and he sat in the
\textcolor{grey}{tent door in the heat of the day ; And he lift up his eyes and looked , and , lo , three men stood by}
}
\subsection*{Detection \#10: words 22251 -- 22399 (Score: 20712.606)}
\fpar{%
\textcolor{grey}{And give thee the blessing of Abraham , to thee , and to thy seed with thee ; that thou mayest inherit the land wherein thou art a stranger , which God}
gave unto Abraham . And Isaac sent away Jacob : and he went to Padanaram unto Laban , son of Bethuel the Syrian , the brother of Rebekah , Jacob ' s and Esau ' s mother . When Esau saw that Isaac had blessed Jacob , and sent him away to Padanaram , to take him a wife from thence ; and that as he blessed him he gave him a charge , saying , Thou shalt not take a wife of the daughers of Canaan ; And that Jacob obeyed his father and his mother , and was gone to Padanaram ; And Esau seeing that the daughters of Canaan pleased not Isaac his father ; Then went Esau unto Ishmael , and took unto the wives which he had Mahalath the daughter of Ishmael Abraham ' s son , the sister of Nebajoth , to
\textcolor{grey}{be his wife . And Jacob went out from Beersheba , and went toward Haran .}
}
\end{appendices}
\section{Introduction}\label{sec:intro}
\IEEEPARstart{M}{any} pattern recognition methods strive towards deriving models from complex and noisy data. Such models try to describe the prototypical normal behavior of the system being observed, which is hard to model manually and whose state is often not even directly observable, but only reflected by the data. They allow reasoning about the properties of the system, predicting unseen data, and assessing the ``normality'' of new data. In such a scenario, any deviation from the normal behavior present in the data is distracting and may impair the accuracy of the model. An entire arsenal of techniques has therefore been developed to eliminate abnormal observations prior to learning or to learn models in a robust way not affected by a few anomalies.
Such practices may easily lead to the perception of anomalies as being intrinsically bad and worthless. Though that is true for random noise and erroneous measurements, there may also be anomalies caused by rare events and complex processes. Embracing the anomalies in the data and studying the information buried in them can therefore lead to a deeper understanding of the system being analyzed and to the insight that the models hitherto employed were incomplete or---in the case of non-stationary processes---outdated. A well-known example for this is the discovery of the correlation between the \textit{El Niño} weather phenomenon and extreme surface pressures over the equator by Gilbert Walker \cite{walker1928world} during the early \nth{20} century through the analysis of extreme events in time-series of climate data.
Thus, the use of anomaly detection techniques is not limited to outlier removal as a pre-processing step. In contrast, anomaly detection also is an important task \textit{per se}, since only the deviations from normal behavior are the actual object of interest in many applications. Besides the scenario of knowledge discovery mentioned above, fraud detection (e.g., credit card fraud or identity theft), intrusion detection in cyber-security, fault detection in industrial processes, anomaly detection in healthcare (e.g., monitoring patient condition or detecting disease outbreaks), and early detection of environmental disasters are other important examples. Automated methods for anomaly detection are especially crucial nowadays, where huge amounts of data are available that cannot be analyzed by humans.
In this article, we introduce a novel unsupervised method called ``Maximally Divergent Intervals'' (MDI), which can be employed to point the expert analysts to the interesting parts of the data, i.e., the anomalies.
In contrast to most existing anomaly detection techniques (e.g., \cite{breunig2000lof,kim2012rkde,macgregor1995statistical,schoelkopf2001ocsvm}), we do not analyze the data on a point-wise basis, but search for contiguous intervals of time and regions in space that contain the anomalous event.
This is motivated by the fact that anomalies driven by natural processes usually occur over a period of time and, in the case of spatio-temporal data, in a spatial region rather than at a single location at a single time.
Moreover, the individual samples making up such a so-called \textit{collective anomaly} do not have to be anomalous when considered in isolation, but may be an anomaly only as a whole.
Thus, analysts will intuitively be searching for anomalous {\em regions} in the data instead of anomalous points and the algorithm assisting them should do so as well.
We achieve this by searching for anomalous {\em blocks} in multivariate spatio-temporal data tensors, i.e., regions and time frames whose data distribution deviates most from the distribution of the remaining time-series.
To this end, we compare several existing measures for the divergence of distributions and derive a new one that is invariant against varying length of the intervals being compared.
A fast novel interval proposal technique allows us to reduce the computational cost of this procedure by just analyzing a small portion of particularly interesting parts of the data.
Experiments on climate data, videos, and text corpora will demonstrate that our method can be applied to a variety of applications without major adaptations.
Despite the importance of this task across domains, there has been very limited research on the detection of anomalous intervals in multivariate time-series data, though this problem has been known for a couple of years: Keogh et al.\ \cite{keogh2005hot} have already tackled this task in 2005 with a method they called ``HOT SAX''. They try to find anomalous sub-sequences (``discords'') of time-series by representing all possible sub-sequences of length $d$ as a $d$-dimensional vector and using the Euclidean distance to the nearest neighbor in that space as anomaly score.
More recently, Ren et al.\ \cite{ren2018anomaly} use hand-crafted interval features based on the frequency of extreme values and search for intervals whose features are maximally different from all other intervals.
However, both methods are limited to univariate data and a fixed length of the intervals must be specified in advance.
The latter is also true for a multivariate approach proposed by Liu et al.\ \cite{liu2013change} who compare two consecutive intervals of fixed size in a time-series using the Kullback-Leibler or the Pearson divergence for detecting {\em change-point anomalies}, i.e., points where a permanent change of the distribution of the data occurs. This is a different task than finding intervals that are anomalous with regard to {\em all} the remaining data. In addition, their method does not scale well for detecting anomalous intervals of {\em dynamic} size and is hence not applicable for detecting other types of anomalies, for which a broader context has to be taken into account.
The task of detecting anomalous intervals of dynamic size has recently been tackled by Senin et al.\ \cite{senin2018grammarviz}, who search for typical and anomalous patterns in time-series by inducing a grammar on a symbolic discretization of the data.
As opposed to our approach, their method cannot handle multivariate or spatio-temporal data.
Similar to our approach, Jiang et al.\ \cite{jiang2015general} search for anomalous blocks in higher-order tensors using the Kullback-Leibler divergence, but apply their method to discrete data only (e.g., relations in social networks) and use a Poisson distribution for modeling the data. Since their search strategy is very specific to applications dealing with graph data, it is not applicable in the general case for multivariate continuous data dealt with in our work.
Regarding spatio-temporal data, Wu et al.\ \cite{wu2010spatio} follow a sequential approach for detecting anomalies first spatially, then temporally and apply a merge-strategy afterwards. However, the time needed for merging grows exponentially with the length of the time-series and their divergence measure is limited to binary-valued data. In contrast to this, our approach is able to deal with multivariate real-valued data efficiently and treats time and space jointly.
The remainder of this article is organized as follows: \Cref{sec:MDI} will introduce our novel ``Maximally Divergent Intervals'' algorithm for off-line detection of collective anomalies in multivariate spatio-temporal data. Its performance will be evaluated quantitatively on artificial data in \cref{sec:eval} and its suitability for practical applications will be demonstrated by means of experiments on real data from various different domains in \cref{sec:applications}. \Cref{sec:conclusions} will summarize the progress made so far and mention directions for future research.
\section{Maximally Divergent Intervals}
\label{sec:MDI}
\begin{figure}
\includegraphics[width=\linewidth]{figure1}
\caption{Schematic illustration of the principle of the MDI algorithm: The distribution of the data in the inner interval $I$ is compared with the distribution of the remaining time-series in the outer interval $\Omega$.}
\label{fig:maxdiv-example}
\end{figure}
This section formally introduces our MDI algorithm for off-line detection of anomalous intervals in spatio-temporal data.
After a set of definitions that we are going to make use of, we start by giving a very rough overview of the basic idea behind the algorithm, which is also illustrated schematically in \cref{fig:maxdiv-example}. The subsequent sub-sections will go into more detail on the individual aspects and components of our approach.
Our implementation of the MDI algorithm is available as open source at: \url{https://cvjena.github.io/libmaxdiv/}
\subsection{Definitions}
\label{sec:MDI-defs}
Let $\mathfrak{X} \in \mathbb{R}^{T \times X \times Y \times Z \times D}$ be a multivariate spatio-temporal time-series given as \nth{5}-order tensor with 4 contextual attributes (point of time and spatial location) and $D$ behavioral attributes for all $N \coloneqq T \cdot X \cdot Y \cdot Z$ samples. We will index individual samples using 4-tuples $i \in \mathbb{N}^4$ like in $\mathfrak{X}_i \in \mathbb{R}^D$.
The usual interval notation $[\ell,r)$ will be used in the following for discrete intervals $\left\{ t \in \mathbb{N} \mid \ell \le t < r \right\}$. Furthermore, the set of all intervals with size between $a$ and $b$ along an axis of size $n$ is denoted by
\begin{equation}
\mathfrak{I}_{a,b}^n \coloneqq
\{ {
[\ell,r) \mid
1 \le \ell < r \le n+1 \wedge
a \le r-\ell \le b
} \} \;.
\label{eq:interval-set-1d}
\end{equation}
The set of all sub-blocks of a data tensor $\mathfrak{X}$ complying with given size constraints $A = (a_t, a_x, a_y, a_z), B = (b_t, b_x, b_y, b_z)$ can then be defined as
\begin{equation}
\begin{split}
\mathfrak{I}_{A,B} \coloneqq
\{
I_t \times I_x \times I_y \times I_z \mid
& I_t \in \mathfrak{I}_{a_t,b_t}^T \wedge
I_x \in \mathfrak{I}_{a_x,b_x}^X \wedge \\
& I_y \in \mathfrak{I}_{a_y,b_y}^Y \wedge
I_z \in \mathfrak{I}_{a_z,b_z}^Z
\} \,.
\label{eq:interval-set}
\end{split}
\end{equation}
In the following, we will often omit the indices for simplicity and just refer to it as $\mathfrak{I}$.
Given any sub-block $I \in \mathfrak{I}_{A,B}$, the remaining part of the time-series excluding that specific range can be defined as
\begin{equation}
\Omega(I) \coloneqq
\left( [1,T] \times [1,X] \times [1,Y] \times [1,Z] \right)
\setminus I
\label{eq:def-omega}
\end{equation}
and we will often simply refer to it as $\Omega$ if the corresponding range $I$ is obvious from the context.
\subsection{Idea and Algorithm Overview}
\label{sec:MDI-idea}
The approach pursued by the MDI algorithm to compute anomaly scores for all intervals $I \in \mathfrak{I}$ can be motivated by a long-standing definition of anomalies given by Douglas Hawkins \cite{hawkins1980} in 1980, who defines an anomaly as ``an observation which deviates so much from other observations as to arouse suspicions that it was generated by a different mechanism''.
In analogy to this definition, the MDI algorithm assumes that there is a sub-block $I \in \mathfrak{I}$ of the given time-series that has been generated according to ``a different mechanism'' than the rest of the time-series in $\Omega$ (cf. the schematic illustration in \cref{fig:maxdiv-example}). The algorithm tries to capture these mechanisms by modelling the probability density $p_I$ of the data in the inner interval $I$ and the distribution $p_\Omega$ in the outer interval $\Omega$. We investigate two different models for these distributions: Kernel Density Estimation (KDE) and multivariate normal distributions (Gaussians), which will be explained in detail in \cref{sec:MDI-density-estimation}.
Moreover, a measure $\mathfrak{D}(p_I, p_\Omega)$ for the degree of ``deviation'' of $p_I$ from $p_\Omega$ has to be defined. Like some other works on collective anomaly detection \cite{liu2013change,jiang2015general}, we use, among others, the \textit{Kullback-Leibler (KL) divergence} for this purpose. However, \cref{sec:divergences} will show that this is a sub-optimal choice when used without a slight modification and discuss alternative divergence measures.
Given these ingredients, the underlying optimization problem for finding the most anomalous interval can be described as
\begin{equation}
\hat{I} = \argmax_{I \in \mathfrak{I}_{A,B}}\ \mathfrak{D}\left(p_I, p_{\Omega(I)}\right) \;.
\label{eq:MDI-optimization-problem}
\end{equation}
Various possible choices for the divergence measure $\mathfrak{D}$ will be discussed in \cref{sec:divergences}.
In order to actually locate this ``maximally divergent interval'' $\hat{I}$, the MDI algorithm scans over all intervals $I \in \mathfrak{I}_{A,B}$, estimates the distributions $p_I$ and $p_\Omega$ and computes the divergence between them, which becomes the anomaly score of the interval $I$. The parameters $A$ and $B$, which define the minimum and the maximum size of the intervals in question, have to be specified by the user in advance. This is not a severe restriction, since extreme values may be chosen for these parameters in exchange for increased computation time. But depending on the application and the focus of the analysis, there is often prior knowledge about reasonable limits for the size of possible intervals.
After the anomaly scores have been obtained for all intervals, they are sorted in descending order and non-maximum suppression is applied to obtain non-overlapping intervals only. For large time-series with more than 10k samples, we apply an approximate non-maximum suppression that avoids storing all interval scores by maintaining a fixed-size list of the currently best-scoring non-overlapping intervals.
Finally, the algorithm returns a ranking of intervals, so that a user-specified number of top $k$ intervals can be selected as output.
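
To make the procedure concrete, the following simplified sketch (all names are ours) implements the exhaustive scan with greedy non-maximum suppression for the purely temporal case; the placeholder divergence only compares interval means, whereas the actual divergence measures and the cumulative-sum speed-ups are the subject of the following sections.
\begin{verbatim}
# Naive sketch of the MDI scan for a purely temporal series X of shape
# (n, D); returns non-overlapping intervals [a, b) ranked by score.
import numpy as np

def scan_intervals(X, min_len, max_len, divergence):
    # assumes max_len < n so that the outer part is never empty
    n = len(X)
    scored = []
    for a in range(n - min_len + 1):
        for b in range(a + min_len, min(a + max_len, n) + 1):
            inner, outer = X[a:b], np.concatenate([X[:a], X[b:]])
            scored.append((divergence(inner, outer), a, b))
    scored.sort(key=lambda s: -s[0])
    detections = []                    # greedy non-maximum suppression
    for score, a, b in scored:
        if all(b <= a2 or b2 <= a for _, a2, b2 in detections):
            detections.append((score, a, b))
    return detections

# crude placeholder divergence comparing interval means only:
def mean_shift(inner, outer):
    return float(np.sum((inner.mean(axis=0) - outer.mean(axis=0)) ** 2))
\end{verbatim}
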
\subsection{Probability Density Estimation}
\label{sec:MDI-density-estimation}
The divergence measure used in \eqref{eq:MDI-optimization-problem} requires the notion of the distribution of the data in the intervals $I$ and $\Omega$. We will hence discuss in the following, which models we employ to estimate these distributions and how this can be done efficiently.
\subsubsection{Models}
\label{sec:MDI-distribution-models}
The choice of a specific model for the distributions $p_I$ and $p_\Omega$ imposes some assumptions about the data which may not conform to reality. However, since the MDI algorithm estimates the parameters of those distributions for all possible intervals in the time-series, the use of models that can be updated efficiently is crucial. One such model is Kernel Density Estimation (KDE) with
\begin{equation}
p_\mathfrak{S}(\mathfrak{X}_i) = \frac{1}{\left| \mathfrak{S} \right|} \sum_{j \in \mathfrak{S}} k(\mathfrak{X}_i, \mathfrak{X}_j), \qquad \mathfrak{S} \in \left\{ {I, \Omega} \right\} ,
\label{eq:KDE}
\end{equation}
using a Gaussian kernel
\begin{equation}
k(x, y) = { \left( 2 \pi \sigma^2 \right) }^{- \frac{D}{2}} \cdot \exp \left( - \frac{\left\| x - y \right\|^2}{2 \sigma^2} \right) \;.
\label{eq:gaussian-kernel}
\end{equation}
On the one hand, KDE is a very flexible model, but on the other hand, it does not scale well to long time-series and does not take correlations between attributes into account. The second proposed model does not expose these problems: It assumes that both the data in the anomalous interval $I$ and in the remaining time-series $\Omega$ are distributed according to multivariate normal distributions (\textit{Gaussians}) $\mathcal{N} \left( \mu_I, S_I \right)$ and $\mathcal{N} \left( \mu_\Omega, S_\Omega \right)$, respectively.
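
A minimal sketch of the two models (ours; the bandwidth $\sigma$ and all names are illustrative choices) computes the KDE density of \eqref{eq:KDE} and the maximum-likelihood parameters of the Gaussian model for a given block of samples:
\begin{verbatim}
# Sketch of the two density models for a block S of samples (rows).
import numpy as np

def kde_density(x, S, sigma=1.0):
    # mean of Gaussian kernels centred at the samples of S
    d = S.shape[1]
    sq_dists = ((x[None, :] - S) ** 2).sum(axis=1)
    norm = (2.0 * np.pi * sigma ** 2) ** (-d / 2.0)
    return norm * np.exp(-sq_dists / (2.0 * sigma ** 2)).mean()

def fit_gaussian(S):
    # maximum-likelihood mean and covariance of the Gaussian model
    mu = S.mean(axis=0)
    centred = S - mu
    return mu, centred.T @ centred / len(S)
\end{verbatim}
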
\begin{figure*}
\includegraphics[width=\linewidth]{figure2}
\caption{Illustration of time-delay embedding with $\kappa=3, \tau=4$. The attribute vector of each sample is augmented with the attributes of the samples 4 and 8 time steps earlier.}
\label{fig:td-embedding}
\end{figure*}
\subsubsection{Efficient Estimation with Cumulative Sums}
\label{sec:MDI-cumsum}
Both distribution models described above involve a summation over all samples in the respective interval. Performing this summation for multiple intervals is redundant, because some of them overlap with each other. Such a naïve approach of finding the maximally divergent interval has a time complexity of $\mathcal{O} \left( N^2 \cdot L^2 \right)$ with KDE and $\mathcal{O} \left( N \cdot L \cdot \left( N + L \right) \right) \subseteq \mathcal{O} \left( N^2 \cdot L \right)$ with Gaussian distributions. This is due to the number of $\mathcal{O} \left( N \cdot L \right)$ intervals (with $L = (b_t - a_t + 1) \cdot (b_x - a_x + 1) \cdot (b_y - a_y + 1) \cdot (b_z - a_z + 1)$ being the maximum volume of an interval), each of them requiring a summation over $\mathcal{O} \left( L \right)$ samples for the evaluation of one of the divergence measures described later in \cref{sec:divergences}. For KDE, $\mathcal{O}(N)$ distance computations are necessary for the evaluation of the probability density function for each sample, while for Gaussian distributions a summation over all $\mathcal{O}(N)$ samples has to be performed for each interval to estimate the parameters of the distributions.
This would be clearly infeasible for large-scale data. However, these computations can be sped up significantly by using cumulative sums \cite{viola2004robust}. For the sake of clarity, we first consider the special case of a non-spatial time-series $(x_t)_{t=1}^n, x_t \in \mathbb{R}^D$. With regard to KDE, a matrix $C \in \mathbb{R}^{n \times n}$ of cumulative sums of kernelized distances can be used:
\begin{equation}
C_{t,t'} = \sum_{t'' = 1}^{t'} k(x_t, x_{t''}) \;\;.
\label{eq:cumsum-kde}
\end{equation}
This matrix has to be computed only once, which requires $\mathcal{O} \left( n^2 \right)$ distance calculations, and can then be used to estimate the probability density functions of the data in the intervals $I = \left[a,b\right)$ and $\Omega = \left[1,n\right] \setminus I$ in constant time:
\begin{equation}
\begin{split}
p_I(x_t) &= \frac{C_{t,b-1} - C_{t,a-1}}{\left| I \right|} \;\; , \\[3ex]
p_\Omega(x_t) &= \frac{C_{t,n} - C_{t,b-1} + C_{t,a-1}}{n - \left| I \right|} \;\; .
\label{eq:cumsum-kde-recons}
\end{split}
\end{equation}
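For illustration, the following sketch mirrors \eqref{eq:cumsum-kde-recons}. It assumes that the kernel matrix $K$ with entries $k(x_t, x_{t'})$ has already been computed, uses 0-based indexing with half-open intervals $[a,b)$ instead of the 1-based notation above, and the function names are hypothetical:
\begin{verbatim}
import numpy as np

def kde_cumsum(K):
    # C[t, t'] = sum over t'' <= t' of k(x_t, x_{t''});
    # computed once from the n x n kernel matrix K.
    return np.cumsum(K, axis=1)

def kde_interval_densities(C, a, b):
    # Constant-time reconstruction of p_I and p_Omega
    # for I = [a, b), following Eq. (cumsum-kde-recons).
    n = C.shape[0]
    m = b - a
    sum_I = C[:, b - 1] - (C[:, a - 1] if a > 0 else 0.0)
    p_I = sum_I / m
    p_Omega = (C[:, n - 1] - sum_I) / (n - m)
    return p_I, p_Omega
\end{verbatim}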
In analogy, a matrix $C^\mu \in \mathbb{R}^{D \times n}$ of cumulative sums over the samples and a tensor $C^S \in \mathbb{R}^{D \times D \times n}$ of cumulative sums over the outer products of the samples can be used to speed up the estimation of the parameters of Gaussian distributions:
\begin{equation}
C_t^\mu = \sum_{t' = 1}^{t} x_{t'}, \quad
C_t^S = \sum_{t' = 1}^{t} x_{t'} \cdot x_{t'}^\top \;,
\end{equation}
where $C_t^\mu$ and $C_t^S$ are the $t$-th column of $C^\mu$ and the $t$-th $D \times D$ matrix of $C^S$, respectively. Using these matrices, the mean vectors and covariance matrices can be estimated in constant time.
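A minimal sketch of this procedure for the Gaussian model, under the same 0-based, half-open indexing convention as above and again for illustration only:
\begin{verbatim}
import numpy as np

def gaussian_cumsums(X):
    # C_mu[t] = x_1 + ... + x_t; C_S[t] = cumulative sum
    # of the outer products x x^T. X has shape (n, D).
    C_mu = np.cumsum(X, axis=0)
    C_S = np.cumsum(X[:, :, None] * X[:, None, :], axis=0)
    return C_mu, C_S

def gaussian_params(C_mu, C_S, a, b):
    # Mean and covariance of the samples in I = [a, b)
    # in constant time.
    m = b - a
    s1 = C_mu[b - 1] - (C_mu[a - 1] if a > 0 else 0.0)
    s2 = C_S[b - 1] - (C_S[a - 1] if a > 0 else 0.0)
    mu = s1 / m
    S = s2 / m - np.outer(mu, mu)  # E[x x^T] - mu mu^T
    return mu, S
\end{verbatim}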
This technique can be generalized to the spatio-temporal scenario using higher-order tensors for storing the cumulative sums. The reconstruction of a sum over a given range from such a cumulative tensor follows the \textit{Inclusion-Exclusion Principle}; the number of summands involved in the computation thus grows exponentially with the order of the tensor: 16 for a \nth{4}-order tensor, compared to only 2 summands in the non-spatial case. The exact equation describing the reconstruction in the general case of an $M^\text{th}$-order tensor is given in \cref{app:cumsum-extraction}.
Thanks to the use of cumulative sums, the computational complexity of the MDI algorithm is reduced to $\mathcal{O} \left( N^2 + N \cdot L^2 \right)$ for the case of KDE and to $\mathcal{O} \left( N \cdot L^2 \right)$ for Gaussian distributions.
\subsection{Incorporation of Context}
\label{sec:MDI-embeddings}
The models used for probability density estimation described in the previous section are based on the assumption of independent samples. However, this assumption is almost never true for real data, since the value at a specific point of time and spatial location is likely to be strongly correlated with the values at previous times and nearby locations. To mitigate this issue, we apply two kinds of embeddings that incorporate context into each sample as a pre-processing step.
\subsubsection{Time-Delay Embedding}
\label{sec:td-embedding}
Aiming to make combinations of observed values more representative of the hidden state of the system being observed, \textit{time-delay embedding} \cite{packard1980td} incorporates context from previous time-steps into each sample by transforming a given time-series $\left( x_t \right)_{t=1}^n, x_t \in \mathbb{R}^D$, into another time-series $\left( x_t' \right)_{t=1+(\kappa-1)\tau}^n, x_t' \in \mathbb{R}^{\kappa D}$, given by
\begin{equation}
x_t' = \left(
\begin{array}{ccccc}
x_t^\top & x_{t-\tau}^\top & x_{t-2\tau}^\top
& \cdots & x_{t-(\kappa-1) \cdot \tau}^\top
\end{array}
\right)^\top ,
\label{eq:td-embedding}
\end{equation}
where the \textit{embedding dimension} $\kappa$ specifies the number of samples to stack together and the \textit{time lag} $\tau$ specifies the gap between two consecutive time-steps to be included as context. An illustrative example is given in \cref{fig:td-embedding}.
This method is often motivated by Takens' theorem \cite{takens1981detecting}, which, roughly, states that for a certain embedding dimension $\bar{\kappa}$ the hidden state of the system can be reconstructed given the observations of the last $\bar{\kappa}$ time-steps.
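A minimal NumPy sketch of the transformation \eqref{eq:td-embedding}; the function name is ours and the border handling of the actual implementation may differ:
\begin{verbatim}
import numpy as np

def time_delay_embedding(X, kappa=3, tau=4):
    # X has shape (n, D); the result has shape
    # (n - (kappa-1)*tau, kappa*D) and corresponds
    # to the series (x'_t) from Eq. (td-embedding).
    n = X.shape[0]
    start = (kappa - 1) * tau
    parts = [X[start - j * tau : n - j * tau]
             for j in range(kappa)]
    return np.concatenate(parts, axis=1)
\end{verbatim}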
\subsubsection{Spatial-Neighbor Embedding}
\label{sec:spatial-embedding}
Correlations between nearby spatial locations are handled similarly: In addition to time-delay embedding, each sample of a spatio-temporal time-series can be augmented by the features of its spatial neighbors (cf. \cref{fig:spatial-embedding}) to enable the detection of spatial or spatio-temporal anomalies. This pre-processing step, which we refer to as \textit{spatial-neighbor embedding}, is parametrized with 3 parameters $\kappa_x, \kappa_y, \kappa_z$ for the embedding dimension along each spatial axis and 3 parameters $\tau_x, \tau_y, \tau_z$ for the lag along each axis.
\begin{figure*}
\begin{subfigure}{0.32\linewidth}%
\centering
$\kappa_x = \kappa_y = 2, \quad \tau_x = \tau_y = 1$ \\
\includegraphics[height=3.8cm]{figure3a}%
\end{subfigure}%
\begin{subfigure}{0.68\linewidth}%
\centering
$\kappa_x = 3, \quad \kappa_y = 2, \quad \tau_x = 3, \quad \tau_y = 2$ \\
\includegraphics[height=3.8cm]{figure3b}%
\end{subfigure}%
\caption{Exemplary illustration of spatial-neighbor embedding with different parameters. The attribute vector of the sample with a solid fill color is augmented with the attributes of the samples with a striped pattern.}
\label{fig:spatial-embedding}
\end{figure*}
Note that, in contrast to time-delay embedding, neighbors from both directions are aggregated, since spatial context extends both ways. For example, $\kappa_x = 3$ means considering 4 neighbors along the $x$-axis, 2 in each direction.
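For illustration, the following sketch performs the embedding along a single spatial axis of a grid of shape $(T, W, D)$; cropping the border cells is merely one possible border treatment, and the extension to the $y$- and $z$-axes is analogous:
\begin{verbatim}
import numpy as np

def spatial_neighbor_embedding_x(X, kappa_x=2, tau_x=1):
    # X has shape (T, W, D): a 1-D spatial grid over time.
    # Each cell is stacked with (kappa_x - 1) neighbors in
    # each direction, i.e., 2 * kappa_x - 1 cells in total;
    # border cells without full context are cropped here.
    lo = (kappa_x - 1) * tau_x
    hi = X.shape[1] - lo
    offsets = [o * tau_x
               for o in range(-(kappa_x - 1), kappa_x)]
    parts = [X[:, lo + o : hi + o, :] for o in offsets]
    return np.concatenate(parts, axis=-1)
\end{verbatim}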
Spatial-neighbor embedding can be applied either before or after time-delay embedding. As opposed to many spatio-temporal anomaly detection approaches that perform temporal and spatial anomaly detection sequentially (e.g., \cite{wu2010spatio,kut2006spatio,cheng2006multiscale}), the MDI algorithm in combination with the two embeddings allows for a joint optimization. However, it increases the size of the data much more drastically.
\subsection{Divergences}
\label{sec:divergences}
A suitable measure for the deviation of the distribution $p_I$ from $p_\Omega$ is an essential part of the MDI algorithm. The following sub-sections introduce several divergence measures we have investigated and propose a modification to the well-known Kullback-Leibler (KL) divergence that is necessary for comparing divergences of distributions estimated from intervals of different size.
\subsubsection{Cross Entropy}
\label{sec:crossent}
Numerous divergence measures, including those described in the following, have been derived from the domain of \textit{information theory}. Being one of the most basic information theoretic concepts, the \textit{cross entropy} between two distributions given by their probability density functions $p$ and $q$ may already be used as a divergence measure:
\begin{equation}
\divergence{CE}(p,q) \coloneqq \text{H}(p, q) \coloneqq \mathbb{E}_p \left[ - \log q \right] \;.
\label{eq:crossent}
\end{equation}
Cross entropy measures how surprising a sample drawn from $p$ is under the assumption that it had been drawn from $q$. It is hence already eligible as a divergence measure, since this unexpectedness grows the more $p$ and $q$ differ.
Since the MDI algorithm assumes that the data in the intervals $I \in \mathfrak{I}$ and $\Omega$ have been sampled from the distributions corresponding to $p_I$ and $p_\Omega$, respectively, the cross entropy of the two distributions can be approximated empirically from the data:
\begin{equation}
\edivergence{CE}(I, \Omega) = - \frac{1}{\left| I \right|} \sum_{i \in I} \log p_\Omega(\mathfrak{X}_i) \;.
\label{eq:empirical-crossent}
\end{equation}
This approximation has the advantage of requiring an explicit estimate of only one probability density, $p_\Omega$. This is particularly beneficial, since the possibly anomalous intervals $I$ often contain only a few samples, so that an accurate estimation of the probability density $p_I$ is difficult.
\subsubsection{Kullback-Leibler Divergence}
\label{sec:KL}
The \textit{Kullback-Leibler (KL) divergence} is a popular divergence measure that builds upon the fundamental concept of cross entropy. Given two distributions $p$ and $q$, the KL divergence can be defined as follows:
\begin{equation}
\divergence{KL}(p, q) \coloneqq \text{H}(p, q) - \text{H}(p, p)
= \mathbb{E}_p \left[ \log \frac{p}{q} \right] \;.
\label{eq:KL-def}
\end{equation}
As opposed to the pure cross entropy of $p$ and $q$, the KL divergence does not only take into account how well $p$ is explained by $q$, but also the intrinsic entropy $\text{H}(p,p) \eqqcolon \text{H}(p)$ of $p$, so that an interval with a stable distribution would get a higher score than an oscillating one if they had the same cross entropy with the rest of the time-series.
Like cross entropy, the KL divergence can be approximated empirically from the data, but in contrast to cross entropy, this requires estimating the probability densities of both distributions, $p_I$ and $p_\Omega$:
\begin{equation}
\begin{split}
\edivergence{KL}(I,\Omega) &= \frac{1}{\left| I \right|} \cdot \sum_{i \in I} \log \left( \frac{p_I(\mathfrak{X}_i)}{p_\Omega(\mathfrak{X}_i)} \right) \\
&= \frac{1}{\left| I \right|} \cdot \sum_{i \in I} \left( \log p_I(\mathfrak{X}_i) - \log p_\Omega(\mathfrak{X}_i) \right) \;.
\label{eq:KL-IO}
\end{split}
\end{equation}
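Once the densities have been evaluated, e.g., via the cumulative-sum reconstruction from \cref{sec:MDI-cumsum}, both empirical approximations are straightforward to compute; a minimal sketch with hypothetical argument names:
\begin{verbatim}
import numpy as np

def empirical_divergences(p_I_at_I, p_Omega_at_I):
    # Inputs are the densities p_I(X_i) and p_Omega(X_i),
    # evaluated at the samples i in I.
    ce = -np.mean(np.log(p_Omega_at_I))
    kl = np.mean(np.log(p_I_at_I) - np.log(p_Omega_at_I))
    return ce, kl
\end{verbatim}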
When used in combination with the Gaussian distribution model, the KL divergence comes with an additional advantage from a computational point of view, since there is a known closed-form solution for the KL divergence of two Gaussians \cite{duchi2007derivations}:
\begin{multline}
\divergence{KL}\left( p_I, p_\Omega \right) = \frac{1}{2} \biggl(
\left( \mu_\Omega - \mu_I \right)^\top S_{\Omega}^{-1} \left( \mu_\Omega - \mu_I \right) \\
+ \trace \left( S_{\Omega}^{-1} S_I \right)
+ \log \frac{ \left| S_\Omega \right| }{ \left| S_I \right| } - D
\biggr) \;.
\label{eq:KL-Gaussian-explicit}
\end{multline}
This allows evaluating the KL divergence in constant time for a given interval, which reduces the computational complexity of the MDI algorithm using the KL divergence in combination with Gaussian models to the number of possible intervals: $\mathcal{O} \left( N \cdot L \right)$.
Given this explicit solution for the KL divergence and the closed-form solution for the entropy of a $D$-dimensional Gaussian distribution \cite{ahmed1989entropy} with mean vector $\mu$ and covariance matrix $S$, which is given by
\begin{equation}
\text{H}(\mathcal{N}(\mu, S)) = \frac{1}{2} \left( \log \left| S \right| + D + D \cdot \log \left( 2 \pi \right) \right) \;,
\label{eq:gaussian-entropy}
\end{equation}
one can easily derive a closed-form solution for the cross entropy of those two distributions as well:
\begin{equation}
\begin{aligned}
& \text{H}(p_I, p_\Omega) \\
={}& \divergence{KL}(p_I, p_\Omega) + \text{H}(p_I) \\
={}& \frac{1}{2} \biggl(
\trace\left( S_\Omega^{-1} S_I \right)
+ \log\left| S_\Omega \right|
+ D \cdot \log(2 \pi) \\
&{+}\:(\mu_\Omega - \mu_I)^\top S_\Omega^{-1} (\mu_\Omega - \mu_I)
\biggr) \;.
\end{aligned}
\end{equation}
Compared with the KL divergence, this does not assign extremely high scores to small intervals $I$ with a low variance, since the problematic term $- \log \left| S_I \right|$ of the KL divergence is cancelled out by the entropy $\text{H}(p_I)$. This may be an explanation for the evaluation results in \cref{sec:eval}, where cross entropy in combination with Gaussian models is often superior to the KL divergence, although it does not account for intervals of varying entropy.
However, in contrast to the empirical approximation of cross entropy in \eqref{eq:empirical-crossent}, this requires the estimation of $p_I$.
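A minimal NumPy sketch of the two closed-form solutions; using log-determinants for numerical stability is our choice and not necessarily that of the actual implementation:
\begin{verbatim}
import numpy as np

def gaussian_kl(mu_I, S_I, mu_Omega, S_Omega):
    # Closed-form KL(p_I || p_Omega) for Gaussians,
    # Eq. (KL-Gaussian-explicit).
    D = mu_I.shape[0]
    diff = mu_Omega - mu_I
    S_Omega_inv = np.linalg.inv(S_Omega)
    logdet = (np.linalg.slogdet(S_Omega)[1]
              - np.linalg.slogdet(S_I)[1])
    return 0.5 * (diff @ S_Omega_inv @ diff
                  + np.trace(S_Omega_inv @ S_I)
                  + logdet - D)

def gaussian_cross_entropy(mu_I, S_I, mu_Omega, S_Omega):
    # H(p_I, p_Omega) = KL(p_I || p_Omega) + H(p_I),
    # cf. Eq. (gaussian-entropy).
    D = mu_I.shape[0]
    H_I = 0.5 * (np.linalg.slogdet(S_I)[1]
                 + D + D * np.log(2 * np.pi))
    return gaussian_kl(mu_I, S_I, mu_Omega, S_Omega) + H_I
\end{verbatim}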
\subsubsection{Polarity of the KL divergence and its effect on MDI}
\label{sec:KL-polarity}
It is worth noting that the KL divergence is not a metric and, in particular, not symmetric: $\divergence{KL}(p, q) \ne \divergence{KL}(q, p)$. Some authors thus use a symmetric variant \cite{liu2013change}:
\begin{equation}
\divergence{KL-SYM}(p, q) = \frac{1}{2}\ \divergence{KL}(p, q) + \frac{1}{2}\ \divergence{KL}(q, p) \;.
\label{eq:KL-sym}
\end{equation}
This raises the question whether $\divergence{KL}(p_I, p_\Omega)$, $\divergence{KL}(p_\Omega, p_I)$, or the symmetric version $\divergence{KL-SYM}$ should be used for the detection of anomalous intervals. Quantitative experiments with an early prototype of our method \cite{Rodner16:MDI} have shown that neither $\divergence{KL}(p_\Omega, p_I)$ nor $\divergence{KL-SYM}$ provides good performance, as opposed to $\divergence{KL}(p_I, p_\Omega)$.
A visual inspection of the detections resulting from the use of $\divergence{KL}(p_\Omega, p_I)$ with the assumption of Gaussian distributions shows that all the intervals with the highest anomaly scores have the minimum possible size specified by the user and a very low variance. An example is given in \cref{fig:omega-i-bias}. The scores of the top detections in that example are around 100 times higher than those yielded by $\divergence{KL}(p_I, p_\Omega)$.
\begin{figure}[b]
\includegraphics[width=\linewidth]{figure4}
\caption{Example for the bias of $\divergence{KL}(p_\Omega, p_I)$ detections towards small intervals with low empirical variance on a synthetic time-series. The intensity of the fill color of the detected intervals corresponds to the detection scores. The ground-truth anomalous interval is indicated by a red box.}
\label{fig:omega-i-bias}
\end{figure}
This bias of $\divergence{KL}(p_\Omega, p_I)$ towards small low-variance intervals can also be explained theoretically. For the sake of simplicity, consider the special case of a univariate time-series. In this case, the closed-form solution for $\divergence{KL}(p_\Omega, p_I)$ assuming Gaussian distributions given in \eqref{eq:KL-Gaussian-explicit} reduces to
\begin{equation}
\frac{1}{2} \left(
\frac{\sigma_\Omega^2}{\sigma_I^2}
+ \frac{(\mu_I - \mu_\Omega)^2}{\sigma_I^2}
+ \log \sigma_I^2 - \log \sigma_\Omega^2 - 1
\right) \;,
\label{eq:kl-oi-gaussian-univar}
\end{equation}
where $\mu_I$, $\mu_\Omega$ are the mean values and $\sigma_I^2$, $\sigma_\Omega^2$ are the variances of the distributions in the inner and in the outer interval, respectively. It can be seen from \eqref{eq:kl-oi-gaussian-univar} that, due to the division by $\sigma_I^2$, the KL divergence approaches infinity as the variance in the inner interval converges towards $0$. Since the algorithm has to estimate this variance empirically from the given data, it assigns high detection scores to intervals as small as possible, because smaller intervals have a higher chance of having a low empirical variance. The term $\log \sigma_I^2$, though negative for $\sigma_I < 1$, cannot counterbalance this effect, since its absolute value grows much more slowly: $- \log \sigma_I^2 = \log \sigma_I^{-2} < \sigma_I^{-2}$, because $\log x < x$ for every $x > 0$.
In contrast, $\divergence{KL}(p_I, p_\Omega)$, where the roles of $I$ and $\Omega$ are swapped, does not possess this deficiency, since $\sigma_\Omega^2$ is estimated from a much larger portion of data and, thus, is a more robust estimate.
The symmetric version $\divergence{KL-SYM}(p_I, p_\Omega)$ is useless as well, since the scores obtained from $\divergence{KL}(p_I, p_\Omega)$ will just be absorbed by the much higher scores of $\divergence{KL}(p_\Omega, p_I)$.
\subsubsection{Statistical Analysis and Unbiased KL Divergence}
\label{sec:KL-unbiased}
Though $\divergence{KL}(p_I, p_\Omega)$ does not overestimate the anomalousness of low-variance intervals as extremely as $\divergence{KL}(p_\Omega, p_I)$ does, the following theoretical analysis will show that it is not unbiased either. In contrast to the previous section, this bias is not related to the data itself, but to the length of the intervals: shorter intervals systematically get higher scores than longer ones. This harms the quality of interval detections, because anomalies will be split up into multiple contiguous small detections (see \cref{fig:kl-det-albedo-biased} for an example).
\begin{figure*}
\begin{subfigure}{0.49\linewidth}%
\caption{$\divergence{KL}(p_I, p_\Omega)$}%
\includegraphics[width=\linewidth]{figure5a}%
\label{fig:kl-det-albedo-biased}%
\end{subfigure}%
\hfill%
\begin{subfigure}{0.49\linewidth}%
\caption{$\divergence{U-KL}(p_I, p_\Omega)$}%
\includegraphics[width=\linewidth]{figure5b}%
\label{fig:kl-det-albedo-unbiased}%
\end{subfigure}%
\caption{(\subref{fig:kl-det-albedo-biased}) Top 10 detections obtained from the KL divergence on a real time-series and (\subref{fig:kl-det-albedo-unbiased}) top 3 detections obtained from the unbiased KL divergence on the same time-series. This example illustrates the phenomenon of several contiguous minimum-size detections when using the original KL divergence (note the thin lines between the single detections in the left plot). The MDI algorithm has been applied with a time-delay embedding of $\kappa=3, \tau=1$ and the size of the intervals to analyze has been limited to be between 25 and 250 samples.}
\label{fig:kl-unbiased-comparison}
\end{figure*}
Recall that $\mathfrak{I}_{m,m}^n$ denotes the set of all intervals of length $m$ in a time-series with $n$ time-steps. Furthermore, let $\vec{0}^d, d \in \mathbb{N},$ denote a $d$-dimensional vector with all coefficients being 0 and $\mathbb{I}_d$ the identity matrix of dimensionality $d$.
When applying the MDI algorithm to a time-series $(x_t)_{t=1}^n, x_t \sim \mathcal{N}(\vec{0}^d, \mathbb{I}_d)$, sampled independently and identically, i.e., plain white noise, an ideal divergence is supposed to yield constant average scores for all $\mathfrak{I}_{m,m}, m = a, \dots, b$ (for some user-defined limits $a,b$), i.e., scores independent of the length of the intervals.
For simplicity, we will first analyze the distribution of those scores using the MDI algorithm with Gaussian distributions with the simple, but for this data perfectly valid assumption of identity covariance matrices. In this case, the KL divergence $\divergence{KL}(p_I, p_\Omega)$ of two Gaussian distributions with the mean vectors $\mu_I, \mu_\Omega \in \mathbb{R}^d$ in some intervals $I \in \mathfrak{I}_m, \Omega = [1,n] \setminus I$ for some arbitrary $m$ is given by $\frac{1}{2} \left\| \mu_\Omega - \mu_I \right\|^2$. Moreover, since all samples in the time-series are normally distributed, so are their empirical means:
\begin{align*}
\mu_I &= \frac{1}{m} \sum_{t \in I} x_t \sim \mathcal{N}(\vec{0}^d, m^{-1} \cdot \mathbb{I}_d) \;, \\[3ex]
\mu_\Omega &= \frac{1}{n - m} \sum_{t \notin I} x_t \sim \mathcal{N}(\vec{0}^d, (n-m)^{-1} \cdot \mathbb{I}_d) \;.
\end{align*}
Thus, all dimensions of the mean vectors are independent and identically normally distributed variables. Their difference is, hence, normally distributed too:
\begin{equation*}
\mu_\Omega - \mu_I \sim \mathcal{N} \left( \vec{0}^d, \left(\frac{1}{m} + \frac{1}{n-m}\right) \cdot \mathbb{I}_d \right) \;.
\end{equation*}
Thus, $(\mu_\Omega - \mu_I) / \sqrt{\frac{1}{m} + \frac{1}{n-m}} \sim \mathcal{N}(\vec{0}^d, \mathbb{I}_d)$ is a vector of independent standard normal random variables and
\begin{equation}
\begin{split}
& \divergence{KL}(p_I, p_\Omega) \\
={}& \frac{1}{2} \left(\frac{1}{m} + \frac{1}{n-m}\right)
\sum_{i=1}^{d} \left( \frac{(\mu_\Omega - \mu_I)_i}{\sqrt{\frac{1}{m} + \frac{1}{n-m}}} \right)^2 \\
\sim{}& \frac{1}{2} \left(\frac{1}{m} + \frac{1}{n-m}\right) \cdot \chi_d^2
\label{eq:kl-distribution}
\end{split}
\end{equation}
is the sum of the squares of $d$ independent normal variables and, hence, distributed according to the chi-squared distribution with $d$ degrees of freedom, scaled by half the variance of the variables. The mean of a $\chi_d^2$-distributed random variable is $d$ and the mean of the $\divergence{KL}(p_I, p_\Omega)$ scores for all intervals in $\mathfrak{I}_m$ is, accordingly, $\frac{d}{2} \left( \frac{1}{m} + \frac{1}{n-m} \right)$, which is inversely proportional to the length of the interval $m$. Thus, the KL divergence is systematically biased towards smaller intervals.
When the length $n$ of the time-series is very large, the asymptotic scale of the chi-squared distribution is $\lim\limits_{n \rightarrow \infty} \frac{1}{2} \left(\frac{1}{m} + \frac{1}{n-m}\right) = \frac{1}{2m}$ and the estimated parameters $\mu_\Omega, S_\Omega$ of the outer distribution converge towards the parameters of the true distribution of the data. Thus, if the restriction of the Gaussian model to identity covariance matrices is relaxed to a global, shared covariance matrix $S$, the above findings also apply to the case of long time-series with correlated variables and, hence, also when time-delay embedding is applied: in this case, the KL divergence reduces to $\frac{1}{2} (\mu_I - \mu_\Omega)^\top S^{-1} (\mu_I - \mu_\Omega)$, and the subtraction of the true mean $\mu_\Omega$ followed by the multiplication with the inverse covariance matrix can be considered a normalization of the time-series, transforming it into standard normal variables with uncorrelated dimensions.
For the general case of two unrestricted Gaussian distributions, the test statistic
\begin{multline}
\lambda \coloneqq\
dm (\log(m) - 1)
+ m (\mu_I - \mu_\Omega)^\top S_\Omega^{-1} (\mu_I - \mu_\Omega) \\
+ \trace\left( m S_I S_\Omega^{-1} \right)
- m \cdot \log \left| m S_I S_\Omega^{-1} \right|
\label{eq:test-statistic}
\end{multline}
has been shown to be asymptotically distributed according to a chi-squared distribution with $d + \frac{d(d+1)}{2}$ degrees of freedom \cite{anderson1962mvstat}. This test statistic is often used for testing the hypothesis that a given set of samples has been drawn from a Gaussian distribution with known parameters \cite{kanungo1995mvhypot}. In the scenario of the MDI algorithm, the set of samples is the data in the inner interval $I$ and the parameters of the distribution to test that data against are those estimated from the data in the outer interval $\Omega$. The null hypothesis of the test would be that the data in $I$ has been sampled from the same distribution as the data in $\Omega$. The test statistic may then be used as a measure for how well the data in the interval $I$ fit the model established based on the data in the remainder of the time-series.
After some elementary reformulations, the relationship between this test statistic $\lambda$ and the KL divergence becomes obvious: $\lambda = 2m \cdot \divergence{KL}(p_I, p_\Omega)$. This is exactly the normalization of the KL divergence by the scale factor identified in \eqref{eq:kl-distribution}. Thus, we define an {\em unbiased KL divergence} as follows:
\begin{equation}
\divergence{U-KL}(p_I, p_\Omega) \coloneqq
2 \cdot \left| I \right| \cdot \divergence{KL}(p_I, p_\Omega) \;.
\label{eq:kl-unbiased}
\end{equation}
The distribution of this divergence applied to asymptotically long time-series depends only on the number $d$ of attributes and no longer on the length $m$ of the interval. Nevertheless, this correction may also be useful for time-series of finite length. An example of actual detections resulting from the use of the unbiased KL divergence compared with the original one can be seen in \cref{fig:kl-unbiased-comparison}.
A further advantage of knowing the distribution of the scores is that this knowledge can also be used for normalizing the scores with respect to the number of attributes, in order to make them comparable across time-series of varying dimensionality.
Moreover, it allows the selection of a threshold for distinguishing between anomalous and nominal intervals based on a chosen significance level. This may be preferred in some applications over searching for a fixed number of top $k$ detections.
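A minimal sketch of the unbiased KL divergence \eqref{eq:kl-unbiased} and such a significance-based threshold; it assumes SciPy for the chi-squared quantile function, and the default value of \texttt{alpha} is a hypothetical choice:
\begin{verbatim}
from scipy.stats import chi2

def unbiased_kl(kl, interval_length):
    # D_U-KL = 2 * |I| * D_KL, Eq. (kl-unbiased).
    return 2.0 * interval_length * kl

def score_threshold(d, alpha=0.01):
    # Threshold for a chosen significance level alpha,
    # based on the asymptotic chi-squared distribution
    # with d + d*(d+1)/2 degrees of freedom.
    dof = d + d * (d + 1) // 2
    return chi2.ppf(1.0 - alpha, dof)
\end{verbatim}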
Interestingly, Jiang et al.\ \cite{jiang2015general} have derived an equivalent unbiased KL divergence, $m \cdot \divergence{KL}(p_I, p_\Omega)$, which differs from ours only by a constant factor, from a different starting point based on the assumption of a Poisson distribution and the inverse log-likelihood of the interval as anomaly score.
\begin{figure*}
\begin{subfigure}{0.49\linewidth}%
\caption{amplitude\_change\_multvar}%
\includegraphics[width=\linewidth]{figure6a}%
\end{subfigure}%
\hfill%
\begin{subfigure}{0.49\linewidth}%
\caption{frequency\_change\_multvar}%
\includegraphics[width=\linewidth]{figure6b}%
\end{subfigure}%
\caption{Two exemplary synthetic time-series along with the corresponding Hotelling's $T^2$ scores and their gradients. The dashed black line indicates the mean of the scores and the dashed blue line marks a threshold that is 1.5 standard deviations above the mean. Time-delay embedding with $\kappa=3, \tau=1$ was applied before computing the scores.}
\label{fig:proposal-examples}
\end{figure*}
\subsubsection{Jensen-Shannon Divergence}
\label{sec:js-divergence}
A divergence measure that does not suffer from asymmetry is the \textit{Jensen-Shannon (JS) divergence}, which builds upon the KL divergence:
\begin{equation}
\divergence{JS}(p, q)
= \frac{1}{2}\ \divergence{KL} \left( p, \frac{p+q}{2} \right)
+ \frac{1}{2}\ \divergence{KL} \left( q, \frac{p+q}{2} \right) \;,
\label{eq:js-def}
\end{equation}
where $p$ and $q$ are probability density functions and $\frac{p+q}{2}$ is a mixture distribution: a sample is drawn from either $p$ or $q$ with equal probability. A parametrized version of the JS divergence accounting for unequal prior probabilities exists as well, but is not covered here.
The JS divergence possesses some desirable properties which the KL divergence does not have: most notably, it is symmetric and bounded between $0$ and $\log 2$ \cite{lin1991jsdivergence}, so that anomaly scores cannot grow arbitrarily large.
Like the KL divergence, the JS divergence can be approximated empirically from the data in the intervals $I$ and $\Omega$.
However, there is no closed-form solution for the JS divergence under the assumption of Gaussian distributions (as opposed to the KL divergence), since $\frac{p_I + p_\Omega}{2}$ would then be a Gaussian Mixture Model (GMM). Though several approximations of the KL divergence between GMMs have been proposed, they are either computationally expensive or abandon essential properties such as positivity \cite{hershey2007approximating}. This lack of a closed-form solution is likely the reason why the JS divergence was clearly outperformed by the KL divergence in our quantitative experiments in \cref{sec:eval} when the Gaussian model is used, despite its desirable theoretical properties.
\subsection{Interval Proposals for Large-Scale Data}
\label{sec:proposals}
Exploiting cumulative sums and a closed-form solution for the KL divergence, the asymptotic time complexity of the MDI algorithm with a Gaussian distribution model could already be reduced to be linear in the number of intervals (see \cref{sec:MDI-cumsum}). If the maximum length of an anomalous interval is independent of the number of samples $N$, the run-time is also linear in $N$. However, due to the high constant cost of estimating probability densities and computing the divergence, the algorithm is still too slow for processing large-scale data sets with millions of samples.
Since anomalies are rare by definition, many of the intervals analyzed by a full scan will be uninteresting and irrelevant for the list of the top anomalies detected by the algorithm.
In order to focus on the analysis of non-trivial intervals, we employ a simple proposal technique that selects interesting intervals based on point-wise anomaly scores.
Simply grouping contiguous detections of point-wise anomaly detection methods in order to retrieve anomalous intervals is insufficient, because it will most likely lead to split-up detections. However, it is not unreasonable to assume that many samples inside of an anomalous interval will also have a high point-wise score, especially after applying contextual embedding. \Cref{fig:proposal-examples}, for example, shows two exemplary time-series from the synthetic data set introduced in \cref{sec:eval-dataset} along with the point-wise scores retrieved by applying Hotelling's $T^2$ method \cite{macgregor1995statistical} after time-delay embedding. Note that even in the case of the very subtle amplitude-change anomaly, the two highest Hotelling's $T^2$ scores are at the beginning and the end of the anomaly. The idea is to apply a simple threshold operation to the point-wise scores to extract interesting points, and then to propose all intervals whose first and last samples are among these points and which conform to the size constraints for detailed scoring by a divergence measure.
This way, the probability density estimation and the computation of the divergence have to be performed for a comparatively small set of interesting intervals only and not for all possible intervals in the time-series. The interval proposal method is not required to have a low false-positive rate, though, because the divergence measure is responsible for the actual scoring. Instead, it has to act as a high-recall system so that truly anomalous intervals are not excluded from the actual analysis.
Since we are only interested in the beginning and the end of the anomalies, the point-wise scores are not used directly; instead, the centralized gradient filter $\left[-1 \quad 0 \quad 1 \right]$ is applied to the scores to suppress them in areas of constant anomalousness and to emphasize changes of the anomaly scores.
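The following sketch outlines this proposal generation for a purely temporal series of point-wise scores. The exact filtering and grouping details of the actual implementation may differ; in particular, taking the absolute gradient response is our simplification:
\begin{verbatim}
import numpy as np

def propose_intervals(scores, min_len, max_len,
                      vartheta=1.5):
    # Gradient filter [-1, 0, 1] suppresses regions of
    # constant point-wise scores; we threshold the
    # absolute response at mean + vartheta * std.
    g = np.abs(np.convolve(scores, [-1, 0, 1],
                           mode='same'))
    points = np.flatnonzero(g > g.mean()
                            + vartheta * g.std())
    # Propose every interval whose first and last samples
    # are interesting points and which satisfies the size
    # constraints; intervals are half-open (a, b + 1).
    return [(a, b + 1) for a in points for b in points
            if min_len <= b - a + 1 <= max_len]
\end{verbatim}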
The evaluation in \cref{sec:eval-proposals} will show that the interval proposal technique can speed up the MDI algorithm significantly without impairing its performance.
\section{Experimental Evaluation}
\label{sec:eval}
In this section, we evaluate our MDI algorithm on a quantitative basis using synthetic data and compare it with other approaches well-known in the field of anomaly detection.
\subsection{Data Set}
\label{sec:eval-dataset}
In contrast to many other established machine learning tasks, there is no widely used standard benchmark for the evaluation of anomaly detection algorithms, neither for the detection of anomalous intervals nor even for the very common task of point-wise anomaly detection. This is mainly because the notion of an ``anomaly'' is not well-defined and varies between applications and even from analyst to analyst. Moreover, anomalies are, by definition, rare, which makes the collection of large-scale data sets difficult. However, even if a large amount of data were available, it would be nearly impossible to annotate it in an objective way that everyone would agree with. Yet accurate and complete ground-truth information is mandatory for a quantitative evaluation and comparison of machine learning techniques. Therefore, we use a synthetic data set for assessing the performance of different variants of the MDI algorithm.
All time-series in that data set have been sampled from a Gaussian process $\mathcal{GP}(m, K)$ with a squared-exponential covariance function $K(x_t,x_{t'}) = \left( 2 \pi \ell^2 \right)^{-\sfrac{1}{2}} \cdot \exp\left( - \frac{\left\| x_t - x_{t'} \right\|^2}{2\ell^2} \right) + \sigma^2 \cdot \delta(t,t')$ and zero mean function $m(x) = 0$. The \textit{length scale} of the GP has been set to $\ell^2 = 0.01$ and the noise parameter to $\sigma^2 = 0.001$. $\delta(t,t')$ denotes Kronecker's delta. Different types of anomalies have then been injected into these time-series, with a size varying between 5\% and 20\% of the length of the time-series:
\\
\noindent
\textbf{meanshift:}
A random, but constant value $\gamma \in [3,4]$ is added to or subtracted from the anomalous samples.
\noindent
\textbf{meanshift\_hard:}
A random, but constant value $\gamma \in [0.5,1]$ is added to or subtracted from the anomalous samples.
\noindent
\textbf{meanshift5:}
Five \texttt{meanshift} anomalies are inserted into the time-series.
\noindent
\textbf{meanshift5\_hard:}
Five \texttt{meanshift\_hard} anomalies are inserted into the time-series.
\noindent
\textbf{amplitude\_change:}
The time-series is multiplied with a Gaussian window with standard deviation $\sfrac{L}{4}$ whose mean is the centre of the anomalous interval. Here, $L$ is the length of the anomalous interval and the amplitude of the Gaussian window is clipped at $2.0$. This modified time-series is added to the original one.
\noindent
\textbf{frequency\_change:}
The time-series is sampled from a non-stationary GP whose covariance function $K(x_t,x_{t'}) = \left( \ell^2(t) \cdot \ell^2(t') \right)^{\sfrac{1}{4}} \cdot \left( \frac{\ell^2(t) + \ell^2(t')}{2} \right)^{-\sfrac{1}{2}} \cdot \exp\left(- \frac{\left\| x_t - x_{t'} \right\|^2}{\ell^2(t) + \ell^2(t')} \right) + \sigma^2 \cdot \delta(t,t')$ uses a reduced length scale $\ell^2(t) = \begin{cases} 10^{-2} & \text{if}\ t \notin [a,b), \\ 10^{-4} & \text{if}\ t \in [a,b) \end{cases}$ during the anomalous interval $I = [a,b)$, so that correlations between samples are reduced, which leads to more frequent oscillations \cite{paciorek2004nonstationary}.
\noindent
\textbf{mixed:}
The values in the anomalous interval are replaced with the values of another function sampled from the Gaussian process. 10 time-steps at the borders of the anomaly are interpolated between the two functions for a smooth transition. This rather difficult test case is supposed to reflect the concept of anomalies as being ``generated by a different mechanism'' (cf. \cref{sec:MDI-idea}).
\\
\begin{sloppypar}
The above test cases are all univariate, but there are similar multivariate scenarios \texttt{meanshift\_multvar}, \texttt{amplitude\_change\_multvar}, \texttt{frequency\_change\_multvar}, and \texttt{mixed\_multvar} with 5-dimensional time-series as well. In the first three of these test cases, the corresponding anomaly is injected into only one of the dimensions, which is also a property of many real time-series, while in the \texttt{mixed\_multvar} scenario all attributes are replaced with those of the other time-series.
\end{sloppypar}
This results in a synthetic test data set with 11 test cases, a total of 1100 time-series and an overall number of 1900 anomalies.
Examples for all test cases are shown in \cref{fig:synthetic-examples}.
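To make the construction of the data set concrete, the following sketch generates a univariate sample of this kind. We interpret the squared-exponential kernel over normalized time in $[0,1]$ and omit its normalization constant, which only scales the amplitude; both are our simplifications, as is the \texttt{meanshift} injection routine:
\begin{verbatim}
import numpy as np

def sample_gp(n=250, length_scale2=0.01,
              noise=1e-3, seed=0):
    # Sample a function from a GP with a
    # squared-exponential kernel over t in [0, 1].
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n)
    K = np.exp(-(t[:, None] - t[None, :]) ** 2
               / (2 * length_scale2))
    return rng.multivariate_normal(np.zeros(n),
                                   K + noise * np.eye(n))

def inject_meanshift(x, seed=1):
    # 'meanshift' test case: add a constant gamma in
    # [3, 4] to a random interval covering 5%-20%
    # of the series.
    rng = np.random.default_rng(seed)
    n = len(x)
    m = int(rng.integers(int(0.05 * n),
                         int(0.20 * n) + 1))
    a = int(rng.integers(0, n - m))
    y = x.copy()
    y[a:a + m] += (rng.uniform(3.0, 4.0)
                   * rng.choice([-1.0, 1.0]))
    return y, (a, a + m)
\end{verbatim}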
\begin{figure}[tb]
\includegraphics[width=\linewidth]{figure7}
\caption{Examples from the synthetic test data set.}
\label{fig:synthetic-examples}
\end{figure}
\subsection{Performance Comparison}
\label{sec:eval-performance}
\begin{figure*}
\includegraphics[width=\linewidth]{figure8}
\caption{Performance comparison of different variants of the MDI algorithm and the baselines on the synthetic data set.}
\label{fig:performance-comparison}
\end{figure*}
\begin{figure*}
\begin{minipage}{0.54\linewidth}
\includegraphics[width=\linewidth]{figure9}
\captionof{figure}{Effect of time-delay embedding with $\kappa=6, \tau=2$ on the performance of the MDI algorithm and the baselines on the synthetic data set.}
\label{fig:td-effect}
\end{minipage}
\hfill
\begin{minipage}{0.41\linewidth}
\includegraphics[width=\linewidth]{figure10}
\captionof{figure}{Performance of the original and the unbiased KL divergence on test cases with multiple or subtle anomalies.}
\label{fig:performance-unbiased}
\end{minipage}
\end{figure*}
Since the detection of anomalous regions in spatio-temporal data is rather a \textit{detection} than a \textit{classification} task, we do not use the \textit{Area under the ROC Curve (AUC)} as performance criterion like many works on point-wise anomaly detection do, but quantify the performance in terms of \textit{Average Precision (AP)} with an Intersection over Union (IoU) criterion: a detection is counted as a true positive if it overlaps a ground-truth interval with an IoU of at least 50\%.
Hotelling's $T^2$ \cite{macgregor1995statistical} and Robust Kernel Density Estimation (RKDE) \cite{kim2012rkde} are used as baselines for the comparison. For RKDE, a Gaussian kernel with a standard deviation of $1.0$ and the Hampel loss function are used. We obtain interval detections from those point-wise baselines by grouping contiguous detections based on multiple thresholds and applying non-maximum suppression afterwards. The overlap threshold for non-maximum suppression is set to 0 in all experiments to obtain non-overlapping intervals only. To be fair, MDI also has to compete with the baselines on the task they have been designed for, i.e., point-wise anomaly detection, measured by AUC. The interval detections can easily be converted to point-wise detections by taking the score of the interval a sample belongs to as the score for that sample.
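For clarity, the IoU of two intervals used by this criterion can be computed as follows; a minimal sketch for half-open intervals:
\begin{verbatim}
def interval_iou(det, gt):
    # Intersection over Union of two half-open
    # intervals (start, end); a detection counts as a
    # true positive if the IoU is at least 0.5.
    inter = max(0, min(det[1], gt[1]) - max(det[0], gt[0]))
    union = (det[1] - det[0]) + (gt[1] - gt[0]) - inter
    return inter / union
\end{verbatim}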
\Cref{fig:performance-comparison} shows that the performance of the MDI algorithm using the Gaussian model is clearly superior to the baselines on the entire synthetic data set in terms of Mean AP, and even on the task of point-wise anomaly detection measured by AUC. The $\divergence{KL}(p_I, p_\Omega)$ polarity of the KL divergence has been used in all experiments following the argumentation in \cref{sec:KL-polarity}. In addition, the performance of the unbiased variant $\divergence{U-KL}(p_I, p_\Omega)$ is reported for the Gaussian model. The parameters of time-delay embedding have been fixed to $\kappa=6, \tau=2$, which we have empirically found to be suitable for this data set. For KDE, we used a Gaussian kernel with bandwidth $1.0$.
While MDI KDE is already superior to the baselines, it is significantly outperformed by MDI Gaussian, which improves on the best baseline by 286\%. This discrepancy between the MDI algorithm using KDE and using Gaussian models is mainly due to time-delay embedding, which is particularly useful for the Gaussian model, because it takes correlations of the variables into account, as opposed to KDE. As can be seen in \cref{fig:td-effect}, the Gaussian model would be worse than KDE and on par with the baselines without time-delay embedding.
Considering the Mean AP on this synthetic data set, the unbiased KL divergence did not perform better than the original KL divergence. However, on the test cases \texttt{meanshift5}, \texttt{meanshift5\_hard}, and \texttt{meanshift\_hard} it achieved an AP twice as high as that of $\divergence{KL}(p_I, p_\Omega)$, which was poor on those data sets (see \cref{fig:performance-unbiased}). Since real data sets are also likely to contain multiple anomalies, we expect $\divergence{U-KL}$ to be a more reliable divergence measure in practice.
Another interesting result is that cross entropy was the best performing divergence measure. This shows the advantage of reducing the impact of the inner distribution $p_I$, which is estimated from very few samples. However, it may perform less reliably on real data whose entropy varies more widely over time than in this synthetic benchmark.
\begin{figure*}
\begin{subfigure}{0.49\linewidth}%
\centering
\caption{Proposal Recall}%
\includegraphics[width=0.9\linewidth]{figure11a}%
\label{fig:proposal-recall}%
\end{subfigure}%
\hfill%
\begin{subfigure}{0.49\linewidth}%
\centering
\caption{Effect of Interval Proposals}%
\includegraphics[width=0.9\linewidth]{figure11b}%
\label{fig:proposal-effect}%
\end{subfigure}%
\caption{(\subref{fig:proposal-recall}) Recall of interval proposals without time-delay embedding and with $\kappa=6, \tau=2$ on the synthetic data set for different proposal thresholds. (\subref{fig:proposal-effect}) Effect of interval proposals on the Mean Average Precision of different variants of the MDI algorithm on the synthetic data set.}
\label{fig:eval-proposals}
\end{figure*}
The Jensen-Shannon divergence performed best for the KDE method, but worst for the Gaussian model. This can be explained by the lack of a closed-form solution for the JS divergence, so that it has to be approximated from the data, while the KL divergence of two Gaussians can be computed exactly. This advantage of the combination of the KL divergence with Gaussian models is, thus, not only beneficial with respect to the run-time of the algorithm, but also with respect to its detection performance.
The differences between the results in \cref{fig:performance-comparison} are significant at a level of 5\% according to a permutation test.
\subsection{Interval Proposals}
\label{sec:eval-proposals}
In order not to sacrifice detection performance for the sake of speed, the interval proposal method described in \cref{sec:proposals} has to act as a high-recall system proposing the majority of anomalous intervals. This can be controlled to some degree by adjusting the threshold $\theta = \mu + \vartheta \cdot \sigma $ applied to the point-wise scores, where $\mu$ and $\sigma$ are the empirical mean and standard deviation of the point-wise scores, respectively. To find a suitable value for the hyper-parameter $\vartheta$, we have evaluated the recall of the proposed intervals for different values of $\vartheta \in [0,4]$ using the usual IoU measure for distinguishing between true and false positive detections. The results in \cref{fig:proposal-recall} show that time-delay embedding is of great benefit in this scenario too. Based on these results, we selected $\vartheta = 1.5$ for subsequent experiments, which still provides a recall of 97\% while already reducing the number of intervals to be analyzed in detail significantly.
The processing of all the 1100 time-series from the synthetic data set, which took 216 seconds on an Intel Core\texttrademark\ i7-3930K at 3.20 GHz with eight virtual cores using the Gaussian model and the unbiased KL divergence after the usual time-delay embedding with $\kappa=6, \tau=2$, could be reduced to 5.2 seconds using interval proposals. This corresponds to a speed-up by a factor of more than 40.
Though impressive, the speed-up was expected. What was not expected, however, was that the use of interval proposals also increased the detection performance of the entire algorithm by up to 125\%, depending on the divergence. The exact average precision achieved by the algorithm on the synthetic data set with a full scan over all intervals and with interval proposals is shown in \cref{fig:proposal-effect}. This improvement is also reflected by the AUC scores not reported here and is, hence, not specific to the evaluation criterion. A possible explanation for this phenomenon is that some intervals that are uninteresting but distracting for the MDI algorithm are not even proposed for detailed analysis.
\section{Application Examples on Real Data}
\label{sec:applications}
The following application examples on real data from various different domains are intended to complement the quantitative results presented above with a demonstration of the feasibility of our approach for real problems.
\subsection{Detection of North Sea Storms}
\label{sec:exp-storms}
To demonstrate the efficiency of the MDI algorithm on long time-series, we apply it to storm detection in climate data: The coastDat-1 hindcast \cite{coastDat} is a reconstruction of various marine climate variables measured at several locations over the southern North Sea between 51\textdegree\ N, 3\textdegree\ W and 56\textdegree\ N, 10.5\textdegree\ E with an hourly resolution over the 50 years from 1958 to 2007, i.e., approximately 450,000 time steps. Since measurements are not available at locations over land, we select the subset of the data between 53.9\textdegree\ N, 0\textdegree\ E and 56\textdegree\ N, 7.7\textdegree\ E, which results in a regular spatial grid of size $78 \times 43$ located entirely over the sea (cf. \cref{fig:coastdat-map}). Because cyclones and other storms usually have a large spatial extent and move over the region covered by the measurements, we reduce the spatio-temporal data to purely temporal data in this experiment by averaging over all spatial locations. The variables used for this experiment are significant wave height, mean wave period and wind speed.
We apply the MDI algorithm to that data set using the Gaussian model and the unbiased KL divergence. Since North Sea storms lasting longer than 3 days are usually considered two independent storms, the maximum length of the possible intervals is set to 72 hours, while the minimum length is set to 12 hours. The parameters of time-delay embedding are fixed to $\kappa=3, \tau=1$.
28 out of the top 50 and 7 out of the top 10 detections returned by the algorithm can be associated with well-known historic storms. The highest scoring detection is the so-called ``Hamburg-Flut'', which flooded one fifth of Hamburg in February 1962 and caused 340 deaths. Also among the top 5 is the ``North Frisian Flood'', a severe surge in November 1981 that led to several dike breaches in Denmark.
A visual inspection of the remaining 22 detections revealed that almost all of them are North Sea storms as well. Only 4 of them are not storms, but the opposite: they span times of extremely calm sea conditions with nearly no wind and very low waves, which is an anomaly as well.
A list of the top 50 detections can be found in \cref{app:coastdat-detections} and animated heatmaps of the three variables during the detected time-frames are shown on our web page: \url{http://www.inf-cv.uni-jena.de/libmaxdiv_applications.html}.
Processing this comparatively long time-series using 8 parallel threads took 27 seconds. This time can be reduced further to half a second by using interval proposals without changing the top 10 detections significantly. This supports the assumption that the novel proposal method performs well not only on synthetic, but also on real data.
\subsection{Spatio-Temporal Detection of Low Pressure Areas}
\label{sec:exp-slp}
As a genuine spatio-temporal use-case, we have also applied the MDI algorithm to a time-series with daily sea-level pressure (SLP) measurements over the North Atlantic Sea with a much wider spatial coverage than in the previous experiment. For this purpose, we selected a subset of the NCEP/NCAR reanalysis \cite{kalnay1996ncep} covering the years from 1957 to 2011. This results in a time-series of about 20,000 days. The spatial resolution of 2.5\textdegree\ is rather coarse and the locations are organized in a regular grid of size $28 \times 17$ covering the area between 25\textdegree\ N, 52.5\textdegree\ W and 65\textdegree\ N, 15\textdegree\ E.
Again, the MDI algorithm with the Gaussian model and the unbiased KL divergence is applied to this time-series to detect low-pressure fields, which are related to storms. Regarding the time dimension, we apply time-delay embedding with $\kappa=3, \tau=1$ and search for intervals of size between 3 and 10 days. Concerning space, we do not apply any embedding for now and set a minimum size of $7.5\degree \times 7.5\degree$, but no maximum. 7 out of the top 20 detections could be associated with known historic storms.
A visual inspection of the results shows that the MDI algorithm is not only capable of detecting occurrences of anomalous low-pressure fields over time, but also their spatial location. This can be seen in the animations on our web page: \url{http://www.inf-cv.uni-jena.de/libmaxdiv_applications.html}. A few snapshots and a list of detections are also shown in \cref{app:slp-detections}.
It is not necessary to apply spatial-neighbor embedding in this scenario, since we are not interested in spatial outliers, but only in the location of temporal outliers. We have also experimented with applying spatial-neighbor embedding and it led to the detection of some high-pressure fields surrounded by low-pressure fields. Since high-pressure fields are both larger and more common in this time-series, they are not detected as temporal anomalies.
Since we did not set a maximum spatial extent of anomalous regions, the algorithm took 4 hours to process this spatio-temporal time-series. This could, however, be reduced to 22 seconds using our interval proposal technique, with only a minor loss of localization accuracy.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figure12}
\caption{Map of the area covered by the coastDat dataset. The highlighted box denotes the area from which data have been aggregated for our experiment.}
\label{fig:coastdat-map}
\end{figure}
\subsection{Stylistic Anomalies in Texts of Natural Language}
\label{sec:exp-nlp}
By employing a transformation from the domain of natural language to real-valued features, the MDI algorithm can also be applied to written texts. One important task in Natural Language Processing (NLP) is, for example, the identification of paragraphs written in a different language than the remainder of the document. Such a segmentation can be used as a pre-processing step for the actual, language-specific processing.
In order to simulate such a scenario, we use a subset of the \textit{Europarl} corpus \cite{koehn2005europarl}, which is a sentence-aligned parallel corpus extracted from the proceedings of the European Parliament in 21 different languages. The 33,334 English sentences from the \textit{COMTRANS} subset of \textit{Europarl}, which is bundled with the Natural Language Toolkit (NLTK) for Python, serve as a basis and 5 random sequences of between 10 and 50 sentences are replaced by their German counterparts to create a semantically coherent mixed-language text.
We employ a simple transformation of sentences to feature vectors: Since the distribution of letter frequencies varies across languages, each sentence is represented by a 27-dimensional vector whose first element is the average word length in the sentence and the remaining 26 components are the absolute frequencies of the letters ``a'' to ``z'' (case-insensitive). German umlauts are ignored since they would make the identification of German sentences too easy.
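A minimal sketch of this feature transformation; the function name is ours and tokenization details in the actual experiments may differ:
\begin{verbatim}
from collections import Counter
import numpy as np

def sentence_features(sentence):
    # 27-dim feature: average word length followed by
    # the absolute frequencies of 'a' to 'z'
    # (case-insensitive). Umlauts and other non-ASCII
    # letters fall outside 'a'..'z' and are hence
    # ignored automatically.
    words = sentence.split()
    avg_len = (np.mean([len(w) for w in words])
               if words else 0.0)
    counts = Counter(c for c in sentence.lower()
                     if 'a' <= c <= 'z')
    freqs = [counts[chr(o)]
             for o in range(ord('a'), ord('z') + 1)]
    return np.array([avg_len] + freqs, dtype=float)
\end{verbatim}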
The MDI algorithm using the unbiased KL divergence is then applied in order to search for anomalous sequences of between 10 and 50 sentences in the mixed-language text after sentence-wise transformation to the feature space. Because the number of features is quite high in relation to the number of samples in an interval, we use a global covariance matrix shared among the Gaussian models and do not apply time-delay embedding.
The top 5 detections returned by the algorithm correspond to the 5 German paragraphs that have been injected into the English text. The localization is quite accurate, though not perfect: on average, the boundaries of the detected paragraphs are off by 1.4 sentences from the ground-truth. The next 5 detections are mainly tables and enumerations, which are also an anomaly compared with the usual dialog style of the parliament proceedings.
For this scenario, we had designed the features specifically for the task of language identification. To see what else would be possible with a smaller bias towards a specific application, we have also applied the algorithm to the \nth{1} Book of Moses (Genesis) in the King James Version of the bible, where we use \texttt{word2vec} \cite{mikolov2013efficient} for word-wise feature embeddings. \texttt{word2vec} learns real-valued vector representations of words in such a way that the representations of words that occur more often in similar contexts have a smaller Euclidean distance. The embeddings used for this experiment have been learned from the Brown corpus using the continuous skip-gram model and we have chosen a dimensionality of 50 for the vector space, which is rather low for \texttt{word2vec} models, but still tractable for the Gaussian probability density model. Words which have not been seen by the model during training are treated as missing values.
The top 10 detections of sequences of between 50 and 500 words according to the unbiased KL divergence are provided in \cref{app:genesis-detections}. The first five of those are, without exception, genealogies, which can indeed be considered anomalies, because they are long lists of names of fathers, sons, and wives, connected by repeating phrases. The \nth{6} detection is a dialog between God and Abraham, in which Abraham bargains with God and tries to convince him not to destroy the town of Sodom. This episode is another example of stylistic anomalies, since the dialog is a concatenation of very similar question-answer pairs with only slight modifications.
Due to the rather wide limits on the possible size of anomalous intervals, the analysis of the entire book Genesis, a sequence of 44,764 words, took a total of 9 minutes, where we have not yet used interval proposals.
\subsection{Anomalies in Videos}
\label{sec:exp-video}
The detection of unusual events in videos is another important task, e.g., in the domain of video surveillance or industrial control systems. Though videos are already represented as multivariate spatio-temporal time-series with usually 3 variables (RGB channels), a semantically more meaningful representation can be obtained by extracting features from a Convolutional Neural Network (CNN).
In this experiment, we use a video of a traffic scene from the ViSOR repository \cite{vezzani2010visor}. It has a length of 60 seconds (1495 frames) and a rather low resolution of $360 \times 288$ pixels. The video shows a street and a side-walk with a varying frequency of cars crossing the captured area horizontally in both directions. At one point, a group of two pedestrians and one cyclist appears on the side-walk and crosses the area from right to left at a low speed. Another sequence at the end of the video shows a single cyclist riding along the side-walk in the opposite direction at a higher speed. Altogether, 26 seconds of the video contain moving objects and 34 seconds just show an empty street. The nominal state of the scene is hence ambiguous.
We extract features for each frame of the video from the \texttt{conv5} layer of CaffeNet \cite{jia2014caffe}, which reduces the spatial resolution to $22 \times 17$, but increases the number of feature dimensions to 256. This rather large feature space is then reduced to 16 dimensions using PCA and the MDI algorithm is applied to search for anomalous sub-blocks with a minimum spatial extent of $10 \times 5$ cells and a length between 3 and 12 seconds. The time-delay embedding parameters are fixed to $\kappa=3, \tau=4$ for capturing half a second as context without increasing the number of dimensions too much. We apply the MDI algorithm with both the unbiased KL divergence and cross entropy as divergence measures. The Gaussian distribution model is employed in both cases.
The results (some snapshots are shown in \cref{fig:visor-detections}) exhibit an interesting difference between the two divergence measures: The KL divergence detects a sub-sequence of approximately 10 seconds where absolutely no objects cross the captured area. Thus, car traffic is identified as normal behavior and long spans of time without any traffic are considered as anomalous, because they have a very low entropy and the KL divergence penalizes the entropy of all other intervals, as opposed to cross entropy which does not take the entropy of the detected interval into account. Another detection occurs when the group of pedestrians enters the area. The localization, however, is rather fuzzy and spans nearly the entire frame. Cross entropy, on the other hand, seems to identify the state of low or no traffic as normal behavior and yields two detections at the beginning and the end of the video where the frequency of cars is higher than in the rest of the video. It detects the pedestrians too, but with a better localization accuracy. This detection, however, does not cover the entire side-walk, since the pedestrians are moving from right to left and the algorithm is not designed for tracking moving anomalies.
Without using interval proposals, the comparatively high number of features combined with the large spatial search space would result in a processing time of 13 hours for this video.
This can be reduced to 5 minutes using our novel interval proposal technique.
\begin{figure}
\begin{subfigure}{0.23\linewidth}%
\includegraphics[width=\linewidth]{figure13a}%
\caption{3 s}%
\end{subfigure}%
\hfill%
\begin{subfigure}{0.23\linewidth}%
\includegraphics[width=\linewidth]{figure13b}%
\caption{13 s}%
\end{subfigure}%
\hfill%
\begin{subfigure}{0.23\linewidth}%
\includegraphics[width=\linewidth]{figure13c}%
\caption{23 s}%
\end{subfigure}%
\hfill%
\begin{subfigure}{0.23\linewidth}%
\includegraphics[width=\linewidth]{figure13d}%
\caption{30 s}%
\end{subfigure}%
\caption{Snapshots from the example video with corresponding detections. Regions detected using the unbiased KL divergence start with the character ``A'', those detected by cross entropy start with ``B''. The full video can be found on our web page: \protect\url{http://www.inf-cv.uni-jena.de/libmaxdiv_applications.html}.}
\label{fig:visor-detections}
\end{figure}
\section{Summary and Conclusions}
\label{sec:conclusions}
We have introduced a novel unsupervised algorithm for anomaly detection that is suitable for analyzing large multivariate time-series and can detect anomalous {\em regions} not only in temporal but also in spatio-temporal data from various domains. The proposed MDI algorithm outperforms existing anomaly detection techniques while being comparatively fast, thanks to an efficient implementation and a novel interval proposal technique that excludes uninteresting parts of the data from in-depth analysis. Moreover, we have exposed a bias of the Kullback-Leibler (KL) divergence towards smaller intervals and proposed an unbiased KL divergence that is superior when applied to real data. We have also investigated other divergence measures and found that the use of cross entropy can result in improved performance for data with a low variability of entropy.
Various experiments on data from different domains, including climate analysis, natural language processing and video surveillance, have shown that the algorithm proposed in this work can serve as a generic, unsupervised anomaly detection technique that can facilitate tasks such as process control, data analysis and knowledge discovery.
These application examples emphasize the importance of interval-based anomaly detection techniques, and we hope that our work is able to motivate further research in this area.
For processing data with a large spatial extent or a high number of dimensions, a full scan over all possible sub-blocks of the data would be prohibitively time-consuming. To cope with this, we have introduced a novel interval proposal technique that reduces computation time significantly. However, interval proposals usually lead to less accurate detections, which is particularly noticeable with regard to the spatial dimensions. Future work might hence investigate applying in-depth analysis not only to the proposed intervals themselves, but also to their neighborhood. An alternative might be a hierarchical approach of successive refinement.
Other open problems to be addressed in the future include efficient probability density estimation in the face of high-dimensional data, the automatic determination of suitable parameters for time-delay embedding, and tracking anomalies moving in space over time. Furthermore, it is often necessary to convince the expert analyst that a detected anomaly really is an anomaly. Thus, future work will include the development of an attribution scheme that can explain which variables or combinations of variables caused a detection and why.
\section*{Acknowledgements}
The support of the EU H2020-EO-2014 project BACI
``Detecting changes in essential ecosystem and biodiversity properties -- towards a
Biosphere Atmosphere Change Index'',
contract 640176, is gratefully acknowledged.
\bibliographystyle{IEEEtran}
For many decades, factor analysis has been a popular method to
model the covariance matrix $\Vary$ of correlated, multivariate observations $\ym_t$ of dimension $\dimy$,
see e.g. \citet{and:int} for a comprehensive review.
Assuming $\nfactrue$ uncorrelated factors,
a factor model yields the representation $\Vary= \facloadtrue \trans{\facloadtrue } + \Varetrue $,
with a $\dimmat{\dimy}{\nfactrue}$ factor loading matrix $\facloadtrue$ and a diagonal matrix $\Varetrue $.
The considerable reduction of the
number of parameters compared to an unconstrained covariance matrix
is a main motivation for the application of factor models in economics and finance, especially, if $\dimy$ is large,
see e.g. \citet{fan-etal:hig_je} and \citet{for-etal:ope}.
Beyond that, the goal of factor analysis is often to estimate the loading matrix $\facloadtrue$ to understand the driving
forces behind the correlation between the features observed through $\ym_{t}$.
The recent years have seen considerable research in the area of sparse Bayesian factor analysis, which
achieves additional sparsity beyond the natural parsimony of factor models in two different ways.
One strand of literature considers sparse factor models through continuous shrinkage priors on the factor loadings, see e.g. \citet{bha-dun:spa}, \citet{roc-geo:fas} and \citet{kas:spa}, among others.
Alternatively, following the pioneering paper by \citet{wes:bay_fac}, many authors considered sparse factor models with point mass mixture
priors on the factor loadings, including basic factor models \citep{car-etal:hig}, dedicated factor models with correlated
(oblique) factors \citep{con-etal:bay} and
dynamic factor models \citep{kau-sch:bay}.
Sparse Bayesian factor analysis with point mass mixture
priors assumes that (many) elements of the factor loading matrix $\facloadtrue$ are 0, without being specific as to which elements are
concerned. Inference with respect to zero loadings is considered as a variable selection problem, and
there are several reasons why variable selection is of interest in sparse Bayesian factor analysis. First of all, sparse Bayesian factor analysis makes it possible to identify \lq\lq simple structures\rq\rq\ where in each row only a few nonzero loadings are present \citep{and-rub:sta}.
Identifying simple structures has been a long standing issue in factor analysis, in particular in psychology, and was implemented recently through sparse Bayesian factor analysis in \citet{con-etal:bay}.
A second motivation is identifying irrelevant variables $y_{it}$ in $\ym_t$ which are uncorrelated
with the remaining variables, meaning that for these variables the entire row of
the factor loading matrix $\facloadtrue$ is zero. The possibility to identify such variables within the framework of sparse Bayesian factor analysis is
of high relevance in economic analysis, given the recent practice to include as many variables as possible \citep{sto-wat:mac,boi-ng:are},
and was implemented through sparse Bayesian factor analysis in \citet{kau-sch:ide}.\footnote{Identifying irrelevant variables is also of importance in areas such as bioinformatics, where typically only a few out of potentially ten thousands of genes may be related to a certain physiological outcome \citep{luc-etal:spa}.}
The present paper contributes to the literature on sparse Bayesian factor models using point mass mixture
priors in several ways. As a first major contribution, we explicitly address identifiability issues that arise in sparse
Bayesian factor analysis. In the econometrics literature, identifiability is often reduced to solving
rotational indeterminacy, see e.g.~\citet{gew-sin:int}. However, for sparse Bayesian factor models identification goes
beyond this problem and concerns uniqueness of the variance decomposition in the covariance matrix $\Vary$.
This problem, which
has been known for a long time \citep{and-rub:sta}, went largely unnoticed in the literature on sparse Bayesian factor analysis, both in bioinformatics as well as in econometrics, and was addressed only recently by
\citet{con-etal:bay} in the context of dedicated sparse factor models.
Our paper makes a major contribution in this respect.
We reverse the two-step identification strategy of \cite{and-rub:sta} and first force a structure on the loading matrix that solves rotational invariance up to trivial rotations. To this aim, we introduce the class of generalized lower triangular (GLT) factor models where the loading matrix is a generalized lower triangular matrix.
Given a GLT structure, we introduce in a second step
a simple counting rule for the nonzero factor loadings as a sufficient condition for verifying variance identification.
As a second contribution, we operate in a sparse overfitting Bayesian factor model to yield inference with respect to the unknown number of factors.
Selecting the number of factors has long been known to be a very difficult issue. \citet{bai-ng:det2002} define information criteria to choose the number of factors. \citet{lee-son:bay} and \citet{lop-wes:bay} were among the first to address this issue in a careful Bayesian manner using marginal likelihoods.
More recently, \citet{con-etal:bay} use Bayesian variable selection in an overfitting model to determine the number of factors in a dedicated factor model.
However, the recent econometric literature on Bayesian factor analysis,
including \citet{ass-etal:bay}, \citet{cha-etal:inv}, and \citet{kau-sch:bay}, does not provide any intrinsically Bayesian solution for determining the
number of factors.
In the present paper, we discuss identification in an overfitting sparse factor model from a formal viewpoint. We gain very useful insights into the structure of the
loading matrix in an overfitting model, if we confine ourselves to the class of GLT factor models.
Using a point-mass mixture prior in an overfitting sparse factor model, we are able to identify the number of factors by postprocessing posterior draws and exploiting \lq\lq column sparsity\rq\rq, i.e.~by counting the number of nonzero columns among the variance identified factor loading matrices.
As a final contribution, we design an efficient Markov chain Monte Carlo (MCMC) procedure that delivers posterior draws from an overfitting sparse factor model under point mass priors, which is known to be particularly challenging, see e.g. \citet{pat-etal:pos}.
In addition, we carefully discuss prior specifications on all levels of the model, including a prior for the idiosyncratic variances that
avoids the well-known Heywood problem and a fractional prior for the unrestricted factor loadings.
The rest of the paper is organized as follows. Section~\ref{secide} discusses identification issues for sparse factor models and introduces the class of
GLT factor models. Section~\ref{secbayes} discusses Bayesian inference and selecting the number of factors for GLT factor models. Section~\ref{secalpp} considers applications to exchange rate data and NYSE100 returns. Section~\ref{secconcluse} concludes. Mathematical proofs and technical details are summarized in a comprehensive Web-Appendix.
\section{Identification issues in sparse Bayesian factor analysis} \label{secide}
A basic factor model relates each observation $\ym_t=\trans{(y_{1t}, \ldots, y_{\dimy t})}$ in a random sample $\ym=\{ \ym_t, t=1,\ldots,T\}$ of $T$ observations to a latent $\nfactrue$-variate random variable $\facm_t=\trans{(\fac_{1t} \cdots \fac_{\nfactrue t})}$, the so-called common factors, through:
\begin{eqnarray} \label{fac1}
\ym_t = \facloadtrue \facm_t + \errorm_t,
\end{eqnarray}
where $\facloadtrue$ is the unknown $\dimmat{\dimy}{\nfactrue}$ factor loading matrix with
factor loadings $\loadtrue_{ij}$. $\nfactrue$ is called the number of factors.
Throughout the paper, the common factors are assumed to be orthogonal:
\begin{eqnarray}
\facm_t \sim \Normult{\nfactrue}{\bfz,\identy{\nfactrue}} \label{fac2} .
\end{eqnarray}
A basic assumption in factor analysis is
that $\facm_t$, $\facm_s$, $\errorm_t$, and $\errorm_s$ are pairwise independent for all $t \neq s$.
Furthermore, the following assumption is made concerning the idiosyncratic errors $\errorm_t$:
\begin{eqnarray}
\errorm_t \sim \Normult{\dimy}{\bfz,\Varetrue} , \qquad \Varetrue=\Diag{\idiov_1,\ldots,\idiov_{\dimy}}. \label{fac3}
\end{eqnarray}
Assumption (\ref{fac3}) implies that conditional on $\facm_t$ the $\dimy$ elements of $\ym_t$ are independent, hence all dependence among these variables is explained through the common factors.
For the basic factor model, assumption (\ref{fac3}) together with (\ref{fac2}) implies that the observations $\ym_t$ arise from a multivariate normal distribution,
$\ym_t \sim \Normult{\dimy}{\bfz,\Vary}$, with zero mean and a covariance matrix $\Vary$
with the following constrained structure:
\begin{eqnarray}
\Vary= \facloadtrue \trans{\facloadtrue } + \Varetrue . \label{fac4}
\end{eqnarray}
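Before turning to identification, a short simulation sketch (Python/NumPy; all dimensions and seeds are illustrative choices of ours) may help to fix ideas: data generated according to (\ref{fac1})--(\ref{fac3}) have a sample covariance matrix that approaches the decomposition (\ref{fac4}) as $T$ grows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, r, T = 8, 2, 50000                    # dimension, factors, sample size
Lambda = rng.normal(size=(m, r))         # true loading matrix
sigma2 = rng.uniform(0.5, 1.5, size=m)   # idiosyncratic variances

f = rng.normal(size=(T, r))              # f_t ~ N(0, I_r)
eps = rng.normal(size=(T, m)) * np.sqrt(sigma2)
y = f @ Lambda.T + eps                   # y_t = Lambda f_t + eps_t

Omega_hat = np.cov(y, rowvar=False)      # empirical check of (fac4)
Omega = Lambda @ Lambda.T + np.diag(sigma2)
print(np.max(np.abs(Omega_hat - Omega))) # small for large T
\end{verbatim}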
For a sparse Bayesian factor model, a binary indicator $\delta_{ij}$ is introduced for each element $\loadtrue_{ij}$ of the factor loading matrix $ \facloadtrue$ which takes the value $\loadtrue_{ij}=0$, iff $\delta_{ij}=0$, and $\loadtrue_{ij} \in \mathbb{R}$ is unconstrained otherwise. This yields a binary indicator matrix $\deltav$ of 0s and 1s of the same dimension as $\facloadtrue$.
In sparse Bayesian factor analysis, the indicators $\delta_{ij}$ are unknown and are inferred from the data, using point-mass mixture priors (also called spike-and-slab priors), see Subsection~\ref{priordelta} for more details.
\subsection{Identification of sparse basic factor models}\label{onefactro}
In the present paper, we explicitly address identifiability issues that arise in sparse
Bayesian factor analysis with respect to uniqueness of the variance decomposition.
Assume that $\facloadtrue$ is of full column rank ($\rank{\facloadtrue}=\nfactrue$) and let $\nfactrue$ be the smallest number compatible with representation (\ref{fac4}).
Identification means that for any
$(\facloadtilde ,\Varetilde)$ satisfying (\ref{fac4}), that is:
\begin{eqnarray}
\Vary
= \facloadtilde \trans{\facloadtilde} +
\Varetilde , \label{facide1}
\end{eqnarray}
where
$\Varetilde$ is a diagonal matrix and $\facloadtilde$ a $\dimy \times \nfactrue$ loading matrix,
it follows that $\facloadtilde=\facloadtrue$ and $\Varetilde =\Varetrue$.
Well-known identification problems arise for factor models, meaning that additional structure is
necessary to achieve identifiability. A rigorous approach toward identification of factor models was first offered by \citet{and-rub:sta}.
They considered identification as a two-step procedure, the first step being
identification of the variance decomposition, i.e.~identification of $\Varetrue$ from (\ref{fac4}),
which implies identification of $\facloadtrue \trans{\facloadtrue }$,
and the second step being subsequent identification of $\facloadtrue$ from
$\facloadtrue \trans{\facloadtrue }$, also known as solving the rotational identification problem.
The econometric literature typically reduces identification of factor models to the second problem
and focuses on rotational identification,
taking variance identification for granted, see e.g. \citet{gew-zho:mea}.
However, uniqueness of the factor loading matrix
$\facloadtrue$ given $\facloadtrue \trans{\facloadtrue }$ does not imply identification. Variance identification is easily violated, in
particular for sparse factor analysis, as the following considerations illustrate.
%
Consider a sparse one-factor model for $\dimy \geq3 $ measurements, for which rotational invariance is not an issue, with two different
loading matrices. In the first case all but two factor loadings are 0 (e.g. $\lambda_1\neq 0$, $\lambda_2\neq 0$),
whereas in the second case
all but three factor loadings are 0 (e.g. $\lambda_i\neq 0$, $i=1,2,3$),
implying, respectively, the following covariance matrices $ \Vary$:
{\begin{eqnarray*} \small
& \left(
\begin{array}{ccccc}
{\bf \lambda_1^2 + {\idiov_1} } & {\bf \lambda_1 \lambda_2} & && \\
{\bf \lambda_1 \lambda_2} & {\bf \lambda_2^2 + {\idiov_2} } & &&\\
&& {\idiov_3} & & \\
&& & \ddots & \\
& &&& {\idiov_\dimy} \\
\end{array}\right), \,
\left(
\begin{array}{cccccc}
{\bf \lambda_1^2 + {\idiov_1} } & {\bf \lambda_1 \lambda_2} & {\bf \lambda_1 \lambda_3} & && \\
{\bf \lambda_1 \lambda_2} & {\bf \lambda_2^2 + {\idiov_2} } & {\bf \lambda_2 \lambda_3} & &&\\
{\bf \lambda_1 \lambda_3} & {\bf \lambda_2 \lambda_3} & {\bf \lambda_3^2 + {\idiov_3} } & && \\
& && {\idiov_4} & & \\
& && & \ddots & \\
& & &&& {\idiov_\dimy} \\
\end{array}\right). &
\end{eqnarray*}}
As only the diagonal elements
of $\Vary$ depend on $\idiov_i$, the
factor loadings can be identified only via the off-diagonal elements of
$\Vary$. For the first model, only $\Cov{y_{1t}, y_{2t}}=\Omega_{12}$ is nonzero, whereas all remaining covariances are equal to zero, hence, only the three sample moments $\V{y_{1t}}=\Omega_{11}$, $\V{y_{2t}}=\Omega_{22}$, and $\Cov{y_{1t}, y_{2t}}=\Omega_{12}$ are available to identify the four parameters $\idiov_1$, $\idiov_2$, $\lambda_1$, and $\lambda_2$.
Therefore, a sparse factor model with only two nonzero factor loadings is not identified,
since infinitely many different parameters $\idiov_1$, $\idiov_2$, $\lambda_1$, and $\lambda_2$
imply the same distribution for the observed data $\ym_t$.
For the second model the three covariances $\Cov{y_{1t}, y_{2t}}=\Omega_{12}$, $\Cov{y_{1t}, y_{3t}}=\Omega_{13}$, and $\Cov{y_{2t}, y_{3t}}=\Omega_{23}$ are nonzero and in total six sample moments are available to identify the six parameters $(\lambda_i, \idiov_i)$, $i=1,2,3$.
From these considerations, it is evident that a one-factor model is identifiable only if
at least 3 factor loadings are nonzero, which was already noted by \citet{and-rub:sta}.
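This non-identifiability is easy to demonstrate numerically; the following sketch (our own Python illustration with arbitrarily chosen values) constructs two different parameter sets with two nonzero loadings that imply exactly the same covariance matrix $\Vary$:
\begin{verbatim}
import numpy as np

def omega(lam1, lam2, s1, s2):
    Lam = np.array([[lam1], [lam2]])
    return Lam @ Lam.T + np.diag([s1, s2])

A = omega(1.0, 2.0, 1.0, 1.0)
B = omega(1.25, 1.6, 0.4375, 2.44)
print(np.allclose(A, B))   # True: same Omega, different parameters
\end{verbatim}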
For a basic factor model with at least two factors,
uniqueness of the variance decomposition, i.e.~the identification of the idiosyncratic variances $\idiov_1, \ldots, \idiov_\dimy$ in
$\Varetrue$ from the variance decomposition (\ref{fac4}) of $\Vary$ has to be verified in addition to solving rotational invariance. More precisely, given any pair $(\facloadtrue ,\Varetrue)$ and
$(\facloadtilde ,\Varetilde)$ satisfying (\ref{fac4}) and (\ref{facide1}), under which condition does this imply that
$\Varetilde =\Varetrue$ and $ \facloadtilde \trans{\facloadtilde} = \facloadtrue \trans{\facloadtrue } $?
In the present paper, we rely on the row deletion property
of \citet{and-rub:sta} to ensure variance identification. \citet[Theorem~5.1]{and-rub:sta} prove that the following condition is sufficient for
the identification of $\facloadtrue \trans{\facloadtrue} $ and $\Varetrue$ from the marginal covariance matrix $\Vary$ given in (\ref{fac4}):
\begin{itemize}
\item[\mbox{\bf AR} .] Whenever an arbitrary row is deleted from $\facloadtrue$, two disjoint submatrices of rank $\nfactrue$ remain.
\end{itemize}
In standard factor analysis, where all rows of $\facloadtrue$ are nonzero and the factor loadings $\loadtrue_{ij}$ are unconstrained except for dedicated zeros that are introduced to resolve the rotation problem (see Subsection~\ref{uniqload}), condition \mbox{\bf AR}\ is typically satisfied, if the following
upper bound for the number of factors $\nfactrue $ holds:
\begin{eqnarray}
\nfactrue \leq \frac{\dimy-1}{2}, \label{boundAR}
\end{eqnarray}
i.e. $\dimy \geq 2 \nfactrue +1$.
From condition \mbox{\bf AR}\ it is apparent that for a sparse factor model
a minimum number of three nonzero elements has to be preserved in each column, despite variable selection,
to guarantee uniqueness of the variance decomposition and identification of $\Varetrue$. Hence, too many zeros in a sparse factor loading matrix may lead to non-identifiability of
$\Varetrue$ and $ \facloadtrue \trans{\facloadtrue}$, and subsequently to a failure to identify $\facloadtrue$.
This issue is hardly ever addressed in the literature on sparse Bayesian factor analysis.
In Theorem~\ref{rule357} in Subsection~\ref{varidesp}, we introduce a counting rule
(which will be called the 3-5-7-9-\ldots\ rule for obvious reasons) that provides a sufficient condition to verify the row deletion property \mbox{\bf AR}\ for sparse Bayesian factor models.\footnote{A less restrictive bound than (\ref{boundAR}), which is widely used
in psychological research, is the Ledermann bound \citep{led:ran}. However, so far we have not succeeded in formulating a sufficient counting rule within this class of factor models.}
The identifiability of $\Varetrue$ guarantees that $\facloadtrue \trans{\facloadtrue }$ is
identified.
The second step of identification is then
to ensure uniqueness of the factor loadings, i.e.~unique identification of $\facloadtrue$ from $\facloadtrue \trans{\facloadtrue }$.
As is well-known, without imposing constraints on $\facloadtrue$, the model is invariant under transformations of the form
$\facloadtilde = \facloadtrue \Pm$ and $\facm_t^{\star} = \trans{\Pm} \facm_t$, where $\Pm$ is an arbitrary
$\dimmat{\nfactrue}{\nfactrue}$ orthogonal matrix (i.e. $\Pm \trans{\Pm}= \identy{\nfactrue}$), since evidently,
\begin{eqnarray}
\facloadtilde \trans{\facloadtilde} = \facloadtrue \Pm \trans{\Pm} \trans{\facloadtrue } = \facloadtrue \trans{\facloadtrue }. \label{rotation}
\end{eqnarray}
A special case of rotational invariance is the following trivial rotational invariance,
\begin{align}\label{eq:Ralpha}
\facloadtilde = \facloadtrue \Pm _{\pm} \Pm _{\rho} ,
\end{align}
where the permutation matrix $\Pm_{\rho}$ corresponds to one of the $\nfactrue$! permutations
and the reflection matrix
$\Pm_{\pm}=\Diag{\pm 1, \ldots, \pm 1}$
to one of the $2^\nfactrue$ ways to switch the signs of the $\nfactrue$ columns of $\facloadtrue$.
Often, identification rules are employed
that guarantee identification of $\facloadtrue $ only up to such column and sign switching,
see e.g. \citet{con-etal:bay}. Any structure $\facloadtrue$ obeying such an identification rule represents a whole equivalence class of matrices $\facloadtilde$ given by all possible $2^\nfactrue \nfactrue !$ trivial rotations of $\facloadtrue$ defined in (\ref{eq:Ralpha}).
The usual way of dealing with rotational invariance is to constrain $\facloadtrue$ in such a
way that the only possible rotation in (\ref{rotation}) is the identity $\Pm=\identy{\nfactrue}$.
For orthogonal factors as defined in (\ref{fac2}), at least $\nfactrue(\nfactrue-1)/2$ restrictions on the elements of
$\facloadtrue$ are needed to eliminate rotational indeterminacy \citep{and-rub:sta}.
The common constraint both in econometrics \citep{gew-zho:mea} and statistics \citep{wes:bay_fac,lop-wes:bay} is to consider positive lower triangular (PLT) matrices,
i.e.~to constrain the upper triangular part of $\facloadtrue$ to be zero and
to assume that the main diagonal elements $\loadtrue_{11},\ldots, \loadtrue_{\nfactrue \nfactrue}$ of $\facloadtrue$ are strictly positive.
Although the PLT constraint is pretty popular,
it is often too restrictive in practice.
It induces an order dependence among the responses, making the appropriate choice of the first
$\nfactrue$ response variables an important modeling decision \citep{car-etal:hig}.
Difficulties arise in particular, if one of the true factor loadings $\loadtrue_{jj}$ is equal
or close to 0, see e.g. \citet{lop-wes:bay}.
Alternative strategies have been suggested, for instance by \citet{kau-sch:ide} who
exploit the singular value decomposition of
$\facloadtrue \trans{\facloadtrue }$ to solve rotational invariance.
In Subsection~\ref{secGLT}, we introduce a new identification rule based on generalized lower triangular (GLT) structures.
It should be emphasised that constraints imposed on $\facloadtrue$ to solve rotational invariance do not necessarily guarantee uniqueness of the variance decomposition.\footnote{Consider, for instance, a PLT loading matrix where in some column $j$ only two factor loadings are nonzero:
the diagonal element $\loadtrue_{jj}$, which is nonzero by definition, and a second factor loading $\loadtrue_{n_j,j}$ in some row $n_j>j$.
Such a loading matrix obviously violates the necessary condition for variance identification that each column contains at least three nonzero elements.}
This issue is hardly ever addressed explicitly in the econometric literature, an exception being \citet{con-etal:bay}.\footnote{\citet{con-etal:bay} investigate identification of a dedicated factor model, where equation (\ref{fac1}) is
combined with correlated (oblique) factors, $\facm_t \sim \Normult{\nfactrue}{\bfz,\mathbf{R}}$,
and the factor loading matrix $\facloadtrue$ has a perfect simple structure, i.e. each observation
loads on at most one factor. They prove
a condition that implies uniqueness of the variance decomposition as well as uniqueness of the
factor loading matrix and, consequently, the 0/1 pattern of the indicator matrix $\deltav$, namely:
the correlation matrix $\mathbf{R}$ is of full rank ($\rank{\mathbf{R}}=\nfactrue$) and
each column of $\facloadtrue$ contains at least three nonzero loadings.} Variance identification for sparse Bayesian factor models is discussed in detail in Subsection~\ref{varidesp}.
%
\subsection{Solving rotational invariance through GLT structures} \label{secGLT}
In this paper, we relax the PLT constraint
by allowing $\facloadtrue$ to be a generalized lower triangular (GLT) matrix:
\hspace*{2mm} \begin{itemize}
\item[\mbox{\bf GLT} .] Let $\facloadtrue$ be a $\dimmat{\dimy }{\nfactrue}$ factor loading matrix and let (for each $j=1,\ldots,\nfactrue$) $l_j$ denote
the row index of the top nonzero entry in the $j$th column of $\facloadtrue$ (i.e. $ \loadtrue_{ij}=0, \forall \, i<l_j$).
$\facloadtrue$ is a {\em generalized lower triangular} matrix, if $l_1 < \ldots < l_\nfactrue$ and $\loadtrue_{l_j,j} > 0$ for $j=1,\ldots,\nfactrue$.
\end{itemize}
For a GLT matrix $\facloadtrue$, the leading indices
$l_1 , \ldots, l_\nfactrue$ satisfy $l_j\geq j$ and need not lie on the main diagonal. Obviously, the class of GLT matrices contains PLT matrices as the special case where $l_j= j$
for $j=1,\ldots,\nfactrue$.
This generalization is particularly useful if the ordering of the response variables is in conflict with the PLT assumption.
Since $\loadtrue_{jj}$ is allowed to be 0, response variables different from the first
$\nfactrue$ ones may lead the factors. Indeed, for each factor $j$, the leading variable is the
response variable $y_{l_j,t}$ corresponding to the leading index $l_j$.
An example of such a GLT matrix is displayed in the left-hand side of Figure~\ref{figsparseglt}.
Evidently, all loadings \emph{above} the leading element $\loadtrue_{l_j,j}$ are zero by definition. A \emph{sparse GLT matrix} results if, in addition, some factor loadings \emph{below} the leading element $\loadtrue_{l_j,j}$ are zero as well.
The condition $\loadtrue_{l_j,j}>0$ prevents sign switching and can be substituted by the condition
$\loadtrue_{i_j,j}>0$ for any row $i_j \geq l_j$ with a nonzero factor loading in column $j$.
Condition \mbox{\bf GLT}\ resolves rotational invariance, provided that the leading
indices $l_1 < \ldots < l_\nfactrue$ are ordered: evidently, for any two GLT matrices
$\facloadtilde$ and $ \facloadtrue $ with identical leading indices
the identity $\facloadtilde = \facloadtrue \Pm$ holds, iff $\Pm=\identy{\nfactrue}$.
\begin{Figure}{An example of a sparse GLT matrix with leading indices
$(l_1, \ldots, l_6)=(1,3,10,11,14,17)$ marked by triangles: the ordered GLT structure (left-hand side) and one of the $2^6 \cdot 6$! corresponding unordered GLT structures (right-hand side).}{figsparseglt}{glt}{0.4}
\end{Figure}
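Checking condition \mbox{\bf GLT}\ on the 0/1 pattern of an indicator matrix is straightforward; a minimal sketch (Python, our own illustration, which operates on $\deltav$ only and hence ignores the sign constraint on the leading loadings) reads:
\begin{verbatim}
import numpy as np

def is_ordered_glt(delta):
    # delta: 0/1 indicator matrix; GLT requires strictly increasing
    # row indices of the top nonzero entry in each column.
    leads = []
    for j in range(delta.shape[1]):
        nz = np.flatnonzero(delta[:, j])
        if nz.size == 0:
            return False          # empty column: no leading index
        leads.append(nz[0])
    return all(a < b for a, b in zip(leads, leads[1:]))

delta = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0],
                  [1, 1, 1], [0, 1, 1]])
print(is_ordered_glt(delta))      # True: leading rows 0 < 2 < 3
\end{verbatim}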
Any GLT structure $\facloadtrue$ represents a whole equivalence class of
unordered GLT matrices $\facloadtilde$ given by all possible $2^\nfactrue \nfactrue !$ trivial rotations
of $\facloadtrue$ defined in (\ref{eq:Ralpha}).
Any unordered GLT structure $\facloadtilde$ has (unordered) leading
indices $l_1 , \ldots , l_{\nfactrue}$,
occupying different rows, see the right-hand side of Figure~\ref{figsparseglt}.
The corresponding (ordered) GLT structure
is recovered
from the order statistics $l_{(1)} , \ldots , l_{(\nfactrue)}$ of $l_1, \ldots , l_ \nfactrue$ by a trivial rotation and has leading indices
$l_{(1)} < \ldots < l_{(\nfactrue)}$.
%
In practice, the leading indices $l_1 , \ldots , l_\nfactrue$ of a GLT structure are unknown and need to be identified from the data for a given number of factors $\nfactrue$. This is achieved in sparse Bayesian factor analysis by introducing an indicator matrix $\deltav$ that obeys a GLT structure.
Hence, we need to identify the entire 0/1 pattern in $\deltav$ from $\Vary$, including the leading indices.
Given variance identification, i.e. assuming that $\facloadtrue \trans{\facloadtrue}$ is identified,
a particularly important issue for the identification of a sparse factor model
is whether the 0/1 pattern in
$\deltav$ is uniquely identified. In general, $\deltav$ is not uniquely identified
from $\facloadtrue \trans{\facloadtrue}$, because
non-trivial rotations $\Pm$ might exist that change the zero pattern in
$\facloadtilde = \facloadtrue \Pm$.
In the context of GLT structures, assume that an unordered GLT matrix $\facloadtilde$ exists with leading indices
$\tilde{l}_1 , \ldots , \tilde{l}_\nfactrue$ being possibly
different from the leading indices $l_1 , \ldots , l_\nfactrue$ of the
loading matrix $\facloadtrue$ and that both matrices solve $ \facloadtilde \trans{\facloadtilde} = \facloadtrue \trans{\facloadtrue }$.
Then, Theorem~\ref{theGLT} shows that the entire GLT structure $\facloadtrue$ including the leading indices and all zero loadings is uniquely identified from $\facloadtrue \trans{\facloadtrue }$, up to trivial rotations, i.e.
$ \facloadtilde = \facloadtrue \Pm _{\rho} \Pm _{\pm}$, meaning in particular that the sets of leading indices $\{\tilde{l}_1 , \ldots , \tilde{l}_\nfactrue\}$ and $\{l_1 , \ldots , l_\nfactrue\} $ are identical.
\begin{thm}\label{theGLT}
For a sparse GLT structure,
$\deltav$ is uniquely identified, provided that uniqueness of the variance decomposition holds, i.e.:
if $\facloadtrue$ and $\facloadtilde$ are sparse GLT matrices, respectively, with
leading indices $l_1 < \ldots < l_\nfactrue$ and $\tilde{l}_1 < \ldots < \tilde{l}_\nfactrue$
that satisfy
$\facloadtilde \trans{\facloadtilde} =
\facloadtrue \trans{\facloadtrue }$, then $\facloadtilde = \facloadtrue$. Hence,
the leading indices as well as the entire 0/1 pattern
of $\facloadtilde$ and $\facloadtrue$ are identical.
\end{thm}
\noindent See Appendix~\ref{app:proof} for a proof.
While the assumption of a GLT structure resolves the rotational invariance, it does not
guarantee uniqueness of the variance decomposition.\footnote{Consider, for instance, a GLT matrix with
the leading index in column $\nfactrue$ being equal to $l_\nfactrue= \dimy -1$. The loading matrix has at most two nonzero elements in column $\nfactrue$ and violates the necessary condition for variance identification that each column contains at least three nonzero elements.}
In particular, an upper bound on the leading indices is necessary for
\mbox{\bf AR}\ to hold.
\begin{enumerate}
\item[\mbox{\bf GLT-AR} .] Let $\facloadtilde$ be an unordered GLT structure with
leading indices $l_1, \ldots , l_{\nfactrue}$.
The following condition
is necessary for condition \mbox{\bf AR} :
\begin{eqnarray} \label{condlj}
\dimy- l_j \geq 2(\nfactrue-z_j +1), \qquad j=1,\ldots,\nfactrue,
\end{eqnarray}
where $z_j$ is the rank of $l_j$ in the ordered sequence $ l_{(1)} < \ldots < l_{(\nfactrue)}$.
For an ordered GLT structure, (\ref{condlj}) reduces to $\dimy - l_j\geq 2(\nfactrue - j+1)$.
\end{enumerate}
For sparse GLT structures $\facloadtilde$ with zeros below the leading elements, \mbox{\bf GLT-AR}\ is only a necessary, but not a sufficient condition for {\mbox{\bf AR} }\footnote{A GLT structure obeying (\ref{condlj}) with $l_\nfactrue = \dimy - 2$ and $\delta_{m \nfactrue}=0$, for instance, contains only
two nonzero loadings in column $\nfactrue$ and violates the necessary condition for variance identification that each column contains at least three nonzero elements.}
and variance identification has to be verified explicitly. An efficient procedure for dealing
with this challenge is introduced in the following subsection.
\subsection{Verifying the row deletion property for sparse factor loading matrices} \label{varidesp}
For sparse Bayesian factor analysis, conditions for verifying directly from the zero pattern in the factor loading matrix whether
the row deletion property \mbox{\bf AR}\ holds would be very useful, but so far only necessary conditions have been provided.
\citet{and-rub:sta}, for instance, prove the following necessary conditions for \mbox{\bf AR} : for every nonsingular $\nfactrue$-dimensional square matrix $\Gm$,
the matrix $\facload=\facloadtrue \Gm$ contains in each column \emph{at least 3}
and in each pair of columns \emph{at least 5} nonzero factor loadings.
\citet[Theorem~3.3]{sat:stu} extends these necessary conditions in the following way: every subset of $1\leq q \leq \nfactrue$ columns
of $\facloadtrue$ contains \emph{at least $2q+1$} nonzero factor loadings.
Extending the results of \citet{sat:stu}, we prove in the following Theorem~\ref{rule357} that for unordered GLT factor matrices
it is {\em sufficient} (and not only necessary) for \mbox{\bf AR}\ that such a counting rule holds for the indicator matrix $\deltav$ for
a {\em single trivial rotation} $\Gm= \Pm _{\pm} \Pm _{\rho}$
of the factor loading matrix $ \facloadtrue $
(and not for every nonsingular matrix $\Gm$).
\begin{thm}[{\bf The 3-5-7-9-\ldots\ counting rule}]\label{rule357}
Consider the following counting rule for an unordered GLT structure $\facloadtilde = \facloadtrue \Pm _{\pm} \Pm _{\rho}$ corresponding to
an ordered GLT structure $\facloadtrue$:
\begin{itemize}
\item[\mbox{\bf NC}\ ] For each $q =1,\ldots,\nfactrue$ and for each submatrix consisting of $q$ columns of $\facloadtilde$, the number of nonzero rows in this submatrix is at least equal to $2q+1$.
\end{itemize}
Condition \mbox{\bf NC}\ is both necessary and sufficient for the row deletion property \mbox{\bf AR}\ to hold for $\facloadtrue$.
\end{thm}
\noindent See Appendix~\ref{app:proof} for a proof. Theorem~\ref{rule357} operates on the indicator matrix $\deltav$
which is very convenient for verifying variance identification in sparse Bayesian factor analysis.
Most importantly, condition \mbox{\bf NC}\ extends the 3-5 counting rule of \citet{and-rub:sta} to a more general 3-5-7-9-\ldots\ rule for the indicator matrix $\deltav$ corresponding to the factor loading matrix.
%
Obviously, if \mbox{\bf NC}\ is violated for a single subset of $q$
columns of $\deltav$, then \mbox{\bf AR}\ is violated for $\facloadtrue$.
For $q=1,2$ as well as for $q=\nfactrue -1, \nfactrue$ the corresponding counting rules can be easily verified from simple functionals of the
indicator matrix
$\deltav$, see Corollary~\ref{Lemma1} in Appendix~\ref{simcount}.
Hence, for factor models with up to 4 factors ($\nfactrue \leq 4$) it is trivial to verify whether the 3-5-7-9-\ldots\ counting rule, and hence variance identification,
holds.
For models with more than four factors ($\nfactrue > 4$), these simple counting rules are necessary conditions that quickly help to identify
indicator matrices $\deltav$ where \mbox{\bf NC}\ (and hence \mbox{\bf AR} ) is violated.
%
If the simple counting rules of Corollary~\ref{Lemma1} hold, then \mbox{\bf NC}\ could be verified by
iterating over all
subsets of $q=3, \ldots, \nfactrue-2$ columns of $\deltav$, a number that increases rapidly with $\nfactrue$.
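A brute-force implementation of this check might look as follows (a Python sketch of our own; it is feasible for moderate $\nfactrue$, since the number of column subsets grows exponentially):
\begin{verbatim}
import numpy as np
from itertools import combinations

def satisfies_nc(delta):
    # 3-5-7-9-... counting rule: every submatrix formed by q columns
    # of the 0/1 matrix delta must have at least 2q + 1 nonzero rows.
    r = delta.shape[1]
    for q in range(1, r + 1):
        for cols in combinations(range(r), q):
            nonzero = int(delta[:, list(cols)].any(axis=1).sum())
            if nonzero < 2 * q + 1:
                return False
    return True
\end{verbatim}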
The following Theorem~\ref{Lemma2} shows that verifying \mbox{\bf AR}\ greatly simplifies, if the
loading matrix has a block diagonal representation. In this case, \mbox{\bf NC}\ has to be checked only up to the maximum
block size, rather than for the entire loading matrix.
\begin{thm}\label{Lemma2}
Let $ \facloadsc$ be a $\dimmat{m_n}{\nfacr}$ factor loading matrix of full column rank, $\rank{\facloadsc}=\nfacr$,
with $m_n$ nonzero rows. Assume that
$ \facloadsc$ has the following block diagonal representation after suitable permutations of rows and columns,
with $\Pim_r$ and $\Pim_c$ being the corresponding permutation matrices:
\begin{eqnarray} \label{blockl}
\Pim_r \facloadsc \Pim_c =
\left(
\begin{array}{llll}
\Asub{1} & \bfzmat &\bfzmat &\bfzmat \\
\times & \ddots & \bfzmat &\bfzmat \\
\times & \times & \Asub{Q-1} & \bfzmat \\
\times & \times & \times & \Asub{Q} \\
\end{array}
\right),
\end{eqnarray}
where
$\Asub{q}$, $q=1,\ldots ,Q$, are $(\dimmat{m_q}{r_q})$-dimensional matrices
such that $\sum r_q = \nfacr$ and
$\sum m_q = m_n $. Assume that $\Asub{1}, \ldots, \Asub{Q-1}$ are of full column rank $r_q=\rank{\Asub{q}}$. Then the following holds:
\begin{itemize}
\item[(a)]
If all sub matrices $\Asub{1}, \ldots, \Asub{Q}$ satisfy the row deletion property \mbox{\bf AR}\ with $r=r_q$, then the entire loading matrix $\facloadsc$
satisfies the row deletion property \mbox{\bf AR}\ with $r=\nfacr$.
\item[(b)] If the submatrix $\Asub{Q}$ violates the row deletion property \mbox{\bf AR}\ with $r=r_Q$, then the row deletion property \mbox{\bf AR}\ is violated
for the entire loading matrix $\facloadsc$.
\end{itemize}
\end{thm}
\noindent See Appendix~\ref{app:proof} for a proof.
Part~(a) of Theorem~\ref{Lemma2}
is useful to verify that \mbox{\bf AR}\ holds for sparse loading matrices that have a block diagonal representation as in (\ref{blockl}).
Part~(b) of Theorem~\ref{Lemma2} is useful to quickly identify indicator matrices $\deltav$ where \mbox{\bf AR}\ does not hold.
Algorithm~\ref{algARIDE}, which is discussed in Appendix~\ref{verpartbig}, derives representation (\ref{blockl}) sequentially and is useful for verifying variance identification in practice.
\subsection{Identification of irrelevant variables} \label{uniqload}
Irrelevant variables are observations $y_{it}$ for which the entire row $i$ of
the factor loading matrix $\facloadtrue$ is zero. This implies that $y_{it}$ is uncorrelated
with the remaining variables. As argued by \citet{boi-ng:are}, it is useful to identify such variables.
Within the framework of sparse Bayesian factor analysis, such irrelevant variables can be identified
by exploring the 0/1 pattern of the indicator matrix $\deltav$ with respect to zero rows, see \citet{kau-sch:ide}.
In Lemma~\ref{theirr} formal identification of irrelevant variables from $\deltav$ is proven,
provided that the number of factors $\nfactrue$ satisfies a more general upper bound than (\ref{boundAR}).
This commonly used upper bound is based on the assumption that all rows of $\facloadtrue$ are nonzero, and
a different upper bound is needed if we want to learn the position of the
zero rows from a sparse factor analysis applied to all $m$ variables. The corresponding bound
is derived from the fact that we need at least $2\nfactrue+1$ nonzero rows for the row deletion
property \mbox{\bf AR}\ to hold.
\\[1mm]
\begin{lem}\label{theirr} Assume that a $\dimy \times \nfactrue $ factor loading matrix $\facloadtrue$ contains
$m_0$ zero rows and that the number of factors $\nfactrue$ satisfies following upper bound:
\begin{eqnarray}
\nfactrue \leq \frac{\dimy-m_0-1}{2}. \label{boundZERO}
\end{eqnarray}
If uniqueness of the variance decomposition holds, then the position of the zero rows
in $\facloadtrue$ is uniquely identified, that is, any other $\nfactrue$-factor loading matrix $\facloadtilde$ satisfying $ \facloadtilde \trans{\facloadtilde} =
\facloadtrue \trans{\facloadtrue }$ has exactly the same set of zero rows.
\end{lem}
\noindent See Appendix~\ref{app:proof} for a proof.
\subsection{Identification in overfitting factor models} \label{secover}
Assume that the data $\ym=\{\ym_1, \ldots,\ym_T\}$ are generated by the basic factor model (\ref{fac1}) with
the corresponding variance decomposition in (\ref{fac4}) being unique; however,
the true number of factors $\nfactrue$ is not known. In this case, a common procedure is to
perform exploratory factor analysis based on a model with an increasing number of factors $\nfac$,
\begin{eqnarray} \label{fac1reg}
\ym_t = \facload \facm_t + \errorm_t, \qquad \errorm_t \sim \Normult{\dimy}{\bfz,\Vare} ,
\end{eqnarray}
where $\facload$ is a $\dimmat{\dimy}{\nfac}$ loading matrix
with elements $\load_{ij}$ and $\Vare$ is a diagonal matrix with strictly positive diagonal elements. As before, we allow the elements
$\load_{ij}$ of $\facload$ in this potentially overfitting sparse factor model to be zero, with the corresponding indicator matrix being denoted by $\deltav$.
Factor analysis based on model (\ref{fac1reg}) yields the extended variance decomposition
\begin{eqnarray}
\Vary= \facload \trans{\facload } + \Vare, \label{fac4beta}
\end{eqnarray}
instead of the true variance decomposition (\ref{fac4}).
If model (\ref{fac1reg}) is not overfitting, that is $\nfac = \nfactrue$, then variance identification implies that $\Vare=\Varetrue$ and $\facload =\facloadtrue \Pm$ for some orthogonal matrix $\Pm$.
However, if $\nfac > \nfactrue$, then model (\ref{fac1reg}) is, indeed, overfitting and
additional identifiability issues have to be addressed for such overfitting factor models.
In particular, identifiability of $\facload \trans{\facload } $ and $\Vare$ from (\ref{fac4beta}) is lost, as infinitely many representations $(\facload,\Vare)$
with $\Vare \neq \Varetrue$ exist that imply the same covariance matrix $\Vary$ as $(\facloadtrue,\Varetrue)$.
This identifiability problem has been noted earlier by \citet{gew-sin:int} and \citet{tum-sat:ide}.
Consider, e.g., a model that is overfitting with $\nfac = \nfactrue+1$. Then infinitely many representations $(\facload,\Vare)$ can be
constructed that imply the same covariance $\Vary$ as $(\facloadtrue,\Varetrue)$, namely:
\begin{eqnarray} \label{adsp}
&& \Vare = \Diag{\idiov_1,\ldots, \idiov_{l_\nfac} - { \loadtrue_{l_\nfac, \nfac}^2}, \ldots, \idiov_\dimy} ,
\quad \facload= \left(\begin{array}{cc}
{\large \bf{ \facloadtrue}} & \left|\begin{array}{c}
\bfz \\
{ \loadtrue_{l_\nfac, \nfac}} \\
\bfz
\end{array} \right.
\end{array} \right) ,
\end{eqnarray}
where $\loadtrue_{l_\nfac, \nfac}$
is an arbitrary factor loading satisfying $0< \loadtrue_{l_\nfac, \nfac}^2< \idiov_{l_\nfac}$
and $l_\nfac$ is an arbitrary row index different from the leading indices $l_1, \ldots, l_\nfactrue $ in $ \facloadtrue$.
The last column of $\facload$ corresponds to a so-called \emph{spurious factor} which loads only on a single
observation. Hence, factor analysis in an overfitting model with $\nfac=\nfactrue +1 $
may yield factor loading matrices $\facload$ of rank $\nfactrue +1$, containing a spurious factor, rather than loading matrices of rank $\nfactrue$ with a zero column.
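The construction in (\ref{adsp}) is easily verified numerically; the sketch below (Python, with values of our own choosing) appends a spurious column to a loading matrix, reduces the corresponding idiosyncratic variance, and confirms that $\Vary$ is unchanged:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, r = 6, 2
Lambda = np.tril(rng.normal(size=(m, r)))   # some true loading matrix
sigma2 = rng.uniform(1.0, 2.0, size=m)
Omega = Lambda @ Lambda.T + np.diag(sigma2)

l, lam = 4, 0.5            # spurious column loads only on row l,
extra = np.zeros((m, 1))   # with 0 < lam**2 < sigma2[l]
extra[l, 0] = lam
beta = np.hstack([Lambda, extra])
sigma2_new = sigma2.copy()
sigma2_new[l] -= lam**2

print(np.allclose(beta @ beta.T + np.diag(sigma2_new), Omega))  # True
\end{verbatim}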
%
For arbitrary $\nfac > \nfactrue$, \citet{tum-sat:ide} provide a
general representation of the factor loading matrix in an overfitting factor model.
Suppose that $\Vary$ has a decomposition as in (\ref{fac4}) with $\nfactrue$ factors and for some $S \in \mathbb{N}$ with
$\dimy \geq 2\nfactrue + S + 1 $, or equivalently,
\begin{eqnarray} \label{kbound_extend}
\nfactrue \leq \frac{\dimy-S -1}{2},
\end{eqnarray}
the following extended row deletion property holds:
\begin{itemize}
\item[\mbox{\bf TS} ] Whenever $1+S$ rows are deleted from $ \facloadtrue$, then two disjoint submatrices of rank $\nfactrue$ remain.
\end{itemize}
If $ \Vary $ has another decomposition such that $\Vary= \betatilde \trans{\betatilde} + \Vare$ where $\betatilde$ is a $\dimmat{\dimy}{(\nfactrue+s)}$-matrix of rank $\nfactrue+s$ with $ s \leq S$, then \citet[Theorem~1]{tum-sat:ide} show that there exists an orthogonal matrix $\Tm$ of rank $\nfactrue+s$ such that
\begin{eqnarray} \label{decover}
\betatilde \Tm = \left(\begin{array}{cc}
\bf{ \facloadtrue} & \Mm
\end{array} \right) , \qquad \Vare = \bf{ \Varetrue} - \Mm \trans{\Mm },
\end{eqnarray}
where the off-diagonal elements of $\Mm \trans{\Mm}$ are zero.
Hence, $\Mm $ is a so-called \emph{spurious factor loading matrix} that does not contribute to explaining the correlation in $\ym_t$, since
\begin{eqnarray*}
\betatilde \trans{\betatilde } + \Vare = \betatilde \Tm \Tm' \trans{\betatilde } + \Vare =
\facloadtrue \trans{\facloadtrue } + \Mm \trans{\Mm } + (\Varetrue - \Mm \trans{\Mm } ) =
\facloadtrue \trans{\facloadtrue } + \Varetrue =\Vary . \label{fac4A}
\end{eqnarray*}
While (\ref{decover}) is an important result, without imposing further structure on the factor loading matrix it is of limited use in applied factor analysis,
as the separation of $\betatilde$ into the true factor loading matrix $\facloadtrue$ and the spurious factor loading matrix $\Mm $ is possible only up to a general rotation $\Tm$ of $\betatilde$.
The following Theorem~\ref{theoverGLT} shows that extended identification in overfitting sparse factor models can be achieved
within the class of unordered GLT structures as introduced in this paper.
If $\betatilde$ in model (\ref{fac1reg}) is constrained to be an unordered GLT structure,
then $\facloadtrue$ can be easily recovered from (\ref{decover}). First, all rotations in (\ref{decover}) are equal to trivial rotations $\Tm = \Pm _{\pm} \Pm _{\rho}$ only. Hence, the columns of the spurious loading matrix $\Mm $ appear in between the columns of $\facloadtrue$. Second, the spurious loading matrix $\Mm $ is easily identified as an \textit{unordered spurious GLT matrix}, where in each column the leading element is the only nonzero loading. This powerful result is exploited subsequently in our MCMC procedure to navigate through overfitting models
with a varying number of factors, by adding and deleting spurious factors.
\begin{thm} \label{theoverGLT}
Assume that $\facloadtrue $ is a GLT factor loading matrix with leading indices $l_{1} < \ldots < l_{\nfactrue}$ that obeys the extended
row deletion property \mbox{\bf TS}\ for some $S \in \mathbb{N}$. If $\betatilde$ in the extended variance decomposition $\Vary= \betatilde \trans{\betatilde} + \Vare$ is restricted to be an unordered GLT matrix with leading indices $\tilde{l}_1, \ldots, \tilde{l}_{\nfactrue+s}$, then the following holds:
\begin{itemize}
\item[(a)] $\facloadtrue$ and $\Varetrue$ can be represented in terms of $\betatilde$, $\Vare$, and $\Mm $ as in (\ref{decover})
up to trivial rotations $\Tm= \Pm _{\pm} \Pm _{\rho}$.
\item[(b)] $\Mm $ is a spurious GLT structure with leading indices ${n_1} , \ldots , {n_s}$
with exactly one nonzero loading in each column. Furthermore, all leading indices
$\{ {n_1}, \ldots, {n_s} \}$ are different from the leading indices $\{ l_{1} , \ldots , l_{\nfactrue}\}$ of $\facloadtrue $.
\item[(c)] The leading indices $\{\tilde{l}_{1}, \ldots, \tilde{l}_{\nfactrue+s}\}$ of $\betatilde$ are identical to the leading indices $\{l_{1} , \ldots , l_{\nfactrue}, {n_1} , \ldots , {n_s}\}$
of the matrix $ \betatilde \Tm$.
\end{itemize}
\end{thm}
\noindent See Appendix~\ref{app:proof} for a proof.
For an unordered GLT structure, \mbox{\bf TS}\ implies a constraint on the leading indices of $\betatilde$ which extends \mbox{\bf GLT-AR} :
\begin{enumerate}
\item[\mbox{\bf GLT-TS} .] Let $\facloadtilde$ be an unordered GLT structure with $\nfacr$ nonzero columns and
leading indices $l_1, \ldots , l_{\nfacr}$.
The following condition on the leading indices is necessary for condition \mbox{\bf TS} :
\begin{eqnarray} \label{condljTS}
\dimy- l_j - S\geq 2(\nfacr-z_j +1), \quad j=1,\ldots, \nfacr,
\end{eqnarray}
where $z_j$ is the rank of $l_j$ in the ordered sequence $ l_{(1)} < \ldots < l_{(\nfacr)}$.
\end{enumerate}
\section{Bayesian inference} \label{secbayes}
Bayesian inference is performed in the overfitting sparse factor model (\ref{fac1reg}) where $\nfac$
satisfies the upper bound (\ref{kbound_extend}) for a given degree of overfitting $S \in \mathbb{N}$.
Both $\nfac$ as well as $S$ are user-selected parameters.
The maximum number of potential factors $\nfac$ is chosen large enough that zero and spurious columns will appear during posterior inference.
We found it useful to allow for at least $S \geq 2$ spurious columns.
\subsection{Prior specifications} \label{priorel}
Let $\deltav$ be the $\dimmat{\dimy}{\nfac}$ indicator matrix corresponding to the $\dimmat{\dimy}{\nfac}$ loading matrix $\facload $ in model (\ref{fac1reg}). Within our sparse Bayesian factor analysis, a joint prior for $\deltav$, $\facload$ and the variances $\idiov_1, \ldots, \idiov_\dimy$ is
selected, taking the form
$p (\deltav) p(\idiov_1, \ldots, \idiov_\dimy) p(\facload|\deltav, \idiov_1, \ldots, \idiov_\dimy).$
\subsubsection{The prior on the indicators} \label{priordelta}
The following common hierarchical point mass mixture prior on the indicator matrix $\deltav$ is applied:
\begin{eqnarray} \label{prigen}
&& \Prob{\delta_{ij}=1|\tau_{j}}=\tau_{j}, \qquad \tau_j \sim \Betadis{a_0,b_0}, \qquad j=1,\ldots, \nfac, \\ && \Prob{\beta_{ij}=0|\delta_{ij}=0}=1, \nonumber
\end{eqnarray}
where all indicators are independent {\em a priori} given
$\hypv=(\tau_1, \ldots, \tau_\nfac)$.\footnote{Alternative priors (which are not pursued in the present paper) have been considered e.g.
by \citet{con-etal:bay} and \citet{kau-sch:bay}.}
Since the true number of factors $\nfactrue$ is unknown, we employ a prior on $\deltav$ that implies column sparsity a priori. To this end, the hyperparameters of prior (\ref{prigen})
are chosen such that the number of nonzero columns $\nfacr$ in $\deltav$ is random a priori, taking values less than $\nfac$ with high probability. In this case,
the model is overfitting and we are able to learn the number of factors $\nfactrue$.
Hyperparameters that exclude zero columns in $\deltav$ a priori are prone to overfit the number of factors.
%
Prior (\ref{prigen}) can be rewritten as:
\begin{eqnarray} \label{prialt}
\tau_j \sim \Betadis{a_0,b_0} = \Betadis{b_0 \frac{\alpha}{\nfac},b_0},
\end{eqnarray}
where $\nfac$ is the number of potential factors.
For $\nfac \rightarrow \infty$, prior (\ref{prialt}) converges to the two-parameter Beta prior introduced by \citet{gha-etal:bay}
in Bayesian nonparametric latent feature models, which can be regarded as a factor model with infinitely many columns. However, if $\nfac$ exceeds the upper bound (\ref{kbound_extend}), variance identification can no longer be achieved. For this reason, we stay within the framework of
factor models with finitely many columns in the present paper, but exploit column sparsity as explained above.
Following \citet{gha-etal:bay}, we choose values $b_0$ considerably smaller than 1 (a sticky prior)
to allow zero columns a priori for factor models where the number of factors is unknown.
The choice of $\alpha$ (or $a_0$) is guided by the a priori expected simplicity $ \Ew{q_i}$ of the factor loading matrix, where $ q_i = \sum_{j=1}^\nfac \delta_{ij}$ is the number of nonzero loadings in each row, which is typically smaller than $\nfac$. This leads to the following choice for $a_0$ and $\alpha$:
\begin{eqnarray} \label{defqi}
\Ew{q_i}
= \frac{\nfac a_0 }{a_0 + b_0}= \frac{\alpha}{1 + \alpha/\nfac} \quad \Rightarrow \quad
a_0 = \frac{ b_0\Ew{q_i}}{\nfac -\Ew{q_i}}, \quad \alpha = \frac{\Ew{q_i}}{1 -\Ew{q_i}/\nfac}.
\end{eqnarray}
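For illustration, the mapping (\ref{defqi}) from the expected row simplicity to the hyperparameters can be coded as in the following sketch (Python; the chosen values for $\Ew{q_i}$, $\nfac$, and $b_0$ are ours):
\begin{verbatim}
def indicator_prior_hyperpars(Eq, k, b0=0.1):
    # Map the a priori expected simplicity E[q_i] to (a0, alpha)
    # for the Beta(a0, b0) prior on the inclusion probabilities.
    a0 = b0 * Eq / (k - Eq)
    alpha = Eq / (1.0 - Eq / k)
    return a0, alpha

a0, alpha = indicator_prior_hyperpars(Eq=2.0, k=10)
print(a0, alpha)   # a0 = 0.025, alpha = 2.5
\end{verbatim}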
As is common in statistics and machine learning, the prior on $\deltav$ does not account explicitly for identification.
To deal with rotational invariance, an unordered GLT structure as introduced in Subsection~\ref{secGLT}
is imposed on $\deltav$ during MCMC estimation, by sampling only indicator matrices where the leading indices $l_{1}, \ldots , l_{\nfacr}$ of the $\nfacr$ nonzero columns $\betatilde$ of $\facload $
satisfy condition \mbox{\bf GLT-TS}\ given in (\ref{condljTS}) for the specified value of $S$, i.e. prior $p(\deltav)$ is constrained implicitly to unordered sparse GLT structures. The unordered GLT structure enforced during MCMC estimation breaks the invariance of the procedure with respect to the ordering of the data. However, it is less sensitive to the ordering of the data than the PLT constraint.
\subsubsection{The prior on the idiosyncratic variances} \label{priorsi}
When estimating factor models using classical statistical methods, such as maximum likelihood (ML) estimation,
it frequently happens that the optimal solution lies outside the admissible parameter space with one
or more of the idiosyncratic variances $\idiov_i$s being negative, see e.g.
\citet[Section~3.6]{bar:lat}. An empirical study in \citet{joe:som} involving 11 data
sets revealed that such improper solutions are quite frequent and this difficulty became
known as the Heywood problem.
The introduction of a prior on the idiosyncratic variances $\idiov_1, \ldots, \idiov_\dimy$ within a Bayesian framework, typically chosen from the inverted Gamma family, that is
\begin{eqnarray}
\idiov_i \sim \Gammainv{c_0,C_{i0}}, \label{priorsiidg}
\end{eqnarray}
naturally avoids negative values for $\idiov_i$.
Nevertheless, there exists a Bayesian analogue of the Heywood problem which takes the form of
multi-modality of the posterior of $\idiov_i$ with one mode lying at 0. This is likely to happen
if a small value $c_0$ and fixed hyperparameters $C_{i0}$ are chosen in (\ref{priorsiidg}),
as is common in Bayesian factor analysis.
Subsequently, we select $c_0$ and $C_{i0}$ in such a way that Heywood problems are avoided.
Heywood problems typically occur, if the constraint
\begin{eqnarray}
\frac{1}{\idiov_i} \geq (\Vary^{-1})_{ii} \quad \Leftrightarrow \quad \idiov_i \leq \frac{1}{(\Vary^{-1})_{ii}} \label{const1}
\end{eqnarray}
is violated, where the matrix $\Vary$ is the covariance matrix of $\ym_t$ defined in (\ref{fac4}),
see e.g. \citet[p.~54]{bar:lat}. It is clear from inequality (\ref{const1}) that $1/\idiov_i$ has to be bounded away
from 0. For this reason, improper priors on the idiosyncratic variances such as $p(\idiov_i)\propto 1/\idiov_i$
\citep{mar-mcd:bay,aka:fac} are not able to prevent Heywood problems.
Similarly, proper inverted Gamma priors with small degrees of freedom such as $c_0=1.1$
\citep{lop-wes:bay} allow values too close to 0.
As a first improvement, we choose $c_0$ in (\ref{priorsiidg}) large enough to
bound the prior away from 0, typically $c_0=2.5$. Second, we reduce the occurrence probability of a Heywood problem which is equal to
$\Prob{X\leq C_{i0}(\Vary^{-1})_{ii}}$ where $X \sim \Gammad{c_0,1}$ through
the choice of $C_{i0}$. The smaller $C_{i0}$, the smaller is this probability. However, since
$\Ew{\idiov_i}=C_{i0}/(c_0-1)$, a downward bias may be introduced, if $C_{i0}$ is too small.
We choose $C_{i0}=(c_0-1)/(\widehat{\Vary^{-1}})_{ii}$ as the largest value for which inequality
(\ref{const1}) is fulfilled by the prior expectation $\Ew{\idiov_i}$ and $\Vary^{-1}$
is substituted by an estimator $\widehat{\Vary^{-1}}$.
This yields the following prior:
\begin{eqnarray}
\idiov_i \sim \Gammainv{c_0,(c_0-1)/(\widehat{\Vary^{-1}})_{ii}}. \label{priorsiid}
\end{eqnarray}
Inequality (\ref{const1}) introduces an upper bound for $\idiov_i /\om_{ii} $, the proportion of variance not explained by the common
factors,
which is considerably smaller than 1 for small idiosyncratic variances $ \idiov_i $.
Hence, our prior is particularly sensible, if the communalities $R_i^2=1-\idiov_i /\om_{ii}$ are rather unbalanced across variables and the variance of some observations is very well-explained by the common factors, while this is not the case for other variables.
Our case studies illustrate that this prior usually leads to unimodal
posterior densities for the idiosyncratic variances.
An estimator $\widehat{\Vary^{-1}}$ of the inverse $\Vary^{-1}$ of the marginal covariance matrix
is required to formulate prior (\ref{priorsiid}).
If $T \gg \dimy$, then the inverse of the sample covariance matrix $\Scov{y}$ could be used,
i.e. $\widehat{\Vary^{-1}}=\Scov{y}^{-1}$. However, this estimator is unstable if $ \dimy$ is not
small compared to $T$, and does not exist if $\dimy>T$. Hence, we prefer
a Bayesian estimator which is obtained by combining the
sample information with the inverted Wishart prior $\Vary^{-1} \sim \Wishart{\dimy}{\nu_o, \nu_o {\mathbf S}_o}$:
\begin{eqnarray}
\widehat{\Vary^{-1}}= (\nu_o + T/2)(\nu_o {\mathbf S}_o + 0.5 \sum_{t=1}^T \ym_t \trans{\ym_t})^{-1}. \label{Varyhat}
\end{eqnarray}
If the variables $y_{jt}, j=1,\ldots,m,$ are standardized over $t$, then ${\mathbf S}_o=\identy{\dimy}$ is a sensible choice.
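Putting the pieces together, the prior scales $C_{i0}$ in (\ref{priorsiid}) can be obtained from standardized data as in the following sketch (Python; function and variable names as well as the value of $\nu_o$ are our own illustrative choices):
\begin{verbatim}
import numpy as np

def idiosyncratic_prior_scales(y, c0=2.5, nu0=1.0):
    # C_{i0} = (c0 - 1) / (hat{Omega}^{-1})_{ii}, with the Bayesian
    # estimator (Varyhat) using S_o = I for standardized data.
    # y: data matrix of shape (T, m).
    T, m = y.shape
    S = y.T @ y                                   # sum_t y_t y_t'
    prec = (nu0 + T / 2) * np.linalg.inv(nu0 * np.eye(m) + 0.5 * S)
    return (c0 - 1.0) / np.diag(prec)
\end{verbatim}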
\subsubsection{The prior on the factor loadings} \label{priorfl}
Finally, conditional on $\deltav$ and $\idiov_1, \ldots, \idiov_\dimy$, a prior has to be
formulated for all nonzero factor loadings.
Since the likelihood function factors into a product over the rows of the loading matrix, prior independence across the rows is assumed.
For a given $\deltav$, let $\facload_{i\cdot}^{\deltav}$ be
the vector of unconstrained elements in the $i$th row of $\facload$.
The variance of the prior of $\facload_{i\cdot}^{\deltav}$ is assumed to depend on $\idiov_i$, because this allows
joint drawing of $\facload$ and $\idiov_1, \ldots, \idiov_\dimy$ and, even more importantly, sampling the model indicators $\deltav$ without conditioning on the model parameters during MCMC estimation, see Algorithm~\ref{Algo3} in Subsection~\ref{mcmc}.
For each row $i$ with $q_i>0$ nonzero elements, the standard prior takes the form
\begin{eqnarray}
\facload_{i\cdot}^{\deltav}|\idiov_i \sim \Normult{q_i}{\bfz, \bV_{i0}^{\deltav}\idiov_i}, \label{prior1}
\end{eqnarray}
where, typically, $\bV_{i0}^{\deltav}=A_0 \identy{q_i}$
\citep{lop-wes:bay,gho-dun:def,con-etal:bay}.
In addition, a fractional prior in the spirit of \citet{oha:fra} is introduced in this paper for sparse Bayesian factor models; it can be interpreted as the posterior obtained from a non-informative prior and a small fraction $b>0$ of the data. This yields a conditionally fractional prior for the \lq\lq regression model\rq\rq\
\begin{eqnarray}
\tilde{\ym}_i= \Xb_i ^{\deltav} \facload_{i\cdot}^{\deltav} + \tilde{\errorm}_i,
\label{regnonp}
\end{eqnarray}
where $\tilde{\ym}_i=\trans{(y_{i1} \cdots y_{iT})}$ and
$\tilde{\errorm}_i=\trans{(\error_{i1} \cdots \error_{iT})}$. $\Xb_i ^{\deltav}$ is a regressor matrix
constructed from the latent factors $\facm_1, \ldots, \facm_T$ (see Appendix~\ref{postdisfac} for details).
The fractional prior is then defined as a fraction of the full conditional likelihood, derived from regression model (\ref{regnonp}):
\begin{eqnarray*} p(\facload_{i\cdot}^{\deltav}|\idiov_i ,b ,\facm)
\propto \displaystyle p(\tilde{\ym}_i| \facm, \facload_{i\cdot}^{\deltav} ,\idiov_i)^b
= \left(\frac{1}{2\pi \idiov_i}\right)^{Tb/2}
\exp\left(-\frac{b}{2\idiov_i}
(\tilde{\ym}_i- \Xb_i ^{\deltav} \facload_{i\cdot}^{\deltav} )'(\tilde{\ym}_i- \Xb_i ^{\deltav} \facload_{i\cdot}^{\deltav})\right).
\end{eqnarray*}
This yields the following fractional prior:\footnote{Similar conditionally conjugate fractional priors have been applied
by several authors for variable selection in latent variable models \citep{smi-koh:par,fru-tue:bay, tue:bay,fru-wag:sto}.}
\begin{eqnarray} \label{priorfrac}
\facload_{i\cdot}^{\deltav} | \idiov_i ,b ,\facm \sim
\Normult{q_i}{\bm_{iT} ^{\deltav} , \bV_{iT}^{\deltav} \idiov_i /b},
\end{eqnarray}
where $\bm_{iT}^{\deltav} $ and $\bV_{iT}^{\deltav} $ are the posterior moments under the non-informative prior
$p(\facload_{i\cdot}^{\deltav} | \idiov_i) \propto \mbox{\rm c}$:
\begin{eqnarray}
\bV_{iT} ^{\deltav} = \left(\trans{(\Xb_i ^{\deltav})} \Xb_i ^{\deltav} \right) ^{-1} , \qquad
\bm_{iT} ^{\deltav} = \bV_{iT} ^{\deltav} \trans{(\Xb_i ^{\deltav})}\tilde{\ym}_i . \label{postmomA_frac}
\end{eqnarray}
Concerning the choice of the fraction $b$: larger values of $b$ extract more information
from the likelihood than smaller values, thereby reducing the influence of the sparsity prior
$p(\deltav)$ and leading to a larger number of estimated factors.
Depending on the relation between $\nfac$, $\dimy$, and $T$, small values such as $b=10^{-3}$, $b=10^{-4}$ or $b=10^{-5}$ yield sparse solutions.
In total, $N=\dimy T$ observations are available to estimate $d(\nfac,\dimy) = \nfac \dimy-\nfac(\nfac-1)/2 = \nfac (\dimy- (\nfac-1)/2)$ free elements in the coefficient matrix $\facload$ for a GLT structure.
If $d(\nfac,\dimy)$ is considerably smaller than $N$, then the variable selection literature suggests choosing $b_N=1/(T\dimy)$.
This is in particular the case if the potential number of factors $\nfac$ is considerably smaller than $T$.
On the other hand, if $d(\nfac,\dimy)$ is of the order of $N$, then $b_N$ implies a fairly small penalty and may lead to overfitting models. Following \citet{fos-geo:ris}, the risk inflation criterion $b_R=1/d(\nfac,\dimy)^2$ can be applied in this case. For a GLT structure, $b_R$ implies a stronger penalty than $b_N$ if $d(\nfac,\dimy)> \sqrt{ T \dimy}$.
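The following sketch (function name hypothetical) computes both candidate fractions and applies the rule of thumb just described:
\begin{verbatim}
import numpy as np

def choose_fraction(k, m, T):
    # d(k, m) = k * m - k * (k - 1) / 2 free loadings of a GLT structure
    d = k * m - k * (k - 1) // 2
    b_N = 1.0 / (T * m)
    b_R = 1.0 / d ** 2
    # b_R penalizes more strongly than b_N iff d > sqrt(T * m)
    return b_R if d > np.sqrt(T * m) else b_N
\end{verbatim}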
\subsection{MCMC estimation} \label{mcmc}
We use MCMC techniques to sample from the posterior $ p(\deltav, \idiov_1, \ldots, \idiov_\dimy, \facload,\hypv, \facm|\ym)$
(with $\facm=(\facm_1,\ldots,\facm_T))$ of the overfitting model (\ref{fac1reg}),
given the priors introduced in Subsection~\ref{priorel}.
As noted by many authors, e.g. \citet{pat-etal:pos}, MCMC sampling for sparse Bayesian factor models is notoriously difficult,
since sampling the indicator matrix $\deltav$ corresponds to navigating through an extremely high-dimensional model space.
This is even more challenging if the sparse factor model is overfitting.
In this paper, a designer MCMC scheme is employed which is summarized in Algorithm~\ref{Algo3}, where several steps have been designed specifically for sparse Bayesian factor models under the GLT constraint when the number of factors is unknown. This designer MCMC scheme delivers posterior draws of $\facload $ and $\deltav$ with a varying number $\nfacr$ of nonzero columns.
%
An unordered GLT structure is imposed on the nonzero columns $\betatilde $ and $\deltavtilde$
by requiring that the leading indices $l_1, \ldots , l_{\nfacr} $
obey condition \mbox{\bf GLT-TS}\ given in (\ref{condljTS}).
Non-identification with respect to trivial rotations introduces column and sign switching during MCMC sampling.
Hence, the sampler produces draws that fulfill various {\em necessary} conditions for identification, while the more
demanding {\em sufficient} conditions are assessed through a scanning of the posterior draws
during postprocessing, see Subsection~\ref{subGL}.
\begin{alg}[\textbf{MCMC estimation for sparse Bayesian factor models with unordered GLT structures}] \label{Algo3}
Choose initial values\footnote{See Appendix~\ref{init} for details.} for $(\nfacr,
\deltav, \facload,\idiov_1,\ldots,\idiov_{\dimy},\hypv)$, iterate $M$ times through the following steps and discard the first $M_0$ draws as burn-in:
\begin{itemize}
\item[(F)] Sample the latent factors $\facm_1,\ldots,\facm_T$ conditional on
the model parameters $\facload$ and $\idiov_1,\ldots,\idiov_{\dimy}$ from $ p(\facm_1,\ldots,\facm_T|\facload,\idiov_1,\ldots,\idiov_{\dimy},\ym) $.
\item[(A)] Perform a boosting step based either on ASIS or marginal data augmentation.
\item[(R)] Perform a reversible jump MCMC step
to add or delete spurious columns in $\deltav$ and $ \facload$.
\item[(L)] Loop over all nonzero columns $j$ of the indicator matrix $\deltav$ in a
random order and sample the leading index $l_j$
conditional on the remaining columns $\deltacol{-j}$, the factors $\facm_1,\ldots,\facm_T$, and $\hypv$
without conditioning on the model parameters $ \facload$ and $\idiov_1,\ldots,\idiov_{\dimy}$.
\item[(D)] Loop over all nonzero columns of the indicator matrix $\deltav$ in a random order. Sample for each
column $j$ all indicators below the leading index $l_j$ (i.e.~$\delta_{ij}$ with $i \in I_j=\{l_j +1, \ldots, \dimy \}$)
conditional on the remaining columns $\deltacol{-j}$, the factors $\facm_1,\ldots,\facm_T$, and $\hypv$
(without conditioning on the model parameters
$ \facload$ and $\idiov_1,\ldots,\idiov_{\dimy}$) jointly using Algorithm~\ref{AlgoInd} in Appendix~\ref{mcmcsmodi}.
\item[(H)] Sample $\tau_j |\deltav \sim \Betadis{a_0 + d_j,b_0 + \dimy- d_j}, j=1,\ldots,\nfac$,
where $d_j=\sum_{i=1}^\dimy {\delta_{ij}}$ is the number of nonzero factor loadings in column $j$.
\item [(P)] Sample the model parameters $\facload$ and $\idiov_1,\ldots,\idiov_{\dimy}$ jointly conditional on the indicator matrix $\deltav$ and the factors $\facm_1,\ldots,\facm_T$ from $p(\facload, \idiov_1,\ldots,\idiov_{\dimy}| \deltav,\facm_1,\ldots,\facm_T,\ym)$.
\end{itemize}
\end{alg}
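In pseudocode, one sweep of Algorithm~\ref{Algo3} has the following structure. This is a schematic outline only: the step functions are placeholders for the samplers described in the text and the appendices, and the names and interfaces are illustrative, not actual software.
\begin{verbatim}
def mcmc_sweep(state, y, prior, rng):
    state.F = sample_factors(state.Lambda, state.sigma2, y, rng)   # (F)
    state = boosting_step(state, y, rng)                           # (A)
    state = reversible_jump(state, y, prior, rng)                  # (R)
    state = update_leading_indices(state, y, prior, rng)           # (L)
    state = update_indicators(state, y, prior, rng)                # (D)
    state.tau = sample_tau(state.delta, prior.a0, prior.b0, rng)   # (H)
    state.Lambda, state.sigma2 = sample_params(state.delta,
                                               state.F, y, rng)    # (P)
    return state
\end{verbatim}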
\noindent The most innovative part of this MCMC scheme concerns sampling the indicator matrix $\deltav$.
Updating $\deltav$ for sparse exploratory Bayesian factor analysis without identification constraints on $\deltav$
is fairly straightforward, see e.g. \citet{car-etal:hig} and \citet{kau-sch:bay}, among many others.
However, a more refined approach is implemented in the present paper to address the econometric identification issues for sparse factor models discussed in Section~\ref{secide}.
The nonzero columns of $\facload$ and $\deltav$ are instrumental for estimating the number of factors during postprocessing,
see Subsection~\ref{estr}.
To increase and decrease the number of nonzero columns in $\facload$ and $\deltav$,
Step~(R) exploits Theorem~\ref{theoverGLT} to add and delete spurious factors
through a reversible jump MCMC step described in Subsection~\ref{RJMCMC}.
As in \citet{con-etal:bay}, it is much easier to introduce new latent factors into the model through these spurious factors,
compared to alternative approaches that would split existing factors or add new ones only under the condition that enough nonzero elements are preserved.
To force the unordered GLT structure on the $\nfacr$ nonzero columns of $\facload$ and $\deltav$, Step~(L) performs MH steps
to navigate through the space of all admissible leading indices $(l_1, \ldots, l_{\nfacr})$ that satisfy \mbox{\bf GLT-TS} , see Subsection~\ref{movelead}.
To implement Step~(D) efficiently, a method for sampling an entire set of indicators $\{ \delta_{ij}, i \in I_j \}$ in a particular column $j$ in one block is developed in Appendix~\ref{mcmcsmodi}.
Step~(F) and Step~(P) operate in a \lq\lq confirmatory\rq\rq\ factor model where certain loadings are
constrained to zero according to the indicator matrix $\deltav$. Although these steps are standard in Bayesian
factor analysis (see e.g. \citet{lop-wes:bay} and \citet{gho-dun:def}),
improvements are suggested, such as multi-move sampling of all unknown model parameters $\facload$ and $\idiov_1,\ldots,\idiov_{\dimy}$ in Step~(P);
see Appendix~\ref{SectionF_fac} and \ref{jointfac} for further details.
Finally,
the boosting Step~(A) is added to improve mixing of the MCMC scheme, see Subsection~\ref{accelerate} and Appendix~\ref{accelerate_App} for more details.
\subsubsection{Special MCMC moves for unordered GLT structures} \label{movelead}
Step~(L) in Algorithm~\ref{Algo3} implements moves that explicitly change the position of the leading indices in the $\nfacr$ nonzero columns of $\deltav$ (including
spurious columns), without violating \mbox{\bf GLT-TS} . Let $\lm =(l_{1} , \ldots , l_{\nfacr})$ be the set of leading indices. Since an unordered GLT structure has to be preserved, the leading index $l_j$ in column $j$
is not free to move, but restricted to a subset $\leadset{S}{\lm_{-j}} \subseteq \{1,\ldots,\dimy\}$ which
depends on the leading indices $\lm_{-j}$ of the other columns and the maximum degree of overfitting $S$.\footnote{See Subsection~\ref{GLTMCMC} for a definition of $\leadset{S}{\lm_{-j}}$.}
%
We scan all nonzero columns of $\deltav$ in a random order and propose to change the position
of $l_j$ in a selected column $j$ using one of four local moves, namely
shifting the leading index, adding a new leading index, deleting a leading
index and switching the leading elements (and all indicators in between) between column $j$ and a randomly selected column $j'$; see Figure~\ref{figStepL} for illustration and Subsection~\ref{updatelead} for further details.
\begin{Figure2}{MCMC moves to change the leading indices of an unordered GLT structure;
from left to right: shifting the leading index, adding a new leading index, deleting a leading
index and switching the leading elements}{figStepL}{lead_change}{switch_lead}{0.4}
\end{Figure2}
\subsubsection{Split and merge moves for overfitting models} \label{RJMCMC}
For overfitting factor models, Step~(R) in Algorithm~\ref{Algo3}
is a dimension changing move that explicitly changes the number $\nfacr$ of nonzero columns in $\deltav$ and $\facload$ by adding and deleting a spurious column.
If a spurious column $\Mm$ is identified among the nonzero columns
of $\facload$, then as demonstrated in Subsection~\ref{secover} it can be substituted by a zero column without changing the likelihood function,
by adding $\Mm \trans{\Mm }$ to $\Vare$. On the other hand, any zero column in $\facload$ can be turned into an (additional) spurious column without changing the likelihood function either, see (\ref{adsp}). %
This is the cornerstone of our procedure. However, while the likelihood is invariant to these moves, the prior is not,
and simply adding or deleting spurious columns would lead to an invalid MCMC step.
A reversible jump MCMC step as implemented in Step~(R) can correct for that.
The split and merge moves outlined above form a reversible pair that operates in the latent variable model (\ref{fac1reg}) conditional on all parameters, except the hyperparameter $\hypv=(\tau_1, \ldots, \tau_\nfac)$ which is integrated out of prior (\ref{prigen}).
Split and merge moves are local moves operating between the two following factor models:
\begin{eqnarray}
&& y_{l_j,t}= \facload_{{l_j},-j}^{\deltav} \facm_{t,-j} + \error_{l_j,t} ,
\qquad \error_{l_j,t} \sim \Normal{0,\idiov_{l_j}}, \label{spurious1}\\
&& y_{l_j,t}= \facload_{{l_j},-j}^{\deltav} \facm_{t,-j} + \load_{l_j,j} ^{\mbox{\rm \tiny sp}} \fac_{jt}^{\mbox{\rm \tiny sp}} \delta_{l_j,j} + \tilde{\error}_{l_j,t},
\qquad \tilde{\error}_{l_j,t} \sim \Normal{0,\idiov_{l_j}-\delta_{l_j,j}(\load_{l_j,j}^2) ^{\mbox{\rm \tiny sp}}}, \label{spurious2}
\end{eqnarray}
where model (\ref{spurious2}) contains a spurious column with $ \load_{l_j,j} ^{\mbox{\rm \tiny sp}}$ being the only nonzero loading in this column.
If $\delta_{l_j,j}=0$ in model (\ref{spurious2}), then model (\ref{spurious1}) results. However, if $\delta_{l_j,j}=1$, then, as discussed in Subsection~\ref{secover}, model (\ref{spurious2}) is not identified and
$\load_{l_j,j} ^{\mbox{\rm \tiny sp}}$ can take any value such that $ (\idiov_{l_j}) ^{\mbox{\rm \tiny sp}} = \idiov_{l_j} - (\load_{l_j,j}^2) ^{\mbox{\rm \tiny sp}} >0 $.
By integrating model (\ref{spurious2}) with respect to the spurious factor $ \fac_{jt}^{\mbox{\rm \tiny sp}}$, it can be easily verified that both models imply the same distribution $p(y_{l_j,t}| \facload_{{l_j},-j}^{\deltav}, \facm_{t,-j},\idiov_{l_j})$.
The split move turns one of the zero columns
$j$ in (\ref{spurious1}) into a spurious column, by selecting a row $l_j$
not occupied by any other leading index and splitting the variance $\idiov_{l_j}$ of the idiosyncratic error between the new
variance $(\idiov_{l_j}) ^{\mbox{\rm \tiny sp}}$ and
the spurious factor loading $\load ^{\mbox{\rm \tiny sp}}_{l_j,j}$ such that
\begin{eqnarray*}
(\load ^{\mbox{\rm \tiny sp}}_{l_j,j})^2 + (\idiov_{l_j}) ^{\mbox{\rm \tiny sp}} = \idiov_{l_j}.
\end{eqnarray*}
Splitting is achieved by sampling $U$
from a distribution with support [-1,1]
and defining:\footnote{Specific choices for the distribution of $U$ are discussed in Appendix~\ref{RJdetails}. For instance, sampling $U^2$ from a uniform distribution on [0,1] worked pretty well in many situations.}
\begin{eqnarray*} \label{prorjitAmain}
\load ^{\mbox{\rm \tiny sp}}_{l_j,j} = U \sqrt{\idiov_{l_j}} , \qquad (\idiov_{l_j}) ^{\mbox{\rm \tiny sp}}= (1-U^2) \idiov_{l_j} .
\end{eqnarray*}
%
Given $\load ^{\mbox{\rm \tiny sp}}_{l_j,j}$ and $(\idiov_{l_j}) ^{\mbox{\rm \tiny sp}}$, new factors $\fac_{jt} ^{\mbox{\rm \tiny sp}}$ are
proposed for the spurious column $j$, independently for $t=1, \ldots,T$, from the conditional density $ p(\fac_{jt}^{\mbox{\rm \tiny sp}}| \facm_{t,-j},\facload_{{l_j},-j}^{\deltav}, \load ^{\mbox{\rm \tiny sp}}_{l_j,j}, (\idiov_{l_j}) ^{\mbox{\rm \tiny sp}}, y_{l_j,t})$ which takes a very simple form (see Appendix~\ref{RJdetails} for details):
\begin{eqnarray*} \label{mainonfsp}
\fac_{jt} ^{\mbox{\rm \tiny sp}} | \cdot \sim \Normal{E_{jt} ^{\mbox{\rm \tiny sp}},V_{j} ^{\mbox{\rm \tiny sp}}}, \quad
V_j ^{\mbox{\rm \tiny sp}} = 1- U^2, \quad
\displaystyle E_{jt} ^{\mbox{\rm \tiny sp}}= U/\sqrt{\idiov_{l_j}} \times
\left( y_{l_j,t}- \facload_{l_j,-j} \facm_{t,-j} \right).
\end{eqnarray*}
By reversing the split move, the merge move sets the only nonzero factor loading $\load ^{\mbox{\rm \tiny sp}}_{l_j,j}$ in row $l_j$ of a spurious column $j$
in (\ref{spurious2}) to zero, while increasing the idiosyncratic variance $\idiov_{l_j}$ at the same time. Deleting the spurious column determines $\idiov_{l_j}$ and $U$ in the following way:
\begin{eqnarray*}
\idiov_{l_j} = (\load _{l_j,j}^{\mbox{\rm \tiny sp}}) ^2 + (\idiov_{l_j}) ^{\mbox{\rm \tiny sp}},\qquad
U=\load _{l_j,j}^{\mbox{\rm \tiny sp}} / \sqrt{(\load _{l_j,j}^{\mbox{\rm \tiny sp}})^2 + (\idiov_{l_j}) ^{\mbox{\rm \tiny sp}}}.
\end{eqnarray*}
Since column $j$ is turned into a zero column, new factors are proposed from the prior, i.e. $\fac_{jt} \sim \Normal{0,1}$ for all $t=1, \ldots,T$.
At each sweep of the MCMC scheme, a decision has to be made whether a split or a merge move is performed.
Evidently, no merge move can be performed whenever the current factor loading matrix contains no spurious columns. Similarly, no split move can be performed whenever no additional spurious columns can be introduced; this happens if no more zero columns are present or if the number of spurious columns is equal to $S$. Otherwise, split and merge moves are selected randomly, see Appendix~\ref{RJdetails}, which also contains details on the acceptance rates both for split and merge moves.
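A minimal sketch of the deterministic part of both moves (names hypothetical; the acceptance step of the reversible jump sampler is omitted) makes explicit that split and merge are exact inverses of each other:
\begin{verbatim}
import numpy as np

def split(sigma2, rng):
    # Split the idiosyncratic variance sigma2 of row l_j into a
    # spurious loading and a reduced variance; here U^2 ~ U[0, 1].
    u = rng.choice([-1.0, 1.0]) * np.sqrt(rng.uniform())
    return u * np.sqrt(sigma2), (1.0 - u ** 2) * sigma2, u

def merge(lam_sp, sigma2_sp):
    # Inverse mapping: absorb the spurious loading back into the
    # idiosyncratic variance and recover U.
    sigma2 = lam_sp ** 2 + sigma2_sp
    return sigma2, lam_sp / np.sqrt(sigma2)
\end{verbatim}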
\subsubsection{Boosting MCMC} \label{accelerate}
Step~(F) and Step~(P) in Algorithm~\ref{Algo3} perform full conditional Gibbs sampling for a confirmatory factor model corresponding to the current indicator matrix $\deltav$, by sampling the factors conditional on the loadings and idiosyncratic variances and sampling the loadings and idiosyncratic variances conditional on the factors. Depending on the signal-to-noise ratio of the latent variable representation, such full conditional Gibbs sampling tends to be poorly mixing.
For the basic factor model (\ref{fac1reg}), where ${\facm}_t \sim \Normult{\nfac}{\bfz,\identy{\nfac}}$,
the information in the data (the \lq\lq signal\rq\rq ) can be
quantified by the matrix $\trans{\facload} \Vare ^{-1} \facload $ in comparison to the identity matrix $\identy{\nfac}$ (the \lq\lq noise\rq\rq ) in the filter for $\facm_t|\ym_t,\facload, \Vare$
(see Appendix~\ref{SectionF_fac}):
\begin{eqnarray*}
{\facm}_t
|\ym_t, \facload, \Vare \sim \Normult{\nfac}{(\identy{\nfac} + \trans{\facload} \Vare ^{-1} \facload) ^{-1} \trans{\facload} \Vare ^{-1} \ym_t , (\identy{\nfac} + \trans{\facload} \Vare ^{-1} \facload) ^{-1} }.
\end{eqnarray*}
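This filter is cheap to implement; the following illustrative sketch (our own notation, with $\Vare$ diagonal as in the basic model) draws one factor vector:
\begin{verbatim}
import numpy as np

def sample_factor(y_t, Lambda, sigma2, rng):
    # Draw f_t ~ N(B^{-1} Lambda' Sigma^{-1} y_t, B^{-1}) with
    # B = I_k + Lambda' Sigma^{-1} Lambda and Sigma = diag(sigma2).
    k = Lambda.shape[1]
    Li = Lambda / sigma2[:, None]            # Sigma^{-1} Lambda
    B = np.eye(k) + Lambda.T @ Li
    mean = np.linalg.solve(B, Li.T @ y_t)
    L = np.linalg.cholesky(B)
    return mean + np.linalg.solve(L.T, rng.standard_normal(k))
\end{verbatim}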
In particular for large factor models with many measurements, one would expect the data to contain ample information to estimate the factors ${\facm}_t$. However, this is the case only if the information matrix $\trans{\facload} \Vare ^{-1} \facload $ increases with $\dimy$, hence if most of the factor loadings are nonzero. Sparse factor models contain many columns with quite a few zero loadings, leading to a low signal-to-noise ratio and, as a consequence, to poor mixing of full conditional Gibbs sampling. This is illustrated in the left-hand panel of Figure~\ref{Boostadd}, which shows posterior draws of $\trace{\trans{\facload} \Vare ^{-1} \facload }$ without boosting Step~(A) for the exchange rate data to be discussed in Subsection~\ref{applicEx22}.
\begin{Figure3}
{Exchange rate data; fractional prior with $b=b_N$. Posterior draws of $\trace{\trans{\facload} \Vare ^{-1} \facload}$ without boosting (left-hand side), boosting through ASIS based on choosing $\sqrt{\Psi_j}$ as the largest loading (in absolute values) in each nonzero column (middle) and boosting through MDA based on the inverted Gamma working prior $\Psi_j \sim \Gammainv{1.5,1.5}$ (right-hand side).}{Boostadd}{ex22_noboost}{ex22_boost_asis}{ex22_boost_mda}{0.2}
\end{Figure3}
Hence, for sparse factor models it is essential to include boosting steps
to obtain an MCMC scheme with improved mixing properties, while keeping all priors unchanged.
Popular boosting algorithms are the ancillarity-sufficiency interweaving strategy (ASIS), introduced by \citet{yu-men:cen}, and marginal data augmentation (MDA), introduced by \citet{van-men:art}.\footnote{ASIS has been applied to SV models \citep{kas-fru:anc}, TVP models \citep{bit-fru:ach}, and factor SV models \citep{kas-etal:eff}; MDA has been applied to factor models by \citet{gho-dun:def,con-etal:bay,pia-pap:bay}.}
There are numerous examples in the literature, where boosting enhances mixing at the cost of changing the prior, an example being the MDA algorithm applied by \citet{gho-dun:def} to the basic factor model. However, changing the prior of the factor loading matrix $\facload$ in the original model is undesirable in any variable selection context and is avoided by the boosting strategies applied in the present paper.
Both for ASIS and MDA, boosting is based on moving from model (\ref{fac1reg}) where ${\facm}_t \sim \Normult{\nfac}{\bfz,\identy{\nfac}}$
to an expanded model
with a more general prior:
\begin{eqnarray*}
\ym_t = \tilde{\facload} \tilde{\facm}_t + \errorm_t, \quad \errorm_t \sim \Normult{\dimy}{\bfz,\Vare}, \qquad
\tilde{\facm}_t \sim \Normult{\nfac}{\bfz,\Psiv},
\end{eqnarray*}
where $\Psiv=\Diag{\Psi_1,\ldots,\Psi_{\nfac}}$ is diagonal. The relation between the two systems is given by the following transformation:
\begin{eqnarray}
\tilde{\facm}_t = (\Psiv)^{1/2} \facm_t , \quad \tilde{\facload} = \facload (\Psiv)^{-1/2}. \label{fac5pxmain}
\end{eqnarray}
Note that the nonzero elements in $\tilde{\facload} $ have the same position as the nonzero elements in $\facload$.
An important aspect of applying boosting in the context of sparse Bayesian factor models is the following.
The transformation (\ref{fac5pxmain}) has to be a one-to-one mapping for any kind of boosting based on parameter expansion to be valid.
For sparse Bayesian factor models, this is true only for the {\em nonzero} columns of $ \facload$, whereas for any zero column $j$, (\ref{fac5pxmain}) would be satisfied for arbitrary values $\Psi_{j}$ and many different expanded systems would map into the original system.\footnote{Applying a boosting step to an unobserved factor $f_{jt}$ has the undesirable effect that the prior of $f_{jt}$ is no longer a normal distribution. Rather, it is a scale mixture of Gaussian distributions with the mixing distribution being equal to the distribution of $\Psi_{j}$. For instance, if $\Psi_{j}$ follows an
inverted Gamma distribution as in marginal data augmentation, then moving to the expanded model by rescaling the factors $f_{jt}$ for all $t$ would lead to a model where $f_{jt}$ follows a $t$-prior rather than a normal distribution with scale $\Psi_{j}$.} Hence, we set $\Psi_{j}=1$ for all zero columns of $ \facload$ and, for nonzero columns $j$, choose $\Psi_j $ in a deterministic fashion for ASIS and sample $\Psi_j $ from a working prior for MDA.
For boosting based on ASIS, a nonzero factor loading $\load_{n_j,j}$ is chosen in each nonzero column $j$, to define
the current value of $\Psi_j$ as $\sqrt{\Psi_j}=\load_{n_j,j}$. This creates a factor loading matrix $\tilde{\facload}$ in the expanded system where
for all nonzero columns $j$, $\tilde{\load}_{n_j,j}=1$ whereas $\tilde{\load}_{i,j}= \load_{ij}/\load_{n_j,j}$ for $i\neq n_j$.
For MDA,
$\Psi_j$ is sampled from a working prior $p(\Psi_j)$,
which is independent both of $\facload$ and $\Vare$.
Our assumption of prior independence between the working parameter $\Psiv$ and the remaining parameters $\facload$ and $\Vare$ guarantees that the prior distribution of $\facload$ remains unchanged, despite moving between the two models.
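The rescaling (\ref{fac5pxmain}) underlying both strategies is a one-liner; the sketch below (hypothetical names, assuming the factors are stored in a $T \times \nfac$ array) also shows the deterministic ASIS choice of $\sqrt{\Psi_j}$ as the largest loading (in absolute value) of each nonzero column:
\begin{verbatim}
import numpy as np

def choose_psi_asis(Lambda):
    # sqrt(Psi_j): largest loading (in absolute value) of column j;
    # Psi_j = 1 is kept for zero columns so the mapping is one-to-one.
    psi_sqrt = np.ones(Lambda.shape[1])
    for j in range(Lambda.shape[1]):
        if np.any(Lambda[:, j] != 0.0):
            psi_sqrt[j] = Lambda[np.argmax(np.abs(Lambda[:, j])), j]
    return psi_sqrt

def to_expanded_system(Lambda, F, psi_sqrt):
    # Transformation (fac5pxmain): rescale loadings and factors.
    return Lambda / psi_sqrt[None, :], F * psi_sqrt[None, :]
\end{verbatim}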
For both boosting strategies, Step~(A) in Algorithm~\ref{Algo3} is implemented as described in detail in Algorithm~\ref{AlgoA} in Appendix~\ref{accelerate_App}.
For illustration, Figure~\ref{Boostadd} shows considerable efficiency gain in the posterior draws of $\trace{\trans{\facload} \Vare ^{-1} \facload }$ for the exchange data, when a boosting strategy is applied, both for ASIS (middle panel) as well as MDA (right-hand panel).
\subsection{Bayesian inference through postprocessing posterior draws} \label{exbaykrandom}
MCMC estimation through Algorithm~\ref{Algo3} delivers draws from the posterior $ p(\deltav, \idiov_1, \ldots, \idiov_\dimy, \facload|\ym)$ that are not
identified in the strict sense discussed in Subsection~\ref{onefactro}.
The only quantity that can be inferred from the posterior draws, without any concern for identification, is the marginal covariance matrix $\Vary=\facload \trans{\facload} +\Vare$.
%
%
For posterior inference beyond $\Vary$ such as estimating the number $\nfactrue$ of factors and posterior identification of $\Vare$ and $\facloadtrue \trans{\facloadtrue}$, it is essential to consider only posterior draws for which
the variance decomposition is unique. While most papers ignore this important aspect,
variance identification for sparse Bayesian factor models is fully addressed in the present paper during post-processing.
Due to the point-mass mixture prior employed in this paper, the posterior draws of
$\deltav$ contain valuable information both concerning the sparsity and identifiability
of the factor loading matrix, as the point-mass
mixture prior allows exact zeros in the factor loading matrix both a priori and a posteriori.
All posterior draws obtained from Algorithm~\ref{Algo3} are post-processed to verify whether the matrix $\tilde{\facload}$
of the $\nfacr$ nonzero columns of $\facload$ satisfies the row-deletion condition \mbox{\bf AR}\ with $r=\nfacr$.
For draws with $\nfacr \leq 4$, the simple counting rules outlined in Corollary~\ref{Lemma1} in Appendix~\ref{simcount} are applied. For draws with $\nfacr > 4$, a very efficient procedure is applied that derives a block diagonal representation
as in Theorem~\ref{Lemma2} for $\tilde{\facload}$ sequentially and applies the 3-5-7-9-\ldots\ rule to the corresponding subblocks,
see Algorithm~\ref{algARIDE} in Appendix~\ref{verpartbig} for more details.
Any further Bayesian inference is performed only for the $M_V$ variance identified draws.
\subsubsection{Identification of the number of factors $r$} \label{estr}
%
Given posterior draws of $\facload$ and $\deltav$, the challenge is to estimate the number of factors $\nfactrue$, if the model is overfitting.
A common procedure to identify the number of factors is to apply
an incremental procedure, increasing $\nfac$ step by step, and to use model selection criteria such as information criteria \citep{bai-ng:det2002}
or Bayes factors \citep{lee-son:bay,lop-wes:bay} to choose the number of factors.
Alternatively, a number of authors have suggested estimating the number of factors in one sweep together with the parameters.
\citet{car-etal:hig}, for instance, infer $\nfactrue$ from
the columns of $\deltav$, after removing
columns with only a few nonzero elements in a heuristic manner.
\citet{bha-dun:spa} employ a procedure which increasingly shrinks factor loadings toward zero
with increasing column number. The number of factors is changed during sampling by setting
an entire column of the loading matrix to zero if all its factor loadings are close to zero.
\citet{kau-sch:bay} estimate a sparse dynamic factor model with an increasing number $\nfac$
of potential factors and use a so-called \lq\lq extracted factor representation\rq\rq\ during MCMC post-processing to select the number of factors.
However, any such heuristic method of inferring the number of factors from the nonzero columns of $\deltav$ in an overfitting model without checking uniqueness of the variance decomposition is prone to be biased. Instead, our procedure relies on the mathematically justified representation of
the loading matrix $\facload$ in an overfitting factor model given by Theorem~\ref{theoverGLT} and provides a new, non-incremental approach for selecting the number of factors.
We identify $\nfactrue$ through a one-sweep MCMC procedure which is based on purposefully overfitting the number
$\nfac$ of potential factors within the framework of sparse Bayesian factor analysis as implemented above.
A related strategy was also applied in \citet{con-etal:bay} within the framework of dedicated Bayesian Factor analysis.
Evidently, zero columns (if any) in $\facload$ can be removed, since $\facload \trans{\facload }=\betatilde \trans{\betatilde}$, where $\betatilde$ contains the $\nfacr$ nonzero columns of $\facload$.
As outlined in Section~\ref{secover}, the number $\nfacr$ of nonzero columns is equal to the number of factors $\nfactrue$, if the variance decomposition is unique for $\nfactrue=\nfacr$.
This is no longer true, if uniqueness of the variance decomposition does not hold for $\nfactrue=\nfacr$. In an overfitting factor model with $\nfac>\nfactrue$, many draws with $\nfacr$ nonzero columns will have a representation as in Theorem~\ref{theoverGLT} and contain a submatrix $\Mm $ with $s$ spurious columns, each of which has exactly one nonzero element. Hence, these draws violate even the most simple condition for variance identification. For such posterior draws $\betatilde$, $\nfacr$ overestimates $\nfactrue$ since, according to Theorem~\ref{theoverGLT}, $\nfacr=\nfactrue+s$, or equivalently:
$\nfactrue = \nfacr-s$.
Hence, methods of inferring the number of factors from the nonzero columns $\nfacr$ of the unconstrained posterior draws $\deltav$ in an overfitting factor model with $\nfac>\nfactrue$ are prone to overestimate the number of factors, in particular, if many draws violate simple conditions for variance identification.
As opposed to this, we rely on uniqueness of variance decomposition and discard draws from the posterior sample
that violate uniqueness of the variance decomposition for $\nfactrue=\nfacr$.
%
For the remaining draws, the
number $\nfacr$ of nonzero columns of $\betatilde$ can be considered as a posterior draw of the number
of factors $\nfactrue$. The entire (marginal) posterior distribution $p(\nfacr|\ym)$ can be estimated
from these draws, using the empirical pdf of the sampled values for $\nfacr$.
The posterior mode $\tilde{\nfactrue}$ of $p(\nfacr|\ym)$
provides a point estimator of the number of factors $\nfactrue$.
This inference is valid, even if the rotation problem for $\facload$ is not solved,
as only uniqueness of the variance decomposition is essential.
It should be noted that point mass mixture priors are particularly
useful in identifying spurious factors, since these priors are able to identify exact zeros in the columns
corresponding to spurious factors.
Under continuous shrinkage priors, see e.g. \citet{bha-dun:spa,roc-geo:fas}, it is not straightforward
to identify spurious factors.
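Operationally, the estimator of $\nfactrue$ reduces to tabulating the number of nonzero columns over the variance identified draws, as in the following sketch (names hypothetical; the variance identification check itself follows Subsection~\ref{subGL}):
\begin{verbatim}
import numpy as np

def posterior_number_of_factors(delta_draws, identified):
    # delta_draws: list of (m x k) 0/1 indicator matrices;
    # identified: list of booleans from the post-processing check.
    rs = [int((d.sum(axis=0) > 0).sum())
          for d, ok in zip(delta_draws, identified) if ok]
    values, counts = np.unique(rs, return_counts=True)
    pmf = counts / counts.sum()          # estimate of p(r | y)
    return values, pmf, int(values[np.argmax(pmf)])  # mode = r-hat
\end{verbatim}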
\subsubsection{Further inference for unordered variance identified GLT draws}
In addition to estimating the number of factors as in Subsection~\ref{estr}, further Bayesian inference can be performed for the $M_V$ variance identified draws without resolving trivial rotation.
Evidently,
posterior inference is possible for all idiosyncratic variances $\idiov_1, \ldots, \idiov_\dimy$ in $\Vare$.
Functionals of $\Vare$, such as the trace of $\Vare$ and $\Vare^{-1}$ as well as the (log)
determinant of $\Vare$ are useful means of assessing convergence of the MCMC sampler. Furthermore, for each variable $y_{it}$ inference with respect to the proportion of the variance
explained by the common factors (also known as communalities $R^2_i$) is possible:
\begin{eqnarray}
R^2_i = \sum_{j=1}^{\nfactrue} R_{ij}^2,
\qquad R_{ij}^2 = \frac{\loadtrue_{ij}^2 }{\sum_{l=1}^{\nfactrue} \loadtrue_{il}^2+\idiov_i} .
\label{faccum}
\end{eqnarray}
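Given variance identified draws of $\facloadtrue$ and $\idiov_1,\ldots,\idiov_\dimy$, the communalities are obtained directly, e.g.~by the following sketch (illustrative code only):
\begin{verbatim}
import numpy as np

def communalities(Lambda, sigma2):
    # R2_ij and R2_i as in (faccum); Lambda is m x r, sigma2 length m.
    total = (Lambda ** 2).sum(axis=1) + sigma2
    R2_ij = Lambda ** 2 / total[:, None]
    return R2_ij.sum(axis=1), R2_ij
\end{verbatim}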
In addition, due to Lemma~\ref{theirr}, irrelevant variables can be identified through the position of
zero rows. This allows estimating the (marginal) posterior probability $\Prob{q_i=0|\ym}$ for all variables $y_{it}$ by counting the frequency of the event $q_i=\sum_{j=1}^\nfac \delta_{ij}= 0$ during MCMC
sampling for each row $i=1,\ldots,\dimy$.
%
Finally, overall sparsity in terms of the number $d$ of nonzero elements in $\deltav$,
\begin{eqnarray} \label{modeddd}
d = \sum_{j=1}^\nfac \sum_{i=1}^\dimy \delta_{ij},
\end{eqnarray}
can be evaluated.
Posterior draws of $d$ are particularly useful for checking convergence and assessing efficiency of the MCMC sampler, as $d$ captures the ability of the sampler to move across (variance identified) factor models of different dimensions.
\subsubsection{Resolving trivial rotation issues} \label{subGL}
For all unordered GLT draws $\betatilde$ that are variance identified,
the factor loading matrix $\facloadtrue$ and the corresponding indicator matrix $\deltavlam$
are uniquely identified from the $\nfactrue$ nonzero columns $\betatilde$ and $\deltavtilde$ of $ \facload$ and the corresponding indicator matrix
$\deltav$ by Theorem~\ref{theGLT}.
Since the MCMC draws $\betatilde $ and $\deltavtilde $
are trivial rotations
of $\facloadtrue$ and $\deltavlam$, column and sign switching are easily resolved.
First,
the columns of $\deltavtilde$ are ordered such that the leading indices $\lm=(l_1, \ldots , l_\nfactrue)$
obey $l_1 < \ldots < l_\nfactrue$; i.e. $\deltavlam =\deltavtilde \Pm _{\rho}$.
Then, the sign of the entire column $j$ of $\betatilde \Pm _{\rho}$ is switched if the leading element
is negative; i.e. $\facloadtrue= \betatilde \Pm _{\rho} \Pm _{\pm} $.
In addition, the factors $\tilde{\facm}_t$ corresponding to the nonzero columns of $\deltav$ are reordered
through $ \trans{\Pm _{\pm}} \trans{\Pm _{\rho}} \tilde{\facm}_t$ for $t=1,\ldots,T$.
Finally, $\Pm _{\rho}$ is also used to reorder the draws of the hyperparameter $\hypv$ of the prior $p(\deltav)$.
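A compact sketch of this post-processing step (illustrative code; it assumes every column of $\deltavtilde$ has at least one nonzero entry):
\begin{verbatim}
import numpy as np

def resolve_trivial_rotation(beta, delta):
    # Reorder columns by their leading indices (P_rho) and switch the
    # sign of any column whose leading element is negative (P_pm).
    lead = np.argmax(delta != 0, axis=0)      # leading index per column
    order = np.argsort(lead, kind="stable")
    beta, delta = beta[:, order], delta[:, order]
    cols = np.arange(beta.shape[1])
    signs = np.sign(beta[lead[order], cols])
    return beta * signs[None, :], delta, order, signs
\end{verbatim}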
The draws of $(\facloadtrue, \deltavlam)$ are exploited in various ways.
Their leading indices $l_1, \ldots , l_\nfactrue$ are draws from the marginal
posterior distribution $p(l_1, \ldots , l_\nfactrue|\ym)$ allowing
posterior inference with respect to $\lm$. In particular, the identifiability constraint
$\lm ^\star=(l_1^\star, \ldots, l_{r^\star}^\star)$ visited most often is determined
together with its frequency $p_{L}$ which reflects posterior uncertainty with respect to
choosing the leading indices. The number $r^\star$ of elements in $ \lm ^\star$ provides yet another estimator of the number of factors.
Furthermore, the
highest probability model (HPM),~i.e. the indicator matrix $\deltavlam_H$ visited most often,
its frequency $p_{H}$ (an estimator of the posterior probability of the HPM), its model size $d_H$, and its leading indices $\lm _H$ are of interest, as well as whether $\lm _H$ coincides with $\lm ^\star$.
Bayesian inference with respect to the loading matrix $\facloadtrue$
is performed conditional on $\lm^\star$, to avoid switches between
different leading indices.
%
Averaging over the corresponding $M_V p_{L} $
MCMC draws provides an estimate of $\facloadtrue$
and the marginal inclusion probabilities
$\Prob{\delta^\Lambda_{ij}=1|\ym, \lm ^\star}$ for all elements of the corresponding indicator matrix. Also,
the median probability model (MPM) $\deltavlam_M$,
obtained by setting each indicator to one
whenever $\Prob{\delta_{ij}^\Lambda=1|\ym, \lm ^\star}\geq 0.5$,
and its model size $d_M$ are of interest.
\section{Applications} \label{secalpp}
All computations are based on the designer MCMC algorithm introduced in Algorithm~\ref{Algo3}, with boosting in Step~(A)
being based on ASIS with choosing $\sqrt{\Psi_j}$ as the largest loading (in absolute values) in each nonzero column
(see Appendix~\ref{accelerate_App}), choosing $U^2 \sim \Betadis{3, 1.5}$ as proposal $g(u)$ in Step~(R)
(see Appendix~\ref{RJdetails})
and choosing $p_{\mbox{\rm \footnotesize shift}} = p_{\mbox{\rm \footnotesize switch}}=1/3, p_a=0.5$ in Step~(L) (see Appendix~\ref{updatelead}).
\subsection{Sparse factor analysis for exchange rate data} \label{applicEx22}
To analyze exchange rates with respect to the Euro, data was obtained from
the European Central Bank’s Statistical Data Warehouse and ranges from January 3, 2000
to December 3, 2007. It contains $m = 22$ exchange rates listed in Table~\ref{abbrev} from which we derived $T=96$ monthly returns, based on the first trading day in a month. The data are demeaned and standardized.\footnote{A similar set of exchange rates (however with daily returns) was studied in \citet{kas-etal:eff}.}
\begin{Tabelle}{Currency abbreviations.}{abbrev}
{ \small \begin{tabular}{ccc}
\begin{tabular}{rll}
\hline
1 & AUD & Australia dollar \\
2 & CAD & Canada dollar \\
3& CHF & Switzerland franc \\
4 & CZK & Czech R.\ koruna \\
5 &DKK & Denmark krone \\
6 & GBP & UK pound \\
7 & HKD & Hong Kong dollar \\
8 &IDR & Indonesia rupiah \\
9 &JPY & Japan yen \\
10 &KRW & South Korea won \\
11 & MXN& Mexican peso\\ \hline
\end{tabular} &&
\begin{tabular}{rll} \hline
12 &MYR & Malaysia ringgit \\
13 &NOK & Norway krone \\
14 &NZD & New Zealand dollar \\
15 &PHP & Philippines peso \\
16 &PLN & Poland zloty \\
17 &RON & Romania fourth leu \\
18 &RUB & Russian ruble \\
19 &SEK & Sweden krona \\
20 &SGD & Singapore dollar \\
21 &THB & Thailand baht \\
22 &USD & US dollar \\ \hline
\end{tabular} \end{tabular}
}
\end{Tabelle}
Since the number of factors is unknown, an overfitting factor model is applied with maximum degree of overfitting $S=3$ and
the maximum number of factors $\nfac=9$ obeying inequality (\ref{kbound_extend}).
%
The hyperparameter $b_0$ of the prior (\ref{prigen}) for the indicators is chosen as $b_0=0.6$, while $a_0=0.1714$ is chosen such that
a prior simplicity of $ \Ew{q_i}=2$ is achieved. This implies $\alpha=2.57$ in the parameterization
(\ref{prialt}). This prior introduces column sparsity, see the corresponding prior distributions $p(\nfacr)$ for the
number of nonzero columns reported in Table~\ref{ken_tab1}, with most of the prior mass being considerably
smaller than $\nfac=9$.\footnote{This prior distribution was determined by simulating $\dimmat{\dimy}{\nfac}$ indicator matrices $\deltav$ from the prior
(\ref{prigen}), restricted to GLT structures, and rejecting all draws that did not fulfill condition \mbox{\bf AR}\ for the $\nfactrue=\nfacr$ nonzero columns.}
The prior (\ref{priorsiid}) on the idiosyncratic variances is selected
with $c_0=2.5$ and $\widehat{\Vary^{-1}}$ being estimated from (\ref{Varyhat}) with $\nu_o =3 $ and ${\mathbf S}_o=\identy{\dimy}$.
To study sensitivity to further prior choices, we consider fractional priors (\ref{priorfrac})
with
$b=10^{-5}, b_R, 10^{-4}, b_N, 10^{-3}$.
Since $d(\nfac,\dimy)=175 \ll N=2112$, $b_N $ is the recommended choice.
In addition, the standard prior (\ref{prior1}) is considered with
$\bV_{i0}^{\deltav}=\identm$, $c_0=1.1$ and $C_{i0} \equiv 0.055$ \citep{lop-wes:bay}.
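For reference, the prior $p(\nfacr)$ reported in Table~\ref{ken_tab1} can be approximated by Monte Carlo simulation. The sketch below (names hypothetical) omits the restriction to GLT structures satisfying condition \mbox{\bf AR}\ that is mentioned in the footnote:
\begin{verbatim}
import numpy as np

def simulate_prior_r(m, k, a0, b0, n_sim, rng):
    # tau_j ~ Beta(a0, b0), delta_ij ~ Bernoulli(tau_j); r counts the
    # nonzero columns of the simulated indicator matrix.
    rs = np.empty(n_sim, dtype=int)
    for s in range(n_sim):
        tau = rng.beta(a0, b0, size=k)
        delta = rng.uniform(size=(m, k)) < tau[None, :]
        rs[s] = int((delta.sum(axis=0) > 0).sum())
    return np.bincount(rs, minlength=k + 1) / n_sim
\end{verbatim}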
\begin{Tabelle}{Exchange rate data; Bayesian inference for an overfitting factors model with $k=9$.
The first row shows the prior distribution $p(\nfacr)$ of the number on nonzero columns $\nfacr$ under prior (\ref{prialt}) with $\Ew{q_i}=2$ and $b_0=0.6$.
The upper part shows the posterior distribution $p(\nfacr|\ym)$ of
$\nfacr$ (bold number corresponds to the posterior mode $\tilde{r}$) for various fractional priors with different fractions $b$ ($b_N=4.735\cdot10^{-4} $, $b_R =3.265 \cdot 10^{-5} $) and the prior of \citet{lop-wes:bay} (LW) using only draws satisfying \mbox{\bf AR}\ ($p_V= M_V/M$ is the corresponding fraction). The lower part shows the posterior distribution $p(\nfacr|\ym)$ of
$\nfacr$ without imposing
variance identification. Probabilities smaller
than $10^{-2}$ are indicated by $\approx 0$.}{ken_tab1}
{ \small \begin{tabular}{lccccccccc} \hline
& \multicolumn{8}{c}{$\nfacr$} & \\ \cline{2-9}
& 0-1& 2 & 3 & 4 & 5 & 6 & 7 & 8- 9& $100 \cdot p_V$ \\ \hline
$p(\nfacr)$ & 0.0434
& 0.112 & 0.231 & 0.2642 & 0.1996 & 0.1106 & 0.0336 & 0.0054
& 27.4 \\ \hline
$p(\nfacr|\ym)$ && & & & & & & & \\ \cline{1-1}
$b=10^{-5}$ & 0& 0 & \textbf{0.96} & 0.04 & 0& 0& 0& 0 & 58.2 \\
$b=b_R$ & 0 & 0 & 0.36 & \textbf{0.63} & $\approx 0$ & 0 & 0 &0 & 74.1 \\
$b=10^{-4}$ & 0 & 0 & 0.04 & \textbf{0.95} & $\approx 0$ & 0 & 0& 0 & 80.6 \\
$b= b_N$ & 0 & 0 & $\approx 0$ & \textbf{0.88} & 0.11 & $\approx 0$ &0 &0 & 58.9 \\
$b=10^{-3}$ & 0 & 0 & 0 & \textbf{0.63}& 0.34 & 0.02 & $\approx 0$ & 0 & 42.9 \\
LW & 0 & 0 &0 & $\approx 0$ & 0.19 & \textbf{ 0.47} & 0.29 &0.05 &22.1 \\
\hline
no varide && & & & & & & & \\ \cline{1-1}
$b=10^{-5}$ & 0& 0 & 0.89 & 0.10 & $\approx 0$ & 0& 0& 0 & \\
$b=b_R$ & 0 & 0 & 0.40 & 0.54 & 0.06 & $\approx 0$ & 0& 0 & \\
$b=10^{-4}$ & 0 & 0 & 0.04 & 0.80 & 0.15 & $\approx 0$ & $\approx 0$ & 0 & \\
$b= b_N$ & 0 & 0 & $\approx 0$ & 0.54 & 0.38 & 0.08 & $\approx 0$ &$\approx 0$ & \\
$b=10^{-3}$ & 0 & 0 & 0 & 0.28 & 0.44 & 0.23 & 0.05 & $\approx 0$ & \\
LW & 0 & 0 &0 & $\approx 0$ & 0.05 & 0.27 & 0.43 &0.24 & \\
\hline
\end{tabular}}
\end{Tabelle}
Algorithm~\ref{Algo3} is run for $M=100,000$ draws
after a burn-in of $M_0=50,000$ draws.
To verify convergence, independent MCMC chains were started respectively with $\nfacr^{(0)}=2$ and
$\nfacr^{(0)}=9$ nonzero columns.
As discussed in Subsection~\ref{mcmc}, this sampler navigates in
the space of all unordered GLT structures with an unknown number of nonzero columns and unknown leading indices, without
forcing variance identification. Apart from $\Vary$ no further parameters are identifiable from the unrestricted draws, and
as outlined in Subsection~\ref{exbaykrandom},
we screen for variance identified draws during post-processing. The fraction $p_V$
of variance identified draws is reasonably high, as reported in
Table~\ref{ken_tab1} for each prior.
We use only variance identified draws for further inference. Most importantly, for these draws
the number $\nfacr$ of nonzero columns of $\deltav$ may be regarded as draws of the number $\nfactrue$ of factors.
Table~\ref{ken_tab1} reports the posterior distribution $p(\nfacr|\ym)$ for all priors under investigation and the left-hand side of Figure~\ref{fig_ken_0} shows posterior draws of $\nfacr$ for the fractional prior $b=b_N$ for illustration.
All fractional priors based on $b=10^{-3}, 10^{-4}, b_R, b_N$ point at a four-factor
solution. The fractional prior with $b=10^{-5}$ introduces too much shrinkage, leading to a three-factor model, whereas
the standard prior of \citet{lop-wes:bay} leads to an overfitting model with six factors.
Our designer MCMC scheme shows good mixing across models of different dimension, as illustrated by Figure~\ref{fig_ken_0} showing posterior
draws of $\nfacr$ and the model size $d$ for the fractional prior $b=b_N$, with an inefficiency factor of roughly 8 for $d$.
This good behaviour is particularly due to the RJMCMC Step~(R) in Algorithm~\ref{Algo3}, which has an acceptance rate of 18.9\% for a split and 30.8\% for a merge move.
\begin{Figure2}{Exchange rate data; fractional prior with $b=b_N$. Posterior draws of the number $\nfacr$ of nonzero columns (left-hand side) and model size $d$ (right-hand side). The figure shows the last 20,000 among all
variance identified draws.}{fig_ken_0}{mcmc_fac}{ex22_mcmc_m}{0.2} \end{Figure2}
As outlined in Subsection~\ref{exbaykrandom}, the variance identified draws can be post-processed further.
For instance, it is possible to investigate whether some measurements are uncorrelated with the remaining measurements. This is examined in Table~\ref{uncorex}
through the posterior probability $\Prob{q_i=0|\ym}$, where $q_i$ is the row sum of $\deltav$.
Various currencies appear to be uncorrelated with the rest, namely
the Swiss franc (CHF), the Czech koruna (CZK), the Mexican peso (MXN), the New Zealand dollar (NZD), the Romanian fourth leu (RON), and the Russian ruble (RUB).
\begin{Tabelle}{Exchange rate data; posterior probability of the event $\Prob{q_i=0|\ym}$, where $q_i$ is the row sum of $\deltav$ for various exchange rates.}{uncorex}
{\small \begin{tabular}{lccccccc} \hline
& \multicolumn{7}{c}{$\Prob{q_i=0|\ym}$}\\ \cline{2-8}
Currency & CHF & CZK & MXN& NZD & RON& RUB & remaining \\
\hline
$b=10^{-5}$ & 0.98 & 0.95 & 0.98 & 0.85 & 0.97 & 0.97 & 0 \\
$b=b_R$ & 0.96 & 0.89 & 0.94 & 0.75 & 0.90 & 0.91 & 0 \\
$b=10^{-4}$ & 0.93 & 0.82 & 0.91 & 0.68 & 0.78 & 0.81 & 0 \\
$b= b_N$ & 0.83 & 0.59 & 0.78 & 0.44 & 0.56 & 0.51 & 0 \\
$b=10^{-3}$ & 0.28 & 0.64 & 0.35 & 0.76 & 0.59 & 0.73 & 0 \\
\hline
LW & 0.14 & 0.01 & 0.11 & 0.01 & 0.02 & $\approx 0$ & 0 \\
\hline
\end{tabular}}
\end{Tabelle}
\begin{Tabelle}{Bayesian inference under the GLT structures with unknown
number of factors and unknown leading indices (posterior draws of $\lm=(l_1,\ldots,l_r)$ ordered by size), based on the $M_V$
variance identified draws. Posterior mode estimator $\tilde{r}$ of
the number of factors; posterior expectation
$\hat{d}=\Ew{d|\ym}$ of the model size $d$; total number of visited models $N_v$;
frequency $p_H$ (in percent), leading indices $\lm _H$ and model size $d_H$ of the HPM;
leading indices $\lm ^\star$ visited most often, corresponding frequency $p_L$ (in percent) and corresponding number of factors $r^\star$; model size $d_M$ of the MPM.
}{ken_tab3}
{\small \begin{tabular}{lcccccccccc}
\hline Prior & $\tilde{r}$ &$\hat{d}$ & $N_v$ & $100p_H$ & $\lm_H$ & $d_H$ & $\lm^{\star}$ & $100p_L$ & $r^\star$ & $d_M$ \\ \hline
$b=10^{-5}$ & 3 & 21 & 2709 & 42.8 & (1,2,5) & 20& (1,2,5) & 88.5 &3 & 20 \\
$b=b_R $ & 4 & 24 & 10809 & 10.6 & (1,2,5,7) & 20 & (1,2,5,7) & 49.7 & 4 & 20 \\
$b=10^{-4}$ & 4 & 27 &19198 & 11.9 & (1,2,5,7) & 26& (1,2,5,7)& 85.5 & 4 & 26 \\
$b=b_N$ &4 &29 & 42906 & 2.9 & (1,2,5,7) & 26 & (1,2,5,7) & 65.3 & 4 & 26 \\
$b=10^{-3}$ & 4& 32 & 50920 & 0.5 & (1,2,5,7) & 26 & (1,2,5,7) & 37.3 & 4 & 27 \\
\hline
LW & 6 & 59 & 32921 & 0.01 & (1,2,3,4,5,6) & 56 & (1,2,3,4,5,6) & 11.2& 6 & 52\\
\hline
\end{tabular}}
\end{Tabelle}
Further Bayesian inference is reported in Table~\ref{ken_tab3}, including
the posterior mode estimator $\tilde{r}$, the posterior mean $\hat{d}$ of the model size $d$ defined in (\ref{modeddd}),
the total number $N_v$ of visited GLT structures, the identifiability constraint
$\lm ^\star=(l_1^\star, \ldots, l_{r^\star}^\star)$ visited most often together with its frequency $p_{L}$ (in percent),
as well as the frequency $p_H$ (in percent), the leading indices $\lm _H$ and model size $d_H$ of the highest probability model (HPM) $\deltavlam_H$.
For all priors, $\lm ^\star$ coincides with $\lm _H$.
For all 4-factor models, the GLT constraint $\lm ^\star=(1,2,5,7)$ turns out to be the
most likely constraint, whereas for the 3-factor models the GLT constraints
$\lm ^\star=(1,2,5)$ is preferred.
Once more we find that a standard prior as in \citet{lop-wes:bay} leads to an overfitting model, both in terms of the number of factors and in terms of the model size. Too many models are visited, leading to a very small posterior probability $p_H$ for the HPM.
As a final step, the factor loadings $\facloadtrue$ and the MPM are identified for a 4-factor model.
This inference is based on all posterior draws where the
leading indices of $\deltav$ (after reordering) coincide with the GLT constraint
$\lm ^\star=(1, 2,5,7)$. From these draws, the marginal inclusion probabilities
$\Prob{\delta_{ij}=1|\ym, \lm ^\star}$ and the corresponding median probability model (MPM) are derived. Its model size $d_M$ is reported in Table~\ref{ken_tab3} for all priors.
For most fractional priors, the HPM and the MPM coincide. Table~\ref{ken_mpm_tab2} reports the marginal inclusion probabilities
$\Prob{\delta_{ij}=1|\ym, \lm ^\star}$ for the fractional prior $b=b_N$ and
Figure~\ref{fig_deltaplot} displays both models for illustration. The resulting model indicates considerable sparsity, with many factor loadings being shrunk toward zero. Factor~2 is a common factor among the correlated currencies, while the remaining three factors are group-specific and, for the most part, dedicated factors.
\begin{Tabelle}{Inclusion probabilities for the indicator matrix $\deltav$ for the fractional prior $b=b_N$ averaged over the variance identified draws
with $\lm ^\star= (1,2,5,7) $ (leading indices $\lm=(l_1,l_2,l_3,l_4)$ ordered by size).}{ken_mpm_tab2}
{ \small \begin{tabular}{ccccc} \hline
Currency & Factor~1 & Factor~2 & Factor~3 & Factor~4 \\ \hline
AUD & 1 & 0 & 0 & 0 \\
CAD & 1 & 1 & 0 & 0 \\
CHF & 0.01 & 0.12 & 0 & 0 \\
CZK & 0.01 & 0.21 & 0 & 0 \\
DKK & 0.02 & 1 & 1 & 0 \\
GBP & 0.07 & 1 & 0.05 & 0 \\
HKD & 0.01 & 1 & 0.97 & 1 \\
IDR & 0.04 & 1 & 0.03 & 1 \\
JPY & 0.13 & 1 & 0.01 & 0.02 \\
KRW & 0.01 & 1 & 0.06 & 0.02 \\
MXN & 0.01 & 0.16 & 0.01 & 0.01 \\
MYR & 1 & 0.06 & 0.01 & 0.01 \\
NOK & 0.01 & 1 & 0.01 & 0.02 \\
NZD & 0.09 & 0.42 & 0.04 & 0.01 \\
PHP & 0.01 & 1 & 0.95 & 0.04 \\
PLN & 0.01 & 1 & 0.02 & 0.73 \\
RON & 0.14 & 0.06 & 0.24 & 0.01 \\
RUB & 0.27 & 0.11 & 0.09 & 0.16 \\
SEK & 0.01 & 1 & 0.01 & 0.99 \\
SGD & 0.03 & 1 & 0.03 & 0.99 \\
THB & 0.01 & 1 & 0.01 & 0.02 \\
USD & 0.02 & 1 & 1 & 0.01 \\
\hline
\end{tabular}
}
\end{Tabelle}
\begin{Figure}{Exchange rate data; indicator matrix $\deltav$ corresponding both to the HPM and the MPM for a fractional prior with $b=b_N$. The number of estimated factors is equal to 4.}{fig_deltaplot}{fig_delta}{0.5} \end{Figure}
Finally, Table~\ref{ken_fac4} shows the posterior mean of the factor loading matrix,
the idiosyncratic variances and the communalities, obtained by averaging over all draws
where the leading indices of $\deltav$ coincide with $\lm ^\star$.
Sign switching in the posterior draws of $\facloadtrue$ is resolved through the constraint $\loadtrue_{11} >0$, $\loadtrue_{22} >0$, $\loadtrue_{53} >0$, and $\loadtrue_{74} >0$. As expected, currencies with nonzero factor loadings have
relatively high communalities, whereas for zero rows the communalities are practically equal to zero.
\begin{Tabelle}{Exchange rate data; posterior mean of the factor loadings $\loadtrue_{ij}$, the communalities $R^2_{ij}$ (in percent)
and the idiosyncratic variances $\sigma_i^2$ (fractional prior $b=b_N$)
for a 4-factor model with the GLT constraint
$\lm ^\star = (1,2,5,7)$. Entries
with
$|\loadtrue_{ij}| <0.01$ and entries
with
$ R^2_{ij} < 0.1$ are indicated by $\approx 0$.}{ken_fac4}
{\small \begin{tabular}{lccccccccc} \hline
& \multicolumn{4}{c}{Factor loadings} & \multicolumn{4}{c}{Communalities}& \\
Currency & $\loadtrue_{i1}$ & $\loadtrue_{i2}$ & $\loadtrue_{i3}$ & $\loadtrue_{i4}$ & $R^2_{i1} $ & $R^2_{i2}$ & $R^2_{i3}$ & $R^2_{i4}$
& $\sigma_i^2$ \\ \hline
AUD & 0.96 & 0 & 0 & 0 & 88 & 0 & 0 & 0 & 0.12 \\
CAD & 0.39 & 0.6 & 0 & 0 & 17 & 39 & 0 & 0 & 0.42 \\
CHF & $\approx 0$ & -0.02 & 0 & 0 & $\approx 0$ & 0.36 & 0 & 0 & 0.98 \\
CZK & $\approx 0$ & 0.04 & 0 & 0 & $\approx 0$ & 0.96 & 0 & 0 & 0.98 \\
DKK & $\approx 0$ & 1.1 & 0.22 & 0 & $\approx 0$ & 95 & 4.2 & 0 & 0.01 \\
GBP & 0.01 & 0.57 & -0.01 & 0 & 0.39 & 32 & 0.27 & 0 & 0.70 \\
HKD & $\approx 0$ & 0.5 & 0.39 & 0.76 & $\approx 0$ & 22 & 14 & 49 & 0.17 \\
IDR & 0.01 & 0.8 & -0.01 & 0.42 & $\approx 0$ & 58 & $\approx 0$ & 16 & 0.29 \\
JPY & 0.02 & 0.93 & $\approx 0$ & $\approx 0$ & 0.35 & 76 & $\approx 0$ & $\approx 0$ & 0.27 \\
KRW & $\approx 0$ & 1.1 & 0.01 & $\approx 0$ & $\approx 0$ & 96 & $\approx 0$ & $\approx 0$ &0.01 \\
MXN & $\approx 0$ & 0.03 & $\approx 0$ & $\approx 0$ & $\approx 0$ & 0.65 & $\approx 0$ & $\approx 0$ & 0.98 \\
MYR & 0.79 & $\approx 0$ & $\approx 0$ & $\approx 0$ & 61 & $\approx 0$ & $\approx 0$ & $\approx 0$ & 0.40 \\
NOK & $\approx 0$ & 0.89 & $\approx 0$ & $\approx 0$ & $\approx 0$ & 70 & $\approx 0$ & $\approx 0$ & 0.33 \\
NZD & 0.025 & 0.11 & -0.01 & $\approx 0$ & 0.75 & 3.2 & 0.29 & $\approx 0$ & 0.95 \\
PHP & $\approx 0$ & 0.55 & -0.42 & 0.01 & $\approx 0$ & 29 & 18 & 0.14 & 0.56 \\
PLN & $\approx 0$ & 1 & $\approx 0$ & 0.12 & $\approx 0$ & 86 & $\approx 0$ & 1.9 & 0.14 \\
RON & 0.04 & $\approx 0$ & -0.08 & $\approx 0$ & 1.3 & 0.11 & 3 & $\approx 0$ & 0.95 \\
RUB & -0.09 & 0.02 & 0.03 & 0.05 & 3.2 & 0.35 & 0.84 & 1.6 & 0.94 \\
SEK & $\approx 0$ & 0.98 & $\approx 0$ & 0.31 & $\approx 0$ & 82 & $\approx 0$ & 8.5 & 0.11 \\
SGD & $\approx 0$ & 0.75 & $\approx 0$ & 0.39 & $\approx 0$ & 51 & $\approx 0$ & 14 & 0.37 \\
THB & $\approx 0$ & 0.59 & $\approx 0$ & $\approx 0$ & $\approx 0$ & 33 & $\approx 0$ & $\approx 0$ & 0.7 \\
USD & $\approx 0$ & 1.1 & 0.22 & $\approx 0$ & $\approx 0$ & 95 & 4.2 & $\approx 0$ & 0.01 \\
\hline
\end{tabular}}
\end{Tabelle}
\subsection{Sparse factor analysis for NYSE100 returns}
To show that our approach also scales to higher dimensions, we consider monthly log returns of $m=73$ firms from the NYSE100,
observed for $T=240$ months from January 1992 to December 2011. Again, the data are standardized.
Since the number of factors is unknown, an overfitting factor model is applied with the maximum degree of overfitting $S=4$ and
$\nfac=20$ being considerably smaller than the upper bound given by (\ref{kbound_extend}).
The hyperparameters of the prior (\ref{prigen}) for the indicators are chosen as
$ a_0=0.05$ and $b_0=0.1$, implying a prior simplicity of $ \Ew{q_i}=6.\dot{6}$ and $\alpha=10$ in parameterization (\ref{prialt}).
The prior (\ref{priorsiid}) is chosen for $\sigma^2_i$ with $c_0=2.5$ and $\widehat{\Vary^{-1}}$ being estimated as in (\ref{Varyhat}), with $\nu_o =3 $ and ${\mathbf S}_o=\identy{\dimy}$. Since $d(\nfac,\dimy)=1{,}270 \ll N=17{,}520$, we consider fractional priors with $ b= 10^{-5}, b_N, 10^{-4}$, where $b_N =5.71\cdot 10^{-5}$.
Further tuning is exactly as in Subsection~\ref{applicEx22}.
The designer MCMC scheme outlined in Algorithm~\ref{Algo3} is used to obtain $M=100,000$ draws
after a burn-in of $M_0=50,000$ draws starting, respectively, with $\nfacr^{(0)}=7$ and
$\nfacr^{(0)}=20$. Functionals of the posterior draws were used to monitor MCMC convergence.
The fraction $p_V$ of MCMC draws satisfying \mbox{\bf AR}\ is smaller than in the previous subsection but, being on the order of 8 to 11\%, still acceptable.
Although the prior $p(\nfacr)$ is fairly diffuse, the posterior distribution $p(\nfacr|\ym)$ derived from all variance identified draws turns out to be strongly centered on $\tilde{\nfactrue}=12$ for all three fractional priors, see Table~\ref{NASDAQ_tab1}.
\begin{Tabelle}{NYSE100 return data; Bayesian inference for an unknown number of factors (maximum number of factors $k=20$)
under prior (\ref{prigen}) with $ a_0=0.05$ and $b_0=0.1$.
$p_V$ is the fraction of draws satisfying \mbox{\bf AR} . Posterior distribution $p(\nfacr|\ym)$ of the number
$\nfacr$ of nonzero columns (bold number corresponding to the posterior mode $\tilde{r}$) for various fractional priors on $\facload_{i\cdot}^{\deltav}$ with $ b= 10^{-5}, b=b_N =5.71\cdot 10^{-5}, b= 10^{-4}$. Upper part: variance identified draws; lower part: all posterior draws.}{NASDAQ_tab1}
{\small \begin{tabular}{lccccccc} \hline
& \multicolumn{6}{c}{$\nfacr$} & \\ \cline{2-8}
& $\leq 11$ & 12 & 13 & 14 & 15 & $\geq 16$ & $100 p_V$ \\ \hline
$p(\nfacr|\ym)$ &&&&&&&\\ \cline{1-1}
$b=10^{-5}$ & 0& \textbf{ 0.98} & 0.02 & 0& 0& 0 & 7.6 \\
$b= b_N$ & 0 & \textbf{ 0.70} & 0.27 & 0.02 &0 &0 & 10.7 \\
$b=10^{-4}$ & 0 & \textbf{0.56} & 0.35 & 0.09 & 0 & 0 & 9.4 \\ \hline
no varide &&&&&&&\\ \cline{1-1}
$b=10^{-5}$ & 0& 0.88 & 0.12 & 0.01 & 0 & 0 & \\
$b= b_N$ & 0 & 0.30 & 0.45 & 0.22 & 0.03 & 0 & \\
$b=10^{-4}$ & 0 & 0.21 & 0.52 & 0.23 & 0.04 & 0 & \\
\hline
\end{tabular}}
\end{Tabelle}
The MCMC scheme shows good mixing, despite the high dimensionality, as illustrated by Figure~\ref{fig_nyse_0} showing draws from the posterior distributions
$p(\nfacr|\ym)$ and $p(d|\ym)$ for $b=b_N$. The RJMCMC Step~(R) in Algorithm~\ref{Algo3} has an acceptance rate of 8.6\% for a split and 14.7\% for a merge move and the inefficiency factor for $d$ is equal to 8.
\begin{Tabelle}{NYSE100 return data; sequence of leading indices $ \lm ^\star$ visited most often together with its frequency $100 p_{L}$ (in percent)
for various fractional priors.}{nyse_listar}
{ \small \begin{tabular}{llcr}
\hline
& $\lm ^\star $ & $r^\star$ & $100 p_{L}$\\
\hline
$b= b_N$ & (1,2,3,4,5,6,7,8,9,14,15,26) & 12 & 10.3 \\
& (1,2,3,4,5,6,7,8,9,14,15,26) & 12 & 9.9 \\
$b= 10^{-4} $ & (1,2,3,4,5,6,7,8,9,14,15,26) & 12 & 9.8 \\
& (1,2,3,4,5,6,7,8,9,14,15,26)& 12 & 10.8\\
$b= 10^{-5}$ & (1,2,3,4,5,6,7,14,15,19,25,26) & 12 & 19.0 \\
& (1,2,3,4,5,6,7,9,14,15,25,26) & 12 & 25.2\\
\hline
\end{tabular}
}
\end{Tabelle}
\begin{Figure2}{NYSE100 return data; fractional prior with $b=b_N$. All (11906 variance identified) posterior draws of the number $\nfacr$ of factors (left-hand side) and
model size $d$ (right-hand side).}{fig_nyse_0}{nyse_mcmc_nfac}{nyse_mcmc_m}{0.2}
\end{Figure2}
In Table~\ref{nyse_listar}, the identifiability constraint
$\lm ^\star=(l_1^\star, \ldots, l_{r^\star}^\star)$ visited most often is reported together with its frequency $p_{L}$ for all three priors for both runs.
Also $\lm ^\star$ points at a 12-factor model for all priors and coincides for both runs for $b= b_N$ and $b= 10^{-4} $.
Further inference with respect to $\facloadtrue$ and $\deltav$ is based on all posterior draws where the
leading indices of $\deltav$ (after reordering) are equal to $\lm ^\star$.
The corresponding median probability model (MPM)
is shown for $b= b_N$ in Figure~\ref{fig_deltanyse} and is extremely sparse with only $d_M=156$ nonzero loadings.
The MPM clearly indicates that all returns are correlated\footnote{This is confirmed by the posterior probabilities $\Prob{q_i=0|\ym}$, which are equal to 0 for all firms.} and that one main factor is present which loads on all returns. The remaining factors are for the most part dedicated factors that capture cross-sectional correlations between specific firms.
\begin{Figure}{NYSE100 return data; $\deltav$ corresponding to the MPM with $\lm ^\star = (1,2,3,4,5,6,7,8,9,14,15,26)$ for a fractional prior with $b=b_N$.}{fig_deltanyse}{nyse_delta}{0.5}
\end{Figure}
\section{Concluding remarks} \label{secconcluse}
We have characterised, identified and estimated (from a Bayesian viewpoint) a fairly important and widely used class of sparse factor models when the number of common factors is unknown. More specifically, we have explicitly and rigorously addressed the identifiability issues that arise in this class of models, going well beyond simply applying rotation for identification and instead seeking uniqueness of the variance decomposition.
In addition, our framework leads to a natural, efficient and simultaneous coupling of model estimation and selection on the one hand and model identification and reduction as well as rank estimation (the number of factors) on the other. More precisely, by combining point-mass mixture priors with overfitting sparse factor modelling, in a generalised lower triangular loadings representation, we obtain posterior summaries regarding factor loadings, common factors as well as the number of common factors via postprocessing our highly efficient and customised MCMC scheme. Two applications, one with $m=22$ variables and $T=96$ observations and one with $m=73$ and $T=240$, illustrate in detail many of the existing and new aspects of estimating a parsimonious and sparse factor model when the number of factors is unknown.
The new framework readily admits several straightforward extensions. Theorem~\ref{Lemma2}, for example,
is not confined to GLT structures and is applicable to any (sparse) loading matrix arising
in statistics and machine learning (see, e.g., the web appendix of \citet{roc-geo:fas}, where the factor model fitted to the applicants data is obviously not identified), in spatial
factor models with 0-1 neighbouring structures (see \citet{lop-etal:spa} and \citet{sch-lop:dyn}, and their references),
and in economics and genetics \citep{car-etal:hig}.
Other relatively immediate extensions are
(i) idiosyncratic errors following Student's $t$-distributions or more general Gaussian mixtures
and (ii) dynamic sparse factor models with stationary common factors; both extensions are commonly found in econometric applications, see, e.g., the recent papers by \citet{pia-pap:bay} and \citet{kau-sch:bay}.
Finally, extending our approach, in particular Theorem~\ref{theoverGLT}, to correlated factors
could prove useful towards generalizing the work of \citet{con-etal:bay} to simple structures with more than one nonzero loading per factor.
\bibliographystyle{chicago}
Privacy-preserving releasing of complex data (e.g., image, text, audio) represents a long-standing challenge for the data mining research community. Due to the rich semantics of the data and the lack of {\em a priori} knowledge about the analysis task, excessive sanitization is often necessary to ensure privacy, leading to significant loss of the data utility. In this paper, we present {\sf dp-GAN}\xspace, a general private releasing framework for semantic-rich data. Instead of sanitizing and then releasing the data, the data curator publishes a deep generative model which is trained using the original data in a differentially private manner; with the generative model, the analyst is able to produce an unlimited amount of synthetic data for arbitrary analysis tasks. In contrast to alternative solutions, {\sf dp-GAN}\xspace highlights a set of key features: (i) it provides a theoretical privacy guarantee by enforcing the differential privacy principle; (ii) it retains desirable utility in the released model, enabling a variety of otherwise impossible analyses; and (iii) most importantly, it achieves practical training scalability and stability by employing multi-fold optimization strategies. Through extensive empirical evaluation on benchmark datasets and analyses, we validate the efficacy of {\sf dp-GAN}\xspace.\\
(The source code and the data used in the paper are available at: https://github.com/alps-lab/dpgan)
\subsection*{Appendix A: Privacy Accounting}
\begin{theorem}
There exist constants $c_1$ and
$c_2$ so that, given the sampling ratio
$q = m/n$ and the number of steps $t$,
for any $\epsilon < c_1 q^2 t$,
Algorithm~1 in \cite{Abadi:2016:dpdl} satisfies $(\epsilon,
\delta)$-differential privacy for any
$\delta > 0$ if we choose
\begin{equation*}
\sigma \ge c_2 \frac{q\sqrt{t\log(1 / \delta)}}
{\epsilon}.
\end{equation*}
\end{theorem}
\subsection*{Appendix B: Network Architectures}
\textbf{MNIST} $D$: input~-$(28, 28, 1)$ $\rightarrow$ Conv~
(nb\_filter: 64, filter\_size: 5, strides: 2, activation: leaky\_relu)~-$(14, 14, 64) \rightarrow$
Conv~(128, 5, 2, leaky\_relu)~-$(7, 7, 128) \rightarrow$
Conv~(256, 5, 2, leaky\_relu)~-$(4, 4, 256) \rightarrow$
FullyConnect~(output\_dim: 1, activation: identity)~-$(1)$ .
$G$: random noises~-$(128) \rightarrow $
FullyConnect~(4096, identity)~-$(4096) \rightarrow$ BN+ReLU~-$(4096) \rightarrow$
ConvTranspose~(nb\_filters: 128, strides: 2, activations: ReLU)~-$(8, 8, 128) \rightarrow$
Slicing~-$(7, 7, 128) \rightarrow$
ConvTranspose~(nb\_filters: 64, strides: 2, activations: ReLU)~-$(14, 14, 64) \rightarrow$
ConvTranspose~(nb\_filters: 1, strides: 2, activations: tanh)~-$(28, 28, 1)$.
\vspace{3pt}
\textbf{CelebA}. $D$: input~-$(48, 48, 3) \rightarrow$
Conv~(128, 5, 2, leaky\_relu)~-$(24, 24, 128) \rightarrow$
Conv~(256, 5, 2, leaky\_relu)~-$(12, 12, 256) \rightarrow$
Conv~(512, 5, 2, leaky\_relu)~-$(6, 6, 512) \rightarrow$
FullyConnect~(1, identity)~-$(1)$ .
$G$: random noises~-$(128) \rightarrow $
FullyConnect~(18432, identity)~-$(18432) \rightarrow$
Upsample Residual Block~(512, 5)~-$(12, 12, 512) \rightarrow $
Upsample Residual Block~(256, 5)~-$(24, 24, 256) \rightarrow $
Upsample Residual Block~(128, 5)~-$(48, 48, 128) \rightarrow $
BN + ReLU~-$(48, 48, 128) \rightarrow$
Conv~(3, 3, 1, tanh)~-$(48, 48, 3)$.
\vspace{3pt}
\textbf{LSUN}. $D$: input~-$(64, 64, 3) \rightarrow$
Conv~(64, 5, 2, leaky\_relu)~-$(32, 32, 64) \rightarrow$
Conv~(128, 5, 2, leaky\_relu)~-$(16, 16, 128) \rightarrow$
Conv~(256, 5, 2, leaky\_relu)~-$(8, 8, 256) \rightarrow$
Conv~(512, 5, 2, leaky\_relu)~-$(4, 4, 512) \rightarrow$
FullyConnect~(1, identity)~-$(1)$ .
$G$: random noises~-$(128) \rightarrow $
FullyConnect~(8192, identity)~-$(8192) \rightarrow$
Upsample Residual Block~(512, 5)~-$(8, 8, 512) \rightarrow $
Upsample Residual Block~(256, 5)~-$(16, 16, 256) \rightarrow $
Upsample Residual Block~(128, 5)~-$(32, 32, 128) \rightarrow $
Upsample Residual Block~(64, 5)~-$(64, 64, 64) \rightarrow $
BN + ReLU~-$(64, 64, 64) \rightarrow$
Conv~(3, 3, 1, tanh)~-$(64, 64, 3)$.
%
\section{Preliminaries}
\label{sec:background}
In this section, we introduce the two basic building blocks of {\sf dp-GAN}\xspace, generative adversarial network and differential privacy.
\begin{figure}
\centering
\epsfig{file=figures/gan.eps, width=85mm}
\caption{Illustration of generative adversarial networks. \label{fig:gan}}
\end{figure}
\subsection{Generative Adversarial Network}
The generative adversarial network (GAN)~\cite{Goodfellow:2014:nips}
is a class of unsupervised learning algorithms which are implemented by
an adversarial process. As illustrated in Figure~\ref{fig:gan}, the GAN\xspace architecture typically comprises two neural networks, a generator $G$ and a discriminator $D$, in which $G$ learns to map from a latent distribution $p_z$ to the true data distribution $p_{\rm data}$, while $D$ discriminates between instances sampled from $p_{\rm data}$ and that generated by $G$. Here $G$'s objective is to ``fool'' $D$ by synthesizing instances that appear to have come from $p_{\rm data}$. This framework corresponds to solving a minimax two-player game with the following objective function:
\begin{equation}
\min_\theta \max_w \mathbb{E}_{x \sim p_{\text{data}}} [\log D_w(x)] + \mathbb{E}_ {z \sim p_z} [\log (1 - D_w(G_\theta(z)))]
\end{equation}
where $x$ and $z$ are sampled from $p_\text{data}$ and $p_z$ respectively.
Since its advent, GAN has found applications in a variety of unsupervised and semi-supervised learning tasks~\cite{Chen:2016:infogan, Radford:2015:dcgan, Donahue:2016:adl, Kumar:2017:semiinv, Reed:2016:generative,Ledig:2016:superres,Yeh:2016:inpaint,Rajeswar:2017:adversarial}.
One line of work takes the trained discriminator as a feature
extractor and applies it in varied settings; the other line focuses on
the latent variable $z$ in the generator, either using regularization to make $z$ semantically
meaningful~\cite{Donahue:2016:adl, Chen:2016:infogan} or extracting information in the latent space
directly~\cite{Radford:2015:dcgan}.
Despite its simplicity, the original GAN formulation is
unstable and inefficient to train. A number of follow-up works~\cite{Zhao:2016:energy, Chen:2016:infogan, Radford:2015:dcgan,
Nowozin:2016:fgan,Arjovsky:2017:wgan, Gulrajani:2017:wganip}
propose new training procedures and network architectures to improve training stability and convergence rate. In particular, the Wasserstein generative adversarial network (WGAN)~\cite{Arjovsky:2017:wgan} and the improved WGAN~\cite{Gulrajani:2017:wganip}
attempt to minimize the earth mover distance between the synthesized distribution and the true distribution rather than their Jensen-Shannon divergence as in the original GAN formulation. Formally, improved WGAN adopts the following objective functions:
\begin{align}
& \argmin_\theta\; -D_w(G_\theta(z)) \\
& \argmin_w\; D_w(G_\theta(z)) - D_w(x) + \lambda \left( \left\Vert \nabla_{\hat{x}} D_w(\hat{x})\right\Vert_2 - 1\right)^2
\end{align}
where the first objective is the generator's and the second the critic's (discriminator's).
Here, $\hat x = \alpha x + (1 - \alpha) G_\theta(z)$, in which $\alpha$ is
a random number sampled from $[0, 1]$. The regularization term enforces the norm of $D$'s gradients to be close to 1. This formulation is shown to allow more stable and faster training~\cite{Gulrajani:2017:wganip}.
In the following, without loss of generality, we will exemplify with the improved WGAN formulation to implement {\sf dp-GAN}\xspace.
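For concreteness, the following is a minimal TensorFlow sketch of these two objectives; the callables \texttt{D} and \texttt{G} and the tensors \texttt{real} and \texttt{z} are placeholders, and the snippet is an illustration under these assumptions rather than the exact implementation used in our experiments.
\begin{verbatim}
import tensorflow as tf

def wgan_gp_losses(D, G, real, z, lam=10.0):
    """Improved WGAN objectives: critic loss and generator loss.

    D and G are callables mapping tensors to tensors; `real` is a
    batch of true (image) samples, `z` a batch of latent codes.
    """
    fake = G(z)
    # Interpolate between real and synthetic samples.
    alpha = tf.random_uniform([tf.shape(real)[0], 1, 1, 1], 0.0, 1.0)
    x_hat = alpha * real + (1.0 - alpha) * fake
    # Gradient penalty: push the critic's gradient norm towards 1.
    grads = tf.gradients(D(x_hat), [x_hat])[0]
    norms = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    penalty = lam * tf.reduce_mean(tf.square(norms - 1.0))
    critic_loss = tf.reduce_mean(D(fake)) - tf.reduce_mean(D(real)) + penalty
    gen_loss = -tf.reduce_mean(D(fake))
    return critic_loss, gen_loss
\end{verbatim}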
\subsection{Differential Privacy}
By providing theoretically guaranteed protection, differential privacy (DP)~\cite{Dwork:2009:tcc, Dwork:2006:icalp, Dwork:2014:book} is considered one of the strongest privacy definitions.
\vspace{3pt}
{\bf Definitions.}
We say a randomized mechanism $\mathcal{M} : \mathcal{D}^n \mapsto \mathcal{R}$ satisfies
$\epsilon$-DP if for any adjacent databases
$d, d' \in \mathcal{D}^n$ (which are identical except for one single data entry) and any subset $R \subseteq \mathcal{R}$, it holds that
${\rm Pr}[\mathcal{M}(d) \in R] \leq e^\epsilon {\rm Pr}[\mathcal{M}(d') \in R]$. A relaxed version, $(\epsilon, \delta)$-DP, allows the plain $\epsilon$-DP to be compromised
with a small probability $\delta$:
${\rm Pr}[\mathcal{M}(d) \in R] \leq e^\epsilon {\rm Pr}[\mathcal{M}(d') \in R] + \delta$.
In this work, we consider $(\epsilon, \delta)$-DP as the default privacy definition.
\vspace{3pt}
{\bf Mechanisms.}
For a given deterministic function $f$, DP is often achieved by injecting random noise into $f$'s output, while the noise magnitude is determined by $f$'s sensitivity. If $f$ is vector-valued, i.e., $f:\mathcal{D}^n \mapsto \mathcal{R}^m$, its sensitivity is defined as:
$\Updelta f = \max_{d, d'} \| f(d) - f(d') \|$, where $\Updelta f$ represents the maximum influence of a single data entry on $f$'s output, quantifying the (worst-case) uncertainty to be added to $f$'s output to hide the presence of that entry.
If $f$'s sensitivity is defined using $\ell_2$ norm, the Gaussian mechanism~\cite{Dwork:2014:book} is a common choice for randomizing $f$'s output:
\begin{equation}
\nonumber
\mathcal{M} (d) = f(d) + \mathcal{N}(0, (\Updelta f)^2 \sigma^2 \mathcal{I}),
\end{equation}
where $\mathcal{N}(0, (\Updelta f)^2 \sigma^2 \mathcal{I})$ is a Gaussian
distribution with zero mean and covariance matrix $(\Updelta f)^2 \sigma^2 \mathcal{I}$, $\sigma$ is the noise scale, and $\mathcal{I}$ is the identity matrix.
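As a minimal illustration (separate from the gradient perturbation used later for training), the following sketch applies the Gaussian mechanism to a vector-valued query with a known $\ell_2$-sensitivity; the query and the parameter values are hypothetical.
\begin{verbatim}
import numpy as np

def gaussian_mechanism(f_output, sensitivity, sigma, rng=np.random):
    """Perturb a vector-valued query output with Gaussian noise of
    standard deviation sensitivity * sigma (larger sigma gives
    stronger privacy)."""
    noise = rng.normal(0.0, sensitivity * sigma, size=f_output.shape)
    return f_output + noise

# Hypothetical query: the mean of n records in [0, 1]^d, whose
# l2-sensitivity under one-entry replacement is at most sqrt(d)/n.
data = np.random.rand(1000, 10)
private_mean = gaussian_mechanism(data.mean(axis=0),
                                  sensitivity=np.sqrt(10) / len(data),
                                  sigma=4.0)
\end{verbatim}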
\vspace{3pt}
{\bf Properties.} In addition, DP also features the following key properties, which we leverage in implementing {\sf dp-GAN}\xspace.
\begin{myitemize}
\item {\em Closure under post-processing}. Any computation on the output of a DP-mechanism does not increase privacy loss.
\item {\em Sequential composability}. The composition of a sequence of DP-mechanisms is also DP-satisfying.
\end{myitemize}
We may use the composition theorems~\cite{Dwork:2014:book,Dwork:2010:boosting} to estimate the privacy loss after $k$-fold application of DP-mechanisms.
\section{Conclusion and Discussion}
\label{sec:end}
In this paper, we present {\sf dp-GAN}\xspace, a generic framework of publishing semantic-rich data in a privacy-preserving manner. Instead of releasing sanitized datasets, {\sf dp-GAN}\xspace releases differentially private generative models, which can be used by analysts to synthesize unlimited amount of data for arbitrary analysis tasks. To achieve this, {\sf dp-GAN}\xspace integrates the generative adversarial network framework with differential privacy mechanisms, provides refined analysis of privacy loss within this framework, and employs a suite of optimization strategies to address the training stability and scalability challenges. Using benchmark datasets and analysis tasks, we show that {\sf dp-GAN}\xspace is able to synthesize data of utility comparable to original data, at the cost of modest privacy loss.
This work also opens several avenues for further research. For example, in this paper we mostly focus on publishing image data, while it is worth investigating how to adapt {\sf dp-GAN}\xspace to support other types of semantic-rich data (e.g., LSTM-based models for language modeling tasks). In addition, {\sf dp-GAN}\xspace is formulated as an unsupervised framework, while its extension to supervised and semi-supervised learning is attractive for data with label information.
\section{Empirical Evaluation}
\label{sec:eval}
In this section, we empirically evaluate the proposed {\sf dp-GAN}\xspace framework. The experiments are designed to answer four key questions that impact {\sf dp-GAN}\xspace's practical use. First, is {\sf dp-GAN}\xspace able to synthesize visually vivid image data, under the DP constraint? Second, does the synthesized data demonstrate sufficient quality and diversity, from a quantitative perspective? Third, does the synthesized data retain enough utility for concrete data analysis tasks? Finally, how do different optimization strategies influence {\sf dp-GAN}\xspace's performance?
We begin with describing the experimental setting.
\subsection{Experimental Setting}
In our experiments, we use three benchmark datasets:
\begin{myitemize}
\item MNIST, which consists of 70K handwritten digit images of size $28\times28$, split into 60K training and 10K test samples.
\item CelebA, which comprises 200K celebrity face images of size $48\times 48$, each with 40 attribute annotations.
\item LSUN, which contains around one million labeled images of size $64\times 64$, for each of the 10 scene categories.
\end{myitemize}
For the MNIST and CelebA datasets, we split the training data (which is the entire dataset if no labeling information is considered) using the ratio of
$2:98$ as publicly available data $\mathcal{D}_{\rm pub}$ and private data $\mathcal{D}_{\rm pri}$ respectively. We train {\sf dp-GAN}\xspace on
$\mathcal{D}_{\rm pri}$ under the DP constraint.
For the LSUN dataset, we consider two settings. First, we consider it as an unlabeled dataset and split it into $2:98$ as public data $\mathcal{D}_{\rm pub}$ and private data $\mathcal{D}_{\rm pri}$, which we denote as
LSUN-U. Second, we consider the label information of the dataset. We sample 500K images from each of the top 5 categories (in terms of number of images), which are then split into $2 : 98$ as $\mathcal{D}_{\rm pub}$ and $\mathcal{D}_{\rm pri}$ respectively. We refer to this dataset as LSUN-L.
The network architecture of {\sf dp-GAN}\xspace is similar to~\cite{Gulrajani:2017:wganip}, which we adapt to each dataset.
The default setting of the parameters is as follows: the coefficient of gradient penalty $\lambda=10$,
the number of critic iterations per GAN's iteration $n_\text{critic} = 4$, the batch size $m = 64$.
The setting of the parameters specific to each dataset is summarized in Table~\ref{tab:expsetting},
where $(\alpha, \beta_1, \beta_2)$ are the hyper-parameters of the Adam optimizer, $(\epsilon, \delta)$ are the privacy budget, and $\sigma$ is the noise scale. The setting of $\sigma$ follows the setting in~\cite{Abadi:2016:dpdl}, which is considered sufficiently strict in typical applications.
The last two hyper-parameters are for advanced {\sf dp-GAN}\xspace: $k$ is the number of groups for weight clustering, and $t_\text{warm}$ is the number of iterations for warm starting with public data.
\begin{table}
\centering
\begin{tabular}{c | c c c c c c c c}
Dataset & $\alpha$ & $\beta_1$ & $\beta_2$ & $\epsilon$ & $\delta$ & $
\sigma$ & $k$ & $t_\text{warm}$ \\
\hline
MNIST & 0.002 & 0.5 & 0.9 & 4 & $10^{-5}$ & 1.086 & 5 & 300 \\
CelebA & 0.002 & 0.0 & 0.9 & 10 & $10^{-5}$ & 0.543 & 6 & 800 \\
LSUN-U & 0.002 & 0.0 & 0.9 & 10 & $10^{-5}$ & 0.434 & 7 & 2400 \\
LSUN-L & 0.002 & 0.0 & 0.9 & 10 & $10^{-5}$ & 0.434 & 7 & 2000 \\
\hline
\end{tabular}
\caption{Parameter setting for each dataset.}
\label{tab:expsetting}
\end{table}
All the experiments are conducted on TensorFlow.
\subsection{Qualitative Evaluation}
In this set of experiments, we qualitatively evaluate the quality of the data synthesized by {\sf dp-GAN}\xspace. Figures~\ref{fig:mnist}, \ref{fig:lsunbedroom}, \ref{fig:lsun10cat}, and~\ref{fig:celeba48} show sets of synthetic samples generated by {\sf dp-GAN}\xspace trained on the MNIST, LSUN-U, LSUN-L, and CelebA datasets, respectively. Note that in all cases, {\sf dp-GAN}\xspace is able to generate visually vivid images of quality comparable to the original ones while, at the same time, providing strong privacy protection (see Table~\ref{tab:expsetting}).
\begin{figure*}[t]
\centering
\epsfig{width = 175mm, file = figures/mnist_merged_new.eps}
\caption{Synthetic samples for the MNIST dataset ($\epsilon=4, \delta \leq 10^{-5}$)}
\label{fig:mnist}
\end{figure*}
\begin{figure*}[t]
\centering
\epsfig{width = 175mm, file =figures/lsun_bedroom_merged.eps}
\caption{Synthetic samples for LSUN-U dataset ($\epsilon=10, \delta \leq 10^{-5}$)}
\label{fig:lsunbedroom}
\end{figure*}
\begin{figure*}[t]
\centering
\epsfig{width = 175mm, file = figures/lsun_10cat_merged.eps}
\caption{Synthetic samples for the LSUN-L dataset ($\epsilon=10, \delta \leq 10^{-5}$)}
\label{fig:lsun10cat}
\end{figure*}
\begin{figure*}[t]
\centering
\epsfig{width = 175mm, file = figures/celeba_48_merged_new.eps}
\caption{Synthetic samples for the CelebA dataset ($\epsilon=10, \delta \leq 10^{-5}$)}
\label{fig:celeba48}
\end{figure*}
\subsection{Quantitative Evaluation}
Next we conduct quantitative evaluation of {\sf dp-GAN}\xspace's performance. Specifically, we first compare the synthetic data against the real data in terms of their statistical properties, including Inception scores and Jensen-Shannon divergence; we then evaluate the quality of the synthetic data in semi-supervised classification tasks.
\subsubsection*{\bf Statistical Properties}
In \cite{Salimans:2016:improved}, Salimans {\em et al.} propose to use Inception score to measure the quality
of data generated by GAN. Formally, the Inception score\footnote{Even though the datasets here are not ImageNet, we still refer to
Eqn.~\ref{equ:is} as Inception score in the following.} of a generator $G$ is defined as:
\begin{equation}
s(G) = \exp \left( \mathbb{E}_{x \sim G(z) }
{\rm KL}( {\rm Pr}(y|x) || {\rm Pr}(y)) \right)
\label{equ:is}
\end{equation}
Here, (i) $x$ is a sample generated by $G$.
(ii) ${\rm Pr}(y|x)$ is the conditional distribution imposed
by a pre-trained classifier
to predict $x$'s label $y$. If $x$ is similar to a real sample, we expect the entropy of ${\rm Pr}(y|x)$ to be small. (iii)
${\rm Pr}(y) = \int_{z} {\rm Pr}(y|x = G(z))\, {\rm d}z$ is the marginal distribution of $y$. If $G$ is able to generate a diverse set of samples, we expect the entropy of ${\rm Pr}(y)$ to be large. Thus, by measuring the KL divergence of the two distributions, $s(G)$ captures both the quality and diversity of the synthetic data. For the MNIST and LSUN-L datasets, we use the entire training set to train baseline classifiers to estimate ${\rm Pr}(y|x)$. The classifiers are tuned to achieve reasonable performance on the validation sets (99.06\% for MNIST and 88.73\% for LSUN-L).
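To make Eqn.~\ref{equ:is} concrete, the following sketch estimates the score from a matrix of predicted class probabilities (one row per synthetic sample); in our setting the probabilities would come from the baseline classifiers described above, and the estimator shown is a simplified one.
\begin{verbatim}
import numpy as np

def inception_score(probs, eps=1e-12):
    """exp(E_x KL(Pr(y|x) || Pr(y))) for an (n, k) matrix of
    class probabilities, one row per generated sample."""
    p_y = probs.mean(axis=0)  # marginal distribution Pr(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))
    return float(np.exp(kl.sum(axis=1).mean()))
\end{verbatim}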
Table~\ref{tab:iscore} summarizes the Inception scores of synthetic data (generated by regular GAN and {\sf dp-GAN}\xspace) and real data for the MNIST and LSUN-L datasets. Notice that {\sf dp-GAN}\xspace is able to synthesize data with Inception scores fairly close to those of the real data and of the data generated by regular GANs (without privacy constraints). For example, in the case of MNIST, the gap between the real data and the synthetic data by {\sf dp-GAN}\xspace is only 1.32.
\begin{table}
\centering
\begin{tabular}{ c | c c c c }
Dataset & Setting & $n$ ($\times 10^6$) & $(\epsilon, \delta)$ & Score \\
\hline
\multirow{3}{*}{MNIST} & real & $0.06$ & - & $9.96 \pm 0.03 $\\
& GAN & $0.06$ & - & $9.05 \pm 0.03$ \\
& {\sf dp-GAN}\xspace & $0.05$ & $(4, 10^{-5})$ & $8.64 \pm 0.03 $\\
\hline
\hline
\multirow{3}{*}{LSUN-L} & real & $2.50$ & - & $4.16 \pm 0.01 $\\
& GAN & $2.50$ & - & $3.11 \pm 0.01$ \\
& {\sf dp-GAN}\xspace & $2.45$ & $(10, 10^{-5}) $ & $2.78 \pm 0.01$ \\
\hline
\end{tabular}
\caption{Inception scores of real and synthetic data on the MNIST and LSUN-L datasets (with label information).}
\label{tab:iscore}
\end{table}
\begin{table}
\centering
\begin{tabular}{c | c c c c}
Dataset & Setting & $n$ ($\times 10^6$) & $(\epsilon,\delta)$ & Score \\
\hline
\multirow{3}{*}{CelebA} & real & $0.22 $ & - & $0.00 \pm 0.00$ \\
& GAN & $0.20 $ & - & $0.09 \pm 0.00$\\
& {\sf dp-GAN}\xspace & $0.20 $ & $(10, 10^{-5}) $ & $0.28 \pm 0.00$ \\
\hline
\hline
\multirow{3}{*}{LSUN-U} & real & $2.50 $ & - & $0.00 \pm 0.00$ \\
& GAN & $2.50 $ & - & $0.25 \pm 0.00 $\\
& {\sf dp-GAN}\xspace & $ 2.45 $ & $(10, 10^{-5})$ & $0.29 \pm 0.00 $\\
\hline
\end{tabular}
\caption{Jensen-Shannon scores of real and synthetic data on CelebA and LSUN-U datasets (without label information).}
\label{tab:iscorewolab}
\end{table}
To measure {\sf dp-GAN}\xspace's performance with respect to unlabeled data (e.g., CelebA and LSUN-U), we train another discriminator $D'$ using the real data and test whether $D'$ is able to discriminate the synthetic data. We consider two distributions: (i) ${\rm Pr}(y|x)$, the conditional distribution of $D'$'s prediction about $x$'s source (real or synthetic), and (ii) $\mathcal{B}_p$, a Bernoulli distribution with $p=0.5$. We use the Jensen-Shannon divergence of the two distributions to measure the quality of the synthetic data:
\begin{displaymath}
s(G) = \frac{1}{2} \text{KL} ( {\rm Pr}( y | x) || \mathcal{B}_p ) +
\frac{1}{2} \text{KL} ( \mathcal{B}_p || {\rm Pr}( y | x ) )
\end{displaymath}
Intuitively, a smaller value of $s(G)$ indicates that $D'$ has more difficulty discriminating the synthetic data from the real data, i.e., better quality of the data generated by $G$.
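A sketch of this score for scalar Bernoulli parameters is given below; treating ${\rm Pr}(y|x)$ as the discriminator's average probability of labeling a synthetic sample as real is an assumption about how the per-sample predictions are aggregated.
\begin{verbatim}
import numpy as np

def js_score(p, q=0.5, eps=1e-12):
    """Symmetrised KL between Bernoulli(p) and Bernoulli(q); small
    values mean D' cannot tell synthetic from real samples."""
    def kl(a, b):
        return (a * np.log((a + eps) / (b + eps)) +
                (1 - a) * np.log((1 - a + eps) / (1 - b + eps)))
    return 0.5 * kl(p, q) + 0.5 * kl(q, p)
\end{verbatim}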
Table~\ref{tab:iscorewolab} summarizes the quality scores of the real and synthetic data (regular GAN and {\sf dp-GAN}\xspace) on the
CelebA and LSUN-U datasets. Observe that {\sf dp-GAN}\xspace generates data of quality close to that of the regular GAN (without privacy constraints), especially in the case of LSUN-U, i.e., 0.25 versus 0.29. This may be explained by the fact that, compared with CelebA, LSUN-U is a relatively larger dataset, enabling {\sf dp-GAN}\xspace to better capture the underlying data distribution.
\subsubsection*{\bf Analysis Tasks}
We further evaluate {\sf dp-GAN}\xspace's performance in concrete analysis tasks. Specifically, we consider the use of synthetic data in a semi-supervised classification task. In such a task, the analyst possesses a small amount of public, labeled data and a large amount of synthetic, unlabeled data (generated by {\sf dp-GAN}\xspace). The goal is to leverage both the labeled and unlabeled data to train a better classifier than that trained only using the limited labeled data.
To make the task more interesting, we consider the setting of two separate classifiers. The first one, $\mathcal{C}_1$, has the same structure as a regular image classifier, while the second one, $\mathcal{C}_2$, classifies using both an image and its latent code. The architecture of $\mathcal{C}_2$ is designed to learn the correlation between the codes and the images. The learning procedure is sketched in Algorithm~\ref{alg:semi}; each iteration consists of two parts.
In the first part (line 2-5), we sample a batch of $m$ codes $\hat{z}$, generate images $\hat{x}$ from the generator $G$ with $\hat{z}$, and use $\mathcal{C}_1$ to classify $\hat{x}$ into categories $\hat{y}$. Then we update $\mathcal{C}_2$ with $(\hat{z}, \hat{x}, \hat{y})$~(line 5). In the second part (line 6-9), we sample a batch of $m\cdot(1 - p_s)$ real examples $(x, y)$ from the labeled data, sample another batch of $m \cdot p_s$ codes $\hat{z}$ together with their synthetic images $\hat{x}$, and label them with $\mathcal{C}_2$ as $\hat{y}$. We then use both sets of inputs to update $\mathcal{C}_1$.
We hope that $\mathcal C_1$ and $\mathcal C_2$ converge quickly. In the experiments, however, we found that using the data labeled by $\mathcal C_2$ too early makes the entire model unstable and unlikely to converge to a proper accuracy, because $\mathcal{C}_2$ is not yet fully trained with correct labels (i.e., in the early stage, both $\mathcal C_1$ and $\mathcal C_2$ have low accuracy).
Thus, in practice, it is sensible to increase $p_s$ gradually after some iterations. In our experiments, we keep $p_s = 0$ for the first third of the iterations of the regular model, then gradually increase it to $p_\text{s, final}$, and afterwards follow Algorithm~\ref{alg:semi} with $p_s = p_\text{s, final}$ (a simple schedule of this form is sketched after Algorithm~\ref{alg:semi}).
\begin{algorithm}
\caption{Semi-Supervised Classification}
\label{alg:semi}
\KwIn{$m$ - batch size; $p_s$ - percentage of synthetic data in training; $G_\theta$ - privacy-preserving generator; $\mathcal{D}_{\rm pub}$ - public labeled dataset}
\KwOut{$\mathcal C_1^{\theta_1}$ - image classifier; $\mathcal{C}_2^{\theta_2}$ - image \& code classifier
}
\While{$\mathcal C_1^{\theta_1}$ or $\mathcal{C}_2^{\theta_2}$ not converged yet}{
\tcp{training $\mathcal{C}_2$}
sample $\{\hat z_i\}_{i = 1}^m \sim p_z$ \;
generate $ \{ \hat{x}_i\}_{i = 1}^m$ with $G_\theta$ and $\{\hat z_i \}_{i = 1}^m$ \;
$\{\hat y_i\}_{i = 1}^m \gets$ $\mathcal C_1^{\theta_1}$~($\{ \hat{x}_i\}_{i = 1}^m$) \;
update $\mathcal C_2$ with $(\theta_2, \{ (\hat{z}_i, \hat{x}_i, \hat{y}_i) \}_{i = 1}^m )$ \;
\tcp{training $\mathcal{C}_1$}
sample $\{\hat z_i\}_{i = 1}^{m \cdot p_s} \sim p_z$ \;
generate $ \{\hat x_i\}_{i = 1}^{m \cdot p_s}$ with $G_\theta$ and $\{ \hat z_i \}_{i = 1}^{m \cdot p_s}$ \;
$\{\hat y_i\}_{i = 1}^{m \cdot p_s} \gets$ $\mathcal C_2^{\theta_2}$~($ \{(\hat{z}_i, \hat{x}_i)\}_{i = 1}^{m \cdot p_s}$) \;
sample $\{ (x_i, y_i) \}_{i = 1}^{m \cdot (1 - p_s)}$ from $\mathcal{D}_{\rm pub}$ \;
update $\mathcal C_1$ with $\left(\theta_1,
\{ (\hat{x}_i, \hat{y}_i) \}_{i = 1}^{m \cdot p_s}, \{ (x_i, y_i) \}_{i = 1}^{m \cdot (1 - p_s)}\right)$\;
}
\Return $\mathcal C_1^{\theta_1}$, $\mathcal{C}_2^{\theta_2}$
\end{algorithm}
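A possible ramp-up schedule for $p_s$, matching the description above, is sketched below; the breakpoints are illustrative assumptions rather than tuned values.
\begin{verbatim}
def p_s_schedule(step, total_steps, p_s_final, warmup_frac=1.0 / 3):
    """Keep p_s at 0 for the first third of training, ramp it up
    linearly over the next third, then hold it at p_s_final."""
    warmup_end = warmup_frac * total_steps
    ramp_end = 2 * warmup_frac * total_steps
    if step < warmup_end:
        return 0.0
    if step < ramp_end:
        return p_s_final * (step - warmup_end) / (ramp_end - warmup_end)
    return p_s_final
\end{verbatim}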
We evaluate {\sf dp-GAN}\xspace's performance in such a task on the LSUN-L dataset, with the results summarized in Table~\ref{tab:semi}.
It is clear that the semi-supervised classifier steadily outperforms the supervised classifier. The difference is especially evident when the size of the public data is small (i.e., a limited number of labeled samples). For example, for $n = 0.5\times 10^4$, the semi-supervised classifier outperforms the supervised one by more than 6\%. We thus conclude that {\sf dp-GAN}\xspace supplies valuable synthetic data for such semi-supervised classification tasks.
\begin{table}
\centering
\caption{Semi-supervised Classification Task Result (LSUN-L)}
\label{tab:semi}
\begin{tabular}{c | c c c c }
Setting & n ($\times 10^4$) & $p_\text{s, final} $ & Original accuracy & Semi accuracy\\
\hline
\multirow{4}{*}{GAN} & 0.5 & 0.2 & 0.538 & 0.615\\
& 1.5 & 0.2 & 0.650 & 0.661\\
& 2.5 & 0.2 & 0.665& 0.699\\
& 5.0 & 0.2 & 0.733 & 0.755 \\
\hline
\multirow{4}{*}{{\sf dp-GAN}\xspace} & 0.5 & 0.2 & 0.538 & 0.571 \\
& 1.5 & 0.2 & 0.650 & 0.669 \\
& 2.5 & 0.2 & 0.665 & 0.695 \\
& 5.0 & 0.2 & 0.733 & 0.737
\end{tabular}
\end{table}
\subsection{Effectiveness of Optimizations}
In the final set of experiments, we evaluate the impact of different optimization strategies on {\sf dp-GAN}\xspace's performance.
We first measure the effect of the weight clustering strategy on the number of allowed iterations under the same privacy constraints. Table~\ref{tab:iternums} compares the number of allowed iterations before and after applying weight clustering. It is clear that across all the datasets, this strategy significantly increases the number of allowed iterations, thereby improving the utility retained in the generative models.
We further measure the impact of different configurations of the optimization strategies on {\sf dp-GAN}\xspace's performance, with results listed in Table~\ref{tab:optimquality} and Table~\ref{tab:optimqualityun} for the labeled and unlabeled datasets, respectively. It is observed that, in general, combining multi-fold optimizations significantly boosts {\sf dp-GAN}\xspace's performance; for example, the Inception score increases from 6.59 to 8.64.
\begin{table}
\centering
\caption{Effect of weight clustering on the number of allowed iterations: $t_\text{before}$ and $t_\text{after}$ are the maximum numbers of iterations under the same privacy constraint, before and after applying weight clustering, respectively.}
\label{tab:iternums}
\begin{tabular}{c | c c }
Dataset & $t_\text{before}$ & $t_\text{after}$ \\
\hline
MNIST & 560 & 780 \\
CelebA & 2910 & 4070 \\
LSUN-L & 15010 & 19300 \\
LSUN-U & 24630 & 31670 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Impact of optimizations on Inception scores (MNIST): $S_1$ - Weight/Bias Separation, $S_2$ - Automated Weight Grouping, $S_3$ - Adaptive Clipping, $S_4$ - Warm Starting.}
\label{tab:optimquality}
\begin{tabular} {c | c c c c c c}
Strategy & \multicolumn{6}{c}{Configuration} \\
\hline
\hline
$S_1$ & & \checkmark & & \checkmark & & \checkmark \\
$S_2$ & & & & & \checkmark & \\
$S_3$ & & & \checkmark & \checkmark & \checkmark & \checkmark \\
$S_4$ & & & & & & \checkmark \\
\hline
\hline
\multirow{2}{*}{Score} & $6.59$ & $6.46$ & $7.76$
& $8.20$ & $8.03$ & \bm{$8.64$} \\
& $\pm 0.03 $ & $\pm 0.04 $ & $\pm 0.05 $
& $\pm 0.02$ & $\pm 0.04 $ & \bm{$\pm 0.03$}\\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Impact of optimizations on quality scores (unlabeled); $S_1$--$S_4$ as defined in Table~\ref{tab:optimquality}.}
\label{tab:optimqualityun}
\begin{tabular} {c | c c c c c c}
Strategy & \multicolumn{6}{c}{Configuration} \\
\hline
\hline
$S_1$ & & \checkmark & & \checkmark & & \checkmark \\
$S_2$ & & & & & \checkmark & \\
$S_3$ & & & \checkmark & \checkmark & \checkmark & \checkmark \\
$S_4$ & & & & & & \checkmark \\
\hline
\hline
\multirow{2}{*}{Score} & $0.31$ & $0.31$ & $0.31$
& $0.31$ & $0.31$ & \bm{$0.28$} \\
& $\pm 0.00 $ & $\pm 0.00 $ & $\pm 0.00 $
& $\pm 0.00$ & $\pm 0.00 $ & \bm{$\pm 0.00$}\\
\hline
\end{tabular}
\end{table}
\section{Introduction}
\label{sec:intro}
With the continued advances in mobile computing and the surging popularity of social media, a massive amount of semantic-rich data (e.g., image, text, audio) about individuals is being collected. While analyzing and understanding such data entails tremendous commercial value (e.g., targeted advertisements and personalized recommendations), governments and organizations all have recognized the critical need of respecting individual privacy in such practice~\cite{apple}. In general, privacy protection can be enforced in two settings. In the {\em interactive} setting, a trusted curator collects data from individuals and provides a privacy-preserving interface for the analyst to execute queries over the data; in the more challenging {\em non-interactive} setting, the curator releases a ``sanitized'' version of the data, simultaneously providing analysis utility for the analyst and privacy protection for the individuals represented in the data~\cite{Dwork:2014:book}.
Hitherto, privacy-preserving releasing of semantic-rich data still represents a long-standing challenge for the privacy and security research communities: the rich semantics of such data enable a wide variety of potential analyses, while the concrete analyses are often unknown ahead of releasing, especially in the case of exploratory data analysis. Therefore, to ensure privacy, excessive sanitization is often necessary, which may completely destroy the data utility for potential analyses.
\begin{figure}
\epsfig{file=figures/framework.eps, width=85mm}
\caption{High-level design of {\sf dp-GAN}\xspace, a privacy-preserving releasing framework for semantic-rich data. \label{fig:framework}}
\end{figure}
In this paper, we tackle this challenge by integrating the state-of-the-art deep learning methods with advanced privacy-preserving mechanism. Specifically, we present {\sf dp-GAN}\xspace, a new private releasing framework for semantic-rich data. With {\sf dp-GAN}\xspace, instead of releasing a sanitized version of the original data, the curator publishes a generative model (i.e., generative adversarial network~\cite{Goodfellow:2014:nips}), which is trained using the original data in a privacy-preserving manner. The analyst, once equipped with this generative model, is able to produce synthetic data for the intended analysis tasks. The high-level framework of {\sf dp-GAN}\xspace is illustrated in Figure~\ref{fig:framework}.
In comparison with alternative solutions (e.g., sanitizing and then releasing the data), {\sf dp-GAN}\xspace offers a number of significant advantages. First, it enforces {\em differential privacy}~\cite{Dwork:2006:icalp}, the state-of-the-art privacy principle, in the training of generative models. Due to its closure under post-processing property~\cite{Dwork:2014:book}, differential privacy ensures that the released model provides theoretically guaranteed privacy protection for the training data. Second, the use of generative models (e.g., generative adversarial networks in particular) as the vehicles of data releasing enables the synthesized data to capture the rich semantics of the original data. The faithful preservation of desirable utility leads to a variety of otherwise impossible analyses. For example, we show empirically that {\sf dp-GAN}\xspace is able to effectively support semi-supervised classification tasks. Finally, the generative model is able to produce an unlimited amount of synthetic data for arbitrary analysis tasks, as shown in Figure~\ref{fig:framework}.
However, realizing {\sf dp-GAN}\xspace entails two major challenges. First, it requires new algorithmic advances to implement differential privacy within generative model training. To this end, we extend the framework of Improved Wasserstein GAN~\cite{Gulrajani:2017:wganip} by integrating the state-of-the-art privacy enhancing mechanisms (e.g., Gaussian mechanism~\cite{Dwork:2014:book}) and provide refined analysis of privacy loss within this framework. Second, the stability and scalability issues of training GAN\xspace models are even more evident once privacy enhancing mechanisms are incorporated. To this end, we develop multi-fold optimization strategies, including {\em weight clustering}, {\em adaptive clipping}, and {\em warm starting}, which significantly improve both training stability and utility retention. Our contributions can be summarized as follows.
\begin{myitemize}
\item First, to our best knowledge, {\sf dp-GAN}\xspace is the first working framework that realizes the paradigm of privacy-preserving model releasing for semantic-rich data. We believe this new paradigm is applicable for a broad range of privacy-sensitive data publishing applications.
\item Second, in implementing {\sf dp-GAN}\xspace, we develop multi-fold system optimization strategies that not only successfully incorporate privacy enhancing mechanisms within training deep generative model, but also significantly improve the stability and scalability of generative model training itself.
\item Third, we conduct extensive empirical evaluation using real large-size image data to validate the efficacy of {\sf dp-GAN}\xspace. We show that {\sf dp-GAN}\xspace, besides providing theoretically guaranteed privacy protection, preserves desirable utility of the original data, enabling a set of otherwise impossible analysis tasks.
\end{myitemize}
The remainder of the paper proceeds as follows. Section~\ref{sec:background} reviews the background of deep generative models and differential privacy; Section~\ref{sec:model} presents the high-level design of {\sf dp-GAN}\xspace; Section~\ref{sec:opt} details its implementation, in particular, the multi-fold optimizations to improve the stability and scalability of model training; Section~\ref{sec:eval} empirically evaluates our proposed solution; Section~\ref{sec:liter} discusses additional relevant literature; The paper is concluded in Section~\ref{sec:end}.
\section{Additional Related Work}
\label{sec:liter}
Recent research has suggested that it is possible to enforce strong differential privacy protection in many types of analyses without significant utility loss (see~\cite{Dwork:2009:tcc} for an excellent survey).
The existing work can be roughly categorized into supervised settings, such as logistic regression~\cite{Chaudhuri:2011:erm} and
support vector machine~(SVM)~\cite{Chaudhuri:2011:erm, Rubinstein:2009:dpsvm},
and unsupervised settings, such as publishing histograms~\cite{Xu:2013:differentially},
releasing contingency tables~\cite{Yang:2012:differential},
hypothesis testing~\cite{Gaboardi:2016:dpht},
collaborative recommendation~\cite{Zhu:2016:dprecommend}, K-Means clustering~\cite{Su:2016:dpkmeans},
and spectral graph analysis~\cite{Wang:2013:dpga}. To our best knowledge, this work represents one of the first attempts in the direction of differentially private publishing of semantic-rich data.
More recently, extensive research effort has focused on enforcing differential privacy in training deep learning models. Abadi {\em et al.}~\cite{Abadi:2016:dpdl} proposed to use differentially private stochastic gradient descent~\cite{Song:2013:stochastic} to enforce $(\epsilon, \delta)$-differential privacy in training deep neural networks. Phan {\em et al.}~\cite{Phan:2016:dpae} proposed to apply the
functional mechanism~\cite{Zhang:2012:functional} to train differentially private auto-encoders. In \cite{Phan:2017:dplap}, Phan {\em et al.} proposed an adaptive Laplace mechanism to reduce the required random noise. Our work advances this line of research by enforcing differential privacy in the setting of training generative adversarial networks, a new class of deep learning models.
The work most relevant to ours is perhaps~\cite{Gergely:2017:dpmixgen}, in which Gergely {\em et al.} proposed a framework of training differential private deep generative networks. Our work however differs from~\cite{Gergely:2017:dpmixgen} in significant ways.
First, \cite{Gergely:2017:dpmixgen} used a two-stage process that first performs clustering and then produces generative models such as
Restricted Boltzmann Machines (RBM)~\cite{Fischer:2012:rbmintro} and Variational Auto-Encoders (VAE)~\cite{Kingma:2013:vae}; in contrast, our work provides an end-to-end solution that produces a general GAN, which is known to outperform RBM and VAE in data synthesis. Second, the method in \cite{Gergely:2017:dpmixgen} only works well for low-dimensional data (e.g., 784 for MNIST and 1303 for CDR); in contrast, {\sf dp-GAN}\xspace is able to generate high-quality, high-dimensional synthetic data (e.g., 12,288 for LSUN). In~\cite{Jones:2017:dpgan}, Jones {\em et al.} also proposed a differentially private GAN
framework, which however only generates low-dimensional samples~($ 3 \times 12 $) and meanwhile requires label information. In comparison,
{\sf dp-GAN}\xspace works well for high-dimensional data without any labeling information. To achieve this, {\sf dp-GAN}\xspace adopts multiple optimization strategies that improve both training stability and utility retention.
%
\section{Models and Algorithms}
\label{sec:model}
In this section, we present the basic design of {\sf dp-GAN}\xspace, a generic framework for differentially private releasing of semantic-rich data.
\subsection{Overview}
Similar to the line of work on differentially private deep learning (e.g.,~\cite{Abadi:2016:dpdl}), {\sf dp-GAN}\xspace achieves DP by injecting random noise in the optimization procedure (e.g., stochastic gradient descent~\cite{Song:2013:stochastic}). Yet, the GAN architecture, which comprises a generator $G$ and a discriminator $D$, presents unique challenges for realizing this idea. A na\"{i}ve solution is to inject noise in training both $G$ and $D$; the minimax game formulation however makes it difficult to tightly estimate the privacy loss, resulting in excessive degradation in the produced models.
We opt to add random perturbation only in training $D$. The rationale behind our design choice is as follows. First, as shown in Figure~\ref{fig:gan}, the real data is directly accessible only by $D$; thus, it suffices to control the privacy loss in training $D$. Second, in comparison with $G$, which often employs building blocks such as batch normalizations~\cite{Ioffe:2015:bn} and residual layers~\cite{He:2016:resnet, He:2016:imapresnet} in order to generate realistic samples, $D$ often features a simpler architecture and a smaller number of parameters, which make it possible to tightly estimate the privacy loss.
After deciding where to enforce privacy protection, next we present the basic construct of {\sf dp-GAN}\xspace, as sketched in Algorithm \ref{alg:basic}. At a high level, {\sf dp-GAN}\xspace is built upon the improved WGAN framework and enforces DP by injecting random noise in updating the discriminator $D$. Specifically, when computing $D$'s gradients with respect to a real sample $x$ (line 7), we first clip the gradients by a threshold $C$ (line 8), ensuring that the sensitivity is bounded by $C$; we then add random noise sampled from a Gaussian distribution (a sketch of this clip-and-perturb step follows Algorithm~\ref{alg:basic}). Additionally, we use a privacy accountant $\mathcal{A}$ similar to~\cite{McSherry:2009:PIQ:1559845.1559850} to track the cumulative privacy loss. This process iterates until convergence or until the privacy budget is exceeded (line 14).
\SetKwInput{Require}{Require}
\begin{algorithm}
\caption{Basic {\sf dp-GAN}\xspace}
\label{alg:basic}
\KwIn{$n$ - number of samples; $\lambda$ - coefficient of gradient penalty; $n_\text{critic}$ - number of critic iterations
per generator iteration; $n_\text{param}$ - number of discriminator's parameters; $m$ - batch size; ($\alpha,
\beta_1, \beta_2$) - Adam hyper-parameters; $C$ - gradient clipping bound; $\sigma$ - noise scale; ($\epsilon_0$, $\delta_0$) - total privacy budget}
\KwOut{differentially private generator $G$}
\While{$\theta$ has not converged} {
\For {$t = 1, \cdots, n_\text{critic}$} {
\For {$i = 1, \cdots, m$} {
sample $x \sim p_{\rm data}$, $z \sim p_z$, $\rho \sim \mathcal{U}\left[0, 1\right]$ \;
$\hat{x} \gets \rho x + (1 - \rho) G(z) $ \;
$\ell^{(i)} \gets D(G (z)) - D(x) +
\lambda \left( \left\Vert \triangledown_{\hat{x}} D( \hat{x} )
\right\Vert_2 - 1 \right)^2$ \;
%
\tcp{computing discriminator's gradients}
$g^{(i)} \leftarrow \triangledown_{w} \ell^{(i)}$\;
\tcp{clipping and perturbation ($\xi \sim \mathcal{N}\left(0, (\sigma C)^2\mathcal{I}\right)$)}
$g^{(i)} \gets g^{(i)} / \max(1, || g^{(i)} ||_2/C ) + \xi$\;
}
\tcp{updating privacy accountant}
update $\mathcal{A}$ with $(\sigma, m, n_\text{param})$ \;
\tcp{updating discriminator}
$w \gets \text{Adam}\left( \frac{1}{m} \sum_{i = 1}^m g^{(i)},
w, \alpha, \beta_1, \beta_2 \right)$\;
}
sample $\{ z^{(i)} \}_{i = 1}^m \sim p_z$ \;
\tcp{updating generator}
$\theta \gets \text{Adam}\left( \nabla_\theta \frac{1}{m} \sum_{i = 1}^m
-D( G(z^{(i)}) ) , \theta, \alpha, \beta_1, \beta_2
\right) $\;
\tcp{computing cumulative privacy loss}
$\delta \gets$ query $\mathcal{A}$ with $\epsilon_0$\;
\lIf {$\delta > \delta_0$} {
break}}
\Return{G}
\end{algorithm}
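The clip-and-perturb step (lines 7-8 of Algorithm~\ref{alg:basic}) can be sketched in NumPy as follows; per-example gradients are assumed to be available as flat vectors, which is an assumption about the surrounding training framework.
\begin{verbatim}
import numpy as np

def clip_and_perturb(per_example_grads, C, sigma, rng=np.random):
    """Clip each example's gradient to l2-norm at most C (bounding
    the sensitivity by C), add Gaussian noise with std sigma * C,
    and return the averaged noisy batch gradient."""
    noisy = []
    for g in per_example_grads:
        g = g / max(1.0, np.linalg.norm(g) / C)
        noisy.append(g + rng.normal(0.0, sigma * C, size=g.shape))
    return np.mean(noisy, axis=0)
\end{verbatim}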
\subsection{Privacy Analysis}
A key component of {\sf dp-GAN}\xspace is the privacy accountant $\mathcal{A}$, which keeps track of the cumulative privacy loss during the course of training and integrates two building blocks: moments accounting and sub-sampling. Next we elaborate on each component.
\subsubsection*{\bf Moments Accounting}
In~\cite{Abadi:2016:dpdl}, Abadi {\em et al.} propose moments accounting, a privacy accounting method, which provides tighter estimation of the privacy loss than the composition theorems. Specifically,
consider the privacy loss as a random
variable $Z$, which is defined as:
\begin{equation}
\nonumber
Z(o; \mathcal{M}, d, d^\prime) = \log \frac{{\rm Pr}[\mathcal{M}(d) = o]}{
{\rm Pr}[\mathcal{M}(d^\prime) = o]}
\end{equation}
where $d, d^\prime \in \mathcal{D}^n$ are two neighboring datasets, $\mathcal{M}$ is the random
mechanism, and $o \in \mathcal{R}$ is an outcome.
The privacy loss can be estimated by bounding the $\lambda$-th
moment of $Z$, which is calculated by evaluating the logarithm of the moment generating function of $Z$ at $\lambda$:
\begin{equation}
\nonumber
\alpha_\mathcal{M} (\lambda; d, d^\prime) =
\log \mathbb{E}_{o \sim \mathcal{M}(d)} \left[\exp(
\lambda Z(o; \mathcal{M}, d, d'))\right]
\end{equation}
To enforce DP, one needs to consider
$\alpha_\mathcal{M}$ across all possible $d, d'$, i.e.,
$\alpha_\mathcal{M} \triangleq \max_{d, d'} \alpha_\mathcal{M}
(\lambda; d, d' )$.
Using Markov's inequality, it can be proved that for any $\epsilon > 0$, $\mathcal{M}$
satisfies $(\epsilon, \delta)$-DP for
$\delta = \min_{\lambda} \exp\left( \alpha_\mathcal{M}(\lambda) - \lambda \epsilon \right)$~\cite{Abadi:2016:dpdl}.
Besides, if $\mathcal{M}$ is the composition of a sequence of
sub-mechanisms $\{\mathcal{M}_j\}_{j=1}^J$, it holds that
$\alpha_\mathcal{M}(\lambda) \leq
\sum_{j=1}^J \alpha_{\mathcal{M}_j} (\lambda)$.
In tracking the privacy loss, we apply numerical integration to compute $\alpha_\mathcal{M} (\lambda)$.
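As a simplified stand-in for that numerical integration, the following sketch bounds $\delta$ for a target $\epsilon$ from a configuration $(q, \sigma, t)$, using the closed-form per-step moment bound from~\cite{Abadi:2016:dpdl} (dropping the higher-order term); the configuration printed at the end is illustrative only, not a setting from our experiments.
\begin{verbatim}
import numpy as np

def delta_for_epsilon(q, sigma, t, epsilon, max_lambda=64):
    """Bound delta after t steps of the sampled Gaussian mechanism,
    using alpha(lam) <= t * q^2 * lam * (lam+1) / ((1-q) * sigma^2)
    (valid for integer lam <= -sigma^2 * ln(q * sigma)) and
    delta = min_lam exp(alpha(lam) - lam * epsilon)."""
    lam_cap = int(min(max_lambda, -sigma ** 2 * np.log(q * sigma)))
    assert lam_cap >= 1, "moment bound not applicable for this (q, sigma)"
    lams = np.arange(1, lam_cap + 1, dtype=float)
    alpha = t * q ** 2 * lams * (lams + 1) / ((1.0 - q) * sigma ** 2)
    return float(np.min(np.exp(alpha - lams * epsilon)))

# Illustrative numbers only.
print(delta_for_epsilon(q=64.0 / 50000, sigma=1.086, t=10000, epsilon=4.0))
\end{verbatim}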
\subsubsection*{\bf Sub-sampling}
During each iteration of training $D$, we sample a batch of examples from the real dataset (line 4). The randomness due to sampling adds another level of privacy protection.
According to the privacy amplification
theorems~\cite{Beimel:2014:sample, Kasiviswanathan:2011:sample}, this sampling procedure achieves
$(\mathcal{O}(q\epsilon), q\delta)$-DP per iteration with respect to
the whole dataset, where $q = m/n$ is the sampling ratio per batch,
$\sigma=\sqrt{2\log(1.25/\delta)}/\epsilon$, and $\epsilon \leq 1$.
Combining sub-sampling with moments accounting~\cite{Abadi:2016:dpdl} yields the following guarantee.
\begin{theorem}
\label{theorem:basic}
Algorithm~\ref{alg:basic} is $( \mathcal{O} ( q\epsilon\sqrt{t}), \delta)$-DP,
where $t$ is the total number of iterations in the main loop,
if the noise scale $\sigma$ and the clipping threshold $C$ are chosen appropriately.
\end{theorem}
\begin{proof}
We have the following facts about moments accounting, Gaussian mechanism, and random sampling~\cite{Abadi:2016:dpdl}:
\begin{myitemize}
\item (1) Let $\mathcal{M}$ be the composition of a sequence of sub-mechanisms $\{\mathcal{M}_j\}_{j=1}^J$, it holds that $\alpha_\mathcal{M}(\lambda) \leq \sum_{j=1}^J \alpha_{\mathcal{M}_j} (\lambda)$.
\item (2) Using Markov's inequality, we have that for any $\epsilon > 0$, $\mathcal{M}$
satisfies $(\epsilon, \delta)$-DP for
$\delta = \min_{\lambda} \exp( \alpha_\mathcal{M}(\lambda) - \lambda \epsilon )$.
\item (3) Consider a function $f$ which maps a data sample to a real-valued vector, with its output bounded by $||f||_2\leq 1$. Let $\sigma \geq 1$ and $\mathcal{I}$ be a set of samples from $[n]$ where each $i \in \mathcal{I}$ is selected from $[n]$ independently with probability $q \leq \frac{1}{16\sigma}$. Then for any positive integer $\lambda \leq -\sigma^2 \ln (q\sigma)$, the mechanism $\mathcal{M}(d) = \sum_{i \in \mathcal{I}} f(d_i) + \mathcal{N}(0, \sigma^2\mathbf{I})$ satisfies
\begin{displaymath}
\alpha_\mathcal{M}(\lambda) \leq \frac{q^2 \lambda (\lambda +1)}{(1-q)\sigma^2} + \mathcal{O}(q^3 \lambda^3/\sigma^3)
\end{displaymath}
\end{myitemize}
Assume that $\sigma$ and $\lambda$ satisfy the condition in (3). The log-moment of Algorithm\,\ref{alg:basic} is bounded by $\alpha(\lambda) \leq q^2\lambda^2 t/\sigma^2$, according to (1) and (3). To ensure that Algorithm\,\ref{alg:basic} satisfies $(\bar{\epsilon}, \bar{\delta})$-DP via (2), it suffices to have (i) $q^2\lambda^2 t/\sigma^2 \leq \lambda \bar{\epsilon}/2$, (ii) $\exp(-\lambda \bar{\epsilon}/2) \leq \bar{\delta}^2$, and (iii) $\lambda \leq -\sigma^2 \log(q\sigma)$.
A straightforward calculation verifies that there exist two constants $c_1$ and $c_2$ such that when $\bar{\epsilon} = c_1 q^2 t$ and $\sigma = c_2 q\sqrt{-\log\bar{\delta}}/\bar{\epsilon}$, all the aforementioned conditions are met.
\end{proof}
\section{Optimizations}
\label{sec:opt}
The GAN formulation is known for its training stability issue~\cite{Gulrajani:2017:wganip}. This issue is even more evident in the {\sf dp-GAN}\xspace framework, as random noise is injected in each training step. In our empirical study (Section~\ref{sec:eval}), it is observed that the basic {\sf dp-GAN}\xspace suffers a set of drawbacks.
\begin{myitemize}
\item Its synthesized data is often of low quality, e.g., unrealistic looking images.
\item It converges slower than its regular GAN counterpart, resulting in excessive privacy loss, and sometimes even diverges.
\item Its framework is fairly rigid, unable to take advantage of extra resources, e.g., a small amount of public data.
%
%
%
\end{myitemize}
Here we propose a suite of optimization strategies that significantly improve {\sf dp-GAN}\xspace's training stability and convergence rate. Specifically, we enhance the basic {\sf dp-GAN}\xspace along three directions.
\begin{myitemize}
\item Parameter grouping - By carefully grouping the parameters and performing stratified clipping over different groups, we strike a balance between convergence rate and privacy cost.
\item Adaptive clipping - By monitoring the change of gradient magnitudes, we dynamically adjust the clipping bounds to achieve faster convergence and stronger privacy.
\item Warm starting - By initializing the model with a good starting point, we boost up the convergence and save the privacy budget for critical iterations.
\end{myitemize}
Next we detail each of these optimization strategies.
\subsection{Parameter Grouping}
As shown in Algorithm~\ref{alg:basic}, the DP constraint essentially influences the training in two key operations (line 8): clipping - the norm of gradients is truncated by an upper bound $C$, and perturbation - random noise is added to the gradients. We propose to explore the opportunities to optimize these two critical operations.
In Algorithm~\ref{alg:basic}, the gradients of all the parameters are grouped together to compute the norm. This global clipping scheme minimizes the privacy budget spent in each iteration, but introduces excessive random noise for some parameters, causing slow convergence. At the other end of the spectrum, one may clip the gradient of each parameter with a parameter-specific clipping bound, which may reduce the overall amount of random noise, but at the cost of privacy budget. Here we propose two alternative grouping strategies that strike a balance between convergence rate and privacy loss per iteration.
\subsubsection*{\bf Weight-Bias Separation} In most GAN architectures (e.g., convolutional layers and fully connected layers), there are two types of parameters, weights and biases. For example, a fully connected layer models a linear function $f(x) = w\cdot x + b$ where $w$ and $b$ are the weight and bias parameters respectively. In our empirical study, it is observed that the magnitudes of the biases' gradients are often close to zero, while the magnitudes of the weights' gradients are much larger. Thus, our first strategy is to differentiate weight and bias parameters and to group the gradients of all the bias parameters together for the clipping operation. Given the large number of bias parameters, under the same amount of overall privacy budget, this strategy almost doubles the allowed number of iterations, with
little influence on the convergence rate (details in Section~\ref{sec:eval}).
\subsubsection*{\bf Weight Clustering} While it is natural to group the bias parameters together as many of them are close to zero, the grouping of the weight parameters is much less obvious. Here
we propose a simple yet effective strategy to stratify and cluster the weight parameters. Assuming that we have the optimal parameter-specific clipping bound $\{c(g_i)\}_{i}$ for each weight's gradient $\{g_i\}_{i}$ (we will show how to achieve this shortly), we then cluster these parameters into a predefined number of groups using a hierarchical clustering procedure, as sketched in Algorithm~\ref{alg:grouping}.
Specifically, starting with each gradient forming its own group (line 1), we recursively find two groups $G, G'$ with the most similar clipping bounds and merge them to form a new group (line 3-4). As we use the $\ell_2$ norm, the clipping bound of the newly formed group is computed as $\sqrt{c(G)^2 + c(G')^2}$ (a Python rendering of this procedure is sketched after Algorithm~\ref{alg:grouping}).
\begin{algorithm}
\KwIn {$k$ - targeted number of groups; $\{c(g_i)\}_i$ - parameter-specific gradient clipping bounds}
\KwOut{$\mathcal{G}$ - grouping of parameters}
%
%
$\mathcal{G} \gets \{(g_i: c(g_i))\}_i$\;
\While{$|\mathcal{G}| > k $} {
$G, G' \gets \argmin_{G, G' \in \mathcal{G}}
\max \left(
\frac{c(G)}{c(G')}
,
\frac{c(G')}{c(G)}
\right)$ \;
merge $G$ and $G'$ with clipping bound as $\sqrt{c(G)^2 + c(G')^2}$\;
}
\Return $\mathcal{G}$
\caption{Weight-Clustering}
\label{alg:grouping}
\end{algorithm}
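A direct Python rendering of this greedy merging procedure is sketched below; the mapping from parameter names to per-parameter clipping bounds is a placeholder for the estimates described in the next subsection.
\begin{verbatim}
import numpy as np

def cluster_clipping_bounds(bounds, k):
    """Greedily merge the two groups with the most similar clipping
    bounds until k groups remain. `bounds` maps a parameter name to
    its per-parameter bound; returns (member_names, group_bound) pairs."""
    groups = [([name], c) for name, c in bounds.items()]
    while len(groups) > k:
        best, best_ratio = None, np.inf
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                ci, cj = groups[i][1], groups[j][1]
                ratio = max(ci / cj, cj / ci)
                if ratio < best_ratio:
                    best, best_ratio = (i, j), ratio
        i, j = best
        merged = (groups[i][0] + groups[j][0],
                  float(np.sqrt(groups[i][1] ** 2 + groups[j][1] ** 2)))
        groups = [g for idx, g in enumerate(groups) if idx not in (i, j)]
        groups.append(merged)
    return groups
\end{verbatim}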
\subsection{Adaptive Clipping}
In Algorithm \ref{alg:basic}, the gradient clipping bound $C$ is a hyper-parameter
that needs careful tuning. Overly small $C$ amounts to excessive truncation of the gradients, while overly large $C$ is equivalent to overestimating the sensitivity, both resulting in slow convergence and poor utility. However, within the improved WGAN framework, it is challenging to find a near-optimal setting of $C$, due to reasons including: (i) the magnitudes of the weights and biases and their gradients vary greatly across different layers; and (ii) the magnitudes of the gradients are constantly changing during the training.
To overcome these challenges, we propose to constantly monitor the magnitudes of the gradients before and during the training, and set the clipping bounds based on the average magnitudes. Specifically, we assume that besides the private data $\mathcal{D}_{\rm pri}$ to train the model, we have access to a small amount of public data $\mathcal{D}_{\rm pub}$ which is available in many settings. During each training step, we randomly sample a batch of examples from $\mathcal{D}_{\rm pub}$, and set the clipping bound of each parameter as the average gradient norm with respect to this batch. In our empirical study (Section~\ref{sec:eval}), we find that this adaptive clipping strategy leads to much faster training convergence and higher data utility.
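A possible sketch of this estimation step (assuming a PyTorch-style model and a user-supplied per-example loss; both names are ours) is:
\begin{verbatim}
def estimate_clipping_bounds(model, loss_fn, public_batch):
    # adaptive clipping: set each parameter's bound to its average
    # gradient norm over a batch drawn from the public data D_pub
    sums = {name: 0.0 for name, _ in model.named_parameters()}
    for example in public_batch:       # one example at a time
        model.zero_grad()
        loss_fn(model, example).backward()
        for name, param in model.named_parameters():
            if param.grad is not None:
                sums[name] += param.grad.norm(2).item()
    n = len(public_batch)
    return {name: s / n for name, s in sums.items()}
\end{verbatim}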
\subsection{Warm Starting}
It is expected that, due to the random noise injected at each training step, a GAN under the DP constraint converges more slowly than its vanilla counterpart, especially during its initial stage. To improve the convergence rate, we propose to leverage the small amount of public data $\mathcal{D}_{\rm pub}$ to initialize the model. Specifically, using $\mathcal{D}_{\rm pub}$, we first train for a few iterations without the DP constraint, and then continue the training using $\mathcal{D}_{\rm pri}$ under the DP constraint.
This strategy provides a warm start for {\sf dp-GAN}\xspace. It helps find a satisfactory starting point, which is essential for the model to converge, and also saves a significant amount of privacy budget for the more critical iterations.
An astute reader may point out that since there is public data available, one may just use the public data for training. The issue is that the public data is often fairly limited, which may not be sufficient to train a high-quality GAN. Further, the large amount of private data is valuable for improving the diversity of the samples synthesized by the generator (details in Section~\ref{sec:eval}).
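Schematically, the training schedule can be organized as follows (a sketch; the step functions and the accountant interface are placeholders for the corresponding updates of Algorithms~\ref{alg:basic} and \ref{alg:advanced}):
\begin{verbatim}
def train_with_warm_start(plain_step, dp_step, public_loader,
                          private_loader, warm_iters, accountant):
    # phase 1: a few non-private iterations on public data
    # (no clipping, no noise, no privacy cost)
    for _, batch in zip(range(warm_iters), public_loader):
        plain_step(batch)
    # phase 2: DP training on private data until the budget runs out
    for batch in private_loader:
        dp_step(batch)
        if accountant.budget_exhausted():
            break
\end{verbatim}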
\SetKwProg{myproc}{Procedure}{}{}
\begin{algorithm}
\caption{Advanced {\sf dp-GAN}\xspace}
\label{alg:advanced}
\KwIn {$n$ - number of samples; $\mathcal{D}_{\rm pub}$ - public dataset;
$\lambda$ - coefficient of gradient penalty; $n_\text{critic}$ - number of critic iterations
per generator's iteration; $n_\text{param}$ - number of discriminator's parameters; $m$ - batch size for training GAN;
$m_\text{pub}$ - batch size for estimating norms of gradients;
($\alpha,
\beta_1, \beta_2$) - Adam hyper-parameters; $C$ - gradient clipping bound; $\sigma$ - noise
scale; ($\epsilon_0$, $\delta_0$) - overall privacy target; $k$ - number of parameter groups}
\KwOut{$G$ - differentially private generator}
\tcp{warm starting}
$\left(w, \theta \right) \gets $ train regular improved
WGAN using $\mathcal{D}_{\rm pub}$\;
\While{$\theta$ has not converged} {
\For {$t = 1, \cdots, n_\text{critic}$} {
\tcp{computing gradients of public data}
sample $\{\bar{x}_i\}_{i=1}^{m_{\rm pub}} \sim \mathcal{D}_\text{pub}$ \;
$\{\bar{g}^{(i)}\}^{m_\text{pub}}_{i = 1} \gets$ Improved WGAN-Gradient~
($\{\bar{x}_i\}_{i=1}^{m_{\rm pub}}$, $m_{\rm pub}$)\;
\tcp{grouping parameters with similar clipping bounds}
$\{(G_j, c_j)\}_{j = 1}^k \gets$ Weight-Clustering~
($k$, $\{\bar{g}^{(i)}\}^{m_\text{pub}}_{i = 1}$)\;
\tcp{computing gradients of real data}
sample $\{x_i\}_{i=1}^m \sim p_{\rm data}$ \;
$\{g^{(i)}\}^{m}_{i = 1} \gets$ Improved WGAN-Gradient~
($\{x_i\}_{i=1}^{m}$, $m$)\;
\For {$i = 1, \cdots, m$} {
${g^{(i)}_j} \gets g^{(i)} \cap G_j$ for $j = 1, \cdots, k$ \;
\For {$j = 1, \cdots, k$}{
\tcp{clipping and perturbation $\xi \sim \mathcal{N}(0, (\sigma c_j)^2\mathcal{I})$}
$g^{(i)}_j \gets g^{(i)}_j / \max(1, || g^{(i)}_j ||_2/c_j ) + \xi$\;
}
}
\tcp{updating privacy accountant}
update $\mathcal{A}$ with $(\sigma, m, k)$ \;
\tcp{updating discriminator}
$w_j \gets \text{Adam}( \frac{1}{m} \sum_{i = 1}^m g^{(i)}_j,
w_j, \alpha, \beta_1, \beta_2 )$ for $j = 1, \cdots, k$\;
$w \gets \left\{w_j\right\}_{j = 1}^k$\;
}
sample $\{ z^{(i)} \}_{i = 1}^m \sim p_z$ \;
\tcp{updating generator}
$\theta \gets \text{Adam}( \nabla_\theta \frac{1}{m} \sum_{i = 1}^m
-D( G(z^{(i)})) , \theta, \alpha, \beta_1, \beta_2
)$\;
\tcp{computing cumulative privacy loss}
$\delta \gets$ query $\mathcal{A}$ with $\epsilon_0$\;
\lIf {$\delta \geq \delta_0$} {
break
}
}
\Return{G}\;
\myproc{\rm Improved WGAN-Gradient~($\{x_i\}_{i=1}^m, m$)}
{
\For {$i = 1, \cdots, m $}{
sample $z \sim p_z$, $\rho \sim \mathcal{U}[0, 1]$\;
$\hat{x} \gets \rho x_i + (1 - \rho) G(z) $ \;
$\ell^{(i)} \gets D(G (z)) - D(x_i) +
\lambda ( || \triangledown_{\hat{x}} D( \hat{x})
||_2 - 1 )^2$ \;
$g^{(i)} \leftarrow \triangledown_{w} \ell^{(i)}$\;
}
\Return $\{ g^{(i)} \}_{i = 1}^m$\;
}
\end{algorithm}
\subsection{Advanced Algorithm}
Putting everything together, Algorithm~\ref{alg:advanced} sketches the
enhanced {\sf dp-GAN}\xspace framework. Different from Algorithm~\ref{alg:basic}, we initialize the
model with a warm starting procedure using the public data $\mathcal{D}_\text{pub}$ (line 1). During each training iteration, we first estimate the clipping bound of each parameter using $\mathcal{D}_\text{pub}$ (lines 4-5), then group the parameters into $k$ groups $\{G_j\}_{j=1}^k$, with each $G_j$ sharing a similar clipping bound $c_j$ (line 6). In our current implementation, we use the average of the clipping bounds in $G_j$ to estimate $c_j$. We then perform group-wise clipping and perturbation (lines 9-12). The remaining part is similar to Algorithm~\ref{alg:basic}. The process iterates until the generator's parameters converge or the privacy budget is used up (line 19).
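As a minimal sketch of the group-wise clipping and perturbation step (lines 9-12), with names of our own choosing:
\begin{verbatim}
import torch

def clip_and_perturb(group_grads, group_bounds, sigma):
    # group_grads[j]: flattened gradient sub-vector g_j (one example)
    # group_bounds[j]: group clipping bound c_j; sigma: noise scale
    noisy = []
    for g, c in zip(group_grads, group_bounds):
        g = g / max(1.0, g.norm(2).item() / c)     # clip to norm c_j
        g = g + torch.randn_like(g) * (sigma * c)  # N(0,(sigma c_j)^2 I)
        noisy.append(g)
    return noisy
\end{verbatim}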
Astute readers may raise concerns about possible additional privacy loss due to the multiple optimization strategies. We have the following theorem.
\begin{theorem}
Algorithm~\ref{alg:advanced} is $( \mathcal{O} ( q\epsilon\sqrt{t}), \delta)$-DP,
where $t$ is the total number of iterations in the main loop,
if the noise scale $\sigma$ and the clipping threshold $C$ are chosen appropriately.
\end{theorem}
\begin{proof}
Algorithm~\ref{alg:advanced} differs from Algorithm~\ref{alg:basic} mainly in its use of finer-grained clipping for different groups of parameters, which however does not cause additional privacy loss. Intuitively, thanks to the composability property of the moments accountant~\cite{Abadi:2016:dpdl}, the privacy loss due to applying parameter-specific clipping is completely accounted for.
Next we prove that the strategy of weight clustering does not cause unaccounted privacy loss, while similar arguments apply to other optimization strategies as well.
In Algorithm\,\ref{alg:basic}, in the $i$-th iteration, the gradient $g^{(i)}$ is first clipped by a global bound $c$ and the random noise $\xi \sim \mathcal{N}(0, (\sigma c)^2\mathbf{I})$ is applied to $g^{(i)}$ to ensure $(\epsilon, \delta)$-DP, where $\sigma = \sqrt{2 \log(1.25/\delta)}/\epsilon$.
In Algorithm~\ref{alg:advanced}, $g^{(i)}$ is divided into $k$ sub-vectors $\{g_j^{(i)}\}_{j =1}^k$. Each sub-vector $g_j^{(i)}$ is clipped by a group-specific bound $c_j$ and the random noise $\xi_j \sim \mathcal{N}(0, (\sigma c_j)^2\mathbf{I})$ is applied, where $\sigma = \sqrt{2 \log(1.25/\delta)}/\epsilon$. Thus, releasing each $g_j^{(i)}$ satisfies $(\epsilon, \delta)$-DP. As $\{g_j^{(i)}\}_{j =1}^k$ are disjoint, by the parallel composition property of DP~\cite{Dwork:2014:book}, releasing $\{g_j^{(i)}\}_{j =1}^k$ also satisfies $(\epsilon, \delta)$-DP.
\end{proof}
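As a concrete numerical illustration (with values chosen by us), consider a per-release guarantee with $\epsilon = 1$ and $\delta = 10^{-5}$. The required noise scale is
\begin{equation}
\sigma = \frac{\sqrt{2\ln(1.25/\delta)}}{\epsilon} = \sqrt{2\ln(1.25\times 10^{5})} \approx 4.85,
\end{equation}
so a group with clipping bound $c_j$ receives Gaussian noise of standard deviation $\sigma c_j \approx 4.85\,c_j$; by the parallel composition argument above, this holds independently of the number of disjoint groups.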
\section{Introduction}
Modern methods for stretching single molecules provide valuable insight into the response of polymers to external forces. Interest in single-molecule loading has encouraged new research and technological developments on related mechanical experiments. Typically, mechanical methods allow the manipulation of a polymer molecule in two ways: stretching of the chain by the direct action of an external force, or by the application of an external field. If we consider homogeneous polymers (with all monomers described by the same effective elastic stiffness), then we obtain a uniform strain with the external force and a non-uniform strain with the applied field.
To exert an external force on a polymer fixed at one end, laser optical tweezers (LOTs)\cite{lots}, magnetic tweezers (MTs)\cite{mts} or the atomic force microscope (AFM)\cite{AFM} can be used. Many experiments have been performed on a wide class of polymers with biological relevance, such as the nucleic acids (DNA, RNA)\cite{nucleic}, allowing the stretching of the entire molecule and providing the reading and the mapping of genetic information along the chain.\cite{bensimon, chan} Furthermore, it has been possible to describe the elastic behaviour of single polymers consisting of domains which may exhibit transitions between different stable states.\cite{rief1, rief2, mancaII}
Other investigations performed on double-stranded DNA determined the extension of the polymer as a function of the applied force\cite{busta0}, providing results in very good agreement with the Worm-Like Chain (WLC) model\cite{smith1, marko,manca} and the Freely-Jointed Chain (FJC) model.\cite{manca,huguet}
Alternatively, it is possible to manipulate single molecules by an external field. In this case the external field acts on the molecules from a distance or, in other words, without a defined contact point for applying the traction. A non-uniform stretching performed by an external field can be induced either via a hydrodynamic (or electrohydrodynamic) flow field\cite{trahan, wang, hsieh} or via an electric (or magnetic) field.\cite{schwartz, strick1, strick2} One experimental advantage of using flow fields is that the liquid surrounding the tethered molecule can be easily replaced; this is indeed an important feature for many single-molecules studies of enzymes which require varying buffer conditions.\cite{busta_cat} The flow field technique was extensively applied in single-molecule study of DNA elasticity\cite{smith1} as well as to characterize the rheological properties of individual DNA molecules.\cite{smith2,perkins1,perkins2} The use of an electric field has been adopted for driving the alignment of DNA on a solid surface for applications such as gene mapping and restriction analysis.\cite{schwartz} Finally, magnetic fields have been used to apply torsional stress to individual DNA molecules.\cite{strick1,strick2}
In order to understand the response of polymers to external fields and to study their statistics, some theoretical models have been proposed. These models are typically based on the FJC and WLC schemes, generalized with the inclusion of the given applied field. Some studies have shown that in a weak external field the persistence length along the field direction is increased, while it is decreased in the perpendicular direction; moreover, as the external field becomes stronger, the effective persistence length grows exponentially with the field strength.\cite{warner, vroege, kamien} Other investigations under a constant velocity flow have shown that a flexible polymer displays three types of conformation: unperturbed at low velocity; ``trumpet'' shaped when partially stretched; ``stem and flowers'' shaped, with a completely stretched portion (the stem) and a series of blobs (the flowers), at larger loading.\cite{brochard1, brochard2, brochard3} Polymer models have been studied in elongational flows to analyze the coil stretching and chain retraction as a function of polymer and flow parameters, finding good agreement with experimental data.\cite{henvey, rabin} Conformational properties of semiflexible polymer chains in uniform force field were also studied for two-dimensional models.\cite{lamura}
In spite of these relevant efforts, it remains a challenge to describe all aspects of polymer mechanics in an external field within a single unified theoretical framework.
Building on our previous studies,\cite{mancaII,manca} in this paper we study the conformational and mechanical properties of flexible and semi-flexible non-branched polymer model chains tethered at one end and immersed in an external force field. This situation describes two physical conditions of interest: a polymer chain immersed in a fluid in uniform motion (our model is valid only when the action of the fluid motion can be described by a distribution of given forces applied to all monomers) and an arbitrarily charged chain placed in a uniform electric field.
Our theoretical approach is twofold, since we adopt both analytical (statistical mechanics\cite{gibbs,weiner}) and numerical techniques (Monte Carlo simulations\cite{binder,confser}). While the analytical approach is useful to obtain the explicit partition function in some specific cases, Monte Carlo simulations are crucial to study more general cases, inaccessible to analytical treatments. In particular, while we develop our theoretical framework starting from the more tractable FJC model, we take full advantage of our MC simulations to extend our study also to the WLC model.
The structure of the paper is the following. In Section II we introduce the mathematical formalism adopted and we derive a generic form of the partition function in $\Re^d$ for a generalized FJC model where the extensibility of the bonds is taken into account. In Section III we find the two specific forms of the partition function for the 2D and the 3D case for the pure FJC polymer with non-extensible bonds. Moreover, we obtain in both cases the variance and the covariance among the positions of the monomers. In Section IV we present the generalization of the previous results to the semi-flexible WLC model. We present two closed-form approximations for the 2D and the 3D case and the comparisons with MC simulations. In Section V we analyze the behavior of a chain in an external field when an external force is also applied at the end of the chain. The case with the force not aligned with the field is particularly interesting and shows the power of the MC method. Finally, in Section VI some conclusions are drawn.
\section{General theoretical framework}
As previously discussed, the polymer models most used in the literature are the FJC and the WLC. As argued in Ref.\onlinecite{cohen}, for weak tension and weak external field it is acceptable to model the polymer as a FJC. This model breaks down only when the curvature of the conformation is very large, because it ignores the consequent large bending energy. Since we will address this point at the end of this work, we begin with the case of a FJC. In particular, we consider a FJC with two additional hypotheses. First, we consider the possible extensibility of the bonds of the chain through a standard quadratic potential characterized by a given equilibrium length: such an extension mimics the possible stretching of the chemical bond between two adjacent monomers. If necessary, the linear springs describing the bond extensibility can be easily replaced by more complex, nonlinear springs.\cite{blundell} Moreover, we take into account a series of arbitrary forces applied to each monomer: these actions mimic the effects of an external physical field applied to the system. In addition, we consider the presence of an arbitrary force applied to the terminal monomer of the chain.
All calculations will be performed in $\Re^d$ and we will specialize the results both in the 2D-case and in the 3D-case when needed. The idea is to write the complete form
of the Hamiltonian of the system and to build up the corresponding statistical mechanics.\cite{manca} The starting point is therefore the calculation of the classical partition function. In fact, when this quantity is determined, it is possible to obtain the force-extension curve (the equation of state) through simple derivations.
\begin{figure}
\resizebox{1.0\columnwidth}{!}{\includegraphics{polymer_field.eps}}
\caption{(color online) A polymer chain in an external field. The first monomer is clamped at position $\vec r_0$ while the others are free to fluctuate. Each monomer is subjected to an external force $\vec g_K$ (different in strength and direction for any $K$): all these forces mimic an external field. Another external force, playing the role of a main pulling load, $\vec f$, is applied to the last monomer at the position $\vec r_N$.
}
\label{polymer_model}
\end{figure}
Let us consider a non-branched linear polymer with $N$ monomers (see Fig. \ref{polymer_model}) at positions defined by $\vec r_1, ... ,\vec r_N \in \Re^d $ (with $d = 2$ or $d = 3$ according to the specific problem of interest). To each monomer a given external force is applied, denoted $\vec g_1, ... ,\vec g_N$. Another external force, playing the role of the main pulling load, $\vec f$, is applied to the last monomer at the position $\vec r_N$. While the chain is clamped at position $\vec r_0$, the monomers are free to fluctuate. The Hamiltonian of the system is therefore given by
\begin{eqnarray}
\label{hamiltonian}
H &=& \sum_{i=1}^{N} \frac{\vec p_i \cdot \vec p_i}{2m} +\frac{1}{2}k\sum_{K=1}^{N}\left( | \vec{r}_K-\vec{r}_{K-1}|-l\right)^{2} \\
\nonumber
&&- \sum_{K=1}^{N} \vec g_K \cdot \vec r_K - \vec f \cdot \vec r_N
\end{eqnarray}
where $\vec p_i$ are the linear momenta, $m$ the mass of the monomers, $k$ the spring constant of the inter-monomer interaction, and $l$ the equilibrium length of the monomer-monomer bond.
We search for the partition function of the system defined as:
\begin{eqnarray}
Z_d = c \underbrace{\int_{\Re^d} ... \int_{\Re^d}}_{2N- \mbox{times}} \exp \left( -\frac{H}{k_BT} \right) d\vec r_1 ... d\vec r_N d\vec p_1 ... d\vec p_N
\end{eqnarray}
where $c$ is a multiplicative constant which takes into account the number of microstates. As is well known, the kinetic part can be straightforwardly integrated and yields an additional irrelevant multiplicative constant; we can then write the partition function as an integral over the positional space only.
This integral can be easily handled through the standard change of variable
\begin{eqnarray}
\left\{
\begin{array}{ll}
\vec \xi_1 = \vec r_1 - \vec r_0 \\
\vec \xi_2 = \vec r_2 - \vec r_1 \\
\hspace{0.6cm} \vdots \\
\vec \xi_N = \vec r_N - \vec r_{N-1} \\
\end{array}
\right.
\end{eqnarray}
having the Jacobian determinant $J = \left| \frac{\partial(\vec r_1 ... \vec r_N)}{\partial(\vec \xi_1 ... \vec \xi_N)} \right| = 1$. We consider the terminal $\vec r_0$ of the chain fixed at the origin of the axes, i.e. $\vec r_0 = \vec 0$.
So, we cast the positions $\vec r_i$ in terms of the variables $\vec \xi_J$ as follows
\begin{eqnarray}
\left\{
\begin{array}{ll}
\vec r_1 = \vec \xi_1 + \vec r_0 = \vec \xi_1 \\
\vec r_2 = \vec \xi_2 + \vec r_1 = \vec \xi_2 + \vec \xi_1 \\
\hspace{0.6cm} \vdots \\
\vec r_N = \vec \xi_N + \vec \xi_{N-1} + ... + \vec \xi_1 \\
\end{array}
\right.
\end{eqnarray}
By setting the general solution as $ \vec r_i = \sum_{K=1}^{i} \vec \xi_K $, the partition function becomes
\begin{eqnarray}
Z_d &=& c \underbrace{\int_{\Re^d} ... \int_{\Re^d}}_{N- \mbox{times}} \exp \left[ -\frac{k}{2k_BT}\sum_{K=1}^{N}\left( |\vec{\xi}_K|-l\right)^{2}
\right] \\
\nonumber
&& \times \exp \left[ \frac{1}{k_BT}\sum_{K=1}^{N} \vec g_K \cdot \sum_{J=1}^{K} \vec \xi_J \right]\\
\nonumber
&& \times \exp \left[ \frac{1}{k_BT} \vec f \cdot \sum_{K=1}^{N} \vec \xi_K \right] d\vec \xi_1 ... d\vec \xi_N
\end{eqnarray}
Inverting the two summation symbols
\begin{eqnarray}
\sum_{K=1}^{N} \vec g_K \cdot \sum_{J=1}^K \vec \xi_J = \sum_{K=1}^{N} \vec \xi_K \cdot \sum_{i=K}^{N} \vec g_i
\end{eqnarray}
we obtain
\begin{eqnarray}
\label{partition1}
Z_d = c \prod_{K=1}^{N} \int_{\Re^d} e^{ -a \left( |\vec{\xi}|-l\right)^{2} } e^{ \vec V_K \cdot \vec \xi } d\vec \xi
\end{eqnarray}
where
\begin{eqnarray}
\label{partval1}
a &=& \frac{k}{2k_BT} > 0
\\
\label{partval2}
\vec V_K &=& \frac{1}{k_BT} \left( \vec f + \sum_{i=K}^{N} \vec g_i \right)
\end{eqnarray}
There is a deep conceptual connection between the last integral for the partition function and the theory of $d$-dimensional Fourier transforms. The Fourier integral of an arbitrary function $f(\vec \xi)$ is defined as
\begin{equation}
F(\vec \omega) = \int_{\Re^d} f(\vec \xi) e^{-i \vec \omega \cdot \vec \xi} d\vec \xi
\end{equation}
with inverse transform given by
\begin{equation}
f(\vec \xi) = \frac{1}{(2 \pi)^d} \int_{\Re^d} F(\vec \omega) e^{i \vec \omega \cdot \vec \xi} d\vec \omega
\end{equation}
If we consider
\begin{equation}
\label{function}
f(\vec \xi) = e^{ -a \left( |\vec{\xi}|-l\right)^{2} }
\end{equation}
it is easy to realize that the integral in Eq.(\ref{partition1}) is the Fourier transform of $f(\vec \xi)$ calculated for $\vec \omega = i \vec V_K$, i.e.
\begin{equation}
\label{partFourier}
Z_d = c \prod_{K=1}^{N} F(i \vec V_K)
\end{equation}
with $a$ and $\vec V_K$ defined in Eq.(\ref{partval1}) and Eq.(\ref{partval2}), respectively.
It is important to remark that the function in Eq.(\ref{function}) has a spherical symmetry (i.e. it depends only on the length of the vector $\vec \xi$) and, therefore, also its Fourier transform $F(\vec \omega)$ exhibits the spherical symmetry, depending only on the quantity $|\vec \omega|$ in the transformed domain. In fact, for such spherically-symmetric functions it holds that: if $f(\vec \xi) = f(|\vec \xi|) $ then $ F(\vec \omega) = F(|\vec \omega|) $. Furthermore, we have that
\begin{equation}
F(\Omega) = \int_{0}^{+\infty} 2\pi\rho f(\rho) \left( \frac{2\pi\rho}{\Omega} \right)^{\frac{d}{2}-1} J_{\frac{d}{2}-1}(\rho\Omega) d\rho
\end{equation}
for $ d=2n \mbox{ (even)}$, and
\begin{equation}
F(\Omega) = \int_{0}^{+\infty} 4\pi\rho^2 f(\rho) \left( \frac{2\pi\rho}{\Omega} \right)^{\frac{d-3}{2}} j_{\frac{d-3}{2}}(\rho\Omega) d\rho
\end{equation}
for $ d=2n+1 \mbox{ (odd)} $, where $\rho = |\vec \xi|$ and $\Omega = |\vec \omega|$.\cite{schwartz-math} Here $J_{\nu}(z)$ and $j_{\nu}(z)$ are the cylindrical and spherical Bessel functions of the first kind, respectively, related by the standard relation $j_{\nu}(z) = \sqrt{\frac{\pi}{2z}}J_{{\nu}+\frac{1}{2}}(z)$.\cite{abra,grad}
In our calculations we have to set $\vec \omega = i \vec V_K$ and, therefore, we obtain $ \Omega = i |\vec V_K|$.
Moreover, when the argument of $J_{\nu}(z)$ and $j_{\nu}(z)$ is taken to be imaginary we obtain the modified Bessel functions of the first kind\cite{abra,grad}
\begin{equation}
\begin{array}{ll}
I_{\nu}(z) = (i)^{-\nu} J_{\nu}(iz) \\
i_{\nu}(z) = (i)^{-\nu} j_{\nu}(iz) \\
\end{array}
\end{equation}
For example we have the explicit expression $j_0(z)=\frac{\sin z}{z}$ and $i_0(z)=\frac{\sinh z}{z}$ while, on the contrary, $I_0(z)$ and $J_0(z)$ cannot be written in closed form.
So, for $d$ even we eventually obtain
\begin{equation}
\frac{F(i \vec V_K)}{2\pi} = \int_{0}^{+\infty} \rho \hspace{0.1cm} e^{-a(\rho-l)^2} \left( \frac{2\pi\rho}{|\vec V_K|} \right)^{\frac{d-2}{2}} I_{\frac{d-2}{2}}(\rho|\vec V_K|) d\rho
\end{equation}
and, on the other hand, for $d$ odd we have
\begin{equation}
\frac{F(i \vec V_K)}{4\pi} = \int_{0}^{+\infty} \rho^2 \hspace{0.1cm} e^{-a(\rho-l)^2} \left( \frac{2\pi\rho}{|\vec V_K|} \right)^{\frac{d-3}{2}} i_{\frac{d-3}{2}}(\rho|\vec V_K|) d\rho
\end{equation}
Finally, by using Eq.(\ref{partFourier}), the partition function is given by
\begin{equation}
\label{partition_even}
Z_d = c \prod_{K=1}^{N} \int_{0}^{+\infty} \rho \hspace{0.1cm} e^{-a(\rho-l)^2} \left( \frac{\rho}{|\vec V_K|} \right)^{\frac{d-2}{2}} I_{\frac{d-2}{2}}(\rho|\vec V_K|) d\rho
\end{equation}
for $d$ even, and
\begin{equation}
\label{partition_odd}
Z_d = c \prod_{K=1}^{N} \int_{0}^{+\infty} \rho^2 \hspace{0.1cm} e^{-a(\rho-l)^2} \left( \frac{\rho}{|\vec V_K|} \right)^{\frac{d-3}{2}} i_{\frac{d-3}{2}}(\rho|\vec V_K|) d\rho
\end{equation}
for $d$ odd, where $ a$ and $ \vec V_K $ are given in
Eqs.(\ref{partval1}) and (\ref{partval2}). In the framework of statistical mechanics, the knowledge of the partition function allows us to determine all the needed expected values describing the statistics of the chain (i.e., average values of the positions, variances of the positions and so on).
\section{Freely-jointed chain model under external field}
\subsection{Average values of positions}
In the previous Section we obtained the general expression of the partition function for the case where the extensibility of the bonds is taken into account. This is described by the parameter $k$, which characterizes the elastic bond between adjacent monomers. In the present Section we want to study the effects of an arbitrary distribution of forces on a pure freely-jointed chain (FJC). Therefore we need to obtain the specific form of the partition function in the case of rigid bonds of fixed length $l$. From the mathematical point of view this means that we will consider $k \rightarrow \infty$, a condition representing an inextensible spring.
Because of the relation $\sqrt{\frac{\alpha}{\pi}} e^{-\alpha x^2} \rightarrow \delta(x)$ for $\alpha \rightarrow \infty$, we may determine the limit of Eq.(\ref{partition_even}) and Eq.(\ref{partition_odd}) for $a \rightarrow \infty$ (i.e. for $k \rightarrow \infty$, the FJC limit). Owing to the arbitrariness of the constant $c$, we may introduce in Eqs. (\ref{partition_even}) and (\ref{partition_odd}) a multiplicative term $(\sqrt{\frac{a}{\pi}})^N$. Then, by using the shifted relation $\sqrt{\frac{a}{\pi}} e^{-a(\rho-l)^2} \rightarrow \delta(\rho-l)$ for $a \rightarrow \infty$, we perform all the integrals, thereby obtaining
\begin{equation}
\label{partition_even2}
Z_d = c \prod_{K=1}^{N} \frac{1}{|\vec V_K|^{\frac{d-2}{2}}} I_{\frac{d-2}{2}}(l|\vec V_K|) \hspace{1cm} \mbox{$d$ even}
\end{equation}
\begin{equation}
\label{partition_odd2}
Z_d = c \prod_{K=1}^{N} \frac{1}{|\vec V_K|^{\frac{d-3}{2}}} i_{\frac{d-3}{2}}(l|\vec V_K|) \hspace{1cm} \mbox{$d$ odd}
\end{equation}
In particular, for $d = 2$ we have
\begin{equation}
\label{partition_d2}
Z_2 = c \prod_{K=1}^{N} I_0 \left( \frac{l}{k_BT} \left| \vec f + \sum_{i=K}^{N} \vec g_i \right| \right)
\end{equation}
while for $d=3$ we obtain
\begin{equation}
\label{partition_d3}
Z_3 = c \prod_{K=1}^{N} \frac{\sinh \left( \frac{l}{k_BT} \left| \vec f + \sum_{i=K}^{N} \vec g_i \right| \right)}{\frac{l}{k_BT} \left| \vec f + \sum_{i=K}^{N} \vec g_i \right|}
\end{equation}
All the expressions given in Eqs.(\ref{partition_even2}), (\ref{partition_odd2}), (\ref{partition_d2}), (\ref{partition_d3}) can be summarized in the general form
\begin{equation}
\label{partition_general}
Z_d = c \prod_{K=1}^{N} f(|\vec V_K|)
\end{equation}
with a suitable function $f(x)$. By using this expression of the partition function we can find the average position of the $i$-th monomer of the chain; indeed, from the definition of the Hamiltonian in Eq.(\ref{hamiltonian}) we state that $\vec r_i = -\frac{\partial H}{\partial \vec g_i}$ and, therefore, we get
\begin{equation}
\label{shape}
\langle \vec r_i\rangle = k_BT \frac{\partial}{\partial \vec g_i} \ln Z_d
\end{equation}
which represents the shape of the polymer chain under the effects of the external field $\vec g_i$ and the applied force $\vec f$. Now we can substitute Eq.(\ref{partition_general}) into Eq.(\ref{shape}), obtaining
\begin{equation}
\label{shape3}
\langle \vec r_i\rangle = \sum_{K=1}^{i} \frac{\vec V_K}{|\vec V_K|} \left[ \frac{1}{f(x)} \frac{\partial f(x)}{\partial x} \right]_{x=|\vec V_K|}
\end{equation}
In 2D we have $f(x) = I_0(lx)$ and therefore we obtain
\begin{equation}
\label{2Dcampo}
\langle \vec r_i\rangle = l \sum_{K=1}^{i} \frac{I_1 \left( \frac{l}{k_BT} \left| \vec f + \sum_{J=K}^{N} \vec g_J \right| \right)}{I_0 \left( \frac{l}{k_BT} \left| \vec f + \sum_{J=K}^{N} \vec g_J \right| \right)} \frac{\vec f + \sum_{J=K}^{N} \vec g_J}{\left| \vec f + \sum_{J=K}^{N} \vec g_J \right|}
\end{equation}
For such a 2D case, by applying Eq.(\ref{2Dcampo}), the average values of the longitudinal component of the positions have been calculated and are plotted in Fig.\ref{positions_2D} as a function of the chain length $N$ and the field strength $g$. We have considered only the action of an external uniform field with $\vec g_J=\vec g$ and amplitude $g$.
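For reference, Eq.(\ref{2Dcampo}) for a uniform field can be evaluated numerically as follows (a sketch in Python; the function names are ours):
\begin{verbatim}
import numpy as np
from scipy.special import i0, i1

def mean_positions_2d(N, l, g_over_kT):
    # <r_i> along the field for a 2D FJC in a uniform
    # field g, Eq. (2Dcampo): here V_K is parallel to g
    # with magnitude (N - K + 1) g / (kB T)
    V = g_over_kT * (N - np.arange(1, N + 1) + 1)
    steps = l * i1(l * V) / i0(l * V)
    return np.cumsum(steps)  # entry i-1 is <z_i>
\end{verbatim}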
Although this case lends itself to a full analytical solution, numerical simulations were also performed by using a conventional implementation of the Metropolis version of the Monte Carlo algorithm.\cite{binder} The initial state of the chain is defined by a set of randomly chosen positions.
The displacement extent of each step governs the efficiency of the configurational space sampling. Therefore, we analysed several runs in order to optimize its value.\cite{frenkel,allen}
The perfect agreement between the theory and the MC simulations provides a strict check of the numerical procedure, to be used in the foregoing.
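For completeness, a stripped-down version of such a Metropolis scheme for the tethered FJC in a uniform field reads as follows (a sketch; the step amplitude and seed are illustrative):
\begin{verbatim}
import numpy as np

def mc_fjc(N, l, g_vec, beta, steps, delta=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # random initial bond vectors of fixed length l
    xi = rng.normal(size=(N, 3))
    xi *= l / np.linalg.norm(xi, axis=1, keepdims=True)
    energy = lambda b: -np.sum(np.cumsum(b, axis=0) @ g_vec)
    E = energy(xi)
    for _ in range(steps):
        k = rng.integers(N)                 # pick a bond to rotate
        trial = xi[k] + delta * rng.normal(size=3)
        trial *= l / np.linalg.norm(trial)  # keep |xi_k| = l (FJC)
        new = xi.copy(); new[k] = trial
        En = energy(new)
        if En <= E or rng.random() < np.exp(-beta * (En - E)):
            xi, E = new, En
    return np.cumsum(xi, axis=0)            # monomer positions r_1..r_N
\end{verbatim}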
On the other hand, in 3D we have $ f(x) = \frac{\sinh (lx)}{lx} $, leading to
\begin{equation}
\label{3Dcampo}
\langle \vec r_i\rangle = l \sum_{K=1}^{i} \mathcal{L} \left( \frac{l}{k_BT} \left| \vec f + \sum_{J=K}^{N} \vec g_J \right| \right) \frac{\vec f + \sum_{J=K}^{N} \vec g_J}{\left| \vec f + \sum_{J=K}^{N} \vec g_J \right|}
\end{equation}
where $\mathcal{L}(x) = \coth x - \frac{1}{x}$ is the Langevin function. By using Eq.(\ref{3Dcampo}), as before, it is possible to plot the average values of the longitudinal component of the positions for the 3D case (Fig.\ref{positions_3D}).
Also in this case we adopted a uniform field $g$ and the good agreement with the MC simulations is evident.
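The evaluation of Eq.(\ref{3Dcampo}) for a uniform field is analogous (again a sketch with our own names; the Langevin function needs care for small arguments):
\begin{verbatim}
import numpy as np

def langevin(x):
    # L(x) = coth(x) - 1/x; beware of cancellation at small x
    return 1.0 / np.tanh(x) - 1.0 / x

def mean_positions_3d(N, l, g_over_kT):
    # Eq. (3Dcampo) for a uniform field along z
    V = g_over_kT * (N - np.arange(1, N + 1) + 1)
    return np.cumsum(l * langevin(l * V))
\end{verbatim}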
\begin{figure}[ht]
\resizebox{0.7\columnwidth}{!}{\includegraphics{rmeanZ_2D_N.eps}}\\
\resizebox{0.7\columnwidth}{!}{\includegraphics{rmeanZ_2D_Gamp.eps}}
\caption{(color online) Average values of the longitudinal component of the positions induced by the external field for the 2D FJC case. The red solid lines correspond to the analytical results Eqs.(\ref{2Dcampo}) and (\ref{fjc2Dg}), MC results are superimposed in black circles. Top panel: each curve corresponds to different chain lengths $N=10, 20, 30, 40, 50$ for a fixed value ${gl}/(k_B T)=1$ (e.g., corresponding to $l=1$nm, $g=4$pN at $T=293$K). Bottom panel: each curve corresponds to the different values $gl/(k_B T)=0.1, 0.25, 0.5, 1, 2, 10$ for a fixed chain length $N=20$.
}
\label{positions_2D}
\end{figure}
\begin{figure}[ht]
\resizebox{0.7\columnwidth}{!}{\includegraphics{rmeanZ_3D_N.eps}}\\
\resizebox{0.7\columnwidth}{!}{\includegraphics{rmeanZ_3D_Gamp.eps}}
\caption{(color online) Average values of the longitudinal component of the positions induced by the external field for the 3D FJC case. The red solid lines correspond to the analytical results Eqs.(\ref{3Dcampo}) and (\ref{fjc3Dg}), MC results are superimposed in black circles. Top panel: each curve corresponds to different chain lengths $N=10, 20, 30, 40, 50$ for a fixed value ${gl}/(k_B T)=1$. Bottom panel: each curve corresponds to the different values $gl/(k_B T)=0.1, 0.25, 0.5, 1, 2, 10$ for a fixed chain length $N=20$.
}
\label{positions_3D}
\end{figure}
As a particular case, if only the force $\vec{f}$ is applied to the system we obtain the standard scalar force-extension curves linking $r=\vert \langle\vec{r}_N\rangle\vert$ with $f=\vert \vec{f}\vert$. In 2D we have
\begin{equation}
\label{fjc2Df}
\frac{r}{lN}=\frac{I_1\left(\frac{lf}{k_BT} \right) }{I_0\left(\frac{lf}{k_BT} \right)}
\end{equation}
in agreement with recent results,\cite{kierfeld} while in 3D we obtain
\begin{equation}
\label{fjc3Df}
\frac{r}{lN}=\mathcal{L}\left(\frac{lf}{k_BT} \right)
\end{equation}
which is a classical result.\cite{manca,rubinstein} The simple results in Eqs.(\ref{fjc2Df}) and (\ref{fjc3Df}) have been used to obtain the limiting behaviors under low ($f\rightarrow 0$) and high ($f\rightarrow \infty$) values of the applied force, as shown in Table \ref{asym}.
Building on these first results, we now focus on some particularly interesting approximations.
More specifically, it can be interesting to find approximate results for the case of a homogeneous field and no end-force, $\vec{f}=0$ and $\vec g_J=\vec g$ for any $J$. In this case we search for the scalar relation between
$r=\vert \langle\vec{r}_N\rangle\vert$ and $g=\vert \vec{g}\vert$. In the 2D case, from Eq.(\ref{2Dcampo}), we have
\begin{eqnarray}
\nonumber
\frac{r}{lN} &=& \frac{1}{N} \sum_{k=1}^{N} \frac{I_1 \left( \frac{lg}{k_BT} (N-k+1) \right)}{I_0 \left( \frac{lg}{k_BT} (N-k+1) \right)} \\
\nonumber
&\simeq & \frac{1}{N} \int_{0}^{N} \frac{I_1 \left( \frac{lg}{k_BT} (N-x+1) \right)}{I_0 \left( \frac{lg}{k_BT} (N-x+1) \right)}dx \\
&= & \frac{1}{N}\frac{1}{\frac{lg}{k_BT}} \log \frac{I_0 \left( \frac{lg}{k_BT} (N+1) \right)}{I_0 \left( \frac{lg}{k_BT} \right)}
\label{fjc2Dg}
\end{eqnarray}
On the other hand, for the 3D case we obtain
\begin{eqnarray}
\nonumber
\frac{r}{lN} &=& \frac{1}{N} \sum_{k=1}^{N} \mathcal{L} \left( \frac{lg}{k_BT}(N-k+1) \right) \\
\nonumber
&\simeq & \frac{1}{N} \int_{0}^{N}\mathcal{L} \left( \frac{lg}{k_BT}(N-x+1) \right)dx\\
&= &\frac{1}{N}\dfrac{1}{\frac{lg}{k_BT}}\log \dfrac{\mbox{e}^{ 2\frac{lg}{k_BT} (N+1)}-1}{(N+1) \left(\mbox{e}^{2 \frac{lg}{k_BT} }-1\right)} -1
\label{fjc3Dg}
\end{eqnarray}
We have usefully exploited the fact that, for large $N$, the sums can be approximately replaced by the corresponding integrals, which are easier to handle. The closed-form expressions given in Eqs.(\ref{fjc2Dg}) and (\ref{fjc3Dg}) are very useful to obtain the limiting behaviors of the polymer under low ($g\rightarrow 0$) and high ($g\rightarrow \infty$) values of the applied field, as shown in Table \ref{asym}. Moreover, we have verified the validity of Eqs.(\ref{fjc2Dg}) and (\ref{fjc3Dg}) through a series of comparisons with MC results (see Fig.\ref{forcextension_FJC} in the next Section for details).
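Both closed forms can be coded directly (a sketch; for large arguments the Bessel functions and exponentials should be handled in log-space to avoid overflow):
\begin{verbatim}
import numpy as np
from scipy.special import i0

def r_over_lN_fjc2d(N, x):
    # Eq. (fjc2Dg); x = l g/(kB T)
    return np.log(i0(x*(N + 1))/i0(x))/(N*x)

def r_over_lN_fjc3d(N, x):
    # Eq. (fjc3Dg)
    num = np.expm1(2*x*(N + 1))
    den = (N + 1)*np.expm1(2*x)
    return np.log(num/den)/(N*x) - 1.0
\end{verbatim}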
\subsection{Covariances and variances of positions}
In this Section we search for the covariance among the positions of the monomers. It is important to evaluate such a quantity in order to estimate the variance of a given position (measuring the width of the probability density around its average value) and the correlation among different monomer positions (measuring the persistence of some geometrical features along the chain). To this end, we denote the $\alpha$-th component of the position of the $i$-th monomer by $r_{i \alpha}$. The covariance between generic monomer coordinates (a second-order expected value) is defined as:
\begin{eqnarray}
\mbox{Cov}(r_{i \alpha}, r_{J \beta}) &=& \langle (r_{i \alpha} - \langle r_{i \alpha}\rangle )(r_{J \beta} - \langle r_{J \beta}\rangle ) \rangle \\ \nonumber
&=& \langle r_{i \alpha} r_{J \beta}\rangle - \langle r_{i \alpha}\rangle \langle r_{J \beta}\rangle
\end{eqnarray}
Taking the derivatives of the partition function with respect to the $\alpha$ and $\beta$ components of the force vectors $\vec g_i$ and $\vec g_J$, we can solve the problem as follows. Starting from the standard expression for the partition function, we obtain
\begin{eqnarray}
\label{correlation_term}
\langle r_{i \alpha} r_{J \beta}\rangle = (k_BT)^2 \left( \frac{\partial \ln Z_d}{\partial g_{i \alpha}} \frac{\partial \ln Z_d}{\partial g_{J \beta}} + \frac{\partial^2 \ln Z_d}{\partial g_{i \alpha} \partial g_{J \beta} } \right)
\end{eqnarray}
or, equivalently, by introducing Eq.(\ref{shape})
\begin{eqnarray}
\langle r_{i \alpha} r_{J \beta}\rangle = \langle r_{i \alpha}\rangle \langle r_{J \beta}\rangle + k_BT \frac{\partial}{\partial g_{J \beta}} \langle r_{i \alpha} \rangle\,\,\,\,
\end{eqnarray}
but we can simply determine that
\begin{eqnarray}
\frac{\partial}{\partial g_{J \beta}} \langle r_{i \alpha}\rangle = \frac{\partial}{\partial g_{J \beta}} \sum_{K=1}^{i} \frac{\vec V_K \cdot \vec e_\alpha}{|\vec V_K|} \left[ \frac{1}{f(x)} \frac{\partial f(x)}{\partial x} \right]_{x=|\vec V_K|}
\end{eqnarray}
where we have defined the unit vector $\vec e_{\alpha}$ as the basis of the orthonormal reference frame. Being
\begin{eqnarray}
\vec V_K \cdot \vec e_\alpha = \frac{1}{k_BT} \left( f_\alpha + \sum_{i=K}^{N} g_{i \alpha} \right)
\end{eqnarray}
and
\begin{eqnarray} \frac{\partial |\vec V_K|}{\partial g_{J \beta}} = \frac{1}{k_BT} \frac{\vec V_K \cdot \vec e_\beta}{|\vec V_K|} \sum_{q=K}^{N} \delta_{Jq}
\end{eqnarray}
after long but straightforward calculations we obtain
\begin{eqnarray}
&& k_BT \frac{\partial}{\partial g_{J \beta}} \langle r_{i \alpha}\rangle = \sum_{K=1}^{\mbox{min}\{i,J\}} \frac{1}{|\vec V_K|f(|\vec V_K|)} \\
\nonumber
&&\times \left\{ \delta_{\alpha \beta} f'(|\vec V_K|) + f''(|\vec V_K|) \frac{V_{K\alpha} V_{K\beta}}{|\vec V_K|} \right. \\ \nonumber
&&- \left. V_{K\alpha} f'(|\vec V_K|) \frac{V_{K\beta}}{|\vec V_K|^2} - V_{K\alpha}
\frac{ f'(|\vec V_K|)^2}{f(|\vec V_K|)} \frac{V_{K\beta}}{|\vec V_K|} \right\}
\end{eqnarray}
Ordering the terms we finally obtain the important result
\begin{eqnarray}
\mbox{Cov}(r_{i \alpha}, r_{J \beta})
&=& \sum_{K=1}^{\mbox{min}\{i,J\}} \frac{\delta_{\alpha \beta}}{|\vec V_K|} \frac{f'(|\vec V_K|)}{f(|\vec V_K|)} \\
\nonumber &+& \sum_{K=1}^{\mbox{min}\{i,J\}} \frac{V_{K\alpha} V_{K\beta} }{|\vec V_K|^2 f(|\vec V_K|)} \\ \nonumber
&\times& \left \{ f''(|\vec V_K|) - \frac{f'(|\vec V_K|)}{|\vec V_K|} - \frac{ f'(|\vec V_K|)^2}{f(|\vec V_K|)} \right\}
\end{eqnarray}
It represents the final form of the covariance between two different components of the positions of two different monomers.
If we look at the variance of a single component of a single position ($i=J$, $\alpha = \beta$) we have the simpler result
\begin{eqnarray}
\label{variance}
\sigma^{2}_{i \alpha}
&=& \sum_{K=1}^{i} \frac{f'(|\vec V_K|)}{|\vec V_K|f(|\vec V_K|)} + \sum_{K=1}^{i} \frac{V_{K\alpha}^2 }{|\vec V_K|^2 f(|\vec V_K|)} \\ \nonumber
&\times& \left \{ f''(|\vec V_K|) - \frac{f'(|\vec V_K|)}{|\vec V_K|} - \frac{ f'(|\vec V_K|)^2}{f(|\vec V_K|)} \right\}
\end{eqnarray}
In order to use the previous expressions we have to specify the function $f$ and its derivatives for the two-dimensional and the three-dimensional case.
In the 2D case we have $ f(x) = I_0(lx)$, $ f'(x) = l I_1(lx) $ and $ f''(x) = \frac{l^2}{2} [I_0(lx) + I_2(lx)] $. On the other hand, for the 3D case we have
$ f(x) = \frac{\sinh(lx)}{lx} $, $ f'(x)/f(x) = l \mathcal{L}(lx)$ and $ f''(x)/f(x) = l^2 -2l \mathcal{L}(lx) /x$.
This completes the determination of the covariance.
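For a uniform field along $z$ (so that $\vec V_K$ is purely longitudinal), Eq.(\ref{variance}) can be evaluated as in the following sketch (names are ours):
\begin{verbatim}
import numpy as np

def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x

def variances_3d(N, l, g_over_kT):
    V = g_over_kT * (N - np.arange(1, N + 1) + 1)
    L = langevin(l * V)
    fp_f = l * L                     # f'/f
    fpp_f = l**2 - 2.0 * l * L / V   # f''/f
    # longitudinal (V_Kz = |V_K|): the 1/|V_K| terms cancel
    var_long = np.cumsum(fpp_f - fp_f**2)
    # transversal (V_Kx = V_Ky = 0): only the first sum survives
    var_trans = np.cumsum(fp_f / V)
    return var_long, var_trans
\end{verbatim}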
\begin{figure}[ht]
\hspace{-0.7cm}\resizebox{0.7\columnwidth}{!}{\includegraphics{varZ_3D_N.eps}}\\
\resizebox{0.7\columnwidth}{!}{\includegraphics{varXY_3D_N.eps} }
\caption{(color online) Longitudinal (top panel) and transversal (bottom panel) component of the variance of positions for the 3D FJC case. The red solid lines correspond to the analytical result Eq.(\ref{variance}), MC results are superimposed in black circles. Each curve corresponds to different chain lengths $N=10, 20, 30, 40, 50$ for a fixed value of the external field defined by ${gl}/(k_B T)=1$.
}
\label{variances_3D_N}
\end{figure}
\begin{figure}[ht]
\resizebox{0.7\columnwidth}{!}{\includegraphics{varZ_3D_Gamp.eps}}\\
\resizebox{0.7\columnwidth}{!}{\includegraphics{varXY_3D_Gamp.eps}}
\caption{(color online) Longitudinal (top panel) and transversal (bottom panel) component of the variance of positions for the 3D FJC case. The red solid lines correspond to the analytical result Eq.(\ref{variance}), MC results are superimposed in black circles. Each curve corresponds to different values of the external field amplitude defined by $gl/(k_B T)=0.1, 0.25, 0.5, 1, 2, 10$ for a fixed chain length $N=20$.
}
\label{variances_3D_Gamp}
\end{figure}
We report in Fig.\ref{variances_3D_N} and Fig.\ref{variances_3D_Gamp} the longitudinal and transversal components of the variance as functions of the chain length and the field strength for the 3D case (with $f=0$). The 2D case is very similar and is not reported here for the sake of brevity. We can observe some interesting trends: the longitudinal variance of the positions is a decreasing function of the chain length $N$, while the transversal one is an increasing function (at fixed amplitude of the external field $g$). Moreover, both variances increase rapidly along the chain, assuming the largest value at the last free monomer, which is subject to the strongest fluctuations. It is interesting to observe that the variance (both longitudinal and transversal components) is a linear function of the position $i$ along the chain (it intensifies linearly along the chain itself) when a simple force $f$ is applied at the free end; conversely, with a uniform field $g$, the distribution of forces generates a strongly non-linear intensification of the variances moving towards the free end-terminal. So, from the point of view of the variances, the application of a field or of a single force generates completely different responses. In Fig.\ref{variances_3D_Gamp} we can also observe that the variances are decreasing functions of the strength of the field (both for the longitudinal and transversal components); in fact, a more intense field tends to reduce the fluctuations of the chain, increasing, at the same time, the tension within the bonds.
\section{Worm-like chain model under external field}
In previous Sections we treated systems described by the FJC model, characterized by the complete flexibility of the chain and, therefore, by the absence of any bending contribution to the total energy. Nevertheless, in many polymer chains, especially of biological origin, the specific flexibility (described by the so-called persistence length\cite{kamien2}) has a relevant role in several bio-mechanical processes.
In order to take into consideration these important features, with relevant applications to bio-molecules and bio-structures, in this Section we introduce the semi-flexible polymer chain characterized by a given bending energy added to the previous Hamiltonian
\begin{eqnarray}
\label{hwlc}
H &=& \sum_{i=1}^{N} \frac{\vec p_i \cdot \vec p_i}{2m} +\frac{1}{2}k\sum_{K=1}^{N}\left( \Vert \vec{r}_K-\vec{r}_{K-1}\Vert-l\right)^{2} \\
\nonumber
&&+\frac{1}{2}\kappa\sum_{i=1}^{N-1}\left(\vec{t}_{i+1}-\vec{t}_{i} \right)^{2} - \sum_{K=1}^{N} \vec g_K \cdot \vec r_K - \vec f \cdot \vec r_N
\end{eqnarray}
where $\kappa$ is the bending stiffness, $ k $ is the stretching modulus and $ \vec{t}_{i}=(\vec{r}_{i+1}-\vec{r}_{i})/\Vert \vec{r}_{i+1}-\vec{r}_{i}\Vert $ is the unit vector collinear with the $i$-th bond (see Ref.\onlinecite{manca} for details). In particular we take into consideration the classical WLC model, describing an inextensible semi-flexible chain: the spring constant $ k $ is set to a very large value (ideally $k\rightarrow\infty$) so that the bond lengths remain fixed at the value $l$. It is well known that the partition function cannot be calculated in closed form for WLC polymers. Nevertheless, some standard approximations exist for such cases, leading to simple expressions for the force-extension curves when a single force $f$ is applied to one end of the chain. In the following, starting from these results, we search for the force-extension curves when the polymer is stretched by a constant field $g$.
We start with the result for the 2D-WLC with an applied force $f$: the approximate force-extension curve is given by \cite{woo}
\begin{eqnarray}
\label{markosiggia2D}
\frac{fl}{k_B T}=\frac{l}{L_{p}}\left[ \frac{1}{16(1-\zeta)^{2}}-\frac{1}{16}+\frac{7}{8}\zeta\right]
\end{eqnarray}
where $\zeta=r/(lN)$ is the dimensionless elongation and $L_p=l\kappa/(k_BT)$ is the persistence length. We suppose that such a constitutive equation is invertible through the function $\mathcal{F}$, leading to the expression $\zeta=r/(lN)=\mathcal{F}({fl}/({k_B T}))$. When $\vec{f}=0$ and $\vec g_J=\vec g$ for any $J$, we search for the 2D scalar relation between
$r$ and $g=\vert \vec{g}\vert$. As discussed in a previous section (see Eqs.(\ref{fjc2Dg}) and (\ref{fjc3Dg})), we can write
\begin{eqnarray}
\nonumber
\frac{r}{lN} &=& \frac{1}{N} \sum_{k=1}^{N} \mathcal{F}\left( \frac{lg}{k_BT}(N-k+1) \right) \\
\nonumber
& \simeq & \frac{1}{N} \int_{0}^{N} \mathcal{F}\left( \frac{lg}{k_BT}(N-x+1) \right) dx\\
&= & \frac{1}{N}\frac{1}{\frac{lg}{k_BT} } \int_{\frac{lg}{k_BT}}^{\frac{lg}{k_BT}(N+1)} \mathcal{F}\left( y \right)dy
\end{eqnarray}
where we have defined the change of variable $y=\frac{lg}{k_BT}(N-x+1)$. We adopt now a second change of variable through the relation $z=\mathcal{F}(y)$ or $y=\mathcal{F}^{-1}(z)$; it leads to
\begin{eqnarray}
\nonumber
\frac{r}{lN} &=& \frac{1}{N}\frac{1}{\frac{lg}{k_BT} } \int_{\mathcal{F}\left(\frac{lg}{k_BT}\right)}^{\mathcal{F}\left(\frac{lg}{k_BT}(N+1)\right)} z\frac{d\mathcal{F}^{-1}\left( z \right)}{dz}dz\\
\label{wlc2Dg}
&=&\frac{1}{N}\frac{1}{\frac{lg}{k_BT} }\frac{l}{L_{p}}\\
\nonumber
&\times &\left[\frac{7}{16}z^{2}-\frac{1}{8(1-z)}+\frac{1}{16(1-z)^{2}} \right]_{\mathcal{F}\left(\frac{lg}{k_BT}\right)}^{\mathcal{F}\left(\frac{lg}{k_BT}(N+1)\right)}
\end{eqnarray}
where we used the notation $[h(z)]_a^b=h(b)-h(a)$.
This result represents (although in implicit form) the approximated force-extension curve for the 2D-WLC under external fields. To evaluate Eq.(\ref{wlc2Dg}) we need to know the inverse function $\mathcal{F}(\cdot)$, a task that can be performed numerically.
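This numerical inversion can be carried out with a bracketing root finder, as in the following sketch (function names are ours):
\begin{verbatim}
from scipy.optimize import brentq

def zeta_2d(x, l_over_Lp):
    # invert Eq. (markosiggia2D): find zeta with f l/(kB T) = x
    rhs = lambda z: l_over_Lp*(1/(16*(1-z)**2) - 1/16 + 7*z/8) - x
    return brentq(rhs, 0.0, 1.0 - 1e-12)

def r_over_lN_wlc2d(N, xg, l_over_Lp):
    # Eq. (wlc2Dg); xg = l g/(kB T)
    h = lambda z: 7*z**2/16 - 1/(8*(1-z)) + 1/(16*(1-z)**2)
    za = zeta_2d(xg, l_over_Lp)
    zb = zeta_2d(xg*(N + 1), l_over_Lp)
    return l_over_Lp*(h(zb) - h(za))/(N*xg)
\end{verbatim}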
\begin{figure}[ht]
\resizebox{0.85\columnwidth}{!}{\includegraphics{A5stress_strainFJC2D.eps}}\\
\resizebox{0.85\columnwidth}{!}{\includegraphics{A5stress_strainFJC3D.eps}}
\caption{(color online) Force-extension curves of a FJC polymer in an external field (or external force) with $N=20$. The red line corresponds to the approximated expressions given in Eqs.(\ref{fjc2Dg}) and (\ref{fjc3Dg}) while the black circles have been obtained through MC simulations. The 2D (Eq.(\ref{fjc2Df})) and 3D (Eq.(\ref{fjc3Df})) FJC expressions (without an external field) are plotted for comparison with $f=g$ and $f=Ng$.
}
\label{forcextension_FJC}
\end{figure}
\begin{figure}[ht]
\resizebox{0.85\columnwidth}{!}{\includegraphics{A5stress_strainWLC2D.eps}}\\
\resizebox{0.85\columnwidth}{!}{\includegraphics{A5stress_strainWLC3D.eps}}
\caption{(color online) Force-extension curves of a WLC polymer in an external field (or external force) with $N=20$. The red line corresponds to the approximated expressions given in Eqs.(\ref{wlc2Dg}) and (\ref{wlc3Dg}) while the black circles have been obtained through MC simulations. The 2D (Eq.(\ref{markosiggia2D})) and 3D (Eq.(\ref{markosiggia3D})) WLC expressions (without an external field) are plotted for comparison with $f=g$ and $f=Ng$. The value of the bending spring constant is $\kappa = 0.4 \cdot 10^{-19} $ Nm $\simeq\,10 k_BT$ at $T=293$K. }
\label{forcextension_WLC}
\end{figure}
Similarly, we may consider the standard 3D-WLC model with an applied force $f$; the classical Marko-Siggia result\cite{marko} is
\begin{eqnarray}
\label{markosiggia3D}
\frac{fl}{k_B T}=\frac{l}{L_{p}}\left[ \frac{1}{4(1-\zeta)^{2}}-\frac{1}{4}+\zeta\right]
\end{eqnarray}
where, as before, $\zeta=r/(lN)$ is the dimensionless elongation and $L_p=l\kappa/(k_BT)$ is the persistence length. We suppose again that such a constitutive equation is invertible through the function $\mathcal{G}$, leading to the expression $\zeta=r/(lN)=\mathcal{G}({fl}/({k_B T}))$. When $\vec{f}=0$ and $\vec g_J=\vec g$ for any $J$, we search for the 3D scalar relation between
$r$ and $g=\vert \vec{g}\vert$. By repeating the previous procedure, we can write
\begin{eqnarray}
\nonumber
\frac{r}{lN} &=& \frac{1}{N}\frac{1}{\frac{lg}{k_BT} } \int_{\mathcal{G}\left(\frac{lg}{k_BT}\right)}^{\mathcal{G}\left(\frac{lg}{k_BT}(N+1)\right)} z\frac{d\mathcal{G}^{-1}\left( z \right)}{dz}dz\\
\label{wlc3Dg}
&=&\frac{1}{N}\frac{1}{\frac{lg}{k_BT} }\frac{l}{L_{p}}\\
\nonumber
&\times &\left[\frac{1}{2}z^{2}-\frac{1}{2(1-z)}+\frac{1}{4(1-z)^{2}} \right]_{\mathcal{G}\left(\frac{lg}{k_BT}\right)}^{\mathcal{G}\left(\frac{lg}{k_BT}(N+1)\right)}
\end{eqnarray}
which represents the implicit form of the approximated force-extension curve for the 3D-WLC under external fields.
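This inversion too can be carried out numerically, in full analogy with the 2D case (again a sketch with our own names):
\begin{verbatim}
from scipy.optimize import brentq

def zeta_3d(x, l_over_Lp):
    # invert the Marko-Siggia relation, Eq. (markosiggia3D)
    rhs = lambda z: l_over_Lp*(1/(4*(1-z)**2) - 0.25 + z) - x
    return brentq(rhs, 0.0, 1.0 - 1e-12)

def r_over_lN_wlc3d(N, xg, l_over_Lp):
    # Eq. (wlc3Dg); xg = l g/(kB T)
    h = lambda z: z**2/2 - 1/(2*(1-z)) + 1/(4*(1-z)**2)
    return l_over_Lp*(h(zeta_3d(xg*(N+1), l_over_Lp))
                      - h(zeta_3d(xg, l_over_Lp)))/(N*xg)
\end{verbatim}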
\begin{table}
\caption{Asymptotic forms of the force-extension curves for all cases described in the paper: FJC and WLC models in 2D and 3D geometry with force applied $f$ or field applied $g$.\label{asym}}
\begin{ruledtabular}
\begin{tabular}{l c c}
& Asymptotic form & Asymptotic form \\
$\underbrace{\mbox{Polymer chain}}_{Equation}$ & of $ \frac{r}{lN}$ for $f,g\rightarrow 0$ & of $ \frac{r}{lN}$ for $f,g\rightarrow \infty$ \\
& $\left(x=\frac{lf}{k_BT}\mbox{ or }\frac{lg}{k_BT} \right) $ & $\left(x=\frac{lf}{k_BT}\mbox{ or }\frac{lg}{k_BT} \right) $ \\
\hline
\\
$\underbrace{\mbox{FJC (2D) }f}_{Eq.(\ref{fjc2Df})}$ & $\dfrac{1}{2}x$ & $1-\dfrac{1}{2x}$ \\ \\
$\underbrace{\mbox{FJC (3D) }f}_{Eq.(\ref{fjc3Df})}$ & $\dfrac{1}{3}x$ & $1-\dfrac{1}{x}$ \\ \\
$\underbrace{\mbox{FJC (2D) }g}_{Eq.(\ref{fjc2Dg})}$ & $\dfrac{1}{2}\left(1+\dfrac{N}{2} \right) x$ & $1-\dfrac{\log(N+1)}{2N}\dfrac{1}{x}$ \\ \\
$\underbrace{\mbox{FJC (3D) }g}_{Eq.(\ref{fjc3Dg})}$ & $\dfrac{1}{3}\left(1+\dfrac{N}{2} \right) x$ & $1-\dfrac{\log(N+1)}{N}\dfrac{1}{x}$ \\ \\
$\underbrace{\mbox{WLC (2D) }f}_{Eq.(\ref{markosiggia2D})}$ & $\dfrac{L_p}{l}x$ & $1-\dfrac{1}{4}\dfrac{1}{\sqrt{\dfrac{L_p}{l}x}}$ \\ \\
$\underbrace{\mbox{WLC (3D) }f}_{Eq.(\ref{markosiggia3D})}$& $\dfrac{2}{3}\dfrac{L_p}{l}x$ & $1-\dfrac{1}{2}\dfrac{1}{\sqrt{\dfrac{L_p}{l}x}}$ \\ \\
$\underbrace{\mbox{WLC (2D) }g}_{Eq.(\ref{wlc2Dg})}$& $\dfrac{L_p}{l}\left(1+\dfrac{N}{2} \right)x$ & $1-\dfrac{1}{\sqrt{\dfrac{L_p}{l}x}}\dfrac{\sqrt{N+1}-1}{2N}$ \\ \\
$\underbrace{\mbox{WLC (3D) }g}_{Eq.(\ref{wlc3Dg})}$& $\dfrac{2}{3}\dfrac{L_p}{l}\left(1+\dfrac{N}{2} \right)x$ & $1-\dfrac{1}{\sqrt{\dfrac{L_p}{l}x}}\dfrac{\sqrt{N+1}-1}{N}$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
It is interesting to compare the very different force-extension curves for a single molecule in the two cases of a uniform (only $f$ applied) and non-uniform (only $g$ applied) stretch. In particular, taking advantage of our approximate formulas, we can analyse the case of a FJC and of a WLC polymer. The 2D and 3D FJC results are plotted in Fig.\ref{forcextension_FJC}; the 2D and 3D WLC curves are shown in Fig.\ref{forcextension_WLC}.
For the WLC case we assumed $\kappa=10 k_B T$ for the bending modulus at $T=293$K. This value is comparable to that of polymer chains of biological interest (e.g., for DNA $\kappa=15 k_B T$).\cite{marko}
In each case three curves are reported to allow all the possible comparisons: the response under the field $g$, the response under the force $f=g$ and, finally, the response to an external force $f=Ng$.
Interestingly, we note that the curve corresponding to the field $g$ always lies between the cases with only the force $f=g$ and $f=Ng$. The response with the field $g$ is clearly larger than that with the single force $f=g$, since the field corresponds to a distribution of $N$ forces (each of intensity $g$) applied to all monomers; therefore, the total force applied is larger, generating a more intense effect.
However, the case with a single force $f=Ng$ shows a response larger than that of the field $g$.
Here the total force applied in the two cases is the same, but the single force $Ng$ is applied entirely to the last terminal monomer, generating an overall stronger effect compared to the same force evenly distributed over the monomers. In fact, a force generates a stronger effect if it is placed in the region near the free polymer end (its effect is redistributed also to all preceding bonds).
The curves in Fig.\ref{forcextension_FJC} and Fig.\ref{forcextension_WLC} have been obtained with the theoretical formulations presented in this Section and confirmed by a series of MC simulations. In all cases we obtained nearly perfect agreement between the two approaches.
The knowledge of the closed-form expressions allowed us to analytically analyze the behavior of the chains for very low and very high applied forces (or fields). The results are shown in Table \ref{asym}: interestingly, we note that the extension is always a linear function of the small applied perturbation.
Nevertheless, the corresponding constant of proportionality depends on $N$ only when a field is applied to the chain; conversely, it is independent of $N$ with a single force applied at one end. On the other hand, with a large perturbation applied to the molecule, we observe a $1/x$ behavior for the FJC models and a $1/\sqrt{x}$ behavior for the WLC models.
To conclude we also remark that the order of the curves observed in Fig.\ref{forcextension_FJC} and Fig.\ref{forcextension_WLC} is confirmed also in the low and high force (or field) regime by the following inequalities: $1<1+N/2<N$ (low force regime) and $1<\log(N+1)<N$ (high force regime) for the FJC model and $1<1+N/2<N$ (low force regime) and $\sqrt{N}<2(\sqrt{N+1}-1)<N$ (high force regime) for the WLC model (always for $N\geq 2$).
\section{Action of a pulling force not aligned with the external field}
\begin{figure}
\hspace{1cm}\resizebox{0.7\columnwidth}{!}{\includegraphics{A2positions_3D_N.eps}}
\resizebox{0.7\columnwidth}{!}{\includegraphics{A2varX_3D_K.eps}}
\resizebox{0.7\columnwidth}{!}{\includegraphics{A2varY_3D_K.eps}}
\resizebox{0.7\columnwidth}{!}{\includegraphics{A2varZ_3D_K.eps}}
\caption{(color online) Action of a pulling force $f$ (along the $y$-axis) perpendicular to the applied field $g$ (along the $z$-axis). We adopted different values of the bending spring constant: $\kappa = 0.08, 0.6, 2, 8 \cdot 10^{-19} $ Nm. The chain length is fixed $(N=20)$, the external field amplitude is $g=4$ pN and the force applied to the last monomer of the chain corresponds to $f=8$ pN. The red solid lines correspond to the analytical results for the FJC case (see Eqs.(\ref{3Dcampo}) and (\ref{variance})). Black circles correspond to the MC simulations with the different bending spring constants. In the top panel we reported the average positions, while in the others the three variances of the $x$, $y$ and $z$ components.}
\label{tang}
\end{figure}
\begin{figure}
\resizebox{0.7\columnwidth}{!}{\includegraphics{A3WLCpositions_3D_Fdir.eps}}
\caption{(color online) Average positions of the chain for different angles between the external traction force $f$ and the direction of the applied field $g$. We adopted $N=20$, $g=4$ pN and $f=60$ pN. The red solid lines correspond to the FJC analytical result, Eq.(\ref{3Dcampo}). The symbols represent the MC results for the WLC model with $\kappa = 0.08, 0.6, 2 \cdot 10^{-19} $ Nm (circles, triangles and squares, respectively). For both FJC and WLC models we used different values of the angle between the applied field and the traction force, $\theta= \pi/2, 3\pi/4, 5\pi/6, 15\pi/16$ from right to left. }
\label{pos_3D_bending}
\end{figure}
\begin{figure*}
\resizebox{0.67\columnwidth}{!}{\includegraphics{A3WLCsurfAngleVarX_K0.eps}}
\resizebox{0.67\columnwidth}{!}{\includegraphics{A3WLCsurfAngleVarY_K0.eps}}
\resizebox{0.67\columnwidth}{!}{\includegraphics{A3WLCsurfAngleVarZ_K0.eps}}
\caption{(color online) Monomer variances versus the position along the chain ($i$) and the angle between force and field ($0<\theta<\pi$) for the FJC model. As before we used $N=20$, $g=4$ pN and $f=60$ pN. }
\label{var-fjc}
\end{figure*}
\begin{figure*}
\resizebox{0.67\columnwidth}{!}{\includegraphics{A3WLCsurfAngleVarX_K15.eps}}
\resizebox{0.67\columnwidth}{!}{\includegraphics{A3WLCsurfAngleVarY_K15.eps}}
\resizebox{0.67\columnwidth}{!}{\includegraphics{A3WLCsurfAngleVarZ_K15.eps}}
\caption{(color online) Monomer variances versus the position along the chain ($i$) and the angle between force and field ($0<\theta<\pi$) for the WLC model. As before we used $N=20$, $g=4$ pN and $f=60$ pN. We also adopted a bending stiffness $\kappa = 0.6 \cdot 10^{-19} $ Nm. }
\label{var-wlc}
\end{figure*}
In the previous Sections we considered the polymer chain immersed in an external field with zero external force at its end. However, since we developed a form of the partition function that also takes into account an external force applied at the end of the chain (at least for the FJC model), we can directly study the important case of a non-zero force superimposed on an external field, in general with a different orientation. To do this, we keep the origin of the chain fixed and apply a constant force at the end of the polymer at different angles with respect to the direction of the applied field. We will analyse this problem for both the FJC and WLC cases.
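For the FJC part, this amounts to evaluating Eq.(\ref{3Dcampo}) with a tilted end force; a sketch follows (our conventions: field along $z$, force in the $y$-$z$ plane at angle $\theta$ from the field direction):
\begin{verbatim}
import numpy as np

def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x

def mean_positions_tilted(N, l, g, f, theta, kT):
    # Eq. (3Dcampo); assumes a non-vanishing total
    # force on every bond so that |V_K| > 0
    g_vec = np.array([0.0, 0.0, g])
    f_vec = f*np.array([0.0, np.sin(theta), np.cos(theta)])
    steps = []
    for K in range(1, N + 1):
        V = (f_vec + (N - K + 1)*g_vec)/kT
        Vn = np.linalg.norm(V)
        steps.append(l*langevin(l*Vn)*V/Vn)
    return np.cumsum(steps, axis=0)  # row i-1 holds <r_i>
\end{verbatim}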
To begin, we consider a pulling force perpendicular to the direction of the applied field, along the $y$ and $z$ axes of our reference frame, respectively. We consider increasing values of the bending spring constant $\kappa$, going from nearly zero (FJC model) to $8 \cdot 10^{-19} $Nm (WLC model, including the bending constant of DNA, given by $\kappa=0.6 \cdot 10^{-19} $ Nm $\simeq\,15 k_B T$). In Fig.\ref{tang} we report the results for the average monomer positions and their variances. The red solid lines correspond to the analytical results for the FJC case, while the black symbols correspond to the MC simulations. It is interesting to observe the effect of the persistence length (or, equivalently, of the bending stiffness): in the top panel of Fig.\ref{tang} we note that chains with a higher bending spring constant tend to remain straighter under the same applied load. At the same time, in the fourth panel of Fig.\ref{tang} we observe a decreasing variance along the $z$-axis (the direction of the applied field) with increasing bending spring constant; this can be easily interpreted by observing that a higher rigidity of the chain reduces the statistical fluctuations in the direction of the applied field. The situation is more complicated for the variances along the $x$ and $y$ directions: along the chain, there are some monomers with variances larger than in the corresponding FJC case and others with smaller values.
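For concreteness, we show below a minimal sketch of the kind of Metropolis Monte Carlo scheme adopted for these simulations. It is not the production code used in this work: the bond length, the move amplitude, and the sampling schedule are illustrative assumptions. The field is taken to act on every monomer along $z$, the traction force acts on the last monomer along $y$, and the discrete bending energy is taken as $\kappa\sum_i(1-\cos\theta_i)$, so that $\kappa=0$ recovers the FJC.
\begin{verbatim}
# Minimal Metropolis sketch: tethered discrete chain in a uniform field g
# (z-axis) with a pulling force f (y-axis) on the last monomer.
import numpy as np

rng = np.random.default_rng(0)
kT = 4.1e-21        # thermal energy at room temperature [J]
b = 2.5e-9          # bond length [m] (assumed value)
N = 20              # number of monomers
g = 4e-12           # field amplitude per monomer [N]
f = 8e-12           # traction force on the last monomer [N]
kappa = 0.6e-19     # bending stiffness [J]; kappa = 0 gives the FJC

def energy(t):
    """Energy of a configuration given the unit bond vectors t[i]."""
    r = b * np.cumsum(t, axis=0)          # monomer positions
    e_field = -g * r[:, 2].sum()          # field acts on every monomer
    e_force = -f * r[-1, 1]               # force acts on the last monomer
    e_bend = kappa * np.sum(1.0 - np.einsum('ij,ij->i', t[:-1], t[1:]))
    return e_field + e_force + e_bend

def rotate(v, max_angle=0.5):
    """Rotate v by a small random angle about a random axis (Rodrigues)."""
    k = rng.normal(size=3)
    k /= np.linalg.norm(k)
    a = rng.uniform(-max_angle, max_angle)
    w = (v * np.cos(a) + np.cross(k, v) * np.sin(a)
         + k * np.dot(k, v) * (1.0 - np.cos(a)))
    return w / np.linalg.norm(w)          # guard against numerical drift

t = np.tile([0.0, 0.0, 1.0], (N, 1))      # start aligned with the field
E, samples = energy(t), []
for step in range(200000):
    i = rng.integers(N)
    trial = t.copy()
    trial[i] = rotate(t[i])
    E_trial = energy(trial)
    dE = E_trial - E
    if dE <= 0 or rng.random() < np.exp(-dE / kT):   # Metropolis rule
        t, E = trial, E_trial
    if step > 50000 and step % 100 == 0:             # sample after burn-in
        samples.append(b * np.cumsum(t, axis=0))

samples = np.array(samples)
print("mean last-monomer position [nm]:", samples[:, -1].mean(axis=0) / 1e-9)
print("last-monomer variances [nm^2]:", samples[:, -1].var(axis=0) / 1e-18)
\end{verbatim}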
In Fig.\ref{pos_3D_bending} the average positions of the monomers for different directions of the external force are reported. The figure shows how the average monomer positions depend on the bending rigidity $\kappa$ and on the external force angle $\theta$.
As before, we observe that the persistence length of the chain tends to keep the curvature of the chain shape low. This phenomenon is more evident for an increasing angle between the force and the field.
In fact, in Fig.\ref{pos_3D_bending}, the deviation between the FJC results and the WLC ones is larger for angles approaching $\pi$, where the force and the field are applied in opposite directions.
In Figs.\ref{var-fjc} and \ref{var-wlc} the three components of the variance are reported versus the position of the monomer along the chain and the angle between the field and force directions, for the FJC and WLC cases, respectively. We can extract some general rules from this very complex scenario:
the variance along the $x$ direction is an increasing function both of the position $i$ along the chain and of the angle $\theta$ between $f$ and $g$. Both behaviors can be interpreted through the concept of persistence length, as discussed above.
Conversely, the description of the variance along the $y$ direction is more complicated. While the increasing trend of the variance with the position $i$ along the chain is maintained, we observe a non-monotonic behavior in terms of the angle $\theta$, with a minimum of the variance at about $\theta=2\pi/3$. Finally, the variance along the $z$ direction always increases along the chain, but it shows a maximum near $\theta=\pi$ (at least in the first part of the polymer chain).
\section{Conclusions}
In this work we investigated mechanical and conformational properties of flexible and semi-flexible polymer chains in external fields.
As for the FJC model, we developed a statistical theory, based on the exact analytical determination of the partition function, which generalizes previous results to the case where an external field is applied to the system. In particular, we obtained closed-form expressions for both the average conformation of the chain and its covariance distribution. For the sake of completeness, all calculations have been performed in both two-dimensional and three-dimensional geometries. As for the WLC model, we derived new approximate expressions describing the force-extension curve under the effect of an external field. They can be considered as extensions of the classical Marko-Siggia relationships describing a polymer pulled by a single external force applied at the free end of the chain.
All our analytical results, for both the FJC and WLC models, have been confirmed by a series of Monte Carlo simulations, which were always found in very good agreement with the theory.
The overall effects generated on the tethered polymer by the application of an external field can be summarized as follows.
As for the average configuration of a chain, it is well known that a single pulling force generates a uniform deformation along the chain (for a homogeneous polymer with all monomers described by the same effective elastic stiffness). On the contrary, the application of an external field produces a non-uniform deformation along the chain, with a larger deformation in the portion of the chain closest to the fixed end.
Moreover, with a single force applied to the polymer, the variances of the positions increase linearly along the chain. Conversely, a polymer subjected to an external field exhibits a non-linearly increasing behavior of the variances along the chain. More specifically, the variances assume their largest values near the last free monomers, where we measure the highest fluctuations.
To conclude, we underline that the use of the MC method, once validated against known analytical solutions, is crucial for analysing model conditions that are beyond the reach of a fully analytical calculation. We take full advantage of this approach for analysing the effects of the combination of a force applied at the free end together with an external field, especially when the two are not aligned. We have analysed the average configurational properties of the polymer, observing a very complex scenario concerning the behavior of the variances.
\begin{acknowledgments}
We acknowledge computational support by CASPUR (Rome, Italy) under project ``Standard HPC Grant 2011/2012''. FM acknowledges the Department of Physics of the University of Cagliari for the extended visiting grant, and the IEMN for the kind hospitality offered during part of this work.
\end{acknowledgments}
\section{Introduction}
Computer vision research has made great progress in the past few years, driven by the development of deep convolutional neural networks (CNNs) \cite{krizhevsky2012imagenet, szegedy2015going, he2016deep, huang2017densely, xie2017aggregated, jf2021grcnn, wang2017gated, Chen_2020_CVPR, hu2018squeeze, tan2019efficientnet} as well as large-scale datasets of high quality \cite{deng2009imagenet, lin2014microsoft}. However, these large-scale datasets are usually well-designed, and the number of instances in each class is balanced artificially, which is inconsistent with the data distributions in real-world scenarios. It is common that the images of some categories are difficult to collect, resulting in a dataset with an imbalanced data distribution. In general, imbalanced datasets can be classified into two categories in terms of data distributions: long-tailed imbalanced distributions \cite{cui2019class} and step imbalanced distributions \cite{buda2018systematic},
which will both be the focus of this work.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{motivation}
\caption{RSG in a simple CNN. The part in the dotted
box is only used during training. RSG learns to generate new rare-class samples, which are used to reshape the decision boundary and enlarge the feature space of rare classes.
}
\vspace{-2ex}
\label{fig:cnn_RSG}
\vspace{-1ex}
\end{figure}
Generating new samples for rare classes during training is a good solution \cite{schwartz2018delta, wang2018low, yin2019feature}, which is regarded as a data augmentation method. However, these methods have different drawbacks, which limit their performance. Firstly, some frameworks \cite{schwartz2018delta,yin2019feature} were not trained in an end-to-end manner, so that the gradients cannot be backpropagated from the top to the bottom of CNNs. But it is well known that deep models can usually benefit from end-to-end training. Secondly, some methods \cite{schwartz2018delta, yin2019feature} utilized variation information, such as different poses or lighting, among samples from the same frequent class to generate new rare-class samples. However, these methods did not introduce any mechanism to ensure that the variation information obtained from frequent classes is class-irrelevant. As a result, if the variation information (which still contains the class-relevant information) is directly combined with real rare-class samples to generate new rare-class ones for training the classifier and reshaping decision boundaries, the performance will be hurt due to the aliasing of different class-relevant information.
Finally, Wang \emph{et al.} \cite{wang2018low} use noise vectors to encode the variation information mentioned above. But using such noise vectors for generation can possibly generate unstable or low-quality samples, since noise vectors are too random to reflect the true variations among real images.\footnote{Note that \cite{yin2019feature} has proposed to avoid sampling random vectors due to their randomness, and \cite{schwartz2018delta} also has conducted experiments and verified that using random vectors to generate new samples for training classifiers can degrade the performance.}
To alleviate the above drawbacks, in this paper, we propose a simple but efficient fully parameterized generator, called
rare-class sample generator (RSG), which can be trained end-to-end with any backbone. RSG directly uses the variation information, which usually reflects different poses or lighting, among the real samples from the same frequent class to generate new samples rather than using random vectors to encode such information, and therefore, RSG can generate more reasonable and stable samples.
Besides, RSG introduces a new module that is designed to further filter out the frequent-class-relevant information that possibly exists in the variation information, solving the aliasing problem mentioned above.
Figure~\ref{fig:cnn_RSG} shows how it is integrated into a simple CNN for imbalanced datasets. RSG only requires the feature maps of samples from any specific layer, and it generates some new samples for rare classes during training in order to adjust their decision boundaries and enlarge their feature space. In the testing phase, RSG is removed, so that no additional computational burden is imposed on the network. Note that we only show a simple CNN in Fig.~\ref{fig:cnn_RSG}, but RSG can be used in any network architecture, such as ResNet \cite{he2016deep}, DenseNet \cite{huang2017densely}, ResNeXt \cite{xie2017aggregated}, and Inception \cite{szegedy2015going}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\linewidth]{RSG}
\vspace{-0.3cm}
\caption{A diagram of RSG with samples' feature maps as input. The blue dashed line denotes a decision boundary.}
\label{fig:RSG}
\vspace{-1.2ex}
\end{figure*}
\section{Related Work}
\label{related_work}
Recent existing solutions for dealing with imbalanced datasets can be largely classified into approaches based on resampling and reweighting, new loss functions, meta-learning, utilizing unlabeled data, and sample generation.
Resampling techniques include oversampling the minority classes \cite{shen2016relay, buda2018systematic, byrd2019effect, zhou2020BBN, kang2019decoupling} and undersampling the majority classes \cite{buda2018systematic, japkowicz2002class, he2009learning}, which aim to balance the data distribution. Reweighting methods \cite{huang2016learning,huang2019deep,wang2017learning, cui2019class, li2019gradient} also try to balance the data distribution by assigning different weights to frequent-class and rare-class samples. Some approaches \cite{zhang2017range, cao2019learning} designed new loss functions by directly adding constraints to affect the decision boundaries for frequent and rare classes. Some meta-learning-based methods \cite{liu2019large, snell2017prototypical, shu2019meta} were also proposed to solve the data imbalance problem. Very recently, Yang and Xu \cite{yang2020rethinking} analyzed the value of imbalanced labels, and utilized unlabeled data to boost class-imbalanced learning via semi-supervised and self-supervised strategies.
Previous sample generation methods are more relevant to this work than other approaches. A hallucinator \cite{wang2018low} was designed to generate new samples for rare classes. It uses real instances from rare classes and noise vectors to produce new hallucinated instances for rare classes.
A $\Delta$-encoder framework \cite{schwartz2018delta} was proposed for generating new samples. It is first trained to reconstruct the pre-computed feature vector of input images from frequent classes. Thereafter, it is used to generate new samples by combining the real rare-class samples, and the newly generated ones are further used to train the classifier.
A feature transfer learning (FTL) framework \cite{yin2019feature} was recently proposed, which consists of an auto-encoder, a feature filter, and fully-connected (FC) layers. The auto-encoder is initially pre-trained on a large-scale dataset for several epochs until convergence, in order to learn the latent representations. Then, principal component analysis (PCA) is leveraged to transfer the intra-class variance from frequent classes to rare classes by generating some new rare-class samples. A two-stage alternating training strategy was also proposed to jointly optimize the encoder, the feature filter, and FC layers.
\section{Rare-Class Sample Generator (RSG)}
\label{model}
The rare-class sample generator (RSG) is composed of a center estimation module, a contrastive module, and a vector transformation module (see Fig.~\ref{fig:RSG}).
To~optimize the parameters of RSG, two loss functions are used, namely, center estimation with sample contrastive (CESC) loss
and maximized vector (MV) loss.
RSG assumes that samples from a class follow a uni-modal distribution or a multi-modal distribution \cite{snell2017prototypical, yin2019feature}, and thus there can be a center or a set of centers in each category to fit the distribution. In this paper, we define the notion of \emph{feature displacement},
which indicates the displacement of a sample from its corresponding center in a class, caused by the same object appearing under different conditions (e.g., angles, poses, or lighting) in input images. Therefore, under ideal circumstances, the feature displacement should not contain class-relevant information.
Given a mini-batch of samples consisting of both frequent-class and rare-class instances, RSG takes their feature maps as input and forwards them to these modules.
The center estimation module aims to estimate a set of centers in each class, which is used as ``anchors'' for obtaining the feature displacement of each sample.
The contrastive module is used to ensure that the feature displacement does not contain any frequent-class-relevant information during the sample generation process.
The vector transformation module calculates the {feature displacement} of each frequent-class sample based on the estimated centers and uses it for generating new samples for rare classes. Intuitively, generating new rare-class samples with such {feature displacement} obtained from abundant classes may alleviate the problem caused by imbalanced datasets, as rare classes usually lack input variations.
\medskip
\noindent\textbf{The center estimation module}
is formulated as:
\begin{equation}
\label{eq:eq0}
\gamma^l =f(A^l ave(x^l) + b^l),
\end{equation}
where $x^l\,{\in}\, R^{D\times W\times H}$ is the feature map of an input sample, and we assume that the channel dimension, width, and height are $D$, $W$, and $H$, respectively.
$l$ is the class label of the sample, $ave(\cdot)$ denotes global average pooling across width and height, $A^l$ and $b^l$ are the parameters of this module performing a linear transformation on the input, and $f$ is the softmax function that outputs a probability distribution ($\gamma^l$) for assigning the sample to the closest center in its corresponding class.
The center estimation module is designed to estimate a set of centers instead of only one center for each class, since the intra-class data distribution is unknown. If the intra-class data distribution is a multi-modal distribution, using a set of centers is better than using a single center. On the contrary, if it is a uni-modal distribution, those centers can be very close or overlapping, which is similar to using a single center.
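For illustration, a minimal PyTorch-style sketch of the center estimation module of Eq.~(\ref{eq:eq0}) is given below. The per-class parameterization of $(A^l, b^l)$ and the treatment of the centers as learnable parameters follow the description above, while the concrete names and initializations are illustrative assumptions, not a reference implementation.
\begin{verbatim}
# Sketch of the center estimation module, Eq. (1).
import torch
import torch.nn as nn

class CenterEstimation(nn.Module):
    def __init__(self, num_classes, num_centers, channels):
        super().__init__()
        # one linear map (A^l, b^l) per class l
        self.linears = nn.ModuleList(
            [nn.Linear(channels, num_centers) for _ in range(num_classes)])
        # K learnable centers C_i^l per class, updated by the CESC loss
        self.centers = nn.Parameter(
            torch.randn(num_classes, num_centers, channels))

    def forward(self, x, label):
        # x: (D, W, H) feature map of one sample; label: its class l
        pooled = x.mean(dim=(1, 2))          # global average pooling
        gamma = torch.softmax(self.linears[label](pooled), dim=-1)
        return gamma                         # soft assignment over centers
\end{verbatim}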
\smallskip
\noindent\textbf{The contrastive module}
is formulated as:
\begin{equation}
\label{eq:eq1}
\gamma^* =f(A^*ave(h(cat[x_1, x_2])) + b^*),
\end{equation}
where $x_1{\in}\, R^{D\times W\times H}$ and $x_2{\in}\, R^{D\times W\times H}$ are the feature maps of any two input samples from a given mini-batch, and $cat(\cdot)$ denotes the concatenation operation, which is performed along the channel dimension. $h(\cdot)$ is implemented by stacking two $3\times3$ convolutional layers with 256 channels, interleaved with a ReLU activation layer, throughout the paper. $A^*$ and $b^*$ are the parameters of the linear layer, and the result is a probability distribution $\gamma^*$ indicating whether the two samples come from the same class.
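A corresponding sketch of the contrastive module of Eq.~(\ref{eq:eq1}), with the two stacked $3\times3$ convolutions of $h(\cdot)$, is given below; again, the class and variable names are illustrative.
\begin{verbatim}
# Sketch of the contrastive module, Eq. (2).
import torch
import torch.nn as nn

class ContrastiveModule(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.h = nn.Sequential(              # h(.): two 3x3 convolutions
            nn.Conv2d(2 * channels, 256, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(256, 256, 3, padding=1))
        self.linear = nn.Linear(256, 2)      # (A*, b*)

    def forward(self, x1, x2):
        # x1, x2: (D, W, H) feature maps of two samples of the mini-batch
        z = torch.cat([x1, x2], dim=0)       # concat along channels
        z = self.h(z.unsqueeze(0))           # add a batch dimension
        z = z.mean(dim=(2, 3)).squeeze(0)    # global average pooling
        return torch.softmax(self.linear(z), dim=-1)
\end{verbatim}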
\medskip
\noindent\textbf{The vector transformation module}
is responsible for
generating new rare-class samples through combining the {feature displacement} from real frequent-class samples with real rare-class samples. As Fig.~\ref{fig:cnn_RSG} shows, an imbalanced dataset causes a bias in the decision boundary, resulting in a smaller feature space for rare classes than for frequent classes.
Thus, we propose to use the vector transformation module to generate new samples for rare classes to enlarge the feature space and ``push away'' the decision boundaries.
To generate new samples, we first need to obtain the {feature displacement} from frequent classes, which is implemented by using the frequent-class samples and their corresponding centers estimated by the center estimation module:
\begin{equation}
\label{eq:eq4}
x_{\text{fd-freq}} = x^l_{\text{freq}} -up(C_\mathcal{K}^l),
\end{equation}
where $x^l_{\text{freq}}{\in}\, R^{D\times W\times H}$ denotes a sample in a frequent class~$l$. We use $C_i^l\,{\in}\, R^{D}$ to denote the $i$-th center in class $l$ with dimension $D$, and {\small $\mathcal{K}$} is the index of the closest center to $x^l_{\text{freq}}$, i.e., {\small $\mathcal{K} = \text{arg\,max}\,\, f(A^l ave(x_{\text{freq}}^l)+b^l)$.} $up(\cdot)$ denotes the upsampling operation implemented by repeating the values of $C_i^l$ along the width and height, forming feature maps of a center in the same size as $x^l_{\text{freq}}$.
After we subtract the corresponding center feature maps from $x^l_{\text{freq}}$, most of the class-relevant information is removed from $x^l_{\text{freq}}$;
thus, we use $x_{\text{fd-freq}}$ to represent the {feature displacement} of the frequent-class sample.
Then, the second step is to generate new samples for rare classes by using $x_{\text{fd-freq}}$ and the real rare-class samples. Intuitively, $x_{\text{fd-freq}}$ can be added to the centers of rare classes, but we directly add $x_{\text{fd-freq}}$ to the real rare-class samples for two reasons: Firstly, the length of some $x_{\text{fd-freq}}$ may be smaller than the original variance of the feature space in rare classes. If we add $x_{\text{fd-freq}}$ to the centers, the new samples may have no impact on decision boundaries. Secondly, due to the limited sample size of rare classes, most rare-class samples can directly determine the decision boundaries, and adding $x_{\text{fd-freq}}$ to rare-class samples has a more straightforward impact on the decision boundaries.
So, the generation process of new rare-class samples is:
\begin{equation}
\label{eq:eq5}
x_{\text{new}}^{\scriptscriptstyle l'} = \mathcal{T}(x_{\text{fd-freq}}) + x^{\scriptscriptstyle l'}_{\text{rare}},
\end{equation}
where $x^{{\scriptscriptstyle l'}}_{\text{rare}}{\in}\, R^{D\times W\times H}$ denotes a sample in a rare class $l'$, $x_{\text{new}}^{{\scriptscriptstyle l'}}$ is a newly generated sample in that class, and $\mathcal{T}$ is a linear transformation defined as $\mathcal{T}(z) = conv(z)$, where $conv$ denotes a single convolutional layer containing a set of convolutional filters with the kernel size 3, the stride 1, and the padding size 1, whose number is the same as the number of channels of input feature maps.
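The displacement extraction of Eq.~(\ref{eq:eq4}) and the generation step of Eq.~(\ref{eq:eq5}) can be sketched as follows; the upsampling by repetition is implemented via broadcasting, and the names are illustrative.
\begin{verbatim}
# Sketch of Eqs. (3) and (4): displacement extraction and generation.
import torch
import torch.nn as nn

class VectorTransformation(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # T: a single 3x3 convolution, stride 1, padding 1
        self.T = nn.Conv2d(channels, channels, 3, stride=1, padding=1)

    def displacement(self, x, center):
        # x: (D, W, H) sample; center: (D,) its closest class center
        up = center[:, None, None].expand_as(x)   # up(C_K^l) by repetition
        return x - up                             # feature displacement

    def generate(self, x_fd_freq, x_rare):
        # new rare-class sample: T(x_fd-freq) + x_rare
        t = self.T(x_fd_freq.unsqueeze(0)).squeeze(0)
        return t + x_rare
\end{verbatim}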
\begin{table*}[t]
\centering\resizebox{0.85\textwidth}{!}{
\begin{tabular}{@{}l@{}}
\hline
\textbf{Algorithm 1:} Training Procedure of RSG\\
\hline
\textbf{Input: }\\
Batch size: s; feature maps of training data: $\{x^{(i)}\}_{i=1}^{s}$; epoch threshold: T$_{\text{th}}$; centers: $C$; training epochs: T;\\
transfer strength: $\beta\in(0, 1]$; center estimation module: $CE_{\theta}$; contrastive module: $CM_{\theta}$;\\
vector transformation module: $VT_{\theta}$; weights of the backbone network: $\widetilde{\theta}$; frequent-class ratio: $\alpha\in(0, 1]$.
\vspace*{1ex}\\
\textbf{Training:}\\
\textbf{for} j \textbf{in} range(0, T): \\
\quad Compute $L_{\text{CESC}}$ with $\{x^{(i)}\}_{i=1}^{s}$. Compute gradient $\nabla_{\text{CESC}}$. \\
\quad Update: $\nabla_{\text{CESC}} \rightarrow CE_{\theta}$, $\nabla_{\text{CESC}} \rightarrow C$.\\
\quad \textbf{if} j \textless T$_{\text{th}}$: \\
\quad\quad Compute $L_{\text{cls}}$ with $\{x^{(i)}\}_{i=1}^{s}$. Compute gradient $\nabla_{\text{cls}}$. \\
\quad\quad Update: $\nabla_{\text{cls}} \rightarrow \widetilde{\theta}$, $\nabla_{\text{CESC}} \rightarrow CM_{\theta}$. \\
\quad \textbf{else}: \\
\quad\quad Generate new samples with $\alpha$ and $\beta$: $\{x_{\text{new}}^{(i)}\}_{i=1}^{ s_{\text{new}}}$. Concat: $\{x_{\text{aug}}^{(i)}\}_{i=1}^{s + s_{\text{new}}}$ = [$\{x^{(i)}\}_{i=1}^{s}$, $\{x_{\text{new}}^{(i)}\}_{i=1}^{ s_{\text{new}}}$]. \\
\quad\quad Compute $L_{\text{MV}}$ with $C$, $\{x_{\text{new}}^{(i)}\}_{i=1}^{ s_{\text{new}}}$, and $\{x^{(i)}\}_{i=1}^{s}$. Compute gradient $\nabla_{\text{MV}}$. \\
\quad\quad Compute $L_{\text{cls}}$ with $\{x_{\text{aug}}^{(i)}\}_{i=1}^{s+ s_{\text{new}}}$. Compute gradient $\nabla_{\text{cls}}$. \\
\quad\quad Update: $\nabla_{\text{MV}} + \nabla_{\text{cls}} \rightarrow VT_{\theta}$, $\nabla_{\text{cls}} \rightarrow \widetilde{\theta}$. \\
\quad\textbf{end if} \\
\textbf{end for} \\
\hline
\end{tabular}}
\vspace{-0.4ex}
\label{tab:training_proc}
\end{table*}
\begin{figure}[t]
\centering
\includegraphics[width=1.\linewidth]{MV_loss}
\caption{The objective and principle of the vector transformation module and MV loss. The triangles and circles in the figure have the same meaning as those in Fig.~\ref{fig:RSG}.}
\label{fig:MV_loss}
\end{figure}
\medskip
\noindent\textbf{The center estimation with sample contrastive loss
($L_{\text{CESC}}$)} aims to update centers of each class and to optimize the contrastive module as well as the center estimation module. Therefore, it is composed of two classical loss terms, which can be written as:
\begin{equation}
\small
\begin{aligned}
L_{\text{CESC}} = \left \langle \sum_{i=0}^{K-1}\gamma_i^l\sum_{d,j,k}||x^l_{(d,j,k)} - up(C_i^l)_{(d,j,k)}||^2 \right \rangle_s\\
- \left \langle (y\log\gamma^* + (1-y)\log(1-\gamma^*)) \right \rangle_{\frac{s}{2}},
\end{aligned}
\end{equation}
where $d$, $j$, and $k$ denote the indices of the feature maps along the channel, width, and height, $\gamma^l_i$ is the probability of the sample belonging to the $i$-th center, obtained from Eq.~\eqref{eq:eq0}, $K$ is the number of centers in each class, and $s$ is the batch size.
Considering a mini-batch of batch size $s$, $\frac{s}{2}$ sample pairs are formed by randomly picking samples from the mini-batch during training for the contrastive module. We denote by $y\in\{0, 1\}$ the ground-truth label indicating whether the samples in each input pair come from the same class. $\langle \cdot \rangle_s$ and $\langle \cdot \rangle_{\frac{s}{2}}$ denote that the first and the second term of $L_{\text{CESC}}$ are calculated over $s$ instances and $\frac{s}{2}$ pairs on average, respectively.
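A sketch of $L_{\text{CESC}}$ is shown below. For brevity it assumes that the $K$ centers of each sample's own class have already been gathered into a per-sample tensor; the second term is the standard binary cross-entropy over the $\frac{s}{2}$ random pairs.
\begin{verbatim}
# Sketch of L_CESC, Eq. (5).
import torch
import torch.nn.functional as F

def cesc_loss(x, gamma, centers, gamma_star, y):
    # x: (s, D, W, H) feature maps; gamma: (s, K) soft assignments;
    # centers: (s, K, D) the K centers of each sample's own class;
    # gamma_star: (s/2,) same-class probabilities; y: (s/2,) pair labels.
    up = centers[:, :, :, None, None]                  # repeat over (W, H)
    sq = (x[:, None] - up).pow(2).sum(dim=(2, 3, 4))   # (s, K)
    fit = (gamma * sq).sum(dim=1).mean()               # first term
    bce = F.binary_cross_entropy(gamma_star, y)        # second term
    return fit + bce
\end{verbatim}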
\medskip
\noindent\textbf{The maximized vector loss
($L_{\text{MV}}$)} optimizes the parameters of the vector transformation module (namely, $\mathcal{T}$) and ensures that newly generated samples can enlarge the feature space of rare classes, where the basic idea is to maximize the {feature displacement} of newly generated samples relative to their centers (i.e., the new overall {feature displacement} in Fig.~\ref{fig:MV_loss}). Here, we treat the {feature displacement} of a sample as a vector pointing from a center to the sample (see Fig.~\ref{fig:MV_loss}). To generate a new rare-class sample, one can directly add $x_{\text{fd-freq}}$ to a rare-class sample (Fig.~\ref{fig:MV_loss}, left), but the direction of $x_{\text{fd-freq}}$ is usually uncertain and the new overall {feature displacement} typically does not have the largest length, because of the triangle inequality. Thus, we design the MV loss to make the transformed vector co-linear with the {feature displacement} of the rare-class sample in the same direction, and leave the length of the transformed vector unchanged (Fig.~\ref{fig:MV_loss}, right), to maximally impact the decision boundary.
For example, if the direct addition is used, the newly generated samples may not impact the decision boundary due to the limited overall length. But leveraging the vector transformation module and MV loss ensures that the newly generated samples are widely distributed in the feature space of rare classes, because of the larger displacement relative to the centers, and it improves the probability that newly generated samples can appear around decision boundaries in each batch during training.
Moreover, as for a given frequent-class sample, although the frequent-class-relevant information has been largely removed when the {feature displacement} of a sample is calculated via Eq.~\eqref{eq:eq4}, we still use the contrastive module to ensure that the {feature displacement} does not contain frequent-class-relevant information in order to further alleviate the possible class-relevant information aliasing problem when new rare-class samples are generated.
Here, $\gamma^*$ denotes the probability that the two input samples of the contrastive module do not belong to the same category, and the MV loss~is:
\begin{equation}
\label{eq:eq6}
\begin{aligned}
&L_{\text{MV}} = \left \langle \sum_{j,k}(|\frac{\mathcal{T}(x_{\text{fd-freq}})^{(j,k)} \cdot x^{(j,k)}_{\text{fd-rare}}}{||\mathcal{T}(x_{\text{fd-freq}})^{(j,k)}||_2||x^{(j,k)}_{\text{fd-rare}}||_2} - 1|) \right \rangle_{s_{\text{new}}} \\
&+ \left \langle \sum_{j,k}(|\ ||\mathcal{T}(x_{\text{fd-freq}})^{(j,k)}||_2-||x_{\text{fd-freq}}^{(j,k)}||_2|)\right \rangle_{s_{\text{new}}} \\
& - \left \langle \log\gamma^* \right \rangle_{s_{\text{new}}}\,,
\end{aligned}
\end{equation}
where $j$ and $k$ denote the indices of the feature maps along the width and height, $|\cdot|$ takes the absolute value, and $x_{\text{fd-rare}}$ represents the {feature displacement} obtained from a sample and its closest center in a rare class via Eq.~\eqref{eq:eq4}.
The two input samples of the contrastive module are $\mathcal{T}(x_{\text{fd-freq}})$ and $x^l_{\text{freq}}$, respectively.
The first term of $L_{\text{MV}}$ essentially minimizes the angle between $\mathcal{T}(x_{\text{fd-freq}})$ and $x_{\text{fd-rare}}$ in order to make them co-linear in the same direction, the second term keeps the length of $\mathcal{T}(x_{\text{fd-freq}})$ unchanged compared with $x_{\text{fd-freq}}$, and the third term ensures that $\mathcal{T}(x_{\text{fd-freq}})$ and $x^l_{\text{freq}}$ do not belong to the same category, so that $\mathcal{T}(x_{\text{fd-freq}})$ does not retain any frequent-class-relevant information.
Given a mini-batch of samples, $\langle \cdot \rangle_{s_{\text{new}}}$ denotes that $L_{\text{MV}}$ is calculated over the newly generated samples on average, where $s_{\text{new}}$ is the number of newly generated samples.
Note that minimizing
$L_{\text{MV}}$ may encourage to
generate some new samples with very large overall {feature displacement}, which can hurt the performance on frequent classes. Thus, the vector transformation module also receives the gradients from the classification loss function $L_{\text{cls}}$, reaching a trade-off between $L_{\text{cls}}$ and the second term of $L_{\text{MV}}$,
to generate more reasonable new samples for rare classes.
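Under the same conventions as above, the three terms of $L_{\text{MV}}$ in Eq.~(\ref{eq:eq6}) can be sketched as follows, with $\gamma^*$ taken from the (frozen) contrastive module; the small constant eps, introduced here to avoid division by zero, is an implementation assumption.
\begin{verbatim}
# Sketch of L_MV, Eq. (6).
import torch

def mv_loss(t_fd, fd_freq, fd_rare, gamma_star, eps=1e-8):
    # t_fd = T(x_fd-freq), fd_freq, fd_rare: (s_new, D, W, H);
    # gamma_star: (s_new,) probability, from the frozen contrastive
    # module, that T(x_fd-freq) and x_freq are from different classes.
    dot = (t_fd * fd_rare).sum(dim=1)                # (s_new, W, H)
    n_t = t_fd.norm(dim=1)                           # per-location norms
    n_r = fd_rare.norm(dim=1)
    cos_term = (dot / (n_t * n_r + eps) - 1).abs().sum(dim=(1, 2))
    len_term = (n_t - fd_freq.norm(dim=1)).abs().sum(dim=(1, 2))
    ctr_term = -torch.log(gamma_star + eps)
    return (cos_term + len_term + ctr_term).mean()   # average over s_new
\end{verbatim}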
\medskip
\noindent\textbf{The training procedure and overall loss function} of RSG
are summarized in \textbf{Algorithm 1}
and given as follows, respectively:
\begin{equation}
\label{eq:overall}
L_{total} = L_{\text{cls}} + \lambda_1 L_{\text{CESC}} + \lambda_2 L_{\text{MV}},
\end{equation}
where $L_{\text{cls}}$ denotes any classification loss, such as softmax with cross-entropy loss, focal loss \cite{lin2017focal}, AM-Softmax \cite{wang2018additive, wang2018cosface}, and LDAM \cite{cao2019learning}, and $\lambda_1$ and $\lambda_2$ denote coefficients.
In this paper, the epoch threshold $T_{\text{th}}$ is set to the index of the epoch at which the learning rate is decayed to 0.001.
\medskip
\noindent\textbf{The workflow} of RSG is as follows (see Fig.~\ref{fig:RSG}). Before the epoch threshold $T_{\text{th}}$, given a mini-batch of samples, RSG splits them into two parts according to a manually set constant frequent-class ratio
$\alpha={n_{\text{freq}}}/{n_{\text{cls}}}$,
where $n_{\text{freq}}$ and $n_{\text{cls}}$ denote the number of frequent classes and the total number of classes, respectively.
For example, for a training set of 10 classes and $\alpha=0.3$, the three classes with the largest number of samples are frequent classes, and the other classes are rare classes. (Note that for simplicity, only a frequent-class and a rare-class are plotted in Fig.~\ref{fig:RSG}.)
Then, the data are forwarded to the center estimation module to update centers for each class and optimize the parameters of the center estimation module. In addition, those data are also forwarded to the contrastive module to optimize its parameters.
After the epoch threshold $T_{\text{th}}$, RSG starts to generate new samples, and the parameters of the contrastive module are no longer updated. The {feature displacement} of each sample in frequent classes is calculated by the vector transformation module, transformed with $\mathcal{T}$, and randomly added to the data in rare classes according to a manually set transfer strength $\beta$, resulting in newly generated samples. The contrastive module propagates gradients to $\mathcal{T}$ in the vector transformation module to optimize $\mathcal{T}$ and filter out frequent-class-relevant information. In general, the number of samples in frequent classes is not smaller than that in rare classes in a given mini-batch. The transfer strength $\beta$ controls how many frequent-class samples are involved in calculating the {feature displacement} and generating new samples for rare classes. Specifically, the number of newly generated samples is
$s_{\text{new}} = max\{\lfloor \beta \times {s_{\text{freq}}}/ {s_{\text{rare}}}\rfloor, 1\} \times s_{\text{rare}}$,
where $s_{\text{freq}}$ and $s_{\text{rare}}$ are the numbers of samples in frequent and rare classes in a mini-batch, respectively, and $\lfloor \cdot \rfloor$ is the floor function. Finally, the feature maps of newly generated samples are concatenated with the original input feature maps along the batch dimension and forwarded to subsequent layers to calculate the loss and to optimize the whole framework.
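The bookkeeping of this workflow (the split into frequent and rare classes by $\alpha$, and the batch-wise count $s_{\text{new}}$) is simple enough to state directly; the variable names below are illustrative.
\begin{verbatim}
# Sketch of the frequent/rare split and of the count of new samples.
import math

def split_by_alpha(class_counts, alpha):
    # classes with the most samples are 'frequent'; alpha = n_freq/n_cls
    n_freq = max(int(round(alpha * len(class_counts))), 1)
    order = sorted(class_counts, key=class_counts.get, reverse=True)
    return set(order[:n_freq]), set(order[n_freq:])

def num_new_samples(s_freq, s_rare, beta):
    # s_new = max(floor(beta * s_freq / s_rare), 1) * s_rare
    return max(math.floor(beta * s_freq / s_rare), 1) * s_rare

counts = dict(zip(range(10),
                  [500, 450, 400, 80, 60, 50, 40, 30, 20, 10]))
freq, rare = split_by_alpha(counts, 0.3)    # classes {0, 1, 2} are frequent
print(freq, num_new_samples(90, 30, 1.0))   # s_new = 90
\end{verbatim}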
\section{Experimental Evaluation}
\label{experiment}
\paragraph{Datasets.} The experimental evaluation focuses on the Imbalanced CIFAR, the iNaturalist 2018, the Places-LT, and the ImageNet-LT datasets. Imbalanced CIFAR is based on the original CIFAR dataset and is constructed by reducing the number of training samples per class, while the validation set is unchanged. An imbalance ratio $\rho$ is defined as the ratio between the sample sizes of the most frequent class and the least frequent class, i.e., $\rho = {N_{\text{max}}}/{N_{\text{min}}}$. We conducted experiments on the long-tailed imbalance \cite{cui2019class} and step imbalance~\cite{buda2018systematic} settings. The imbalance factors ($\rho$) that we used in our experiments are 50 and 100. The iNaturalist species classification dataset \cite{van2018inaturalist} is a large-scale imbalanced dataset of 437,513 training images classified into 8142 species in its 2018 version. The official training and validation sets have a long-tailed distribution and a balanced distribution, respectively.
Places-LT has 365 categories, with the maximum of 4980 images per class and the minimum of 5 images per class, while ImageNet-LT has 1000 categories, with the maximum of 1280 images per class and the minimum of 5 images per class.
As for the evaluation on these two datasets, the classes are further categorized into three splits: many-shot (more than 100 samples), medium-shot (between 20 and 100 samples), and few-shot (fewer than 20 samples), in order to better examine performance variations across classes with different numbers of samples seen during training. We follow the experimental setting of these datasets in previous works \cite{cao2019learning, kang2019decoupling} for evaluation.
\vspace*{-1ex}
\paragraph{Implementation details.} The training details on the four datasets are summarized as follows:
\begin{itemize}[leftmargin=8pt]
\item {\bf Imbalanced CIFAR:} We followed the basic data augmentation method \cite{he2016deep} for training: 4 pixels are padded, and a $32\times32$ patch is randomly cropped from the image or its horizontal flip. The framework was trained with a batch size of 128 for 200 epochs. The learning rate was initially set to~0.1, and then it was decayed by~0.01 at the 160-th epoch and again at the 180-th epoch. The network was optimized by using stochastic gradient descent with a momentum of 0.9 (a sketch of this setup is given after this list).
\item {\bf iNaturalist 2018:} We followed standard practice and performed data augmentation with random-size cropping \cite{szegedy2015going} to $224\times224$ from images or their horizontal flip. The network was trained from scratch for 90 epochs with a batch size of 256. The learning rate was set to 0.1 initially, and then it was decayed by 0.1 at the 50-th epoch, the 70-th epoch, and the 85-th epoch, respectively. Besides, for a fair comparison, we followed Kang \emph{et al.} \cite{kang2019decoupling} and also trained the model for the $2\times$ scheduler (180 epochs). In our $2\times$ scheduler experiment, the learning rate was decayed by 0.1 at the 100-th epoch, the 140-th epoch, and the 170-th epoch, respectively. During validation, images were center-cropped to $224\,{\times}\,224$ without further augmentation.
\item {\bf Places-LT:} We followed previous work \cite{liu2019large} to perform the data augmentation and to fine-tune ResNet-152, which is pre-trained on the full ImageNet-2012 dataset. The network was trained with a batch size of 256 for 30 epochs. The initial learning rate was set to 0.01, and it was decayed by 0.1 every 10 epochs; the training was stopped after 30 epochs.
\item {\bf ImageNet-LT:} We followed previous work \cite{kang2019decoupling} to use ResNeXt-50-32x4d, which was trained with a batch size of 256 for 100 epochs. The initial learning rate was set to 0.1, and it was decayed by 0.1 at the 60-th epoch, the 80-th epoch, and the 95-th epoch, respectively.
\end{itemize}
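As an illustration of the optimization setup used for Imbalanced CIFAR (first item above), the schedule can be sketched in PyTorch as follows; the placeholder model stands in for the actual ResNet-32 backbone, and the training loop body is elided.
\begin{verbatim}
# Sketch of the Imbalanced-CIFAR schedule: SGD with momentum 0.9 and
# multiplicative learning-rate decays of 0.01 at epochs 160 and 180.
import torch
import torch.nn as nn

net = nn.Linear(10, 10)     # placeholder for the actual backbone
optimizer = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[160, 180], gamma=0.01)

for epoch in range(200):
    # ... one pass over the imbalanced training loader goes here ...
    scheduler.step()        # decay after each epoch's updates
\end{verbatim}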
\begin{table}
\centering
\footnotesize
\resizebox{0.455\textwidth}{!}{
\begin{tabular}{@{}ccccc@{}}
\toprule[1pt]
\multirow{2}{*}{{\bf CIFAR-10}} & \multicolumn{2}{c}{Long-Tailed} &\multicolumn{2}{c}{Step} \\
\cline{2-5}
& w/o RSG & w/ RSG & w/o RSG & w/ RSG \\
\hline
ERM & 25.19 & {\bf 20.25} & 28.88 & {\bf 26.07} \\
Focal Loss \cite{lin2017focal} & 23.28 & {\bf 21.58} & 28.70 & {\bf 26.01} \\
M-DRW \cite{wang2018additive,cao2019learning} & 20.44 & {\bf 17.72} & 21.05 & {\bf 20.09} \\
LDAM-DRW \cite{cao2019learning} & 18.97 & {\bf 17.20} & 18.67 & {\bf 17.90} \\
\midrule[1pt]
\multirow{2}{*}{{\bf CIFAR-100}} & \multicolumn{2}{c}{Long-Tailed} &\multicolumn{2}{c}{Step} \\
\cline{2-5}
& w/o RSG & w/ RSG & w/o RSG & w/ RSG \\
\hline
ERM & 56.15 & {\bf 54.44 } & 59.32 & {\bf 56.82 } \\
Focal Loss \cite{lin2017focal} & 55.68 & {\bf 54.85 } & 58.50 & {\bf 55.93 } \\
M-DRW \cite{wang2018additive, cao2019learning} & 56.06 & {\bf 55.30 } & 56.26 & {\bf 54.60 } \\
LDAM-DRW \cite{cao2019learning} & 53.38 & {\bf 51.50 } & 50.97 & {\bf 49.43 } \\
\bottomrule[1pt]
\end{tabular}}
\vspace{-0.2cm}
\caption{Top-1 error rates of ResNet-32 with RSG for different loss functions on Imbalanced CIFAR for $\rho=50$.}
\vspace{-0.5ex}
\label{tab:loss_compare}
\end{table}
\vspace*{-1ex}
\paragraph{Ablation studies.}
We performed ablation studies on Imbalanced CIFAR with $\rho=50$. The mean error rates, taken from three independent runs, are reported. We comprehensively searched the hyperparameters of RSG and explored which level of feature is the most suitable for RSG to generate new samples by conducting experiments on ResNet-32 \cite{he2016deep} with LDAM-DRW \cite{cao2019learning}, where ``DRW'' denotes a deferred re-weighting training strategy proposed by Cao~\emph{et al.} \cite{cao2019learning}. Based on our exploration, in the following experiments, we set the number of centers to 15, the frequent-class ratio to 0.2 and 0.5
for long-tailed and step imbalanced distributions, respectively, the transfer strength to 1.0 and 0.01 for long-tailed and step imbalanced distributions, respectively, and $\lambda_1$ and $\lambda_2$ to 0.1 and 0.01, respectively. The search process can be found in the supplementary material. Note that RSG was initially used before the second-to-last down-sampling layer.
\begin{table}
\centering
\resizebox{0.455\textwidth}{!}{
\begin{tabular}{@{}ccccc@{}}
\toprule[1.0pt]
\multirow{2}{*}{{\bf CIFAR-10}} & \multicolumn{2}{c}{Long-Tailed} &\multicolumn{2}{c}{Step} \\
\cline{2-5}
& w/o RSG & w/ RSG & w/o RSG & w/ RSG \\
\hline
ResNet-32 & 18.97 & {\bf 17.20} & 18.67 & {\bf 17.90} \\
ResNet-56 & 18.01 & {\bf 16.83} & 18.52 & {\bf 17.20} \\
ResNet-110 & 17.70 & {\bf 16.61} & 17.96 & {\bf 16.73} \\
DenseNet-40 & 17.46 & {\bf 16.21} & 17.40 & {\bf 16.12} \\
ResNeXt-29, 8$\times$64d & 16.10 & {\bf 15.26} & 16.82 & {\bf 15.99} \\
\midrule[1.0pt]
\multirow{2}{*}{{\bf CIFAR-100}} & \multicolumn{2}{c}{Long-Tailed} &\multicolumn{2}{c}{Step} \\
\cline{2-5}
& w/o RSG & w/ RSG & w/o RSG & w/ RSG \\
\hline
ResNet-32 & 53.38 & {\bf 51.50} & 50.97 & {\bf 49.43} \\
ResNet-56 & 51.63 & {\bf 50.60} & 49.22& {\bf 48.53} \\
ResNet-110 & 50.64 & {\bf 49.83} & 48.65 & {\bf 47.90} \\
DenseNet-40 & 49.51 & {\bf 48.75} & 48.30 & {\bf 47.13} \\
ResNeXt-29, 8$\times$64d & 49.62 & {\bf 48.70} & 50.68 & {\bf 47.16} \\
\bottomrule[1.0pt]
\end{tabular}}
\vspace{-0.2cm}
\caption{Top-1 error rates of different network architectures combined with LDAM-DRW \cite{cao2019learning} on Imbalanced CIFAR for $\rho=50$.}
\label{tab:arch_compare}
\vspace{-0.5ex}
\end{table}
Firstly, we fixed the network architecture to ResNet-32 \cite{he2016deep} and tested RSG with different choices of $L_{\text{cls}}$.
As Table~\ref{tab:loss_compare} shows, the deep model equipped with RSG consistently performs better than the one without RSG when combined with different loss functions. RSG significantly improves the performance when the model is combined with standard softmax with cross-entropy loss (denoted ERM, i.e., empirical risk minimization). This is reasonable, as standard softmax does not have any mechanism against imbalanced datasets. As for focal loss, AM-softmax, and LDAM, although they are well-designed to tackle imbalanced datasets, RSG can still further improve the performance.
Secondly, we set $L_{\text{cls}}$ to LDAM with DRW \cite{cao2019learning} (i.e., LDAM-DRW) and evaluated (mainly five) different network architectures combined with RSG on Imbalanced CIFAR, namely, ResNet-32, ResNet-56, ResNet-110, DenseNet-40, and ResNeXt-29 (8$\times$64d).
Note that the used networks
were built according to the experiments on CIFAR in their original papers \cite{he2016deep, xie2017aggregated, huang2017densely}.
As Table~\ref{tab:arch_compare} shows, when RSG is integrated into the networks, all the models are consistently improved.
Thirdly, we did a comprehensive ablation study on the MV loss and the vector transformation module, and we draw the following conclusions based on Table~\ref{tab:add_compare}: (1) Every subterm of the MV loss is important and useful, since removing any of them increases the error rate. (2) Adding the {feature displacement} to the centers of rare classes leads to an increase in the error rate. This verifies what we mentioned in Section~\ref{model}, i.e., adding the {feature displacement} to real rare-class samples is a better choice than adding it to the centers of rare classes. (3) Using the vector transformation module with the MV loss performs better than directly adding the {feature displacement} to the samples in rare classes, which verifies their effectiveness.
Moreover, RSG is compared with previous sample generation methods \cite{schwartz2018delta, wang2018low, yin2019feature}. As Table~\ref{tab:compare_prev} shows, RSG outperforms the previous methods by different margins, showing that RSG addresses the drawbacks of previous generation methods and improves the performance.
Finally, we leveraged RSG before different pooling layers of ResNet-32 to explore which level of feature is the most suitable for generating new samples. As Table~\ref{tab:diff_layers} shows, RSG achieves the best result when it was used before the second-to-last down-sampling layer. Therefore, in the remaining experiments, RSG was still used before the second-to-last down-sampling layer.
\begin{table}
\centering
\resizebox{0.475\textwidth}{!}{
\begin{tabular}{@{}ccccc@{}}
\toprule[1pt]
\multirow{2}{*}{} & \multicolumn{2}{c}{Long-Tailed} &\multicolumn{2}{c}{Step} \\
\cline{2-5}
& CIFAR-10 & CIFAR-100 & CIFAR-10 & CIFAR-100 \\
\hline
MV Loss w/o 1st Term & 18.03 & 52.15 & 18.58 & 50.34 \\
MV Loss w/o 2nd Term & 18.07 & 52.33 & 18.36 & 50.19 \\
MV Loss w/o 3rd Term & 17.67 & 52.12 & 18.23 & 49.84 \\
Adding to Rare-class Centers & 18.91 & 52.79 & 18.47 & 50.67 \\
Direct Addition & 18.87 & 52.48 & 18.33 & 49.99 \\
Vector Transformation Module & {\bf 17.20} & {\bf 51.50} & {\bf 17.90} & {\bf 49.43} \\
\bottomrule[1pt]
\end{tabular}}
\vspace{-0.2cm}
\caption{Ablation study on MV loss and the vector transformation module.
Top-1 error rates of ResNet-32 combined with RSG and LDAM-DRW \cite{cao2019learning} on Imbalanced CIFAR for $\rho=50$ are reported.}
\label{tab:add_compare}
\end{table}
\begin{table}
\centering
\resizebox{0.475\textwidth}{!}{
\begin{tabular}{@{}ccccc@{}}
\toprule[1.0pt]
\multirow{2}{*}{} & \multicolumn{2}{c}{Long-Tailed} &\multicolumn{2}{c}{Step} \\
\cline{2-5}
& CIFAR-10 & CIFAR-100 & CIFAR-10 & CIFAR-100 \\
\hline
$\Delta$-Encoder \cite{schwartz2018delta} & 23.76 & 54.91 & 27.70 & 57.85 \\
Imaginary \cite{wang2018low} & 23.99 & 55.08 & 28.23 & 58.46 \\
FTL \cite{yin2019feature} & 23.56 & 55.24 & 27.83 & 58.03 \\
ERM-RSG (ours) & {\bf 20.25} & {\bf 54.44} & {\bf 26.07} & {\bf 56.82} \\
\bottomrule[1.0pt]
\end{tabular}}
\vspace{-0.2cm}
\caption{Comparison with other sample generation methods on Imbalanced CIFAR ($\rho=50$). All of them are based on ResNet-32 combined with ERM for a fair comparison.}
\label{tab:compare_prev}
\end{table}
\begin{table}[h]
\centering
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{ccccc}
\toprule[1.0pt]
\multirow{2}{*}{} & \multicolumn{2}{c}{Long-Tailed} & \multicolumn{2}{c}{Step} \\
\cline{2-5}
& CIFAR-10 & CIFAR-100 & CIFAR-10 & CIFAR-100 \\
\hline
1st down-sampling & 18.13 & 53.22 & 18.66 & 50.81 \\
2nd down-sampling & {\bf17.20} & {\bf51.50} & {\bf17.90} & {\bf49.43} \\
3rd down-sampling (GAP) & 17.68 & 52.14& 18.05 & 50.38 \\
\bottomrule[1.0pt]
\end{tabular}}
\caption{Ablation study (top-1 error rates) with regard to the different layers, where RSG was used on Imbalanced CIFAR ($\rho=50$). RSG was used before the three down-sampling layers in ResNet-32. ResNet-32 combined with LDAM-DRW was used, and GAP denotes global average pooling.}
\label{tab:diff_layers}
\end{table}
\vspace{-0.5cm}
\paragraph{Comparison with state of the art.}
For each of the following experiments, we report mean error rates or mean accuracies, which are taken from three independent runs. Table~\ref{tab:soa_cifar} shows the results on Imbalanced CIFAR with $\rho\,{\in}\,\{50$, $100\}$. We first compare our LDAM-DRW-RSG with LDAM-DRW, as this comparison directly shows the improvement brought by RSG. After combining LDAM-DRW with RSG, we obtain a remarkable improvement for both long-tailed and step imbalanced
distributions, which shows the power of RSG for handling imbalanced datasets. As a result, with the help of RSG, LDAM-DRW-RSG achieves superior results on Imbalanced CIFAR when compared with previous methods.
\begin{table*}
\centering\resizebox{0.75\textwidth}{!}{
\begin{tabular}{@{}c|c|c|c|c|c|c|c|c@{}}
\hline
Dataset & \multicolumn{4}{c|}{Imbalanced CIFAR-10} &\multicolumn{4}{c}{Imbalanced CIFAR-100} \\
\hline
Imbalance Type & \multicolumn{2}{c|}{Long-Tailed} &\multicolumn{2}{c|}{Step} & \multicolumn{2}{c|}{Long-Tailed} &\multicolumn{2}{c}{Step}\\
\hline
Imbalance Ratio ($\rho$) & 100 & 50 & 100 & 50 & 100 & 50 & 100 & 50 \\
\hline
ERM & 29.64 & 25.19 & 36.70 & 28.88 & 61.68 & 56.15 & 61.43 & 59.32\\
Focal loss \cite{lin2017focal} & 29.62 & 23.28 & 36.09 & 28.70 & 61.59 & 55.68 & 61.65 & 58.50\\
CB Focal \cite{cui2019class} & 25.43 & 20.73 & 39.73 & 39.65 & 63.98 & 54.83 & 80.24 & 85.10 \\
CB RW \cite{cui2019class} & 27.63 & 21.95 & 38.06 & 30.38 & 66.01 & 57.54 & 78.69 & 69.63 \\
M-DRW \cite{cao2019learning} & 24.94 & 20.44 & 27.67 & 21.05 & 59.49 & 56.06 & 58.91 & 56.26\\
BBN \cite{zhou2020BBN}& {\bf 20.18} & 17.82 & 22.34 & 18.33 & 57.44 & 52.98 & 54.14 & 50.49 \\
LDAM-DRW \cite{cao2019learning} & 22.97 & 18.97 & 23.08 & 18.67 & 57.96 & 53.38 & 54.64 & 50.97 \\
LDAM-DRW-SSP \cite{yang2020rethinking} & 22.17 & 17.87 & 22.95 & 18.38 & 56.57 & 52.89 & 54.28 & 50.47 \\
LDAM-DRW-RSG (ours) & 20.45 & {\bf 17.20} & {\bf 21.65} & {\bf 17.90} & {\bf 55.45} & {\bf 51.50} & {\bf 53.00} & {\bf 49.43}\\
\hline
\end{tabular}}\vspace{-1.2ex}
\caption{Top-1 error rates of ResNet-32 on Imbalanced CIFAR.}
\label{tab:soa_cifar}
\vspace{-2ex}
\end{table*}
\begin{table}
\centering
\resizebox{0.45\textwidth}{!}{
\begin{tabular}{@{}c|c|c@{}}
\hline
Training Scheduler & Method & Error Rate \\
\hline
\multirow{11}{*}{$1 \times$ scheduler} & ERM & 42.86 \\
& CB Focal Loss \cite{cui2019class} & 38.88 \\
& ERM-DRW \cite{cao2019learning} & 36.27 \\
& ERM-DRS \cite{cao2019learning} & 36.44 \\
& BBN \cite{zhou2020BBN} & 33.71 \\
& $\tau$-normalized \cite{kang2019decoupling} & 34.40 \\
& LDAM-DRW \cite{cao2019learning} & 34.00 \\
& LDAM-DRS \cite{cao2019learning} & 32.73 \\
& LDAM-DRW-SSP \cite{yang2020rethinking} & 33.70 \\
& LDAM-DRW-RSG (ours) & 33.22 \\
& LDAM-DRS-RSG (ours) & {\bf 32.10} \\
\hline
\multirow{5}{*}{$2 \times$ scheduler} & BBN \cite{zhou2020BBN} & 30.38 \\
&$\tau$-normalized \cite{kang2019decoupling}& 30.70 \\
& cRT \cite{kang2019decoupling} & 32.40 \\
& LWS \cite{kang2019decoupling} & 30.50 \\
&LDAM-DRS-RSG (ours) & {\bf29.74} \\
\hline
\end{tabular}}
\vspace{-0.2cm}
\caption{Top-1 error rates of ResNet-50 on iNaturalist 2018.}
\label{tab:soa_inaturalist}
\vspace{-0.5ex}
\end{table}
Table~\ref{tab:soa_inaturalist} shows the top-1 error rate of different methods using ResNet-50 \cite{he2016deep} as the backbone on iNaturalist 2018, and we followed Kang \emph{et al.} \cite{kang2019decoupling} to conduct experiments in two training settings, namely, the 1$\times$ scheduler and the 2$\times$ scheduler.
In the 1$\times$ scheduler experiment, we compare LDAM-DRW-RSG and LDAM-DRS-RSG with the previous LDAM-DRW and LDAM-DRS, respectively. Here, ``DRS'' denotes a deferred class-balanced resampling strategy proposed by Cao \emph{et al.} \cite{cao2019learning}. Note that we could not reproduce the result on iNaturalist 2018 reported in the original paper (32.0\%) \cite{cao2019learning} by using LDAM-DRW. Thus, we report our reproduced results of LDAM-DRW and LDAM-DRS \cite{cao2019learning} based on their publicly available code. The results in Table~\ref{tab:soa_inaturalist} show that we can obtain better results by leveraging the proposed generator, which directly demonstrates the effectiveness of RSG. Moreover, in the 2$\times$ scheduler setting, the top-1 error rate of LDAM-DRS-RSG is further decreased. Thus, it can be seen that RSG helps the model achieve new state-of-the-art results in both training scheduler settings, which demonstrates that RSG is capable of dealing with imbalanced datasets~effectively.
Table~\ref{tab:soa_places} shows the top-1 accuracy on Places-LT. The results show that the performance can be further improved when RSG is combined with LDAM-DRS, showing that RSG is useful. Moreover, when compared with two recent popular methods, namely, $\tau$-normalized \cite{kang2019decoupling} and BBN \cite{zhou2020BBN}, RSG improves the performance of the model on medium-shot and few-shot classes with less accuracy loss on many-shot classes, resulting in a higher overall accuracy and a new state-of-the-art result.
Table~\ref{tab:soa_imagenet} shows the top-1 accuracy on ImageNet-LT. When compared with LDAM-DRS, LDAM-DRS-RSG achieves a higher accuracy, verifying that RSG is able to alleviate the problems caused by imbalanced datasets. RSG can enhance the model and greatly improve its generality on medium-shot and few-shot classes. In addition, by equipping the model with RSG, we also obtain a new state-of-the-art result on ImageNet-LT.
Since all hyperparameters of RSG were fixed after the hyperparameter search process, we can conclude that the hyperparameters and RSG are quite robust to new datasets (i.e., Places-LT, ImageNet-LT, and iNaturalist 2018). If the hyperparameters are further tuned on the new datasets, even better results might be obtained.
\begin{table}
\centering
\resizebox{0.45\textwidth}{!}{
\begin{tabular}{@{}ccccccc@{}}
\hline
Method & Many & Medium & Few & All\\
\hline
Lifted Loss \cite{oh2016deep} & 41.1 & 35.4 & 24.0 & 35.2\\
Focal Loss \cite{lin2017focal} & 41.1 & 34.8 & 22.4 & 34.6\\
Range Loss \cite{zhang2017range} & 41.1 & 35.4 & 23.2 & 35.1\\
FSLwF \cite{gidaris2018dynamic} & 43.9 & 29.9 & 29.5 & 34.9 \\
BBN \cite{zhou2020BBN} & 42.5 & 40.3 & 30.6 & 38.7\\
OLTR \cite{liu2019large} & {\bf 44.7} & 37.0 & 25.3 & 35.9 \\
$\tau$-normalized \cite{kang2019decoupling} & 37.8& 40.7 & 31.8 & 37.9\\
LDAM-DRS \cite{cao2019learning} & 43.3 & 38.3 & 30.7 & 38.6 \\
LDAM-DRS-RSG (ours) & 41.9& {\bf 41.4} & {\bf 32.0} & {\bf 39.3}\\
\hline
\end{tabular}}
\vspace{-0.2cm}
\caption{Top-1 accuracy of ResNet-152 on Places-LT.}
\label{tab:soa_places}
\end{table}
\begin{table}
\centering
\resizebox{0.45\textwidth}{!}{
\begin{tabular}{@{}ccccccc@{}}
\hline
Method & Many & Medium & Few & All\\
\hline
Focal Loss \cite{lin2017focal} & 63.3 & 37.4 & 7.7 & 43.2 \\
OLTR \cite{liu2019large} &52.1 & 39.7 & 20.3 & 41.2 \\
Joint \cite{kang2019decoupling} & {\bf 65.9} & 37.5 & 7.7 & 44.4 \\
NCM \cite{kang2019decoupling} & 56.6 & 45.3 & 28.1 & 47.3 \\
cRT \cite{kang2019decoupling} & 61.8 & 46.2 & 27.4 & 49.6 \\
$\tau$-normalized \cite{kang2019decoupling} & 59.1 & 46.9 & 30.7 & 49.4 \\
LWS \cite{kang2019decoupling} & 60.2 & 47.2 & 30.3 & 49.9 \\
LDAM-DRS \cite{cao2019learning} & 63.7 & 47.6 & 30.0 & 51.4 \\
LDAM-DRS-RSG (ours) & 63.2 & {\bf 48.2} & {\bf 32.3} & {\bf 51.8 }\\
\hline
\end{tabular}}
\vspace{-0.2cm}
\caption{Top-1 accuracy of ResNeXt-50 on ImageNet-LT.}
\label{tab:soa_imagenet}
\vspace{-1.2ex}
\end{table}
\section{Summary and Outlook}
\label{conclusion}
We have introduced a rare-class sample generator (RSG), which is a general building block to mitigate the issue of training on imbalanced datasets. RSG is simple yet effective, since it is an architecture-agnostic and loss-agnostic plug-in module, and it does not bring any additional burdens to the backbone network during the inference phase. In extensive experiments, we have verified the effectiveness of RSG, which has achieved excellent results on four public benchmarks. Since RSG is flexible and orthogonal to most previous methods, future research can focus on improving the RSG module directly by designing more elegant ways to generate higher-quality rare-class samples.
\vspace{-1ex}
\paragraph{\small Acknowledgments.} \small This work was supported by the National Natural Science Foundation of China under the grant 61906063, by the Natural Science Foundation of Tianjin City, China, under the grant 19JCQNJC00400, and by the ``100 Talents Plan'' of Hebei Province, China, under the grant E2019050017.
This work was also supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1 and
by the AXA Research Fund. We also acknowledge the use of the Tier 2 facility
JADE (EP/P020275/1) and GPU computing support by Scan Computers International Ltd.
{\small
\bibliographystyle{ieee_fullname}
\section{\label{intro}Introduction}
The problem of a dipole emitter placed close to a reflective surface has received much interest over the last few decades: seminal work \cite{DREXHAGE} by Drexhage in 1970 first demonstrated that a reflective interface modifies the intrinsic properties of the emitter, influencing both the emission frequency \cite{frequencyshift,Morawitz1969} and the emitter's excited lifetime \cite{Morawitz1969, babiker, babiker2, babiker:superradiance, ficek2005quantum}. Recently, a sound analogue of Drexhage's experiment has been performed to study the acoustic frequency shifts of a gong struck near a hard wall \cite{acoustic}.
Mirrors have widespread use for directing light from sources that emit across an extended solid angle, for example in the form of parabolic reflectors in everyday light sources. On the nanoscale, precise guiding of photons into particular optical modes is of paramount importance for quantum information processing and communication, where on-demand single photons are required \cite{KLM, SinglePhotonTransport1, SinglePhotonTransport2, SinglePhotonTransport3}.
Although micron-sized spherical mirrors for open access microcavities \cite{JasonSmith} have recently enabled the investigation of quantum dot--cavity systems in the strong coupling regime \cite{Warburton,Imamoglu}, the use of sophisticated mirrors remains a challenge for solid-state quantum emitters that are often embedded in heterogeneous layers of substrates with varying refractive indices. This motivates the more straightforward alternative of increasing the photon collection efficiency by placing the emitter above a planar mirroring interface \cite{SurfaceEnhancedRef1, SurfaceEnhancedRef2, SurfaceEnhancedRef3}. Interestingly, the presence of even such a simple mirror also affects the physical properties of the emitter, as discussed above.
In recent years, progress in the synthesis and control of solid-state emitters has enabled experimental investigation of these modified properties of condensed-state emitters including quantum dots (QDs) \cite{nanowire, gerardot} as well as perovskite \cite{imageexcitons} and transition metal dichalcogenide monolayers \cite{MonolayerRef} deposited on reflective surfaces. Circuit QED analogues of an atom and a variable mirror have also been successfully implemented \cite{circuit_qed, circuit_qed2}; these offer the advantage of increased control over the artificial atom's interaction with the mirror. With improved atom-mirror coupling, Hoi \textit{et al.} managed to collect over $99\%$ of the radiation by coupling a transmon microwave emitter to a 1D superconducting waveguide \cite{circuit_qed}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\linewidth]{artistic.pdf}
\caption{Artistic rendition of a driven quantum dot (QD), depicted as a cyan spheroid, in the proximity of a golden metallic surface. The corresponding `image dot' is shown blurred on the other side (`below') of the semiconductor--gold interface. The optical dipoles are depicted as `dumbbells' within the QDs. The vertical red beam represents the laser driving, and the magenta spiralling arrows indicate scattered photons.}
\label{blender}
\end{center}
\end{figure}
Several theoretical investigations \cite{frequencyshift, Morawitz1969, babiker, ficek2005quantum} have shown that an atomic two-level system (TLS) near a reflective surface can be modelled as a pair of emitters: the real one as well as an identical emitter that is placed equidistant from, but on the opposite side of, the interface (see Figs.~\ref{blender} and \ref{schematic}). The basic idea follows the electrostatic concept of an image charge, which captures the surface charge distribution that ensures the electric field boundary conditions are met \cite{jackson}. In the optical case, the `method of images' relies on considering the emission from the combined dipole-image system. This yields the same expression for the modified spontaneous emission (SE) rate as one obtains from a full QED treatment (employing surface-dependent response functions to arrive at the modifications to the emitter's lifetime and transition frequency) \cite{agarwal3}. The image dipole treatment has also been applied to model the surface-induced modifications of more complex structures such as molecules \cite{sers, barnesreview}, multiple dipole emitters \cite{George1985,imagedipoleold2,sanders} and solid-state emitters \cite{nanowire,imageexcitons}. To date, however, the latter have largely ignored the vibrational solid-state environment and the continuous wave (cw) laser driving typical of a resonance fluorescence (RF) setting.
Motivated by these successes, we here present a full image dipole polaron master equation (ME) treatment of a driven TLS (such as, e.g., a quantum dot) in the proximity of a metal surface (see Fig.~\ref{blender}). Our calculations extend previous image dipole studies as follows: (i) we consider driven systems, showing how to incorporate a laser driving term into the dipole and image Hamiltonian; (ii) we discuss the need for introducing an additional `selection rule' to prevent unphysical double excitation; (iii) we demonstrate how a solid-state phonon environment can be accounted for -- via a single bosonic bath that is perfectly correlated across the real emitter and its image.
We will show that the resulting master equation model remains highly intuitive and possesses appealing simplicity. We establish the correctness of this model by comparing its results to those obtained from an alternative calculation which does not involve fictitious entities or rely on ad-hoc assumptions: the half-sided cavity model. This agreement gives us confidence that the model could also be extended to the case of multiple solid-state emitters near a reflective surface, laying the groundwork for the investigation of collective effects in this setting, where we believe that an image approach will be easier to deploy than both the Green's function and the half-sided cavity approach.
This Article is organised as follows: We will start by briefly summarising the results from the established Green's function method for calculating the SE rate of a `bare' dipole emitter. Next, we shall derive a ME for the emitter by treating the metal surface as a half-sided Fabry--P{\'e}rot cavity, providing the benchmark model for a single TLS near the metal surface (see Fig.~\ref{schematic}a). Finally, we formulate the ME using the method of images (see Fig.~\ref{schematic}b). We show that, with suitable alterations, the two-body ME reduces to an effective two level system with rates and energy shifts agreeing with the cavity model.
We then put our model to use to obtain the RF spectrum of the modified system, featuring a phonon sideband, the Mollow triplet, and the ratio of coherently to incoherently scattered light.
\begin{figure}[t!]
\centering
\def\svgwidth{0.4\textwidth}
\input{drawing2a.pdf_tex}
\caption{Two equivalent descriptions of an emitter near a perfect metallic mirror. {\bf Left:} schematic of the Green's function and half-sided cavity approaches. {\bf Right:} the emitter supplemented with a fictitious image dipole. The solid (dashed) red arrows indicate emitted (reflected) photons whereas the solid (dashed) red curve indicates the incident (reflected) driving beam.}
\label{schematic}
\end{figure}
\section{\label{Green}Green's function approach: Brief summary}
We begin by summarising the main results of the Green's function approach for modelling the optical environment of a dipole emitter. This can be applied to obtain the SE rate of an emitter in free space \cite{principles} as well as in the presence of a metallic surface \cite{principles,green,babiker2}. Whilst this approach gives a closed analytical solution for the case of a single dipole, a numerical route has to be taken to model a system comprised of a larger number of emitters \cite{sanders, principles}, even in the absence of a driving field and phonon environments. Therefore, we here limit the discussion to a single `bare' emitter as an independent reference point for the SE rate (and energy shift) in that idealised configuration.
Let the dipole be situated at position $\mathbf{r}_d$, where $\mathbf{r}_d$ is perpendicular to a metal surface containing the origin of the coordinate system. In the Green's function approach, the emitter is usually modelled as a classical dipole oscillating harmonically with amplitude $\mathbf{x}$ at frequency $\omega_0$ about $\mathbf{r}_d$ \cite{sanders}. In vacuum, the SE rate can be calculated as
\begin{equation}\label{SEGreen}
\gamma^{pt}_0(\omega_0) = \frac{4 \omega^2_0}{ \pi \epsilon_0 \hbar c^2}\left[ \hat{\mathbf{d}} \cdot \mathrm{Im}\{ \mathbf{G}(\mathbf{r}_d, \mathbf{r}_d; \omega_0) \} \cdot \hat{\mathbf{d}} \right]~,
\end{equation}
where $\epsilon_0$ is the electric permittivity of vacuum, $c$ is the speed of light, $\hat{\mathbf{d}}$ is a unit vector indicating the direction of the emitter's dipole moment, and $\mathbf{G}(\mathbf{r}_d, \mathbf{r}_d; \omega_0)$ is the Fourier transform of the dyadic Green's function at the emitter's position \cite{principles}. In Ref.~\cite{sanders}, Choquette \textit{et al.} studied the collective decay rate of $N$ such classical emitters near a planar interface, arriving at a diagonal Green's function matrix, so that Eq.~\eqref{SEGreen} allows one to find the SE rate for arbitrary dipole orientations.
To obtain the SE rate in a dielectric environment, we consider the following expression for the normalised dissipated power:
\begin{equation}\label{power}
\frac{P}{P_0} = 1 + \frac{6 \pi \epsilon_0 \epsilon_r}{|\mathbf{d}|^2 k^3} \mathrm{Im} \{ \mathbf{d}^* \cdot \mathbf{E}_s(\mathbf{r}_d) \} ~,
\end{equation}
where $P_0$ is the rate of energy dissipation in free space, $\epsilon_r$ and $k$ are the relative permittivity and wave vector magnitude in the dielectric surrounding the emitter, respectively, and $\mathbf{E}_s(\mathbf{r}_d)$ is the scattered electric field at the dipole's position (which, for a single dipole near the surface, corresponds to the reflected field) \cite{principles}. The connection between the Green's function and the decay rate of the dipole emitter is established via the relationship
\begin{equation}
\frac{P}{P_0} = \frac{\gamma^{pt}(\omega_0)}{\gamma^{pt}_0(\omega_0)} ~.
\end{equation}
Rearranging the above then yields an integral expression for the desired SE rate $\gamma^{pt}(\omega_0)$.
We note that the Green's function method is not limited to ideal metallic interfaces but can also be applied straightforwardly to reflective dielectric interfaces, simply by substituting appropriate dielectric constants into the above relevant expressions \cite{principles}. In this case, one obtains qualitatively very similar results for a dielectric mirror, especially at larger separations \cite{principles}. Whilst the method of images fundamentally relies on the assumption of a perfectly conducting interface, it is fair to assume its qualitative predictions will by analogy also carry across to the case of dielectric mirrors.
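As an aside, the consistency of Eq.~\eqref{power} with the closed-form surface factor $\mathcal{F}_{cav}$ derived below [Eq.~\eqref{Fcav}] is easy to verify numerically: for a dipole parallel to a perfect mirror, the scattered field $\mathbf{E}_s$ at the dipole position is simply the field of the antiparallel image dipole a distance $2r_d$ away. The following Python sketch performs this check; it is not part of the formal development, and all variable names and parameter values are our own illustrative choices.
\begin{verbatim}
# Numerical cross-check of Eq. (power): for a dipole parallel to a
# perfect mirror, E_s at the dipole equals the field of the antiparallel
# image dipole at distance d = 2*z. Compare with 1 + F_cav(k z).
import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)

def P_ratio_parallel(kz, eps_r=1.0):
    p, k = 1.0, 1.0        # dipole moment and wavenumber (arbitrary units)
    d = 2.0 * kz / k       # distance to the image dipole
    # x-component of the (antiparallel) image-dipole field at the emitter:
    Es = -(p / (4*np.pi*eps0*eps_r)) * np.exp(1j*k*d) \
         * (k**2/d + 1j*k/d**2 - 1/d**3)
    return 1.0 + (6*np.pi*eps0*eps_r/(p**2 * k**3)) * np.imag(p * Es)

def F_cav(x):              # surface factor derived later in the text
    return 1.5*(-np.sin(2*x)/(2*x) - np.cos(2*x)/(2*x)**2
                + np.sin(2*x)/(2*x)**3)

for kz in (0.5, 1.0, 2.0, 5.0):
    print(kz, P_ratio_parallel(kz), 1 + F_cav(kz))  # the two columns agree
\end{verbatim}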
\section{\label{halfcavity}Half-sided Cavity Model}
In the previous section, we discussed how to determine the SE rate for an undriven emitter interacting only with a photonic environment. However, in order to fully model a solid-state emitter such as a QD, we need to include interactions between the emitter and its phonon environment \cite{Machnikowski:Phonons,Machnikowski:Rabi}. Now we shall derive the polaron ME for a TLS near a metal surface, by modelling the latter as a half-sided Fabry--P{\'e}rot cavity positioned at $z=0$ and lying in the $xy$ plane, with the QD positioned at $z = r_d \geq 0$, where $r_d = |\mathbf{r}_d|$. Our calculation follows the general cavity model from Refs.~\cite{demartini, ficek2005quantum}, taking the appropriate limits for the reflectivity and transmissivity of the two mirrors to obtain, effectively, only a single perfectly reflecting surface (see Fig.~\ref{cavityvecs}).
\begin{figure}[t!]
\centering
\def\svgwidth{0.7\columnwidth}
\input{test.pdf_tex}
\caption{The limiting case of the Fabry--P{\'e}rot cavity, effectively reducing to a single perfectly reflecting surface. The arrows indicate the wavevectors in \eqref{cavityRabi} and \eqref{cavity_spatial_fns}, and $r$ denotes the surface reflection coefficient \cite{demartini, ficek2005quantum}.}
\label{cavityvecs}
\end{figure}
\subsection{\label{halfcavity:hamiltonian}Hamiltonian}
We consider a driven TLS with ground state $\Ket{0}$ and excited state $\Ket{X}$, which is governed by the following Hamiltonian in a rotating frame and after the usual rotating wave approximation ($\hbar = 1$)
\begin{equation}\label{CavityHamiltonian}
H_S = \delta \Ket{X} \Bra{X} + \frac{\Omega^*_{cav}}{2}\Ket{0} \Bra{X} + \mathrm{H.c.} ~,
\end{equation}
where H.c.~denotes the Hermitian conjugate and $\delta = \omega_0 - \omega_l$ is the detuning between the TLS transition frequency $\omega_0$ and the laser frequency $\omega_l$. $\Omega_{cav}$ is the effective Rabi frequency in the presence of the metal surface, given by
\begin{equation}\label{cavityRabi}
\Omega_{cav} = 2\sqrt{\frac{\omega_l}{2 \epsilon V}}~ \mathbf{d} \cdot \left( \mathbf{e}_{l_-} \mathrm{e}^{-i \mathbf{q}_{l} r} - \mathbf{e}_{l_+} \mathrm{e}^{i \mathbf{q}_{l} r} \right) ~,
\end{equation}
where $\mathbf{q}_l$ is the laser field wavevector, with polarisation $\mathbf{e}_{l_-}$ ($\mathbf{e}_{l_+}$ after reflection), as shown in Fig.~\ref{cavityvecs} for the case of the laser beam being perpendicular to the surface. Photon and phonon environments are modelled by the Hamiltonians
\begin{align}
H^{pt}_E &= \sum_{\mathbf{q}, \, \lambda} \nu_\mathbf{q} a^\dagger_{\mathbf{q}\lambda} a_{\mathbf{q}\lambda}~, \\
H^{pn}_E &= \sum_{\mathbf{k}} \omega_\mathbf{k} b^\dagger_\mathbf{k} b_\mathbf{k} ~,
\end{align}
where $b^\dagger_\mathbf{k}$ and $a^\dagger_{\mathbf{q}\lambda}$ ($b_\mathbf{k}$ and $a_{\mathbf{q}\lambda}$) are the $\mathbf{k}$-phonon and $\mathbf{q}\lambda$-photon creation (annihilation) operators, respectively. In the dipole approximation, the photon interaction Hamiltonian is of the form
\begin{equation}
H^{pt}_I = -\mathbf{d} \cdot \mathbf{E}(\mathbf{r}_d) (\Ket{0} \Bra{X} + \Ket{X} \Bra{0}) ~
\label{eq:hpt0}
\end{equation}
with $\mathbf{E}(\mathbf{r})$ being the Schr{\"o}dinger picture electric field for the half-sided cavity \cite{ficek2005quantum, demartini},
\begin{equation}\label{electricfield}
\mathbf{E}(\mathbf{r}) = i \sum_{\mathbf{q}, \lambda} \left[ \mathbf{u}_{\mathbf{q} \lambda}(\mathbf{r}) a_{\mathbf{q} \lambda} - \mathrm{H.c.} \right] ~.
\end{equation}
The spatial mode functions $\mathbf{u}_{\mathbf{q} \lambda}(\mathbf{r})$ for an ideal half-sided cavity (of perfect reflectivity) are given by
\begin{equation}\label{cavity_spatial_fns}
\mathbf{u}_{\mathbf{q} \lambda}(\mathbf{r}) = \sqrt{\frac{\omega_{\mathbf{q} \lambda}}{2 \epsilon V}}\left( \mathbf{e}_{\mathbf{q}_- \lambda} \mathrm{e}^{i \mathbf{q}_- r} - \mathbf{e}_{\mathbf{q}_+ \lambda} \mathrm{e}^{i \mathbf{q}_+ r} \right)~.
\end{equation}
Here, $\mathbf{q}_-$ ($\mathbf{q}_+$) is the incident (reflected) wavevector, with corresponding polarisation $\mathbf{e}_{\mathbf{q}_- \lambda}$ ($\mathbf{e}_{\mathbf{q}_+ \lambda}$). For simplicity, we have assumed that the dipole moment $\mathbf{d}$ of the TLS is real.
The interaction with the phonon bath can be generically represented by the Hamiltonian \cite{Mahan}
\begin{equation}
H^{pn}_I = \Ket{X} \Bra{X}\sum_{\mathbf{k}} g_\mathbf{k} ( b^\dagger_\mathbf{k} + b_\mathbf{k} ) ~,
\end{equation}
where $g_\mathbf{k}$ is the coupling strength of the TLS's excited electronic configuration with phonon mode $\mathbf{k}$. We move to the polaron frame by employing the standard Lang--Firsov-type transformation $U = e^S$, $S = \Ket{X}\Bra{X} \sum_{\mathbf{k}} (g_\mathbf{k} / \omega_\mathbf{k}) ( b^\dagger_\mathbf{k} - b_\mathbf{k} )$, obtaining the following transformed system Hamiltonian:
\begin{align}\label{polaronsystem}
\begin{split}
H_{SP} = \delta' \Ket{X} \Bra{X} &+ \frac{\Omega^*_{cav}}{2}\Ket{0} \Bra{X} B_- \\
&+ \frac{\Omega_{cav}}{2}\Ket{X} \Bra{0} B_+~,
\end{split}
\end{align}
where $\delta' = \delta - \sum_\mathbf{k} g^2_\mathbf{k} / \omega_\mathbf{k}$ (becoming $\delta - \int_0^\infty \mathrm{d}\omega\, J_{pn}(\omega) / \omega$ in the continuum limit), and the phonon bath operators $B_\pm$ are defined as $B_\pm = \Pi_\mathbf{k} D_\mathbf{k} (\pm g_\mathbf{k} / \omega_\mathbf{k})$, with $D_\mathbf{k}(\pm \alpha) = \exp[\pm(\alpha b^\dagger_\mathbf{k} -\alpha^* b_\mathbf{k})]$ being the $\mathbf{k}$th mode displacement operator. For numerical results we shall later use a superohmic exciton-phonon spectral density $J_{pn}(\omega)$ with a Gaussian cut-off at frequency $\omega_c$ that is appropriate for self-assembled III-V quantum dots \cite{Ramsay, phononrabi2}:
\begin{equation}\label{phonon_spectral_density}
J_{pn}(\omega) = \alpha \omega^3 \mathrm{e}^{-\frac{\omega^2}{\omega_c^2}}~.
\end{equation}
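For orientation, the polaron shift entering $\delta'$ takes the closed form $\int_0^\infty \mathrm{d}\omega\, J_{pn}(\omega)/\omega = \alpha\sqrt{\pi}\,\omega_c^3/4$ for this spectral density. A minimal numerical sketch follows; the parameter values are typical literature values for InGaAs QDs and are assumptions rather than fitted inputs.
\begin{verbatim}
# Polaron shift for J(w) = alpha * w^3 * exp(-w^2/wc^2); assumed parameters.
import numpy as np
from scipy.integrate import quad

alpha, omega_c = 0.027, 2.2   # ps^2 and ps^-1 (assumed typical QD values)

def J_pn(w):
    return alpha * w**3 * np.exp(-(w/omega_c)**2)

shift, _ = quad(lambda w: J_pn(w)/w, 0, np.inf)
print(shift, alpha*np.sqrt(np.pi)*omega_c**3/4)  # numeric vs analytic
\end{verbatim}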
In the polaron frame the light-matter interaction Hamiltonian Eq.~\eqref{eq:hpt0} becomes
\begin{align}
\begin{split}
H^{pt}_{IP} = &i \Ket{0} \Bra{X} B_- \sum_{\mathbf{q}, \lambda} \mathbf{d}\cdot\mathbf{u}^*_{\mathbf{q}\lambda}(\mathbf{r}_d) a^\dagger_{\mathbf{q}\lambda} \\
-&i \Ket{X} \Bra{0} B_+ \sum_{\mathbf{q}, \lambda} \mathbf{d}\cdot\mathbf{u}_{\mathbf{q}\lambda}(\mathbf{r}_d) a_{\mathbf{q}\lambda} ~.
\end{split}
\end{align}
With the definitions $A_1^{pt} = \Ket{0} \Bra{X}$, $A_2^{pt} = A_1^{pt \dagger}$, $B^{pt}_{1/2} \equiv B_\mp$, $C_1 = i \sum_{\mathbf{q}, \lambda} \mathbf{d}\cdot\mathbf{u}^*_{\mathbf{q}\lambda}(\mathbf{r}_d) a^\dagger_{\mathbf{q}\lambda}$, and $C_2 = C^\dagger_1$, we can compactly write the above Hamiltonian as
\begin{equation}
\label{eq:compactHpti}
H^{pt}_{IP} = \sum_{i=1}^2 A^{pt}_i \otimes B^{pt}_i \otimes C_i ~.
\end{equation}
Since the driving terms in Eq.~\eqref{polaronsystem} contain both system and environment operators, we identify these as our new exciton-phonon interaction \cite{phononreview}. This new interaction term possesses a non-zero expectation value with respect to the thermal equilibrium bath state $\rho^{pn}_E$; tracing out the phonon bath degrees of freedom, we thus obtain
%
\begin{align}
\mathrm{Tr}_E^{pn}\left[\left(\frac{\Omega^*_{cav}}{2}\Ket{0} \Bra{X} B_- + \frac{\Omega_{cav}}{2}\Ket{X} \Bra{0} B_+\right)\rho^{pn}_E\right] \nonumber \\
= \frac{\Omega^*_{cav}}{2}\langle B \rangle\Ket{0} \Bra{X} + \frac{\Omega_{cav}}{2}\langle B \rangle\Ket{X} \Bra{0} ~,
\end{align}
%
where
\begin{equation}
\langle B \rangle = \exp\left[ -\frac{1}{2}\int_0^\infty \mathrm{d} \omega \frac{J_{pn}(\omega)}{\omega^2} \coth(\beta \omega / 2) \right] ~.
\end{equation}
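A short numerical sketch of $\langle B \rangle$ for the spectral density of Eq.~\eqref{phonon_spectral_density} is given below (same assumed parameters as above; with $\hbar = k_B = 1$ and frequencies in ps$^{-1}$, the inverse temperature is $\beta \approx 7.64\,\mathrm{ps\,K}/T$).
\begin{verbatim}
# <B> at temperature T for the assumed super-ohmic spectral density.
import numpy as np
from scipy.integrate import quad

alpha, omega_c, T = 0.027, 2.2, 10.0    # assumed QD parameters; T in K
beta = 7.6382 / T                        # hbar/(k_B T) expressed in ps

def integrand(w):
    J = alpha * w**3 * np.exp(-(w/omega_c)**2)
    return J / w**2 / np.tanh(beta*w/2)  # J/w^2 * coth(beta w / 2)

B = np.exp(-0.5 * quad(integrand, 1e-12, np.inf)[0])
print(B)   # of order 0.9 at 10 K for these parameters
\end{verbatim}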
%
In order to expand perturbatively, we therefore define the system-bath interaction with respect to this value. To this end, we absorb the expectation value into the system Hamiltonian by defining $\mathcal{B}_\pm = B_\pm - \langle B \rangle$ and $\Omega^{pn}_{cav} = \langle B \rangle \Omega_{cav}$ and regrouping our system and interaction Hamiltonian terms, obtaining:
\begin{align}\label{newinteraction}
H_{SP} &= \delta' \Ket{X}\Bra{X} + \frac{\Omega^{pn *}_{cav}}{2}\Ket{0} \Bra{X} + \frac{\Omega^{pn}_{cav}}{2}\Ket{X} \Bra{0}~, \\
H_{IP}^{pn} &= \frac{\Omega^*_{cav}}{2}\Ket{0} \Bra{X} \mathcal{B}_- + \frac{\Omega_{cav}}{2}\Ket{X} \Bra{0} \mathcal{B}_+ ~.
\end{align}
As for Eq.~\eqref{eq:compactHpti}, we introduce operator labels $B^{pn}_{1/2} = \mathcal{B}_\mp$, $A^{pn}_1 = \Omega^*_{cav} /2 \, \Ket{0} \Bra{X}$ and $A^{pn}_2 = A_1^{pn \dagger}$ to recast the above interaction Hamiltonian into the compact form
\begin{equation}
\label{eq:compactHphi}
H_{IP}^{pn} = \sum_{i=1}^2 A^{pn}_i \otimes B^{pn}_i~,
\end{equation}
which will prove useful for the derivation of the master equation.
\subsection{Master Equation}\label{cavityME}
Having obtained our Hamiltonian in the polaron frame and partitioned it into system, interaction and environment parts, we can make use of the generically derived microscopic second-order Born-Markov master equation of Ref.~\cite{breuer} (Eqn. 3.118). The interaction terms Eqs.~\eqref{eq:compactHpti} and \eqref{eq:compactHphi} are of the required form underlying this derivation, and the resultant ME (in the interaction picture) reads:
\begin{align}\label{generalME}
\diff{}{t} &\rho_{SP}(t) = \\
&-\int_0^\infty \mathrm{d}\tau \; \mathrm{Tr}_E [ H_{IP}(t), [ H_{IP}(t-\tau), \rho_{SP}(t)\otimes\rho_E(0) ] ]~, \nonumber
\end{align}
where $H_{IP}(t) = H^{pn}_{IP}(t) + H^{pt}_{IP}(t)$, and $\mathrm{Tr}_E$ denotes the trace over both environments \cite{breuer}. It can easily be shown \cite{phononreview} that the right-hand side (RHS) of the above equation can be split into two parts:
\begin{align}\label{splitgeneralME}
\diff{}{t} &\rho_{SP}(t) = \\
&-\int_0^\infty \mathrm{d}\tau \mathrm{Tr}^{pn}_E [ H^{pn}_{IP}(t), [ H^{pn}_{IP}(t-\tau), \rho_{SP}(t)\otimes\rho^{pn}_E(0) ] ]~ \nonumber \\
&-\int_0^\infty \mathrm{d}\tau \mathrm{Tr}_E [ H^{pt}_{IP}(t), [ H^{pt}_{IP}(t-\tau), \rho_{SP}(t)\otimes\rho_E(0) ] ]~. \nonumber
\end{align}
Since we assume that the (initial) environmental state is thermal, $\rho_E(0) $ factorises: $\rho_E(0) = \rho^{pn}_E(0) \otimes \rho^{pt}_E(0)$.
\subsubsection{Phonon bath correlations}
We proceed by analysing the first term on the RHS of Eq.~\eqref{splitgeneralME}, which captures the influence of phonons on the TLS dynamics with scattering rates determined by phonon correlation functions \cite{Ulhaq2013, PhononRates, spectrum}. In the ME formalism, the rate $\gamma(\omega)$ of a dissipative process is given by $\gamma(\omega) = 2 \mathrm{Re}\left[ \int_0^\infty \mathrm{d}s K(s) \right]$, where $K(s)$ is the relevant correlation function [{\it c.f.}~Eq.~(3.137) in Ref.~\cite{breuer}]. For our phonon dissipator, these functions are given by
\begin{align}
C^{pn}_{ii}(\tau) &= \mathrm{Tr}^{pn}_{E} \left[ \mathcal{B}^\dagger_\pm(\tau) \mathcal{B}_\pm(0) \rho^{pn}_E(0)\right] \nonumber\\
& = \langle B \rangle^2 (\mathrm{e}^{\phi(\tau)} -1)~, \label{phonon_cor_fns_cav:reg}\\
C^{pn}_{ij}(\tau) &= \mathrm{Tr}^{pn}_{E} \left[ \mathcal{B}^\dagger_\pm(\tau) \mathcal{B}_\mp(0) \rho^{pn}_E(0)\right] \nonumber \\
&= \langle B \rangle^2 (\mathrm{e}^{-\phi(\tau)} -1)~, \label{phonon_cor_fns_cav:cross}
\end{align}
where $i, j \in \{ 1,2 \}$, $i \neq j$. After some algebra, we obtain a phonon dissipator of the form
\begin{align*}
\begin{split}
&\gamma^{pn}(\omega') \mathcal{L}[\sigma_-] + \gamma^{pn}(-\omega') \mathcal{L}[\sigma_+] \\[10pt]
&- \gamma^{pn}_{cd}(\omega') \mathcal{L}_{cd}[\sigma_-] - \gamma^{pn}_{cd}(-\omega') \mathcal{L}_{cd}[\sigma_+] ~,
\end{split}
\end{align*}
where $\mathcal{L}[C] = C \rho_{SP} C^\dagger - \frac{1}{2}\{C^\dagger C, \rho_{SP} \}$ and $\mathcal{L}_{cd}[C] = C \rho_{SP} C - \frac{1}{2}\{C^2, \rho_{SP} \}$. The rates $\gamma^{pn}(\pm\omega')$ and $\gamma_{cd}^{pn}$ are
\begin{align*}
\gamma^{pn}(\pm \omega') &= \frac{\left| \Omega_{cav}^{pn} \right|^2}{4} \int_{-\infty}^\infty \mathrm{d}\tau \; \mathrm{e}^{\pm i \omega' \tau} \left( \mathrm{e}^{\phi(\tau)} - 1 \right)~, \\
\gamma^{pn}_{cd}(\omega') &= \frac{\left( \Omega^{pn*}_{cav} \right)^2}{4} \int_{-\infty}^\infty \mathrm{d}\tau \; \cos(\omega' \tau) \left( 1- \mathrm{e}^{-\phi(\tau)} \right)~, \\
\gamma^{pn}_{cd}(-\omega') &= \frac{\left( \Omega_{cav}^{pn} \right)^2}{4} \int_{-\infty}^\infty \mathrm{d}\tau \; \cos(\omega' \tau) \left( 1- \mathrm{e}^{-\phi(\tau)} \right)~,
\end{align*}
where $\phi(\tau) = \int_0^\infty \mathrm{d} \omega \frac{J_{pn}(\omega)}{\omega^2} [\coth(\beta \omega / 2)\cos(\omega \tau) - i \sin(\omega \tau)]$. Our rates match the ones obtained by Roy-Choudhury \textit{et al.} \cite{PhononRates} in previous work\footnote{Ref.~\cite{PhononRates} introduces an additional, phenomenological, pure dephasing term, which we have not included in this paper.}. The rates $\gamma^{pn}(\omega')$ and $\gamma^{pn}(-\omega')$ correspond to enhanced radiative decay and incoherent excitation of the TLS, respectively, whilst $\gamma^{pn}_{cd}(\pm\omega')$ is associated with cross-dephasing \cite{Ulhaq2013}.
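These rates are readily evaluated numerically; the sketch below (again with assumed parameters, and using the conjugation symmetry $\phi(-\tau)=\phi(\tau)^*$ to fold the integral onto $\tau \geq 0$) illustrates the procedure.
\begin{verbatim}
# Numerical evaluation of gamma^pn(+/- w') via phi(tau); assumed parameters.
import numpy as np

alpha, omega_c, T = 0.027, 2.2, 10.0     # assumed QD phonon parameters
beta = 7.6382 / T                        # hbar/(k_B T) in ps
Omega_pn = 0.2                           # renormalised Rabi freq. (ps^-1)

w = np.linspace(1e-6, 8*omega_c, 2000)   # phonon frequency grid (ps^-1)
J = alpha * w**3 * np.exp(-(w/omega_c)**2)
coth = 1.0 / np.tanh(beta*w/2)
tau = np.linspace(0.0, 10.0, 800)        # phi(tau) decays within a few ps

wt = np.outer(tau, w)
phi = np.trapz(J/w**2 * (coth*np.cos(wt) - 1j*np.sin(wt)), w, axis=1)

def gamma_pn(wp):
    # full line integral = 2 Re of the tau >= 0 half, since the
    # integrand is conjugate-symmetric under tau -> -tau
    return (abs(Omega_pn)**2/4) * 2*np.real(
        np.trapz(np.exp(1j*wp*tau)*(np.exp(phi)-1), tau))

print(gamma_pn(+0.2))   # phonon-enhanced radiative decay
print(gamma_pn(-0.2))   # incoherent excitation (smaller)
\end{verbatim}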
\subsubsection{Electromagnetic bath correlations}\label{EM_bath_cor_cav}
Having arrived at a `Lindblad-like' phonon dissipator\footnote{Note that we have not performed a secularisation and our ME is therefore not strictly of Lindblad form}, we now turn our attention to the second term of the RHS of Eq.~\eqref{splitgeneralME}. This term will yield the modified SE rate of the TLS near the cavity, as well as account for the frequency shift via a unitary renormalisation term. As in the previous section, we begin by explicitly writing down the correlation functions obtained from Eq.~\eqref{splitgeneralME}:
\begin{align}\label{photon_cor_fns_cav}
&C^{pt}_{ij}(\tau) \\
&= \mathrm{Tr}_{E} \left[ \left(B^{pt \dagger}_i(\tau) \otimes C^\dagger_i(\tau) \right) \left( B^{pt}_j(0) \otimes C_j(0) \right) \rho_E(0)\right]~, \nonumber \\
&= \mathrm{Tr}^{pn}_{E} \left[ B^{pt \dagger}_i(\tau) B^{pt}_j(0) \rho^{pn}_E(0)\right] \mathrm{Tr}^{pt}_{E} \left[ C^\dagger_i(\tau) C_j(0) \rho^{pt}_E(0)\right]~, \nonumber
\end{align}
where $i, j \in \{ 1,2 \}$. After substituting for the bath operators, we make use of the following relations \cite{breuer}
\begin{align*}
\mathrm{Tr}^{pt}_{E} \left[ a_{\mathbf{q} \lambda} a_{\mathbf{q}' \lambda'} \rho^{pt}_E(0) \right] &= \mathrm{Tr}^{pt}_{E} \left[ a^\dagger_{\mathbf{q} \lambda} a^\dagger_{\mathbf{q}' \lambda'} \rho^{pt}_E(0) \right] &=&~0 ~,\\
\mathrm{Tr}^{pt}_{E} \left[ a_{\mathbf{q} \lambda} a^\dagger_{\mathbf{q}' \lambda'} \rho^{pt}_E(0) \right] &= \delta_{\mathbf{q}\mathbf{q}'}\delta_{\lambda \lambda'} (1 + N(\nu_\mathbf{q})) &\approx&~\delta_{\mathbf{q}\mathbf{q}'}\delta_{\lambda \lambda'}~, \\[10pt]
\mathrm{Tr}^{pt}_{E} \left[ a^\dagger_{\mathbf{q} \lambda} a_{\mathbf{q}' \lambda'} \rho^{pt}_E(0) \right] &= \delta_{\mathbf{q}\mathbf{q}'}\delta_{\lambda \lambda'} N(\nu_\mathbf{q}) &\approx& ~0~,
\end{align*}
where we have assumed that $\forall \omega > 0$, the Planck distribution $N(\omega) \approx 0$\footnote{Only (optical) photon modes with energies close to $\omega_0$ are relevant, for which this approximation is typically justified under ambient conditions. However, the generalisation to a finite temperature photon bath is also straightforward.}. This means that we only have a single non-vanishing correlation function $C^{pt}_{11}(\tau)$. Following Ref.~\cite{FermiGR2}, we consider well-separated photon and phonon correlation times (appropriate for an unstructured photonic environment), so that $C^{pt}_{11}(\tau)$ reduces to the photon bath correlation function in the absence of a phonon bath. The latter is given by
\begin{equation}
C^{pt}_{11}(\tau) = \frac{|\mathbf{d}|^2 }{6 \pi^2 \epsilon c^3}\int_0^\infty \mathrm{d}\nu_\mathbf{q} \; \nu^3_\mathbf{q}\, \mathrm{e}^{-i \nu_\mathbf{q} \tau} [1 + \mathcal{F}_{cav}(q r_d)]~,
\end{equation}
where the term
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\begin{flushleft}
\includegraphics[width=0.9\textwidth]{rate.pdf}
\end{flushleft}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\begin{flushleft}
\includegraphics[width=0.9\textwidth]{energy.pdf}
\end{flushleft}
\end{subfigure}
\caption{Spontaneous emission rate (left) and energy shift (right) for the half-sided cavity model (red), where we have divided expressions \eqref{SE_rate_cav} and \eqref{energy_shift_cav} by the bare SE rate in order to avoid dependence on its value. The blue curve denotes the energy shift obtained using a full QED approach \cite{agarwal3}, showing a distinctively different behaviour at smaller separations ($\lesssim 0.05 \lambda_0$) when compared to the half-sided cavity and image approaches. For the SE rate, the oscillations persist even at larger separations, of the order of the emission wavelength $\lambda_0$. As $x\rightarrow\infty$, the SE rate tends to that of a bare emitter and the energy shift vanishes, as expected.}
\label{energyrate}
\end{figure*}
\begin{equation}\label{Fcav}
\mathcal{F}_{cav}(x) = \frac{3}{2}\left( -\frac{\sin(2 x)}{2 x} - \frac{\cos(2 x)}{(2 x)^2} + \frac{\sin(2 x)}{(2 x)^3} \right)~,
\end{equation}
describes the influence of the metal surface. The SE rate then evaluates to
\begin{equation}\label{SE_rate_cav}
\gamma_{cav}^{pt}(\omega') = (1+ \mathcal{F}_{cav}(q_0 r_d))\gamma_0^{pt}(\omega') ~,
\end{equation}
where $\gamma^{pt}_0(\omega')$ is the bare SE rate for an isolated TLS, given by $\gamma^{pt}_0(\omega') = |\mathbf{d}|^2 \omega'^3 / 3 \pi \epsilon c^3$. The imaginary part of the correlation tensor has two components: the first term is the usual Lamb shift (whose expression is divergent unless one adopts a full QED approach based on a relativistic Hamiltonian and appropriate renormalisation \cite{Gardiner}). The second term is the additional, surface-induced energy shift and takes the form \cite{agarwal, agarwal2, ficek2005quantum}
\begin{equation}\label{energy_shift_cav}
V_{cav} = \frac{1}{2} \mathcal{G}_{cav}(q_0 r_d)\gamma^{pt}_0(\omega')~,
\end{equation}
where the function $\mathcal{G}_{cav}$ is given by
\begin{equation}
\mathcal{G}_{cav}(x) = \frac{3}{2} \left(- \frac{\sin(2 x)}{(2 x)^2} - \frac{\cos(2 x)}{(2 x)^3} + \frac{\cos(2 x)}{2 x}\right)~.
\end{equation}
Overall, the transition frequency for the TLS in the polaron frame is now given by
\begin{equation}\label{shifted_cavity_frequency}
\tilde{\omega}' = \omega' + V_{cav}~
\end{equation}
and the final polaron frame ME takes the following form in the Schr\"odinger picture:
\begin{align}\label{ME_polaron_cav}
\begin{split}
\diff{}{t} &\rho_{SP} = \\
&-\frac{i}{\hbar} [H'_{SP}, \rho_{SP}(t)] + D_{pn}(\rho_{SP}) +D_{pt}(\rho_{SP}) ~,
\end{split}
\end{align}
where $D_{pn}(\rho_{SP}) = \gamma^{pn}(\omega') \mathcal{L}[\sigma_-] + \gamma^{pn}(-\omega') \mathcal{L}[\sigma_+] - \gamma^{pn}_{cd}(\omega') \mathcal{L}_{cd}[\sigma_-] - \gamma^{pn}_{cd}(-\omega') \mathcal{L}_{cd}[\sigma_+] $ and $D_{pt}(\rho_{SP}) = \gamma^{pt}_{cav}(\omega') \mathcal{L}[\sigma_-]$. $H'_{SP}$ is the system Hamiltonian in the polaron frame including the energy shift from Eq.~\eqref{energy_shift_cav}.
In summary, Eqs.~\eqref{SE_rate_cav} and \eqref{energy_shift_cav} capture how the presence of a metal surface (here treated as a perfect reflector) alters the SE rate and the transition frequency of the TLS, respectively.
Considering our results in the absence of phonons, we find full analytical agreement with the prior literature on the image dipole approach \cite{ficek2005quantum,agarwal3}, and except for very small separations, we also have excellent numerical agreement with the full QED approach \cite{agarwal3}. We show this agreement in Fig.~\ref{energyrate} as a function of the distance of the emitter to the surface. The dashed vertical lines at multiples of $1/(8n)$ (in units of $\lambda_0$, where $n$ is the refractive index of the host material, taken to be GaAs in our case), obtained from Eqns.~\eqref{SE_rate_cav} and \eqref{energy_shift_cav}, serve as a guide to the eye for the approximate frequency of oscillation, and demonstrate that multiple periods occur within a wavelength's separation of emitter and surface. In the limiting case $r_d \rightarrow \infty$, we have $V_{cav} \rightarrow 0$ and $\gamma_{cav}^{pt}(\omega') \rightarrow \gamma_0^{pt}(\omega')$, i.e.~we recover the case of an isolated QD as required.
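The distance dependence shown in Fig.~\ref{energyrate} is straightforward to reproduce from Eqs.~\eqref{SE_rate_cav} and \eqref{energy_shift_cav}; a minimal sketch follows, in which the refractive index and emission wavelength are representative assumed values.
\begin{verbatim}
# Surface-modified SE rate and energy shift in units of the bare rate.
import numpy as np

def F_cav(x):
    return 1.5*(-np.sin(2*x)/(2*x) - np.cos(2*x)/(2*x)**2
                + np.sin(2*x)/(2*x)**3)

def G_cav(x):
    return 1.5*(-np.sin(2*x)/(2*x)**2 - np.cos(2*x)/(2*x)**3
                + np.cos(2*x)/(2*x))

n, lam0 = 3.5, 0.94                      # GaAs index, wavelength in um (assumed)
q0 = 2*np.pi*n/lam0                      # wavevector in the host material
r_d = np.linspace(0.01, 1.0, 500)*lam0   # emitter-surface separation
rate_ratio = 1 + F_cav(q0*r_d)           # gamma_cav / gamma_0
shift_ratio = 0.5*G_cav(q0*r_d)          # V_cav / gamma_0
# parallel dipole: rate suppressed towards zero at contact,
# oscillating about unity further out
print(rate_ratio.min(), rate_ratio.max())
\end{verbatim}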
\section{Image Emitter Approach}\label{image:dot:approach}
Models involving emission from a combination of two identical TLS have been used extensively to study the modifications to the SE rate of an emitter in the proximity of a dielectric or metal surface. After setting up the appropriate Hamiltonian, we shall once more derive a polaron frame ME. We then show that this ME is identical to the one derived using the half-sided cavity approach, provided we disregard certain terms in order to constrain the dynamics of our two emitter model to the `right' subspace.
\subsection{Setup}
We focus on the case where the dipole is oriented parallel to the surface\footnote{We discuss modifications for the perpendicular case in the Appendix} (as is appropriate for a typical self-assembled QD emitter), implying that the image dipole will be antiparallel \cite{babiker,agarwal,agarwal2,agarwal3}. In what follows, we shall once again take the {\it real} emitter to be situated at a distance $r_d > 0$ along the positive $z$-axis, with the dipole vector oriented in the positive $x$-direction. Hence, the corresponding {\it image} dipole is positioned at $z = -r_d$, with its dipole vector being parallel to the negative $x$-axis.
\subsection{Hamiltonian}
The Hamiltonian of the two driven TLS in a frame rotating with frequency $\omega_l$ is given by
\begin{equation}
H_S = \sum_{j=1}^2 \delta \Ket{X_j}\Bra{X_j} + \frac{\Omega^*_j}{2}\Ket{0_j} \Bra{X_j} + \frac{\Omega_j}{2}\Ket{X_j} \Bra{0_j} ~,
\end{equation}
where the subscript $j=1, 2$ denotes the real and image TLS, respectively. In order to match the boundary conditions required for reflection, we model the classical driving field as two counter-propagating beams, with the secondary `reflected' beam having a $\pi$ phase shift with respect to the original beam. For simplicity, we model these as plane waves propagating along the $z$-axis and polarised in the $x$-direction. In phasor notation, these two waves can be written as
\begin{align}
\begin{split}
\mathbf{E}_1(\mathbf{r}) &= \mathbf{E}_{incident}(\mathbf{r}) = E_0 \mathrm{e}^{i \mathbf{q}_l \cdot \mathbf{r}} \hat{\mathbf{x}} ~, \\
\mathbf{E}_2(\mathbf{r}) &= \mathbf{E}_{reflected}(\mathbf{r}) = -E_0 \mathrm{e}^{-i \mathbf{q}_l \cdot \mathbf{r}} \hat{\mathbf{x}}~,
\end{split}
\end{align}
giving rise to the following Rabi frequencies at the positions $\mathbf{r}_{1,2}$ of the two emitters:
\begin{align}
\begin{split}
\Omega_1 &= 2 \mathbf{d}_1 \cdot (\mathbf{E}_1(\mathbf{r}_1) + \mathbf{E}_2(\mathbf{r}_1))~, \\
\Omega_2 &= 2 \mathbf{d}_2 \cdot (\mathbf{E}_1(\mathbf{r}_2) + \mathbf{E}_2(\mathbf{r}_2)) ~.
\end{split}
\end{align}
Since $\mathbf{r}_2 = -\mathbf{r}_1$ and $\mathbf{d}_2 = -\mathbf{d}_1$, we have $\Omega \coloneqq \Omega_1 = \Omega_2$.
We now turn to the wider electromagnetic environment (excluding the coherent driving field discussed above). The electric field operator can be written as in Eq.~\eqref{electricfield} but with the spatial mode functions now being replaced by the free-space functions
\begin{equation}
\mathbf{u}_{\mathbf{q} \lambda}(\mathbf{r}) = \sqrt{\frac{\omega_{\mathbf{q} \lambda}}{2 \epsilon V}} \mathbf{e}_{\mathbf{q} \lambda} \mathrm{e}^{i \mathbf{q} r} ~.
\end{equation}
The interaction Hamiltonian of the TLS with the photonic environment is then given by
\begin{align}
\begin{split}
H^{pt}_I =&H^{pt,1}_I + H^{pt,2}_I \\
=&-\sum_{j=1}^2 \mathbf{d}_j \cdot \mathbf{E}(\mathbf{r_j}) (\Ket{0_j} \Bra{X_j} + \Ket{X_j} \Bra{0_j})~.
\end{split}
\end{align}
For the interaction with vibrational modes, we assume that both the real and image TLS see the same phonon bath and possess perfectly correlated coupling constants $g_\mathbf{k}$. This ensures the image system exactly follows the dynamics of the real dipole, as is required for matching the boundary condition of a perfectly reflecting interface. Thus, our relevant Hamiltonian reads
\begin{align}
\begin{split}
H^{pn}_I =&H^{pn,1}_I + H^{pn,2}_I \\
=&\sum_{j=1}^2 \sum_{\mathbf{k}} \Ket{X_j} \Bra{X_j} g_\mathbf{k} ( b^\dagger_\mathbf{k} + b_\mathbf{k} )~.
\end{split}
\end{align}
Next, we move into the polaron frame with the transformation $\mathrm{e}^{S_1+S_2} = \mathrm{e}^{S_1}\mathrm{e}^{S_2}$, obtaining the transformed Hamiltonians
\begin{align}
H_{SP} = & \sum_{j=1}^2 \delta' \Ket{X_j}\Bra{X_j} + \frac{\Omega^{pn *}}{2}\Ket{0_j} \Bra{X_j} + \mathrm{H.c.}~, \\
H^{pt,j}_{IP} = &i \Ket{0_j} \Bra{X_j} B_- \sum_{\mathbf{q}, \lambda} \mathbf{d}_j\cdot\mathbf{u}^*_{\mathbf{q}\lambda}(\mathbf{r}_j) a^\dagger_{\mathbf{q}\lambda} \nonumber \\
-&i \Ket{X_j} \Bra{0_j} B_+ \sum_{\mathbf{q}, \lambda} \mathbf{d}_j\cdot\mathbf{u}_{\mathbf{q}\lambda}(\mathbf{r}_j) a_{\mathbf{q}\lambda} ~,\nonumber \\
H^{pn,j}_{IP} = &\frac{\Omega^*}{2}\Ket{0_j} \Bra{X_j} \mathcal{B}_- + \frac{\Omega}{2}\Ket{X_j} \Bra{0_j} \mathcal{B}_+ ~.
\end{align}
As in Sec.~\ref{halfcavity}, the latter two can easily be seen to be of the following generic form (with appropriate identifications for the $A, B, C$ operators) which will enable straightforward use of the ME (3.118) from Ref.~\cite{breuer}:
\begin{align}
H^{pn,j}_{IP} = &\sum_{i=1}^2 A^{pn,j}_i \otimes B^{pn,j}_i~, \\
H^{pt,j}_{IP} = &\sum_{i=1}^2 A^{pt,j}_i \otimes B^{pt,j}_i \otimes C^j_i ~.
\end{align}
\subsection{Master equation}
The ME for our system can, once again, be written as
\begin{align}\label{splitgeneralME2}
\diff{}{t} &\rho_{SP}(t) = \\
&-\int_0^\infty \mathrm{d}\tau \mathrm{Tr}^{pn}_E [ H^{pn}_{IP}(t), [ H^{pn}_{IP}(t-\tau), \rho_{SP}(t)\otimes\rho^{pn}_E(0) ] ]~ \nonumber \\
&-\int_0^\infty \mathrm{d}\tau \mathrm{Tr}_E [ H^{pt}_{IP}(t), [ H^{pt}_{IP}(t-\tau), \rho_{SP}(t)\otimes\rho_E(0) ] ]~, \nonumber
\end{align}
however, it now features a larger number of correlation functions due to the presence of the image emitter. Following the general procedure in Sec.~\ref{cavityME}, we shall analyse different contributions in turn to arrive at our final ME of the image emitter model.
\subsubsection{Phonon dissipator}
The correlation functions (including cross-correlation terms between bath operators of the real and image system) result in the following phonon dissipator
\begin{align}
D_{pn}&(\rho_{SP})= \\
&\sum_{i,j=1}^2 \gamma^{pn}_{ji}(\omega') \left( \sigma^j_- \rho_{SP}(t) \sigma^i_+ - \frac{1}{2}\{ \sigma^i_+ \sigma^j_-, \rho_{SP}(t) \} \right) \nonumber \\
+&\sum_{i,j=1}^2 \gamma^{pn}_{ji}(-\omega') \left( \sigma^j_+ \rho_{SP}(t) \sigma^i_- - \frac{1}{2}\{ \sigma^i_- \sigma^j_+, \rho_{SP}(t) \} \right) \nonumber \\
-&\sum_{i,j=1}^2 \gamma^{pn}_{cd, ji}(\omega') \left( \sigma^j_- \rho_{SP}(t) \sigma^i_- - \frac{1}{2}\{ \sigma^i_- \sigma^j_-, \rho_{SP}(t) \} \right) \nonumber \\
-&\sum_{i,j=1}^2 \gamma^{pn}_{cd, ji}(-\omega') \left( \sigma^j_+ \rho_{SP}(t) \sigma^i_+ - \frac{1}{2}\{ \sigma^i_+ \sigma^j_+, \rho_{SP}(t) \} \right)~,\nonumber
\end{align}
where the rates $\gamma^{pn}_{ji}(\pm\omega')$ and $\gamma^{pn}_{cd, ji}$ are given by
\begin{align*}\label{phononratesRealandImage}
\gamma^{pn}_{ji}(\pm \omega') &= \frac{|\Omega^{pn}|^2}{4} \int_{-\infty}^\infty \mathrm{d}\tau \; \mathrm{e}^{\pm i \omega' \tau} \left( \mathrm{e}^{\phi(\tau)} - 1 \right)~, \\
\gamma^{pn}_{cd, ji}(\omega') &= \frac{ (\Omega^{pn*})^2}{4} \int_{-\infty}^\infty \mathrm{d}\tau \; \cos(\omega' \tau) \left( 1- \mathrm{e}^{-\phi(\tau)} \right)~, \\
\gamma^{pn}_{cd, ji}(-\omega') &= \frac{ (\Omega^{pn})^2}{4} \int_{-\infty}^\infty \mathrm{d}\tau \; \cos(\omega' \tau) \left( 1- \mathrm{e}^{-\phi(\tau)} \right) ~.
\end{align*}
We shall return to the phonon dissipator when discussing the ME in the symmetric-antisymmetric basis, which allows us to derive a model agreeing with the half-sided cavity approach.
\subsubsection{Photon dissipator}
We now turn our attention to the photon dissipator term from Eq.~\eqref{splitgeneralME2}. After evaluating the correlation and cross-correlation functions, we obtain the usual expression for two emitters \cite{ficek2005quantum} in a shared electromagnetic environment,
\begin{align}
\begin{split}
D_{pt}&(\rho_{SP})= \\
&\sum_{i,j=1}^2 \gamma^{pt}_{ji} \left( \sigma^j_- \rho_{SP}(t) \sigma^i_+ - \frac{1}{2}\{ \sigma^i_+ \sigma^j_-, \rho_{SP}(t) \} \right) ~,
\end{split}
\end{align}
where the diagonal terms $\gamma^{pt}_{22}(\omega') = \gamma^{pt}_{11}(\omega') = \gamma_0^{pt}(\omega')$, whilst the off-diagonal terms are given by $\gamma^{pt}_{12}(\omega') = \gamma^{pt}_{21}(\omega') = \mathcal{F}_{12}(q_0 \Delta r)\gamma_0^{pt}(\omega')$ with $\Delta r = r_1 - r_2 = 2r_d$, and where
\begin{align}\label{Fimage}
\begin{split}
\mathcal{F}_{12}(x) = \frac{3}{2}\left( -\frac{\sin(x)}{x} - \frac{\cos(x)}{x^2} + \frac{\sin(x)}{x^3} \right)~.
\end{split}
\end{align}
This is the same function obtained for the half-sided cavity approach [{\it c.f.}~Eq.~\eqref{Fcav}]. The imaginary part of the correlation function yields the `correction' term to the unitary part of the ME \cite{breuer,ficek2005quantum,agarwal}: its diagonal contributions represent Lamb shift terms, whose small energetic shifts can be absorbed into the bare TLS transition frequency. We thus focus on the off-diagonal element, which is of the form:
\begin{equation}\label{energy_shift_image}
V_{12} = \frac{1}{2} \mathcal{G}_{12}(q_0 \Delta r)\gamma^{pt}_0(\omega')~,
\end{equation}
where the function $\mathcal{G}_{12}$ is
\begin{align}
\begin{split}
\mathcal{G}_{12}(x) = \frac{3}{2} \left(- \frac{\sin(x)}{x^2} - \frac{\cos(x)}{x^3} + \frac{\cos(x)}{x}\right)~.
\end{split}
\end{align}
Again, this corresponds to the same energy shift term we have previously encountered in Sec.~\ref{EM_bath_cor_cav}. After diagonalising the Hamiltonian, the frequency of the symmetric excited to ground state transition (in the polaron frame) is then given by
\begin{equation}
\tilde{\omega}' = \omega' + V_{12}~,
\end{equation}
exactly matching the transition frequency Eq.~\eqref{shifted_cavity_frequency} of the half-sided cavity model.
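This equivalence is also immediate numerically: with $\Delta r = 2 r_d$ one has $\mathcal{F}_{12}(2x) = \mathcal{F}_{cav}(x)$ and $\mathcal{G}_{12}(2x) = \mathcal{G}_{cav}(x)$, as the following short check confirms.
\begin{verbatim}
# F_12(2x) = F_cav(x) and G_12(2x) = G_cav(x), checked numerically.
import numpy as np

F_12 = lambda x: 1.5*(-np.sin(x)/x - np.cos(x)/x**2 + np.sin(x)/x**3)
G_12 = lambda x: 1.5*(-np.sin(x)/x**2 - np.cos(x)/x**3 + np.cos(x)/x)
F_cav = lambda x: 1.5*(-np.sin(2*x)/(2*x) - np.cos(2*x)/(2*x)**2
                       + np.sin(2*x)/(2*x)**3)
G_cav = lambda x: 1.5*(-np.sin(2*x)/(2*x)**2 - np.cos(2*x)/(2*x)**3
                       + np.cos(2*x)/(2*x))
x = np.linspace(0.1, 20.0, 1000)
assert np.allclose(F_12(2*x), F_cav(x)) and np.allclose(G_12(2*x), G_cav(x))
\end{verbatim}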
\begin{figure}[ht!]
\centering
\def\svgwidth{0.9\columnwidth}
\input{lvldiag.pdf_tex}
\caption{\label{lvl_diag} Energy level diagram for the two-emitter system. The symmetric ($\Ket{s}$) and antisymmetric ($\Ket{a}$) levels are shifted up and down by $V_{12}$, respectively. The black arrows indicate the laser driving; the antisymmetric state is decoupled. Blue and red wavy lines indicate photon emission from the antisymmetric and symmetric channel, respectively. As discussed in the text, it is necessary to disable driving on the $\Ket{s} \leftrightarrow \Ket{e}$ transition (black dashed) to recover the effective two-level system $\Ket{g} \leftrightarrow \Ket{s}$. For environments permitting photon absorption, the dashed wavy transitions also need to be explicitly disabled. }
\end{figure}
\subsection{Effective TLS in the energy eigenbasis}\label{eigbasis}
\begin{figure*}[ht!]
\begin{flushleft}
\def\svgwidth{1\textwidth}
\input{schematic5.pdf_tex}
\caption{\label{schematic2}Overview of the four scenarios for an optical dipole considered in this work. All cases have a schematic depiction accompanied by the corresponding SE rates $\gamma_0$ and transition frequencies $\omega$. Here, $\Delta r$ is the separation between the real and image dipole, $\mathcal{F}_{12}(q_0 \Delta r)$ and $V_{12}$ are given by Eqns.~\eqref{Fimage} and \eqref{energy_shift_image}, respectively, and $\omega_0$ and $\omega'$ are the bare and polaron shifted frequencies, respectively. The blue `masses on springs' (blue circles) denote the phonon bath. Note that the driving field is not shown here, as its presence or absence does not influence the relevant properties.}
\end{flushleft}
\end{figure*}
As stated in the introduction, previous literature treating spontaneous emission from initially excited emitters considered the transition from the symmetrically excited to the ground state, as this choice yields matching results with other methods \cite{ficek2005quantum, babiker}. We follow this approach and adopt the basis $\{\Ket{e}, \Ket{s}, \Ket{a},\Ket{g}\}$ with $\Ket{s} = (\Ket{0_1} \Ket{X_2} + \Ket{X_1}\Ket{0_2}) / \sqrt{2}$ and $\Ket{a} = (\Ket{0_1} \Ket{X_2} - \Ket{X_1}\Ket{0_2}) / \sqrt{2}$, see Fig.~\ref{lvl_diag}. In this basis, our full polaron ME reads:
\begin{align}\label{fullME}
\begin{split}
\diff{}{t} \rho_{SP}(t) = &-\frac{i}{\hbar} [H'_{SP}, \rho_{SP}(t)] \\
&+ D^{s}_{pn}(\rho_{SP}) +D^{a}_{pt}(\rho_{SP})+D^{s}_{pt}(\rho_{SP}) ~,
\end{split}
\end{align}
where the dissipator terms are explicitly given in Appendix \ref{app:dissipators}. Here, $H'_{SP}$ denotes the diagonalised system Hamiltonian [including the energy shift term Eq.~\eqref{energy_shift_image}]. The ME photonic dissipator separates into a symmetric channel ($\Ket{g} \leftrightarrow \Ket{s} \leftrightarrow \Ket{e}$) and an antisymmetric one ($\Ket{g} \leftrightarrow \Ket{a} \leftrightarrow \Ket{e}$). Courtesy of the fully correlated phonon bath, phonons also only act in the symmetric channel.
Since $\Omega_1 = \Omega_2$, the symmetric channel Rabi frequency becomes $\Omega_{sg} \coloneqq (\Omega_1 + \Omega_2)/\sqrt{2} = \sqrt{2}\Omega = \Omega_{cav}$ and hence we obtain the same phonon rates as in the half-sided cavity approach\footnote{The last equality holds due to the difference in density of modes appearing in the derivation of the Rabi frequency in both models.}. Furthermore, the antisymmetric channel Rabi frequency $\Omega_{a} \coloneqq(\Omega_1 - \Omega_2)/\sqrt{2} = 0$, meaning that the laser field is completely decoupled from the antisymmetric state.
Consistency with the Green's function and half-sided cavity approaches demands that we restrict the dynamics of our four-dimensional Hilbert space to the subspace spanned by the states $\{ \Ket{g}, \Ket{s} \}$, i.e.~the larger Hilbert space only served to let us calculate the correct properties of this single transition. Fully decoupling the antisymmetric singly excited state and the doubly excited state from the dynamics is achieved by disabling the laser driving on the $\Ket{s} \leftrightarrow \Ket{e}$ transition. For finite temperature photon environments with $N(\omega) \neq 0$, we also need to remove dissipative photon absorption channels, by dropping the antisymmetric dissipator term $D^{a}_{pt}(\rho_{SP})$ from the ME and explicitly removing the dissipative $\Ket{s} \leftrightarrow \Ket{e}$ operator.
The image approach can thus be reduced to an effective TLS model featuring the same Rabi frequency, SE rate, and transition frequency as the half-sided cavity approach -- i.e.~ displaying full equivalence between the two representations.
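To illustrate how compact the resulting description is, the following QuTiP sketch integrates the effective two-level ME to its steady state. For brevity we neglect the cross-dephasing terms, and all rates are illustrative placeholder values rather than values derived in this work.
\begin{verbatim}
# Effective TLS master equation in QuTiP; illustrative placeholder rates.
import numpy as np
import qutip as qt

delta, Omega_eff = 0.0, 0.3        # detuning, <B>-renormalised Rabi (ps^-1)
gamma_pt = 0.001 * 1.3             # (1 + F_cav) * bare SE rate (assumed)
gamma_dn, gamma_up = 0.01, 0.002   # phonon-assisted decay/excitation (assumed)

g, e = qt.basis(2, 0), qt.basis(2, 1)
sm = g * e.dag()                   # |g><s| of the effective two-level system
H = delta*e*e.dag() + 0.5*Omega_eff*(sm + sm.dag())
c_ops = [np.sqrt(gamma_pt + gamma_dn)*sm, np.sqrt(gamma_up)*sm.dag()]

rho_ss = qt.steadystate(H, c_ops)
print(qt.expect(e*e.dag(), rho_ss))   # steady-state excited population
\end{verbatim}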
In Fig.~\ref{schematic2}, we summarise the key results from the previous sections: we show the transition frequency and SE rate for all four cases considered in this Article alongside their schematic depictions. The driving term is not included as it has no direct influence on the properties of the optical dipole transition.
\section{Resonance Fluorescence Spectrum}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=1.05\textwidth]{spectrum2updated.pdf}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=0.85\textwidth]{CoherentRatioNorm3.pdf}
\end{subfigure}
\caption{ {\bf Left:} Incoherent component of the RF spectrum for a single TLS (blue) and the effective TLS incorporating surface-induced modifications (red).
{\bf Right:} Ratio of coherent emission for all four cases (with/without mirror, with/without the phonon environment) as a function of the (normalised) effective Rabi frequency. $\Omega_s$ denotes the saturation Rabi frequency for $\gamma^{pt}_0 = 0.001$~ps$^{-1}$. See text for a discussion.}
\label{spectrumplot}
\end{figure*}
Having included the possibility of laser driving in our model, a natural application is to study the resonance fluorescence (RF) spectrum of a condensed matter TLS near a mirroring surface. We use the ME \eqref{fullME} (after discarding the antisymmetric channel, as argued above) to calculate the spectral function, which is given by the Fourier transform of the (steady-state) first order correlation function $\mathrm{lim}_{t \rightarrow \infty}\langle \mathbf{E}^{(-)}(\mathbf{R}, t) \mathbf{E}^{(+)}(\mathbf{R}, t + \tau) \rangle$, where $\mathbf{E}^{(-)}(\mathbf{R}, t)$ and $\mathbf{E}^{(+)}(\mathbf{R}, t)$ are, respectively, the negative and positive components of the electric field operator evaluated at the position $\mathbf{R}$ of the detector \cite{ficek2005quantum}. These operators are related to the system operators $\sigma_- = \Ket{0}\Bra{X}$ and $\sigma_+ = \Ket{X}\Bra{0}$, and hence, after applying the polaron transformation, the RF spectral function can be written as
\begin{align}\label{spectrum}
\begin{split}
S(\omega) \propto \int_{-\infty}^\infty \mathrm{d}\tau &\mathrm{e}^{-i (\omega - \omega') \tau} \times \\
&\langle \sigma_+(\tau) B_+(\tau) \sigma_-(0) B_-(0) \rangle_{s}~, \\
\end{split}
\end{align}
where we have exploited the temporal homogeneity of the stationary correlation function, and where the subscript `s' denotes the trace taken with respect to the steady-state density matrix \cite{breuer}. The correlation function appearing in Eq.~\eqref{spectrum} involves two timescales, the nanosecond timescale associated with the exciton lifetime, and the shorter picosecond phonon bath relaxation timescale, allowing us to separate the correlation function into the product $\langle \sigma_+(\tau) \sigma_-(0) \rangle_s \langle B_+(\tau) B_-(0) \rangle_{s}$ \cite{jake}. Substituting the expression for the phonon bath correlation function, we obtain the spectral function
\begin{align}\label{spectrum_simplified}
\begin{split}
S(\omega) \propto \langle B \rangle^2\int_{-\infty}^\infty \mathrm{d}\tau &\mathrm{e}^{-i (\omega - \omega') \tau} \times \\
&\mathrm{e}^{\phi(\tau)} \langle \sigma_+(\tau) \sigma_-(0) \rangle_{s}~. \\
\end{split}
\end{align}
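Numerically, Eq.~\eqref{spectrum_simplified} can be evaluated by combining the quantum regression theorem [for $\langle \sigma_+(\tau)\sigma_-(0)\rangle_s$] with the phonon correlation function $\phi(\tau)$ computed earlier. A self-contained QuTiP sketch is given below; the parameters are again illustrative assumptions, and we subtract the long-time limit of the correlation function so that only the incoherent part of the spectrum is retained.
\begin{verbatim}
# Sketch of the incoherent RF spectrum, Eq. (spectrum_simplified).
import numpy as np
import qutip as qt

# phonon bath: phi(tau), assumed parameters as before
alpha, omega_c, T = 0.027, 2.2, 10.0
beta = 7.6382 / T
w = np.linspace(1e-6, 8*omega_c, 2000)
J = alpha * w**3 * np.exp(-(w/omega_c)**2)
coth = 1.0 / np.tanh(beta*w/2)

def phi(t):  # decays within a few ps; set to zero beyond to avoid aliasing
    if t > 10.0:
        return 0.0
    return np.trapz(J/w**2*(coth*np.cos(w*t) - 1j*np.sin(w*t)), w)

# effective TLS in the polaron frame (illustrative rates)
Omega_eff, gamma = 0.1, 0.0013      # ps^-1
g, e = qt.basis(2, 0), qt.basis(2, 1)
sm = g * e.dag()
H = 0.5*Omega_eff*(sm + sm.dag())   # resonant driving
c_ops = [np.sqrt(gamma)*sm]

tau = np.linspace(0.0, 4000.0, 4000)  # ps
g1 = qt.correlation_2op_1t(H, None, tau, c_ops, sm.dag(), sm)
g1 = g1 - g1[-1]                      # remove the coherent (elastic) part
phi_tau = np.array([phi(t) for t in tau])

dw = np.linspace(-5*omega_c, 5*omega_c, 801)  # detuning from omega'
S = [2*np.real(np.trapz(np.exp(-1j*d*tau)*np.exp(phi_tau)*g1, tau))
     for d in dw]   # Mollow triplet near dw = 0 plus broad phonon sideband
\end{verbatim}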
In the left panel of Fig.~\ref{spectrumplot}, we show the incoherent part of the emission spectrum of our surface-modified system as well as that of a reference TLS (also subject to the same phonon environment). Following Ref.~\cite{gerardot}, we take the TLS's position relative to the surface as $r_d \sim 177$ nm. The reference TLS is driven with `free space' Rabi frequency given by $\Omega^{pn} = 2 \langle B \rangle \mathbf{d}\cdot\mathbf{E}_0$. As expected, the curves differ in the position of the Mollow sidebands and the width of the three peaks, since the former is determined by the effective Rabi frequency and the latter depends on the emission rate, both of which undergo a change in the presence of a reflective surface. The two insets in the left panel of Fig.~\ref{spectrumplot} show the much broader phonon sideband, which receives $\sim 16\%$ of the scattered photons for the chosen spectral density at a phonon temperature of T=10~K.
In the right panel of Fig.~\ref{spectrumplot}, we plot the fraction of coherently scattered photons as a function of the renormalised effective Rabi frequency. This ratio is obtained numerically as the (integrated) coherent spectrum divided by the total integrated spectrum. There are two pairs of curves: one with and one without phonons. For the former, the finite area under the phonon sideband means that the coherent fraction does not go to unity even when driving far below saturation. The level at which this fraction plateaus is phonon coupling strength and temperature dependent \cite{jake}. By contrast, in the absence of phonons, almost all light is coherently scattered at weak enough driving. The close agreement between the two curves in each pair bears testament to the fact that the surface-modified emitter largely behaves like a bare emitter once the effective Rabi frequency has been corrected for (with the slight remaining discrepancy due to modifications of the natural lifetime). Indeed, plotting this ratio directly as a function of the laser driving field amplitude reveals sizeable horizontal shifts between these two curves in each pair (not shown).
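The coherent fraction itself follows from decomposing the emitted field into its mean and fluctuations: in the polaron frame, the coherently scattered component is proportional to $\langle B \rangle^2 |\langle \sigma_- \rangle_s|^2$, while the total emission is proportional to $\langle \sigma_+ \sigma_- \rangle_s$. A brief QuTiP sketch of this ratio follows, using our previously assumed phonon parameters.
\begin{verbatim}
# Coherent fraction <B>^2 |<s->|^2 / <s+ s-> vs driving strength.
import numpy as np
import qutip as qt
from scipy.integrate import quad

alpha, omega_c, T = 0.027, 2.2, 10.0   # assumed QD phonon parameters
beta = 7.6382 / T
Jw = lambda w: alpha*w**3*np.exp(-(w/omega_c)**2)
B = np.exp(-0.5*quad(lambda w: Jw(w)/w**2/np.tanh(beta*w/2),
                     1e-12, np.inf)[0])

gamma = 0.001                           # SE rate (ps^-1, assumed)
g, e = qt.basis(2, 0), qt.basis(2, 1)
sm = g * e.dag()
for Om in np.logspace(-3, -0.5, 6):     # effective Rabi frequencies
    H = 0.5*Om*(sm + sm.dag())
    rho = qt.steadystate(H, [np.sqrt(gamma)*sm])
    frac = B**2*abs(qt.expect(sm, rho))**2/qt.expect(sm.dag()*sm, rho)
    print(Om, frac)   # plateaus near <B>^2 < 1 for weak driving
\end{verbatim}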
\section{Summary and Discussion}
We have extended the method of images -- traditionally developed for capturing spontaneous emission in atomic ensembles near reflective interfaces -- to the case of a driven solid-state emitter near a metal surface. We have developed two approaches, a half-sided cavity model and an image dipole model, and shown that the latter agrees with the former, but only when additional `selection rules' are introduced to constrain the dynamics to the relevant subspace. Both our approaches agree with a Green's function treatment in the absence of a vibrational environment. Through a rigorous derivation, we find that the emitter can indeed still be described as an effective (phonon-dressed) two-level system with appropriately modified properties, even in the presence of a phonon bath and for a driven system. Our calculated RF spectrum corroborates this observation.
We note that the image dipole approach not only necessitates a larger Hilbert space but also involves a more cumbersome ME derivation than the half-sided cavity approach. This begs the question of whether such an image approach remains useful. We submit that the method of images can more easily accommodate larger numbers of emitters near a surface (of varying separation to the surface), as the problem then straightforwardly maps onto the case of several optical dipoles in a shared (free space) electromagnetic environment -- a problem which has been studied extensively, see, e.g.,~Ref.~\cite{ficek2005quantum}. Future work might investigate the role of geometry in configurations with $N>1$ emitters, possibly resulting in the enhancement of Dicke superradiance of an ensemble of solid state emitters \cite{sanders, Machnikowski:Superradiance}, or the use of mirrors to bring about other collective effects in the light-matter interaction, for example inspired by a recent proposal for engineering the quantum-enhanced absorption of light \cite{superabsorption} or by harnessing sub-radiant collective states \cite{Scully2015, Higgins2015}.
Another interesting avenue for future work might be the study of charged quantum dots featuring excited trion states. In addition to the optical dipole, the image approach would then feature a separate permanent dipole. To a first approximation, we would expect this second dipole to be static, meaning it would not radiate and only modify the spectrum via energetic shifts. However, one might speculate whether the Coulomb interaction of the three charges involved in the trion state could slightly `wiggle' this dipole, making some radiative contribution to the overall spectrum conceivable.
\section*{Acknowledgements}
The authors thank Peter Kirton, Fabio Biancalana, and David Gershoni for valuable suggestions. D.S. thanks SUPA for financial support, T.S. acknowledges studentship funding from EPSRC under grant no EP/G03673X/1, B. D. G. thanks the Royal Society, and E. M. G. acknowledges support from the Royal Society of Edinburgh and the Scottish Government.
\section*{Appendix}
\section{Introduction}
Neutrinoless double beta decay ($0\nu\beta\beta$ decay) is a
second order nuclear weak decay process with the emission of two electrons
only~\cite{Vog92,fae98,AEE07}: $(A,Z)\rightarrow
(A,Z+2)+2e^-$. This process violates total lepton number conservation and is
therefore forbidden in the standard model (SM) of electroweak interaction.
The existence of $0\nu\beta\beta$ decay will immediately prove the neutrino to be a Majorana particle (i.e., identical to its antiparticle).
Furthermore, a study of $0\nu\beta\beta$ decay is an indispensable means to probe the absolute neutrino masses at the level of tens of meV.
The fact that the neutrinos are massive particles was firmly established by neutrino oscillation experiments,
thus providing the first evidence for physics beyond the SM
(for reviews see, e.g., Ref.~\cite{Kay08}).
However, the observed oscillations cannot in principle pin down the absolute scale of the neutrino masses. This calls for alternative ways one of which is $0\nu\beta\beta$ decay.
Thus, an unambiguous observation of $0\nu\beta\beta$ decay would be of paramount importance for our understanding of particle physics beyond the SM.
The next generation of $0\nu\beta\beta$-decay experiments (CUORE, GERDA, MAJORANA, SNO+, SuperNEMO, and so on,
see, e.g., Ref.~\cite{AEE07} for a recent review) has a great discovery potential.
Provided the corresponding decay rates are accurately measured,
knowledge of the relevant nuclear matrix elements (NME) $M^{0\nu}$ will become indispensable to reliably deduce the effective Majorana mass
from half-lives $T^{0\nu}_{1/2}$ of the decay.
One of the best candidate nuclei for searching for $0\nu\beta\beta$ decay is $^{150}$Nd since
it has the second highest endpoint, $Q_{\beta\beta}=3.37$ MeV, and the largest phase-space factor for the decay (about 33 times larger than that for $^{76}$Ge, see, e.g.,~\cite{Vog92}).
The SNO+ experiment at the Sudbury Neutrino Observatory
will use an Nd-loaded scintillator to search for $0\nu\beta\beta$ decay by looking for a distortion in the energy spectrum of decays at the endpoint~\cite{SNO+}.
SNO+ will be filled with 780 tons
of liquid scintillator. The planned loading of 0.1\% of the natural Nd translates into 43.6 kg of the isotope $^{150}$Nd. It is expected to achieve the sensitivity of $T^{0\nu}_{1/2} \simeq 5\cdot 10^{24}$ years after one year of running, with the best final value of about three to four times longer (without enrichment of the dissolved Nd).
Now, to translate the anticipated experimental sensitivity to the decay rate into
the sensitivity expected for the effective Majorana neutrino mass $m_{\beta\beta}$,
one needs the corresponding NME $M^{0\nu}$. With the result $M^{0\nu}=4.74$ of Ref.~\cite{Rod05}, already the initial phase of SNO+ will be able to probe $m_{\beta\beta}\approx$ 100 meV, and will finally be able to achieve sensitivity of $m_{\beta\beta}\approx$ 50 meV corresponding to the case of the so-called inverse hierarchy (IH) of the neutrino mass spectrum.
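For orientation, this translation proceeds via $[T^{0\nu}_{1/2}]^{-1} = G^{0\nu} |M^{0\nu}|^2 (m_{\beta\beta}/m_e)^2$. The short script below reproduces the numbers quoted above; note that the phase-space factor $G^{0\nu}$ entered here is an assumed illustrative value (tabulations differ in how they absorb the axial coupling $g_A$).
\begin{verbatim}
# m_bb from a half-life limit: T^-1 = G |M|^2 (m_bb/m_e)^2.
import math

G = 2.7e-13      # yr^-1, assumed phase-space factor for 150Nd
M = 4.74         # NME of Ref. [Rod05]
m_e = 0.511e6    # electron mass in eV
for T_half in (5e24, 2e25):   # initial / final SNO+ sensitivity (yr)
    m_bb = m_e / (M * math.sqrt(G * T_half))
    print(T_half, round(m_bb*1e3), "meV")  # ~100 meV and ~50 meV
\end{verbatim}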
However, $^{150}$Nd is well known to be strongly deformed, which
considerably hinders a reliable theoretical evaluation
of the corresponding $0\nu\beta\beta$-decay NME.
For instance, it does not seem feasible in the near future to reliably treat this nucleus within the large-scale nuclear shell model (LSSM), see, e.g., Ref.~\cite{men09}. Also, the ``optimistic'' NME of Ref.~\cite{Rod05} was obtained within a microscopic approach, the proton-neutron quasiparticle random phase approximation (QRPA), with neglect of deformation.
Recently, more phenomenological approaches like the pseudo-SU(3) model~\cite{Hir95}, the Projected Hartree-Fock-Bogoliubov (PHFB) approach~\cite{Hir08} and the interacting boson model (IBM-2)~\cite{Bar09} have been employed to calculate
$M^{0\nu}$ for strongly deformed heavy nuclei (a comparative analysis of different approximations involved in the models can be found in Ref.~\cite{Esc10}).
The results of these models
generally reveal a substantial suppression of $M^{0\nu}$ for $^{150}$Nd as compared with the QRPA result of Ref.~\cite{Rod05} where $^{150}$Nd and $^{150}$Sm were treated as spherical nuclei.
The recent result of the PHFB~\cite{Hir08} is in fair agreement with the pseudo-SU(3) one of Ref.~\cite{Hir95}, but both are about 1.5 times smaller than $M^{0\nu}$ of the IBM-2~\cite{Bar09}. These results for $M^{0\nu}$ give a factor of 2--3 worse limits (as compared with the result of Ref.~\cite{Rod05}) on the Majorana neutrino mass to be achieved at SNO+, and basically leave no hope of probing the IH region with the current configuration of SNO+.
Such a spread in the calculated NME $M^{0\nu}$ for $^{150}$Nd makes it very important to have a reliable estimate of the effect of nuclear deformation on $M^{0\nu}$. The most microscopic way to date to describe this effect in $^{150}$Nd and $^{150}$Sm is provided by the QRPA. In Refs.~\cite{Sim03,Sim04,Sal09} a QRPA approach for calculating the $2\nu\beta\beta$-decay NME $M^{2\nu}$ in deformed nuclei was developed. The $2\nu\beta\beta$-decay half-lives have accurately been measured for a dozen nuclei and the corresponding nuclear matrix elements
$M^{2\nu}_{exp}$ were extracted~\cite{Bar10}. A theoretical interpretation of
these matrix elements provides a check of the reliability of different models.
It was demonstrated in Refs.~\cite{Sim03,Sim04,Sal09} that
deformation introduces a mechanism of suppression of the $M^{2\nu}$ matrix element
which gets stronger when deformations of the initial and final nuclei differ from each other. A similar dependence of the suppression of both $M^{2\nu}$ and $M^{0\nu}$ matrix elements on the difference in deformations was found in the PHFB~\cite{Hir08} and the LSSM~\cite{men09}.
In this Rapid Communication we report on the most
microscopic state-of-the-art calculation of $M^{0\nu}$ for $^{150}$Nd with an account for nuclear deformation. The QRPA with a realistic residual interaction (the Brueckner $G$-matrix derived from the Bonn-CD nucleon-nucleon potential)~\cite{Sal09} is used.
The present calculation shows a suppression of $M^{0\nu}$ by
about 40\% as compared with our previous QRPA result~\cite{Rod05}
for $^{150}$Nd that was obtained with neglect of deformation.
Making use of this newest NME, one may conclude that $0\nu\beta\beta$ decay
of $^{150}$Nd, to be searched for by the SNO+ collaboration soon, provides one of the best sensitivities to the Majorana neutrino mass and may approach the IH region of the neutrino mass spectrum.
The NME $M^{0\nu}$ for strongly deformed, axially symmetric nuclei can be most conveniently calculated within the QRPA in the intrinsic coordinate system associated with the rotating nucleus. This employs the adiabatic Bohr-Mottelson approximation that is well justified for $^{150}$Nd, which indeed reveals strong deformation. As for $^{150}$Sm, the enhanced quadrupole moment of this nucleus is an indication for its static deformation. Nevertheless, the experimental level scheme of $^{150}$Sm does not reveal a clear ground-state rotational band. In this work we treat $^{150}$Sm in the same manner as $^{150}$Nd.
However, a more elaborate theoretical treatment going beyond the simple adiabatic approximation might be needed in the future to describe the nuclear dynamics of this nucleus.
Nuclear excitations in the intrinsic system $| K^\pi\rangle$ are characterized by the projection $K$ of the total angular momentum onto the nuclear symmetry axis (the only projection that is conserved in strongly deformed nuclei) and the parity $\pi$.
In Ref.~\cite{Sal09} the structure of the intermediate
$| 0^+\rangle$ and $| 1^+\rangle$ states was obtained within the QRPA to calculate $2\nu\beta\beta$-decay NME $M^{2\nu}$. Here, the approach of Ref.~\cite{Sal09} is straightforwardly extended to calculate all possible $| K^\pi\rangle$ states needed to construct $M^{0\nu}$.
The matrix element $M^{0\nu}$ is given within the QRPA in the intrinsic system by a sum of the partial amplitudes of transitions via the intermediate states $K^\pi$
\begin{equation}
M^{0\nu}=\sum_{K^\pi} M^{0\nu}(K^\pi)\ ; \ M^{0\nu}(K^\pi) =
\sum_{\alpha} s^{(def)}_\alpha O_\alpha(K^\pi). \label{M0n}
\end{equation}
Here we use the notation of Appendix B in Ref.~\cite{anatomy}, $\alpha$ stands for the set of four single-particle indices $\{p,p',n,n'\}$,
and $O_\alpha(K^\pi)$ is a two-nucleon transition amplitude via all the $K^\pi$ states
in the intrinsic frame
\begin{equation}
O_\alpha(K^\pi)=\sum_{m_i,m_f}
\langle 0_f^+|c_{p}^\dagger c_{n}|K^\pi m_f\rangle
\langle K^\pi m_f|K^\pi m_i\rangle
\langle K^\pi m_i|c^\dagger_{p'} c_{n'}|0_i^+\rangle .
\label{O}
\end{equation}
The two sets of intermediate nuclear states generated from the
initial and final ground states (labeled by $m_i$ and $m_f$, respectively)
do not come out identically within the
QRPA. A standard way to tackle this problem is to introduce the overlap factor of these states $\langle K^\pi m_f|K^\pi m_i\rangle$ in Eq.~(\ref{O}).
Two-body matrix elements $s^{(def)}_\alpha$ of the neutrino potential in
Eq.~(\ref{M0n}) in a deformed Woods-Saxon single-particle basis are decomposed over the spherical harmonic oscillator ones following the procedure described in Ref.~\cite{Sal09}:
\begin{equation}
s^{(def)}_{pp'nn'}=
\sum_{J}\sum_{\footnotesize\begin{array}{c}
\eta_p \eta_{p'}\\[-1pt] \eta_n \eta_{n'}\end{array}}
F^{JK}_{p\eta_p n\eta_n}F^{JK}_{p'\eta_{p'}n'\eta_{n'}}s^{(sph)}_{\eta_p\eta_{p'} \eta_n\eta_{n'}}(J),
\end{equation}
\begin{eqnarray}
s^{(sph)}_{pp'nn'}(J)&=&\displaystyle \sum_{\mathcal J}
(-1)^{j_n + j_{p'} + J + {\mathcal J}} \hat{\mathcal J}
\left\{
\begin{array}{c c c}
j_p & j_n & J \\ j_{n'} & j_{p'} & {\mathcal J}
\end{array}
\right\}
\langle p(1), p'(2); {\mathcal J} \| {\mathcal O_\ell}(1,2) \| n(1), n'(2); {\mathcal J} \rangle\,,
\end{eqnarray}
where $\hat{\mathcal{J}} \equiv \sqrt{2\mathcal{J}+1}$, and ${\mathcal O_\ell}(1,2)$ is the neutrino potential as a function of the coordinates of two particles, with ${\ell}$ labeling its Fermi (F), Gamow-Teller (GT), and Tensor (T) parts. The particle-hole transformation coefficient
$F^{JK}_{p\eta_p n\eta_n}= B^p_{\eta_p}B^n_{\eta_n}(-1)^{j_n-\Omega_{n}}C^{JK}_{j_p\Omega_{p} j_n-\Omega_{n}}$ from the deformed basis into the spherical harmonic oscillator one
is constructed from the single-particle decomposition coefficients $B^p_{\eta_p}$ and $B^n_{\eta_n}$ (see Ref.~\cite{Sal09} for details),
and $C^{JK}_{j_p\Omega_{p} j_n-\Omega_{n}}$ is a Clebsch--Gordan coefficient.
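For illustration, a single transformation coefficient $F^{JK}_{p\eta_p n\eta_n}$ can be evaluated symbolically. The following Python sketch (an illustration only, using SymPy and toy input values for $B^p_{\eta_p}$ and $B^n_{\eta_n}$ rather than actual decomposition coefficients for $^{150}$Nd) shows the structure of the calculation:
\begin{verbatim}
from sympy import S
from sympy.physics.quantum.cg import CG

def F_coeff(Bp, Bn, jp, Op, jn, On, J, K):
    # F^{JK} = B^p * B^n * (-1)^(jn - On) * <jp Op, jn -On | J K>
    phase = (-1) ** int(jn - On)   # jn - On is always an integer here
    return Bp * Bn * phase * CG(jp, Op, jn, -On, J, K).doit()

# toy check with jp = jn = 1/2, Omega_p = Omega_n = 1/2, J = K = 0:
print(F_coeff(1.0, 1.0, S(1)/2, S(1)/2, S(1)/2, S(1)/2, 0, 0))  # 0.707...
\end{verbatim}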
The particle-hole transition amplitudes in Eq.~(\ref{O}) can be represented in terms
of the QRPA forward $X^m_{i K}$ and backward $Y^m_{i K}$ amplitudes along with
the coefficients of the Bogoliubov transformation $u_\tau$ and $v_\tau$~\cite{Sal09}:
\begin{eqnarray}
\langle 0_f^+|c_{p}^\dagger c_{n}|K^\pi m_f\rangle&=&v_{p}u_{n}X^{m_f}_{pn,K^\pi}+u_{p}v_{n}Y^{m_f}_{pn,K^\pi},\nonumber\\
\langle K^\pi m_i|c^\dagger_p c_{n}|0_i^+\rangle&=&u_{p}v_{n}X^{m_i}_{pn,K^\pi}+v_{p}u_{n}Y^{m_i}_{pn,K^\pi}.\nonumber
\end{eqnarray}
The overlap factor in Eq.~(\ref{O}) can be written as:
\begin{eqnarray}
\langle K^\pi m_f|K^\pi m_i\rangle&=&\sum_{l_il_f}[X^{m_f}_{l_fK^\pi}X^{m_i}_{l_iK^\pi}-Y^{m_f}_{l_fK^\pi}Y^{m_i}_{l_iK^\pi}]
\mathcal{R}_{l_fl_i}\langle BCS_f|BCS_i\rangle
\label{overlap}
\end{eqnarray}
Representations for ${\cal R}_{l_fl_i}$ and the overlap factor $\langle BCS_f|BCS_i\rangle$ between the initial and final BCS vacua are given in Ref.~\cite{Sim03}.
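Schematically, the evaluation of Eq.~(\ref{overlap}) amounts to a pair of bilinear forms in the QRPA amplitudes. The following Python sketch illustrates this with toy random amplitudes, an identity matrix standing in for ${\cal R}_{l_fl_i}$, and a representative value of the BCS overlap (all inputs are illustrative, not output of the actual calculation):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_states = 5                      # toy dimension of the QRPA space
Xf, Xi = rng.normal(size=(2, n_states))
Yf, Yi = 0.1 * rng.normal(size=(2, n_states))
R = np.eye(n_states)              # toy stand-in for R_{lf,li}
bcs = 0.52                        # representative <BCS_f|BCS_i> value
overlap = (Xf @ R @ Xi - Yf @ R @ Yi) * bcs
print(overlap)
\end{verbatim}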
For a numerical computation of the $0\nu\beta\beta$-decay
NME $M^{0\nu}$ for the process $^{150}$Nd$\rightarrow ^{150}$Sm$+2e^-$, we have
straightforwardly extended the method of Ref.~\cite{Sal09}.
The single-particle Schr\"odinger equation with the Hamiltonian of
a deformed Woods-Saxon mean field is solved in the basis of an axially deformed
harmonic oscillator. The parametrization of the mean field is adopted from the spherical
calculations of Refs.~\cite{Rod05,anatomy,Sim09}.
We use here the single-particle deformed basis corresponding in the spherical limit to full (4--6)$\hbar\omega$ shells.
Decomposition of the deformed single-particle wave functions is performed over the spherical harmonic oscillator states within the seven major shells.
Only quadrupole deformation is taken into account in the calculation. The geometrical quadrupole deformation parameter $\beta_2$ of the deformed Woods-Saxon mean
field is obtained
by fitting the experimental deformation parameter $\beta= \sqrt{\frac{\pi}{5}}\frac{Q_p}{Z r^{2}_{c}}$, where $r_c $ is the charge rms radius and $Q_p$ is the empirical intrinsic quadrupole moment. The latter
can be derived from the laboratory quadrupole moments measured by the Coulomb excitation reorientation technique, or from the corresponding
$B(E2)$ values~\cite{ragha}.
We take in this work the experimental values $\beta=0.29$ and
$\beta=0.19$ for $^{150}$Nd and $^{150}$Sm, respectively, which are
extracted from the $B(E2)$ values as being more accurate.
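As a simple numerical cross-check, the definition of $\beta$ can be inverted for the adopted $\beta=0.29$ of $^{150}$Nd; the Python snippet below uses an assumed charge rms radius $r_c\approx 5.04$ fm (an illustrative input, not a quantity fitted in this work) and recovers an intrinsic quadrupole moment of the expected magnitude:
\begin{verbatim}
import math

Z, beta, r_c = 60, 0.29, 5.04   # r_c in fm (assumed value for 150Nd)
Qp = beta * Z * r_c**2 / math.sqrt(math.pi / 5.0)   # in fm^2
print(Qp / 100.0, "b")          # ~5.6 b for the intrinsic moment
\end{verbatim}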
The fitted values of the parameter $\beta_2$ of the deformed Woods-Saxon mean
field, which allow us to reproduce the experimental $\beta$, are listed in Table~\ref{table.1}. The spherical limit, i.e. $\beta_2=0$, is considered as well, to compare with the earlier results of Ref.~\cite{Rod05}. The procedure of fitting $\beta_2$ adopted here is more consistent than the approximate ansatz $\beta_2=\beta$ used in Ref.~\cite{Sal09}.
As in Refs.~\cite{Rod05,Sal09,anatomy,Sim09}, the nuclear Brueckner $G$ matrix, obtained by a solution of the Bethe-Goldstone equation with the Bonn-CD one boson exchange nucleon-nucleon potential, is used as a residual two-body interaction.
First, the BCS equations are solved to obtain the Bogoliubov coefficients, gap parameter and the chemical potentials. To solve the QRPA equations,
one has to fix the particle-hole $g_{ph}$ and particle-particle
$g_{pp}$ renormalization factors of the residual interaction (see Ref.~\cite{Sal09} for details). As in Ref.~\cite{Sal09}, we determine a value of $g_{ph}$
by fitting the experimental position of the Gamow-Teller giant resonance (GTR) in the intermediate nucleus. Since there is no experimental information on the GTR energy for $^{150}$Nd, we use for this nucleus the same $g_{ph}=0.90$ as fitted for $^{76}$Ge (this value is slightly different from the fitted $g_{ph}=1.15$ of Ref.~\cite{Sal09}
because of a different parametrization of the mean field used here).
The parameter $g_{pp}$ can be determined by fitting the experimental value of the $2\nu\beta\beta$-decay NME $M^{2\nu}_{GT}=0.07$ MeV$^{-1}$~\cite{Bar10}. The unquenched axial-vector coupling constant $g_A=1.25$ is used here. The fitted values of $g_{pp}$ are listed in Table~\ref{table.1}.
Note that the more consistent procedure of fitting $\beta_2$ adopted here also gives more realistic values $g_{pp}\simeq 1$, as compared with those of Ref.~\cite{Sal09}.
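The determination of $g_{pp}$ is in essence a one-dimensional root search. The sketch below shows the scheme with a toy linear stand-in for the full QRPA dependence $M^{2\nu}(g_{pp})$ (the slope and intercept of the toy function are illustrative only, and SciPy is assumed to be available):
\begin{verbatim}
from scipy.optimize import brentq

M2nu_exp = 0.07   # MeV^-1, from [Bar10]

def M2nu(gpp):
    # Toy stand-in: the real M2nu(g_pp) requires solving the QRPA
    # equations for each trial g_pp; it decreases monotonically.
    return 0.25 * (1.15 - gpp) / 0.15

print(brentq(lambda g: M2nu(g) - M2nu_exp, 0.8, 1.14))
\end{verbatim}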
\begin{table}[h]
\centering
\caption{The values of the deformation parameter $\beta_2$ of the Woods-Saxon mean field
for initial (final) nuclei fitted in the calculation to reproduce the experimental quadrupole moment.
Also the fitted values of the particle-particle strength parameter $g_{pp}$ are listed
(the particle-hole strength parameter is $g_{ph}=0.90$).
The BCS overlap factor $\langle BCS_f|BCS_i\rangle$ entering Eq.~(\ref{overlap}) between the initial and final BCS vacua is given in the last column.}
\label{table.1}
\begin{tabular}{|l|c|c|c|}
\hline
& & & \\
initial (final)& $\beta_{2}$&\ $g_{pp}$\
&$\langle BCS_f|BCS_i\rangle$\\
nucleus & & & \\
\hline
& & & \\
$^{150}$Nd ($^{150}$Sm)& 0.240 (0.153) & 1.05
& 0.52 \\
& 0.0\ \ (0.0) & 1.01
& 0.85 \\
\hline
\end{tabular}
\end{table}
Having solved the QRPA equations, the two-nucleon transition amplitudes (\ref{O}) are calculated and, by combining them with the two-body matrix elements of the neutrino potential, the total $0\nu\beta\beta$ NME $M^{0\nu}$ (\ref{M0n}) is formed. The present computation is rather time consuming since numerous programming loops are needed to calculate the decompositions of the deformed two-body matrix elements over the spherical ones. Therefore, to speed up the calculations the mean energy of 7 MeV of the intermediate states is used in the neutrino propagator.
Following Refs.~\cite{Rod05,anatomy,Sim09}, in this first application of the approach
the effects of the finite nucleon size and higher-order weak currents are included.
Recently, it was shown~\cite{Sim09} that a modern self-consistent treatment of the two-nucleon short-range correlations (s.r.c.) changes the NME $M^{0\nu}$ only by a few percent, much less than the change induced by the traditional Jastrow-type representation of the s.r.c. Therefore, we postpone the analysis of the anticipated small effects of the s.r.c. to a forthcoming detailed publication.
In Table~\ref{tab:3} the presently calculated NME $M^{0\nu}$ for $^{150}$Nd is listed (column 4) and compared with the results of other approaches. One can see that the NME $M^{0\nu}$ of this work calculated with neglect of deformation (column 3) agrees well with the previous one of the spherical QRPA~\cite{Rod05}. The small difference can be traced to the somewhat different approximations involved (use of the Woods-Saxon single-particle wave functions and the BCS overlap factor, and neglect of the s.r.c. in the present work). Including deformation (column 4) yields an NME $M^{0\nu}$ that is about 1.8 times smaller. The main origin of the suppression can be attributed to the smaller BCS overlap factor in the latter case, which is due to the marked difference in deformations between the $^{150}$Nd and $^{150}$Sm nuclei (see Table~\ref{table.1}).
Our present NME $M^{0\nu}$ for $^{150}$Nd, obtained within the state-of-the-art QRPA approach that accounts for nuclear deformation, though smaller than the earlier one of Ref.~\cite{Rod05}, still is significantly larger than the NME of other approaches (columns 5,6, and 7 of Table~\ref{tab:3}).
The $0\nu\beta\beta$-decay half-life $T^{0\nu}_{1/2}$ corresponding to the Majorana neutrino mass $\langle m_{\beta\beta} \rangle$ = 50 meV is shorter by more than a factor of two than the most optimistic prediction, that of the IBM-2, among the other
approaches~\footnote{Note that by neglecting the Jastrow-type s.r.c. the IBM-2 NME will get about 20\% larger and be in rather good agreement with our present result.}.
This allows us to hope that the SNO+ experiment will still be able to approach the IH region of the neutrino mass spectrum.
\begin{table*}
\caption{
The matrix elements $M^{0\nu}$ for the $0\nu\beta\beta$ decay $^{150}$Nd$\rightarrow ^{150}$Sm calculated in different models. The final result of this work obtained with account of deformation is given in column 4. A result with neglect of deformation is also listed (column 3) for comparison with the earlier result of Ref.~\cite{Rod05} (column 2). The corresponding half-lives $T^{0\nu}_{1/2}$ (in years) for an assumed effective Majorana neutrino mass $\langle m_{\beta\beta} \rangle$ = 50 meV are also shown.
}
\begin{tabular}{|ccccccc|}
\hline
{}
& {QRPA~\cite{Rod05}
\footnote{using spherical harmonic oscillator
wave functions, no
deformation allowed. The radius parameter $r_0=1.2$ fm is used here, instead of $r_0=1.1$ fm of Ref.~\cite{Rod05}}}
& {this work ($\beta_2=0$)
\footnote{using Woods-Saxon wave functions, no
deformation allowed.}}
& {\bf this work
}
& {pseudo-SU(3)~\cite{Hir95}}
& {PHFB~\cite{Hir08}}
& {IBM-2~\cite{Bar09}}
\\
\hline
$M^{0\nu}$ & 5.17 & 5.78 & {\bf 3.16} & 1.57 & 1.61 & 2.32 \\
$T^{0\nu}_{1/2}$, $10^{25}$ y & 1.72 & 1.38 & {\bf 4.60} & 18.7 & 17.7 & 8.54 \\
($\langle m_{\beta\beta} \rangle$ = 50 meV) & & & & & & \\
\hline
\end{tabular}
\label{tab:3}
\end{table*}
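Since $T^{0\nu}_{1/2}\propto |M^{0\nu}|^{-2}$ at fixed $\langle m_{\beta\beta} \rangle$, the half-lives in Table~\ref{tab:3} can be cross-checked against the matrix elements. The short Python script below reproduces all entries of the half-life row from the first column alone:
\begin{verbatim}
M = {"QRPA [Rod05]": 5.17, "this work (beta2=0)": 5.78,
     "this work": 3.16, "pseudo-SU(3)": 1.57,
     "PHFB": 1.61, "IBM-2": 2.32}
C = 1.72e25 * 5.17**2   # T_1/2 * |M|^2, fixed by the first column
for name, m in M.items():
    print(f"{name:20s} T_1/2 = {C / m**2:.3g} yr")
\end{verbatim}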
To conclude, in this Rapid Communication the most
microscopic state-of-the-art calculation of the nuclear matrix element for neutrinoless double beta decay of $^{150}$Nd with an account for nuclear deformation is performed.
The proton-neutron
QRPA with a realistic residual interaction (the Brueckner $G$ matrix derived from the Bonn-CD nucleon-nucleon potential) is used as the underlying nuclear structure model.
The $0\nu\beta\beta$-decay matrix element $M^{0\nu}$ calculated in this work shows a
suppression by about 40\% with respect to our
previous QRPA result for $^{150}$Nd obtained with neglect of deformation.
Making use of this newest nuclear matrix element, one may conclude that
neutrinoless double beta decay of $^{150}$Nd, to be measured soon by the SNO+ collaboration, provides one of the best sensitivities for the Majorana neutrino mass.
The authors acknowledge the support of the Deutsche Forschungsgemeinschaft under both SFB TR27 ``Neutrinos and Beyond'' and Graduiertenkolleg GRK683.
The equation \begin{equation}\label{LebNag} x^2 + D = y^n \end{equation} is known as the \emph{Lebesgue--Nagell equation}. Here, $x$ and $y$ are coprime integers, $n \geq 3$ and $D$ is an integer whose prime divisors belong to a fixed finite set. The Lebesgue--Nagell equation has a rich history and many cases have been resolved through use of a wide variety of techniques, ranging from primitive divisor arguments and bounds for linear forms in logarithms, to the modular method, based upon the modularity of Galois representations attached to Frey--Hellegouarch curves.
\freefootnote{\emph{Keywords}: Lebesgue--Nagell, Elliptic curves, Frey curve, multi-Frey, $\mathbb{Q}$-curves, modularity, level-lowering, Galois representations, newforms.}
\freefootnote{\emph{MSC2010}: 11D41, 11D61, 11F80, 11G05.}
In recent papers of the first- and third-named authors \citep{BeSi2} and \citep{BeSi}, various tools are developed to tackle equation (\ref{LebNag}) in the two ``difficult'' cases, where either $D>0$ and $y$ is even, or where $D < 0$. In particular, \citep{BeSi} focusses upon these situations where, additionally, it is assumed that
$D$ has a single prime divisor. For primes $q < 100$, the only unsolved cases of the equation $x^2 \pm q^\alpha=y^n$ (see \citep[Theorem~3 and Proposition~13.3]{BeSi}) correspond to
\begin{equation}\label{maineq00}
x^2-2 = y^n,
\end{equation}
\begin{equation}\label{maineq0}
x^2 - q^{2k+1} = y^n, \quad 2 \nmid y,
\end{equation}
for $q \in \{ 3, 5, 17, 37, 41, 73, 89 \}$,
and
\begin{equation}\label{maineq}
x^2 - q^{2k+1} = y^n, \quad 2 \mid y,
\end{equation}
for $q \in \{ 17, 41, 89, 97 \}$. Here, $k$ is a nonnegative integer and, in each case, we suppose that $n \geq 3$ and that $\gcd(x,y)=1$. The fundamental obstruction to resolving equations (\ref{maineq00}) and (\ref{maineq0}), for $q \in \{ 3, 5, 17, 37 \}$,
lies in the existence of a solution with $y=\pm 1$, valid for all (odd) exponents $n$. The analogous obstruction, in case of equation (\ref{maineq0}) with $q \in \{ 41, 73, 89 \}$, or equation (\ref{maineq}), for $q \in \{ 17, 41, 89, 97 \}$, is slightly more subtle, arising from the fact that $q \pm 8$ is a square, in the first case, and from the identities
\begin{equation} \label{ident}
23^2-17=2^9, \; \; 13^2-41=2^7, \; \; 91^2-89=2^{13} \; \; \mbox{ and } \; \; 15^2-97=2^7,
\end{equation}
in the second.
In this paper, we will concentrate on equation (\ref{maineq}),
developing new techniques to handle further values of $q$ via the use of $\Q$-curves and multi-Frey techniques, overcoming some of these obstructions. In particular, we will prove the following.
\begingroup
\renewcommand\thetheorem{1}
\begin{theorem}\label{mainthm}
Let $q \in \{ 41, 97 \}$. Then the solutions to equation (\ref{maineq}) in integers $x, y, k, n$, with $x, k \geq 0$, $n \geq 3$ and $\gcd(x,y)=1$ are as follows:
$$
\begin{array}{c}
(q,x,y,k,n) \; = \; (41,3,-2,0,5),\; (41,7,2,0,3), \; (41,13,2,0,7), \\
(41,411,10,1,5), \; (97,15,2,0,7) \; \mbox{ and } \; (97,77,18,0,3).
\end{array}
$$
\end{theorem}
\endgroup
We are unable to provide a similar result for the cases $q = 17$ and $q = 89$, with obstructions to our method arising from the first and third identities in (\ref{ident}). We will still consider the cases $q = 17$ and $q = 89$ throughout, and in Section 5 we will explain precisely why these solutions prevent us from resolving equation (\ref{maineq}) for these primes $q$. Note that Barros \citep{Barr} claims to resolve equations (\ref{maineq0}) and (\ref{maineq}) in the case $k=0$ and $q=89$; his argument overlooks the obstructing solution corresponding to the third identity in (\ref{ident}).
Using \citep[Theorems 1, 3 and 5]{BeSi}, we obtain the following corollary to Theorem \ref{mainthm}.
\begingroup
\renewcommand\thetheorem{2}
\begin{corollary} All solutions to the equation \[ x^2 \pm 97^\alpha = y^n, \qquad 97 \nmid x,\] for integers $x, \alpha, y$ and $n$ with $x, \alpha \geq 1$ and $n \geq 3$ are given by
$$
(\pm 15)^2 - 97 = 2^7, \; \; (\pm 77)^2 - 97 = 18^3, \; \;
$$
$$
(\pm 175784)^2 - 97^4 = 3135^3 \; \; \mbox{ and } \; \; (\pm 48)^2 + 97 = 7^4.
$$
\end{corollary}
\endgroup
$\Q$-curves have been successfully applied to the problem of solving Diophantine equations in the past; the first such example is due to Ellenberg \citep{Ellenberg}, where he treats the equation
$$
x^2+y^4=z^n,
$$
for suitably large $n$. We refer to \citep{vanlangen} for a clear exposition of the general method and the references therein for more examples of this approach; we highlight \citep{BCDY}, since the set-up (once the Frey--Hellegouarch $\Q$-curve has been constructed) is most similar to ours.
We now outline the rest of the paper. In Section 2, we will associate a rational Frey--Hellegouarch curve $G$ to equation (\ref{maineq}) and recall some results from \citep{BeSi}. In Section 3, we construct a second Frey--Hellegouarch curve $E$, this time defined over the real quadratic field $\Q(\sqrt{q})$, show that it is a $\Q$-curve, and compute its conductor. Then, in Section 4, we will investigate some further properties of this $\Q$-curve, and in particular prove that its restriction of scalars is an abelian surface of $\mathrm{GL}_2$-type, which will allow us to associate the mod $n$ Galois representation of $E$ to a classical newform of a certain level and character. Finally, in Section 5, we will try to eliminate newforms to reach a contradiction.
The \texttt{Magma} \citep{magma} files used to carry out the computations in this paper are available at:
\vspace{3pt}
\begin{center}
\noindent \url{https://warwick.ac.uk/fac/sci/maths/people/staff/michaud/c/}
\end{center}
\bigskip
The second-named author would like to thank Damiano Testa for many useful discussions.
\section{A Rational Frey--Hellegouarch Curve}
Let $q \in \{17,41,89,97 \}$ and suppose that $(x,k,y,n)$ is a solution to equation (\ref{maineq}). We will assume that $x \equiv 1 \pmod{4}$ by replacing $x$ by $-x$ if necessary. We will also assume that $n$ is prime with $n \geq 7$, since the cases $n \in \{3,4,5\}$ are resolved for all values of $q$ in the range $3 \leq q < 100$ in \citep[pp.~6--7,~24]{BeSi}.
Following \citep[Proposition~14.1]{BeSi}, we associate a Frey--Hellegouarch elliptic curve, defined over $\Q$, to this solution:
\begin{equation}\label{RatFrey}
G = G_{x,k,q} \; \; : \; \; Y^2 = X^3 + 4xX^2 + 4(x^2 - q^{2k+1})X.
\end{equation}
The conductor of $G$ is given by
$$
N_G = q \, \mathrm{Rad}(y),
$$
where $ \mathrm{Rad}(y)$ is the product of the distinct primes dividing the nonzero integer $y$.
Applying standard level-lowering results, followed by the elimination of some newforms at level $2q$ (recall that $y$ is even), we find that $\overline{\rho}_{G,n} \sim \overline{\rho}_{F,n} $ for $F = F_q $ an elliptic curve of conductor $2q$ given, in Cremona's notation, in Table \ref{TabRat} (see \citep[Proposition~14.1]{BeSi}). Each curve $F$ in Table \ref{TabRat} corresponds to (at least) one solution to equation (\ref{maineq}). We have
\begin{align*} (-23)^2 - 17 = 2^9 \quad & \text{and} \quad G_{-23,0,17} \cong F_{17}, \\
13^2 - 41 = 2^7 \quad & \text{and} \quad G_{13,0,41} \cong F_{41}, \\ (-91)^2 - 89 = 2^{13} \quad & \text{and} \quad G_{-91,0,89} \cong F_{89}, \\ (-15)^2 - 97 = 2^7 \quad & \text{and} \quad G_{-15,0,97} \cong F_{97}. \\
\end{align*}
These isomorphisms of elliptic curves prevent us from using the isomorphisms of mod $n$ Galois representations $\overline{\rho}_{G,n} \sim \overline{\rho}_{F,n} $ to obtain an upper bound on $n$. We can, in fact, deduce such a bound through appeal to linear forms in logarithms, but it will be impractically large for our purposes, in each case well in excess of $10^{10}$. It is worth observing that equation (\ref{maineq}) is the more problematical case (in comparison to equation (\ref{maineq0})), for the purposes of application of bounds for linear forms in logarithms. In case of equation (\ref{maineq0}), results of Bugeaud \citep{Bu-Acta} imply that
$$
n < 4.5 \cdot 10^6 q^2 \log^2q,
$$
which we can, with care, sharpen to an upper bound upon $n$ of somewhat less than $10^6$ for, say, $q=3$ in equation (\ref{maineq0}). Even with such a bound, it remains impractical to finish the problem via this approach, since we have no reasonable techniques to obtain a contradiction for a fixed value of $n$ in (\ref{maineq0}), while, as discussed in
\citep[pp.~34--35]{BeSi}, in the case of equation (\ref{maineq}), we have such a method which is unfortunately computationally infeasible, given the size of our upper bounds for $n$.
For small values of $n$, however, we have the following.
\begingroup
\renewcommand*{\arraystretch}{1.8}
\begin{table}[ht!]
\begin{center} \small
\begin{tabular}{ |c|c|c|c|c| }
\hline
$q$ & $17$ & $41$ & $89$ & $97$ \\
\hline
$F_q$ & 34a1 & 82a1 & 178b1 & 194a1 \\
\hline
\end{tabular}
\vskip2ex
\caption{\label{TabRat}\normalsize Elliptic curves that cannot be eliminated.}
\end{center}
\end{table}
\endgroup
\normalsize
\begin{lemma}[{\citep[Proposition~14.1]{BeSi}}] \label{>1000} Let $ q \in \{ 17, 41, 89, 97 \} $ and suppose that $(x,k,y,n)$ is a solution to equation (\ref{maineq}) with $x \equiv 1 \pmod{4}$ and $n \geq 7$ prime. Then $n > 1000$ or $(q,x,y,k,n)$ is one of
$$
(17,-71,2,1,7), \; (41,13,2,0,7), \; (89,-91,2,0,13) \mbox{ or } (97,-15,2,0,7).
$$
\end{lemma}
We note that \citep[Proposition~14.1]{BeSi} also provides information on the parity of the exponent $k$; we will not have use of this.
\begin{proof} When $q \ne 17$, this follows immediately from \citep[Proposition~14.1]{BeSi}. For $q=17$, we use exactly the same method to achieve the desired result. Using \citep[Lemma~14.3]{BeSi} deals with all $n > 7$. For the case $n = 7$, we start by applying \citep[Lemma~14.6]{BeSi}, and following the arguments of \citep[pp. 32--34]{BeSi} leaves us needing to solve three Thue--Mahler equations, each of degree $7$. To be precise, we need to solve
$$
a_7X^7+a_6X^6Y+a_5X^5Y^2+a_4X^4Y^3+a_3X^3Y^4+a_2X^2Y^5+a_1XY^6+a_0Y^7=17^k,
$$
where $(a_7,a_6,a_5,a_4,a_3,a_2,a_1,a_0)$ is one of
$$
(139,1519,7119,18515,28945,27069,14133,3137),
$$
$$
(17, 189, 861, 2345, 3395, 3591, 1519, 467)
$$
or
$$
(1, 189, 14637, 677705, 16679635, 299923911, 2156762783, 11272244723).
$$
Using the code and techniques of \citep{ThueMahler}, we find that the first two of these equations yield no solutions, and the third has only the solution $(X,Y)=(1,0)$, which corresponds to the identity $(-71)^2 - 17^3 = 2^7$. These computations took approximately 2000 seconds for the first equation, and just over one minute for each of the second and third, running Magma V2.24-5 on a 2019 MacBook Pro.
\end{proof}
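As an independent heuristic check of the Thue--Mahler computation (it of course proves nothing about large solutions), one can scan small coprime pairs $(X,Y)$ directly; the following Python sketch does this for the first of the three forms, with an arbitrary search bound:
\begin{verbatim}
import math

a = (139, 1519, 7119, 18515, 28945, 27069, 14133, 3137)

def F(X, Y):
    return sum(c * X**(7 - i) * Y**i for i, c in enumerate(a))

B = 200
for X in range(-B, B + 1):
    for Y in range(-B, B + 1):
        if math.gcd(abs(X), abs(Y)) != 1:
            continue
        v = F(X, Y)
        if v <= 0:
            continue
        while v % 17 == 0:
            v //= 17
        if v == 1:
            print("solution:", X, Y)
\end{verbatim}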
To proceed further, we will now turn our attention to
a new Frey--Hellegouarch curve, defined over the real
quadratic field $\Q(\sqrt{q})$.
\section{Constructing a Frey--Hellegouarch \texorpdfstring{$\Q$}{}-Curve}
Let $q \in \{17,41,89,97\}$ and write $M = \Q(\sqrt{q})$. In this section, we construct a new Frey--Hellegouarch curve, this time defined over $M$. This curve will be a $\Q$-curve, i.e. an elliptic curve, defined over some number field, that is isogenous over $\overline{\Q}$ to all of its Galois conjugates. The $\Q$-curve we define will in fact be \emph{completely defined} over $M$, meaning that the isogeny between the curve and its conjugate is also defined over $M$. We will start by following the approach suggested in \citep[pp.~47--48]{BeSi}.
In each case $M$ has class number $1$. We write $\mathcal{O}_M$ for the ring of integers of $M$. Although we may suppose that $n > 1000$ by Lemma \ref{>1000}, we will for the moment simply assume $n \geq 7$ (and $n$ prime) as in the previous section. We will write $\delta$ for a fundamental unit for $\mathcal{O}_M$. For each value of $q$, the rational prime $2$ splits in $\mathcal{O}_M$.
Let
\[ \gamma = \begin{cases*} (-3 + \sqrt{q})/2 & if $q = 17$, \\ (-19 - 3 \sqrt{q}) /2 & if $q = 41$, \\ (9 + \sqrt{q})/2 & if $q = 89$, \\ (325 + 33 \sqrt{q})/2 & if $q = 97$. \end{cases*}
\]
Here, we have chosen $\gamma$ such that it is a generator for one of the two prime ideals above $2$, and such that
$$
\gamma \overline{\gamma} = -2, \; \; \overline{\gamma} \equiv -1 \pmod{ \gamma^2} \; \; \mbox{ and } \; \; \sqrt{q} \equiv -1 \pmod{\gamma^2}.
$$
The relevance of these properties will be seen in due course.
We will now factor the left-hand side of equation (\ref{maineq}) over $M$. Writing $y = 2y_1$, we have \[ \left( \frac{x+q^k \sqrt{q} }{2} \right) \left( \frac{x - q^k \sqrt{q} }{2} \right) = 2^{n-2} y_1^n. \] Since $ q \equiv 1 \pmod{4}$, each factor on the left-hand side is in $\mathcal{O}_M$. Since $q \nmid x$, we see that \[ \gcd \left( \frac{x+q^k \sqrt{q} }{2}, \frac{x-q^k \sqrt{q} }{2} \right) = 1. \] Now, because $\overline{\gamma} \equiv -1 \pmod{ \gamma^2}$ and $ x \equiv 1 \pmod{4}$, we see that $\gamma$ must divide $(x + q^k \sqrt{q})/2$, and so $\overline{\gamma}$ will divide $(x - q^k \sqrt{q})/2$. Then by coprimality of the two factors, we have \begin{equation*} \frac{x+q^k \sqrt{q} }{2} = \delta^r \gamma^{n-2} \alpha^n,
\end{equation*} for some $r \in \Z$ and $\alpha \in \mathcal{O}_M$. We then obtain that \begin{equation}\label{newnn2}
q^k\sqrt{q} = \delta^r \gamma^{n-2} \alpha^n - \overline{\delta}^r \overline{\gamma}^{n-2} \overline{\alpha}^n.
\end{equation} Treating this equation as a generalized Fermat equation of signature $(n,n,n)$ would lead to a Frey--Hellegouarch curve isogenous to the rational Frey--Hellegouarch curve $G$ defined by (\ref{RatFrey}). Instead, we will view this as an equation of signature $(n,n,2)$.
Write $k = 2m$ or $2m+1$ according to whether $k$ is even or odd. Let \begin{equation*}
w = \begin{cases*} \dfrac{(x+q^{2m}\sqrt{q})}{2} \cdot \sqrt{q}^3 = \delta^r \gamma^{n-2} \alpha^n \sqrt{q}^3 & if $k = 2m$, \\
\dfrac{(x+q^{2m+1}\sqrt{q})}{2} \cdot \sqrt{q} = \delta^r \gamma^{n-2} \alpha^n \sqrt{q} & if $k = 2m+1$. \end{cases*}
\end{equation*} From equation (\ref{newnn2}), we deduce that \begin{equation*} \gcd(w, \overline{w}) = \begin{cases*} \sqrt{q}^3 & if $k = 2m$, \\ \sqrt{q} & if $k = 2m+1$. \end{cases*}
\end{equation*} We can then rewrite (\ref{newnn2}) as \begin{equation*} w + \overline{w} = q^{2k+2}. \end{equation*}
One can attach to any equation of the form $w+\overline{w} = u^2$, with $u \in \Q$, a Frey--Hellegouarch $\Q$-curve; see, by way of example, \citep[pp.~199,~203--204]{vanlangen}. We take our $\Q$-curve to be \begin{equation}\label{E}
E = E_{x,m} = E_{x,m,q}: \; Y^2 = X^3 + 2\gamma q^{m+1} X^2 + \gamma^2 w X.
\end{equation} This $\Q$-curve is a quadratic twist by $\gamma$ of the $\Q$-curve one would obtain applying the recipe in \citep[p.~199]{vanlangen}. The reason for twisting by $\gamma$ is to ensure the curve $E$ is \emph{completely defined} over $M$, meaning the isogeny between $E$ and its Galois conjugate is also defined over $M$. Let $\sigma$ denote the non-trivial element of $\mathrm{Gal}(M / \Q)$. Then we have
\begin{equation}
\overline{E} = \overline{E}_{x,m} = \overline{E}_{x,m,q}: \; Y^2 = X^3 + 2\overline{\gamma} q^{m+1} X^2 + \overline{\gamma}^2 \overline{w} X,
\end{equation} and a $2$-isogeny, defined over $M$, \begin{equation}\label{varphi} \varphi_\sigma: \overline{E} \rightarrow E, \qquad (X,Y) \mapsto \left( \frac{X^2 + 2 \overline{\gamma}q^{m+1} + \overline{\gamma}^2 \overline{w}}{\overline{\gamma}^2 X} , \frac{(X^2 -\overline{\gamma}^2 \overline{w}) Y }{\overline{\gamma}^3 X^2} \right). \end{equation}
We would like to compute the conductor $\mathcal{N}_E$ of $E$. We first note that the curve $E$ has the following standard invariants:
\begin{equation*} c_4 = \gamma^6 \overline{\gamma}^4(w + 4 \overline{w}), \; \; c_6 = \gamma^9 \overline{\gamma}^6 (w - 8 \overline{w}) q^{m+1} \; \; \mbox{ and } \; \; \Delta = \gamma^{12} \overline{\gamma}^6 w^2 \overline{w}.
\end{equation*}
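These expressions follow from the standard formulae for a curve $Y^2=X^3+a_2X^2+a_4X$ together with the relations $\gamma\overline{\gamma}=-2$ and $w+\overline{w}=q^{2m+2}$. A short symbolic verification in Python (SymPy assumed available; $Q$ denotes $q^{m+1}$) is:
\begin{verbatim}
from sympy import symbols, expand, simplify

g, gb, w, wb, Q = symbols('g gb w wb Q')  # g = gamma, gb = conjugate
a2, a4 = 2*g*Q, g**2*w                    # E: Y^2 = X^3 + a2 X^2 + a4 X
c4   = 16*(a2**2 - 3*a4)
c6   = -32*a2*(2*a2**2 - 9*a4)
disc = 16*a4**2*(a2**2 - 4*a4)
sub = {Q**2: w + wb}                      # uses w + wb = Q^2
print(simplify(expand(c4).subs(sub)   - 16*g**2*(w + 4*wb)))   # 0
print(simplify(expand(c6/Q).subs(sub) - 64*g**3*(w - 8*wb)))   # 0
print(simplify(expand(disc).subs(sub) - 64*g**6*w**2*wb))      # 0
# with g*gb = -2 one has 16 = (g*gb)^4 and 64 = (g*gb)^6,
# which gives the displayed forms of c4, c6 and Delta.
\end{verbatim}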
\begin{lemma}\label{Conductor1} Let $n \geq 11$. The curve $E$ has multiplicative reduction at both primes of $M$ above $2$. As a consequence, $E$ does not have complex multiplication.
\end{lemma}
\begin{proof} We recall that $\gamma$ and $\overline{\gamma}$ generate the two prime ideals of $M$ above $2$. The model $E$ is not minimal at these primes, but we will not actually need to write down a minimal model.
We have
$$
\operatorname{ord}_\gamma(w) = \operatorname{ord}_{\overline{\gamma}} (\overline{w}) = n - 2 + n \operatorname{ord}_\gamma (\alpha) ~\text{ and }~ \operatorname{ord}_{\overline{\gamma}}(w) = \operatorname{ord}_{\gamma} (\overline{w})=0,
$$
whence we deduce that
\begin{align*} \operatorname{ord}_\gamma(c_4) & = 6 + \operatorname{ord}_\gamma(w+4\overline{w}) = 8, \\ \operatorname{ord}_\gamma(c_6) & = 9 + \operatorname{ord}_\gamma(w-8\overline{w}) = 12, \\ \operatorname{ord}_\gamma(\Delta) & = 12 + 2(n-2+n \operatorname{ord}_\gamma(\alpha)) = 8 + 2n(1+\operatorname{ord}_\gamma(\alpha)).
\end{align*}
Similarly, we see that
$$
\operatorname{ord}_{\overline{\gamma}}(c_4) = 4, \; \; \operatorname{ord}_{\overline{\gamma}}(c_6) = 6 \; \; \mbox{ and } \; \; \operatorname{ord}_{\overline{\gamma}}(\Delta) = 4 + n(1+\operatorname{ord}_{\overline{\gamma}}(\alpha) ).
$$
Writing $j = c_4^3 / \Delta$ for the $j$-invariant of $E$, we have that \[ \operatorname{ord}_\gamma(j) = 16 - 2n(1+\operatorname{ord}_\gamma(\alpha)) < 0 ~ \text{ and } ~ \operatorname{ord}_{\overline{\gamma}}(j) = 8 - n(1+\operatorname{ord}_\gamma(\alpha)) < 0, \] since $n \geq 11$ by assumption. We note that these inequalities will in fact hold whenever $n \geq 9$. We conclude that $E$ has potentially multiplicative reduction at each prime above $2$. We can in fact already see at this point that $E$ does not have complex multiplication, since the $j$-invariant of $E$ is non-integral.
In order to show that $E$ has multiplicative reduction at each prime above $2$, it will be enough to prove that the extension $M(\sqrt{-c_6/c_4})/M$ is unramified at $\gamma$ and $\overline{\gamma}$.
We have, recalling that $\gamma \overline{\gamma} = -2$,
\begin{align*} - \frac{c_6}{c_4} & = - \gamma^3 \overline{\gamma}^2 \cdot \frac{w-8\overline{w}}{w+4\overline{w}} \cdot \sqrt{q}^{2m+2} = -\gamma^3 \overline{\gamma}^2 \cdot \frac{w + \gamma^3 \overline{\gamma}^3 \overline{w}}{w+ \gamma^2 \overline{\gamma}^2 \overline{w}} \cdot \sqrt{q}^{2m+2}
\\ & = -\frac{\gamma^4}{\gamma} \overline{\gamma}^2 \cdot \frac{w / \gamma^3 + \overline{\gamma}^3 \overline{w}}{w / \gamma^3+ \overline{\gamma}^2 \overline{w} / \gamma } \cdot \sqrt{q}^{2m+2} = - (\gamma^2 \overline{\gamma} \sqrt{q}^{m+1})^2 \cdot \frac{w / \gamma^3 + \overline{\gamma}^3 \overline{w}}{w / \gamma^2+ \overline{\gamma}^2 \overline{w} }. \end{align*} Write \[ \eta = \gamma^2 \overline{\gamma} \sqrt{q}^{m+1} \quad \text{ and } \quad \kappa = - \frac{w / \gamma^3 + \overline{\gamma}^3 \overline{w}}{w / \gamma^2+ \overline{\gamma}^2 \overline{w} }, \] so that $M( \sqrt{-c_6 / c_4 }) = M(\sqrt{\kappa}) = M((1+\sqrt{\kappa})/2)$.
Consider the numerator of $\kappa$. We have that $\operatorname{ord}_\gamma(\overline{\gamma}^3\overline{w}) = 0$, and
$$
\operatorname{ord}_\gamma( w / \gamma^3) = n - 2 + n \operatorname{ord}_\gamma(\alpha) - 3 = n - 5 + n \operatorname{ord}_\gamma(\alpha) \geq 6 > 0,
$$
as $n \geq 11$, so $\gamma$ does not divide the numerator and, similarly for the denominator. So $\operatorname{ord}_\gamma(\kappa) = 0$, and similarly, $\operatorname{ord}_{\overline{\gamma}}(\kappa) = 0$. We have that $\operatorname{ord}_\gamma(w / \gamma^3), \; \operatorname{ord}_\gamma(w/\gamma^2) > 2$, so $\kappa \equiv -\overline{\gamma} \equiv 1 \pmod{\gamma^2}$ by our choice of $\gamma$. We also have that $\kappa \equiv -1/\gamma \equiv 1 \pmod{\overline{\gamma}^2}$ since $\gamma \equiv -1 \pmod{\overline{\gamma}^2}$.
Now, $(1+\sqrt{\kappa})/2$ satisfies the polynomial \[ X^2 - X + \frac{1-\kappa}{4},\] which is integral at $\gamma$ and $\overline{\gamma}$ and has discriminant $\kappa$. This proves that the extension $M(\sqrt{-c_6/c_4})/M$ is unramified at $\gamma$ and $\overline{\gamma}$. \end{proof}
\begin{lemma}\label{Conductor2} Let $n \geq 11$. Then \begin{enumerate}
\item If $\pi \nmid 2q \alpha \overline{\alpha}$ is a prime of $M$, then $E$ has good reduction at $\pi$;
\item If $\pi \nmid 2q$ is a prime of $M$ dividing $\alpha$ or $\overline{\alpha}$, then the model of $E$ is minimal at $\pi$, the prime $\pi$ is of multiplicative reduction for $E$, and $n \mid \operatorname{ord}_\pi(\Delta)$;
\item $E$ has additive, potentially good reduction at $\sqrt{q} \cdot \mathcal{O}_M$. In particular, we have that $\operatorname{ord}_{\sqrt{q}}(\mathcal{N}_E) = 2$, since $q \geq 5$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\pi \nmid 2q$ be a prime of $M$. So $\pi \nmid \gamma \overline{\gamma} \sqrt{q}$. If $\pi \nmid \alpha \overline{\alpha}$, then $\pi \nmid \Delta$, so $\pi$ is a prime of good reduction for $E$, proving the first part.
Suppose instead that $\pi \nmid 2q$, but that $\pi \mid \alpha \overline{\alpha}$. Then $\pi \mid \Delta$. From equation (\ref{newnn2}) we see that $\gcd(\alpha, \overline{\alpha}) = 1$ (using the fact that $\sqrt{q} \nmid \alpha \overline{\alpha}$, since $ q \nmid y$). So either $\pi \mid \alpha$ or $\pi \mid \overline{\alpha}$, but not both. So $\pi \mid w$ or $\pi \mid \overline{w}$, but not both. It follows that $\pi \nmid c_4$. So $E$ is minimal at $\pi$, and $\pi$ is a prime of multiplicative reduction for $E$. Moreover, $\operatorname{ord}_\pi(\Delta) = 2n \operatorname{ord}_\pi(\alpha) + n \operatorname{ord}_\pi(\overline{\alpha}) \equiv 0 \pmod{n}$, as required.
Finally, we consider $\sqrt{q}$. We have that $\operatorname{ord}_{\sqrt{q}}(w) = \operatorname{ord}_{\sqrt{q}}(\overline{w}) = 1$ or $3$ according to whether $k$ is odd or even. So $\operatorname{ord}_{\sqrt{q}}(\Delta) = 3$ or $9$, and $\sqrt{q} \mid c_4$. It follows that $E$ is minimal with additive reduction at $\sqrt{q}$. To see that we have potentially good reduction, we show that $\operatorname{ord}_{\sqrt{q}}(j) \geq 0$. We must show that $3\operatorname{ord}_{\sqrt{q}}(c_4) \geq \operatorname{ord}_{\sqrt{q}}(\Delta)$. We have that $\operatorname{ord}_{\sqrt{q}}(c_4) = \operatorname{ord}_{\sqrt{q}}(w + 4\overline{w}) \geq 3$ or $1$ according to whether $k$ is even or odd. Then $\operatorname{ord}_{\sqrt{q}}(\Delta) = 9$ or $3$ according to whether $k$ is even or odd. So we have the required inequality.
\end{proof}
Combining Lemmas \ref{Conductor1} and \ref{Conductor2}, we have that \[ \mathcal{N}_E = \left( \gamma \overline{\gamma} \cdot \sqrt{q}^2 \cdot \mathrm{Rad}_2(\alpha \overline{\alpha}) \right) \cdot \mathcal{O}_M, \] where $\mathrm{Rad}_2(\alpha \overline{\alpha})$ denotes the product of all prime ideals of $M$ dividing $\alpha \overline{\alpha}$ but not dividing $2$.
\section{Irreducibility and Level-Lowering}
We would like to apply certain level-lowering results to $E$ in order to relate $E$ to a newform of a particular level and character. We must first prove irreducibility of $\overline{\rho}_{E,n}$, the mod $n$ Galois representation of $E$.
\begin{proposition}\label{irreduc} Let $q \in \{17,41,89,97 \}$. The representation $\overline{\rho}_{E,n}$ is irreducible for $n \geq 11$.
\end{proposition}
\begin{proof} Suppose that $\overline{\rho}_{E,n}$ is reducible with $n \geq 11$. If $n = 13$, then arguing as in \citep[p.~215]{vanlangen}, $E$ would give rise to a $\Q(\sqrt{13})$-point on the modular curve $X_0(26)$, a contradiction, since $q \ne 13$. We will therefore suppose that $n = 11$ or $n > 13$. Since $E$ is a $\Q$-curve defined over a quadratic field and the isogeny $\varphi$ has degree $2$, \citep[Proposition~3.2]{Ellenberg} tells us that every prime of $M$ of characteristic $>3$ is a prime of potentially good reduction for $E$.
We first show that $y$ must be a power of $2$. If $\ell >3$ is a prime with $\ell \mid y$, then each prime of $M$ above $\ell$ will divide either $\alpha$ or $\overline{\alpha}$, and it follows (by Lemma \ref{Conductor2}) that we have a prime of characteristic $\ell > 3$ of multiplicative reduction for $E$, a contradiction. Next, suppose that $3 \mid y$. Then $3$ is a prime of multiplicative reduction for the rational Frey--Hellegouarch curve $G$ defined in (\ref{RatFrey}). From the isomorphism $\overline{\rho}_{G,n} \sim \overline{\rho}_{F,n}$, for $F$ an elliptic curve of level $2q$ in Table \ref{TabRat}, we have, writing $f$ for the newform corresponding to $F$, that
\begin{equation} \label{Hasse!}
n \mid 3+1 \pm a_3(f).
\end{equation}
From the Hasse bound, we have that $\abs{a_3(f)} \leq 2 \sqrt{3}$ and hence the right-hand-side of (\ref{Hasse!}) is a nonzero integer, bounded above by $4+2\sqrt{3}$. This contradicts $n \geq 11$ and so we may conclude that $y$ is necessarily a power of $2$, say $y = 2^s$, with $ s \geq 1$ since $y$ is even.
We thus have that $x^2 - 2^{ns} = q^{2k+1}$. By \citep[p.~328]{Ivorra}, this equation has no solutions with $n \geq 11$, provided $2k+1 >1$. It follows that $k = 0$ and we have \[ x^2 = 2^{ns} + q. \] Multiplying both sides by $2^2$ or $2^4$ if necessary, we obtain an integral point on one of the following elliptic curves:
\begin{align*}
Y^2 = X^3 + q, \; \;
Y^2 = X^3 + 2^2q \; \; \mbox{ or } \; \;
Y^2 = X^3 + 2^4 q
\end{align*}
Computing the integral points on each of these curves for each value of $q$ using \texttt{Magma} quickly leads to a contradiction.
\end{proof}
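The integral-point computations rely on \texttt{Magma}'s provably complete routines. Purely as a quick illustration, the special shape $x^2 = 2^{ns} + q$ arising in the proof can also be scanned directly, which for $q = 97$ recovers the identity $15^2 - 97 = 2^7$ of (\ref{ident}):
\begin{verbatim}
import math

q = 97
for t in range(1, 300):
    v = (1 << t) + q       # 2^t + q
    r = math.isqrt(v)
    if r * r == v:
        print(f"{r}^2 - {q} = 2^{t}")
\end{verbatim}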
We highlight the fact that we have used the rational Frey--Hellegouarch curve $G$ to help us prove the irreducibility of $\overline{\rho}_{E,n}$.
\begin{remark} At this point, we could apply standard level-lowering results over $M$ (see \citep[Theorem~7]{asym} for example) to relate $\overline{\rho}_{E,n}$ to the Galois representation of a Hilbert newform at level $\gamma \overline{\gamma} \sqrt{q}^2 \cdot \mathcal{O}_M$. For $q = 17, 41, 89,$ and $97$, the dimensions of these spaces of newforms are $46, 1093, 9631,$ and $26378$ respectively. Computing the newform data at these levels using \texttt{Magma} is certainly possible for $q = 17$, and would also likely be achievable for $q = 41$ by working directly with Hecke operators (see \citep[p.~26]{MJ} for example), but for $q = 89$, and especially for $q = 97$, the dimensions are likely too large for current computations. The $\Q$-curve approach we now present will allow us to work with classical modular forms and make the resulting computations feasible.
\end{remark}
We start by computing some data associated to the $\Q$-curve $E$, which we recall does not have complex multiplication (by Lemma \ref{Conductor1}). We will use the notation and terminology of Quer \citep{Quer}. We note that we are in a similar set-up to that of \citep[pp.~8--9]{BCDY}. As in the previous section, we write $\mathrm{Gal}(M/\Q) = \{ 1, \sigma \}$. We have the isogeny $\varphi_\sigma : \overline{E} \rightarrow E$ given by (\ref{varphi}), and $\varphi_1$ will denote the identity morphism on $E$. Write $c: \mathrm{Gal}(M/\Q) \times \mathrm{Gal}(M/\Q) \rightarrow \Q^*$ for the $2$-cocycle given by \[ c(s,t) = \varphi_s \, {}^s\varphi_t \; \varphi_{st}^{-1}. \] We have that $c(1,1) = c(1,\sigma) = c(\sigma, 1) = 1$. By direct calculation, we verify that $c(\sigma,\sigma) = \varphi_\sigma ({}^\sigma \varphi_\sigma) = -2$.
Next, define \[ \beta : \mathrm{Gal}(M / \Q) \rightarrow \overline{\Q}^*, \qquad \beta(1) = 1,~ \beta(\sigma) = \sqrt{-2}. \] This map satisfies \begin{equation}\label{schur} c(s,t) = \beta(s)\beta(t) \beta(st)^{-1} \quad \text{ for } s,t \in \mathrm{Gal}(M/ \Q). \end{equation} It follows that $\beta$ is a \emph{splitting map} for $c$. The \emph{splitting character} associated to $\beta$ is then defined by \[ \epsilon(s) = \beta(s)^2 / \deg(\varphi_s). \] So $\epsilon(1) = 1$ and $\epsilon(\sigma) = -1$, and $\epsilon$ is the quadratic Galois character associated to $M$. We may also view $\epsilon$ as a quadratic Dirichlet character $\epsilon: (\Z / q\Z)^\times \rightarrow \{\pm1 \}$ of conductor $q$ (recall that $q \equiv 1 \pmod{4}$) via the natural map $\Q(\zeta_q) \rightarrow M$ and isomorphism $\mathrm{Gal}(\Q(\zeta_q) / \Q) \cong (\Z / q\Z)^\times$.
Write $B = \mathrm{Res}_\Q^M(E)$ for the restriction of scalars of $E$ to $\Q$. This is an abelian surface defined over $\Q$ and plays an important role. The relation (\ref{schur}) shows that the $2$-cocycle $c$ has trivial Schur class (i.e. is trivial when viewed as an element of $H^2(\mathrm{Gal}(M/\Q), \overline{\Q}^*)$ with trivial action). By \citep[Proposition~5.2]{Quer}, we deduce that $B$ decomposes as a product of abelian varieties of $\mathrm{GL}_2$-type. Moreover, the $\Q$-simple abelian variety of $\mathrm{GL}_2$-type, $A_\beta$, attached to $\beta$, which is a quotient of $B$, will have endomorphism algebra $\Q(\beta(1),\beta(\sigma)) = \Q(\sqrt{-2})$ (see \citep[pp.~305--306]{Quer}), and is therefore itself an abelian surface. It follows that $B \sim_{\Q} A_\beta$ is $\Q$-simple and of $\mathrm{GL}_2$-type with $\Q$-endomorphism algebra $\Q(\sqrt{-2})$. We record this in the following proposition.
\begin{proposition} The abelian surface $B = \mathrm{Res}_\Q^M(E)$ is $\Q$-simple and of $\mathrm{GL}_2$-type. It has $\Q$-endomorphism algebra $\Q(\sqrt{-2})$. The conductor of $B$ is given by \[ N_B = \left(2q^2 \,\mathrm{Rad}_2(y) \right)^2. \]
\end{proposition}
\begin{proof}
It remains to compute the conductor of $B$. This can be obtained from the conductor of $E$ using the formula in \citep[Proposition~1]{Milne}. Writing $\Delta_M$ for the discriminant of $M$, we have \[ N_B = (\Delta_{M})^2 \, \mathrm{Norm}(\mathcal{N}_E) = q^2 \cdot 2^2 q^2 \cdot \mathrm{Norm}(\mathrm{Rad}_2(\alpha \overline{\alpha})) = 2^2 q^4 (\mathrm{Rad}_2(y))^2, \] and the proposition follows.
\end{proof}
We can now use the modularity of $B$ and standard level-lowering results to deduce the following result.
\begin{proposition}\label{repiso} Let $q \in \{17,41,89,97\}$ and let $n \geq 11$. Write $G_M = \mathrm{\mathrm{Gal}}(\overline{\Q}/M)$. Then we have \begin{equation}\label{repisoeq} \overline{\rho}_{E,n} \sim \left.{\overline{\rho}_{f,\mathfrak{n}}}\right|_{G_M}, \end{equation} for $f$ a newform of level $2q^2$ and character $\epsilon$, and $\mathfrak{n}$ a prime above $n$ in the coefficient field of $f$.
\end{proposition}
\begin{proof} By \citep[Theorem~4.4]{Ribet}, $B$ is isogenous to a factor, $A_g$, of $J_1(N)$ for some $N$, where $A_g$ is the abelian variety attached to some newform $g$. We have that $N^{\dim(A_g)} = N_B = \left(2q^2 \,\mathrm{Rad}_2(y) \right)^2,$ and so $N = 2q^2 \,\mathrm{Rad}_2(y)$. Moreover, $g$ has character $\epsilon^{-1} = \epsilon$, since $\epsilon$ has order $2$.
By Proposition \ref{irreduc}, the representation $\overline{\rho}_{E,n}$ is irreducible, so the representation $\overline{\rho}_{g,\pi}$ is too, and applying standard level-lowering results, we have that $\overline{\rho}_{g,\mathfrak{\pi}} \sim \overline{\rho}_{f,\mathfrak{n}}$, for $f$ a newform of level $2q^2$ and character $\epsilon$, and $\pi, \mathfrak{n}$ primes above $n$. Since $E$ is completely defined over $M$, restricting to $G_M$, we obtain \[ \overline{\rho}_{E,n} \sim \left.{\overline{\rho}_{g,\mathfrak{\pi}}}\right|_{G_M} \sim \left.{\overline{\rho}_{f,\mathfrak{n}}}\right|_{G_M},\] as required.
\end{proof}
\section{Eliminating Newforms}
We start by using \texttt{Magma} to compute the Galois conjugacy classes of newforms (i.e. their $q$-expansions) at level $2q^2$ with character $\epsilon$. Table \ref{TabNew} records some of this data.
\begingroup
\renewcommand*{\arraystretch}{1.8}
\begin{table}[ht!]
\begin{center} \footnotesize
\begin{tabular}{ |c|c|c|c|c| }
\hline
$q$ & $\dim$ & no. classes & (size of class, multiplicity) & time \\
\hline
$17$ & $22$ & $6$ & $(2,3), (4,1), (6,2)$ & $1$s \\
\hline
$41$ & $136$ & $18$ & $(2,4),(4,5),(6,2),(4,8),(16,1),(24,2)$ & $8$s\\
\hline
$89$ & $652$ & $26$ & \makecell{$(2,4),(4,2), (6,4), (8,3), (12,2), (24,3), (30,1), $ \\ $ (40,2), (50,1), (60,1), (80,1), (96,2) $} & $400$s \\
\hline
$97$ & $774$ & $29$ & \makecell{$(2,4),(4,3), (6,3),(8,4),(12,3),(20,3),(24,1), $ \\ $ (32,3),(40,1), (48,1),(64,1),(168,2) $ } & $739$s\\
\hline
\end{tabular}
\vskip2ex
\caption{\label{TabNew}\normalsize Newform data. Here, \emph{dim} refers to the dimension of the space of newforms and \emph{time} refers to the computation time using a 2200\,MHz AMD Opteron processor.}
\end{center}
\end{table}
\endgroup
\normalsize
We now see how we can try to eliminate these newforms using the isomorphism of representations (\ref{repisoeq}).
Let $\mathfrak{p} \nmid 2qn$ be a prime of $M$ above $p$ and denote by $\mathrm{Frob}_\mathfrak{p} \in G_M$ a Frobenius element at $\mathfrak{p}$. Let $f$ denote the newform related to $E$ in Proposition \ref{repiso}. Then, \begin{equation}\label{Traceeq} \mathrm{Tr}(\overline{\rho}_{E,n}(\mathrm{Frob}_\mathfrak{p})) = \mathrm{Tr}(\overline{\rho}_{f,\mathfrak{n}}(\mathrm{Frob}_\mathfrak{p})).\end{equation}
We start by considering the right-hand side of (\ref{Traceeq}). Writing $a_p(f)$ for the $p$-th coefficient of the $q$-expansion of $f$, we have that (see \citep[pp.~217--219]{vanlangen} for example) \begin{equation}\label{Tracef}
\mathrm{Tr}(\overline{\rho}_{f,\mathfrak{n}}(\mathrm{Frob}_\mathfrak{p})) \equiv \begin{cases*} a_p(f) \pmod{\mathfrak{n}} & if $p$ splits in $M$, \\ a_p(f)^2 + 2p \pmod{\mathfrak{n}} & if $p$ is inert in $M$. \end{cases*}
\end{equation} Here we have used the fact that $\epsilon(p) = -1$ when $p$ is inert in $M$. Let \[ t_{f, \mathfrak{p}} = \begin{cases*} a_p(f) & if $p$ splits in $M$, \\ a_p(f)^2 + 2p & if $p$ is inert in $M$. \end{cases*} \] Then $t_{f, \mathfrak{p}}$ is independent of $\mathfrak{n}$, but satisfies $\mathrm{Tr}(\overline{\rho}_{f,\mathfrak{n}}(\mathrm{Frob}_\mathfrak{p})) \equiv t_{f, \mathfrak{p}} \pmod{\mathfrak{n}}$ by (\ref{Tracef}).
Next, for the left-hand side of (\ref{Traceeq}), the quantity $\mathrm{Tr}(\overline{\rho}_{E,n}(\mathrm{Frob}_\mathfrak{p})) $ is dependent on our choice of $x$ and $m$ (i.e. dependent on our original solution to equation (\ref{maineq})). However, looking at how $E = E_{x,m}$ is defined in (\ref{E}), we see that the trace will only depend on $x$ and $q^{m} \pmod{p}$. In particular, it will only depend on the value of $x$ modulo $p$, and $m$ modulo $(p-1)$ (in fact it will only depend on $m$ modulo the multiplicative order of $q \pmod{p}$).
Given $0 \leq \chi \leq p-1$ and $0 \leq \mu \leq p-2$, write $E_{\chi,\mu}$ for the curve obtained by substituting $x = \chi$ and $m = \mu$ into $E_{x,m}$, defined in (\ref{E}). We then have \begin{equation*} \mathrm{Tr}(\overline{\rho}_{E_{\chi,\mu},n}(\mathrm{Frob}_\mathfrak{p})) = \begin{cases} a_{\mathfrak{p}}(E_{\chi,\mu}) & \text{ if }~ \mathfrak{p} \nmid \Delta_{E_{\chi,\mu}}, \\ \mathrm{Norm}(\mathfrak{p}) + 1 & \text{ if }~ \mathfrak{p} \mid \Delta_{E_{\chi,\mu}} \text{ and } \overline{-c_6/c_4} \in (\mathbb{F}_\mathfrak{p}^*)^2, \\ -\mathrm{Norm}(\mathfrak{p})-1 & \text{ if }~ \mathfrak{p} \mid \Delta_{E_{\chi,\mu}} \text{ and } \overline{-c_6/c_4} \notin (\mathbb{F}_\mathfrak{p}^*)^2. \end{cases}
\end{equation*}
Here, $\overline{-c_6/c_4}$ denotes $-c_6/c_4 \pmod{\mathfrak{p}}$. We can now simply run through all possible pairs $\chi$ and $\mu$ in this range. Define \[ \mathcal{A}_\mathfrak{p} = \{ \mathrm{Tr}(\overline{\rho}_{E_{\chi,\mu},n}(\mathrm{Frob}_\mathfrak{p})) : 0 \leq \chi \leq p-1, \quad 0 \leq \mu \leq p-2 \}. \] Then we know that $\mathrm{Tr}(\overline{\rho}_{E_{x,m},n}(\mathrm{Frob}_\mathfrak{p})) \in \mathcal{A}_\mathfrak{p}$, and we can compute the set $\mathcal{A}_\mathfrak{p}$ for any $\mathfrak{p} \nmid 2qn$.
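In practice we compute $\mathcal{A}_\mathfrak{p}$ in \texttt{Magma}. Purely as an illustration of the idea, the following Python sketch computes the analogous set for $q = 41$ and a split auxiliary prime by naive point counting; the choice $p = 23$ is illustrative, the square root $s$ of $q$ modulo $p$ fixes a prime $\mathfrak{p}$ above $p$, and the sketch scans both parities of $k$ (an assumption of this sketch rather than a description of our actual implementation):
\begin{verbatim}
q, p = 41, 23      # 41 is a square mod 23 (8^2 = 64 = 18 + 2*23)
inv2 = pow(2, -1, p)
s = next(t for t in range(p) if t * t % p == q % p)  # sqrt(q) mod p
g = (-19 - 3 * s) * inv2 % p   # image of gamma = (-19 - 3 sqrt(41))/2

def legendre(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

def trace(a2, a4, p):
    # Frobenius trace for Y^2 = X^3 + a2 X^2 + a4 X over F_p; a
    # multiplicative fibre contributes +/-(p + 1) as in the text.
    if 16 * a4 * a4 * (a2 * a2 - 4 * a4) % p == 0:
        c4 = 16 * (a2 * a2 - 3 * a4)
        c6 = -32 * a2 * (2 * a2 * a2 - 9 * a4)
        return (p + 1) if legendre(-c6 * pow(c4, -1, p), p) == 1 \
            else -(p + 1)
    N = 1 + sum(1 + legendre(x**3 + a2 * x * x + a4 * x, p)
                for x in range(p))
    return p + 1 - N

A = set()
for chi in range(p):
    for mu in range(p - 1):
        for par in (0, 1):   # k = 2*mu (par = 0) or 2*mu + 1 (par = 1)
            w = (chi + pow(q, 2 * mu + par, p) * s) * inv2 % p
            w = w * pow(s, 3 - 2 * par, p) % p
            A.add(trace(2 * g * pow(q, mu + 1, p) % p,
                        g * g % p * w % p, p))
print(sorted(A))
\end{verbatim}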
Define \[ \mathcal{B}_{f,\mathfrak{p}} = p \cdot \mathrm{Norm} \big(\prod_{ a \in \mathcal{A}_\mathfrak{p}} (a - t_{f,\mathfrak{p}}) \big). \] Then by (\ref{Traceeq}) we have that $n \mid \mathcal{B}_{f,\mathfrak{p}}$ whenever $\mathfrak{p} \nmid 2q$. Note that we have included a factor of $p$ in the definition of $\mathcal{B}_{f,\mathfrak{p}}$, as we would usually require $\mathfrak{p} \nmid 2qn$, but $n$ is unknown. Then if $\mathcal{B}_{f,\mathfrak{p}}$ is non-zero, we obtain a bound on $n$. Moreover, we can repeat this with many auxiliary primes $\mathfrak{p}$. If $\mathfrak{p}_1, \dots, \mathfrak{p}_r$ are primes not dividing $2q$, then \[ n \mid \mathcal{B}_f = \mathcal{B}_{f,\mathfrak{p}_1, \dots, \mathfrak{p}_r} = \gcd \left(\mathcal{B}_{f,\mathfrak{p}_1}, \dots,
\mathcal{B}_{f,\mathfrak{p}_r} \right). \]
\begin{proof}[Proof of Theorem \ref{mainthm}]
Let $q = 41$ or $97$. We computed the value $\mathcal{B}_f$, and in particular its prime factors, for each newform $f$ at level $2q^2$ and character $\epsilon$. For most newforms $f$, we did this by choosing a prime of $M$ above each rational prime between $3$ and $30$. For computational reasons, when $q = 97$ and $f$ is one of the two newforms with coefficient field of degree $168$, we only worked with a prime above each of $3$ and $11$. We found that for each newform $f$, all prime factors of $\mathcal{B}_f$ were $<300$, except for two newforms when $q= 41$, which we denote $g_1$ and $g_2$. Since we can take $n > 1000$ by Lemma \ref{>1000}, this eliminates all newforms except for $g_1$ and $g_2$. We are unable to eliminate these two newforms as their $\mathcal{B}$ values are $0$, and this remains the case when using more auxiliary primes.
Since we managed to eliminate all newforms when $q=97$, this proves Theorem \ref{mainthm} in the case $q = 97$. For $q = 41$, we are able to eliminate the remaining forms, but we need to use a multi-Frey approach.
Let $q = 41$. Recall that $k = 2m$ or $2m+1$ according to whether $k$ is even or odd, respectively. From Table \ref{TabRat}, we know that $\overline{\rho}_{G_{x,k},n} \sim \overline{\rho}_{F,n}$ where $F$ is the elliptic curve with Cremona label `82a1'. Let $p = 7$ which is inert in $M$, and write $\mathfrak{p} = p \cdot \mathcal{O}_M$ for the unique prime of $M$ above $7$.
We compute $a_7(F) = -4$. Given $0 \leq \chi \leq 6$ and $0 \leq \kappa \leq 5$, write $G_{\chi,\kappa}$ for the curve obtained by substituting $x = \chi$ and $k = \kappa$ into the definition of $G_{x,k,q}$ in (\ref{RatFrey}). Then we compute $\mathrm{Tr}(\overline{\rho}_{G_{\chi,\kappa},n}(\mathrm{Frob}_7))$ for each $\chi$ and $\kappa$. We found this trace to be independent of $\kappa$. The traces are recorded in Table \ref{Tab41} and we see that this forces $x \equiv 6 \pmod{7}$.
\begingroup
\renewcommand*{\arraystretch}{1.8}
\begin{table}[ht!]
\begin{center} \small
\begin{tabular}{ |c|c|c|c|c|c|c|c| }
\hline
$\chi$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ \\
\hline
$\mathrm{Tr}(\overline{\rho}_{G_{\chi,\kappa},n}(\mathrm{Frob}_7))$ & $0$ & $4$ & $2$ & $2$ & $ -2$ & $ -2$ & $-4$ \\
\hline
\end{tabular}
\vskip2ex
\caption{\label{Tab41}\normalsize Proof of Theorem \ref{mainthm}: Traces of Frobenius at $7$ for $G$.}
\end{center}
\end{table}
\endgroup
\normalsize
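The entries of Table~\ref{Tab41} can be reproduced by naive point counting over $\mathbb{F}_7$; all seven fibres are nonsingular, since $\chi^2+1 \not\equiv 0 \pmod 7$. For instance, in Python:
\begin{verbatim}
def legendre(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

p = 7   # 41^(2k+1) = -1 mod 7 for every k, so the trace ignores k
for chi in range(p):
    a2, a4 = 4 * chi % p, 4 * (chi * chi + 1) % p
    N = 1 + sum(1 + legendre(x**3 + a2 * x * x + a4 * x, p)
                for x in range(p))
    print(chi, p + 1 - N)   # matches the corresponding table entry
\end{verbatim}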
When $\chi = 6$, we find that $\mathrm{Tr}(\overline{\rho}_{E_{6,\mu},n}(\mathrm{Frob}_\mathfrak{p})) = 6$ for each $0 \leq \mu \leq 5$ and $k$ even or odd. However, \begin{align*} \mathrm{Tr}(\overline{\rho}_{g_1,\mathfrak{n}_1}(\mathrm{Frob}_\mathfrak{p})) & = a_p(g_1)^2 + 2p = -4, ~~\text{and} \\ \mathrm{Tr}(\overline{\rho}_{g_2,\mathfrak{n}_2}(\mathrm{Frob}_\mathfrak{p})) & = a_p(g_2)^2 + 2p = 14. \end{align*} It follows that $n \mid 7 \cdot 10$ or $ n \mid 7 \cdot 12$. So $x \not\equiv 6 \pmod{7}$, a contradiction.
\end{proof}
When $q = 17$ or $q = 89$, we found that, in each case, there was a single obstructing newform that we were unable to eliminate. When $q = 17$, this is due to the solution $(-23)^2 - 17 = 2^9$, and when $q = 89$, this is a consequence of the identity $(-91)^2 - 89 = 2^{13}$. The exponent $n$ in each case exceeds $8$, and it follows that the curves $E_{-23,0,17}$ and $E_{-91,0,89}$ have multiplicative reduction at the primes of $M$ above $2$. This can be verified directly or can be seen from the proof of Lemma \ref{Conductor1}. We can check that the traces of Frobenius for these two curves match the traces of the obstructing newforms (for all primes of characteristic $<1000$ say). We also note that in both cases, the coefficient field of the obstructing newform is $\Q(\sqrt{-2})$. This is the same as the $\Q$-endomorphism algebra of $B = \mathrm{Res}_\Q^M(E)$, as expected.
When $q = 41$ or $q = 97$, the solutions in Theorem \ref{mainthm} with exponent $n = 7$ prevent us from eliminating the isomorphism of mod $n$ representations of $G$ and $F_q$ (as noted in Section 2), but these solutions do not pose any issue when working with the $\Q$-curve $E$. This is because the exponent $n=7$ is not large enough to force multiplicative reduction at the primes of $M$ above the rational prime $2$. A similar remark applies for (restricting attention to primes $q < 1000$)
$$
q \in \{ 233, 313, 401, 601 \},
$$
which are potentially accessible to the methods of this paper (though the corresponding computation of forms at level $2q^2$ would, with current techniques, be formidable).
\bibliographystyle{plainnat}
\label{sec:introduction}
The production of lepton pairs in hadron-hadron collisions via the Drell--Yan (DY) process
is described in the standard model (SM) by the $s$-channel exchange of $\gamma^*/\text{Z}$.
Theoretical calculations of the differential cross section $d\sigma/dM(\ell\ell)$, where $M(\ell\ell)$ is the
dilepton invariant mass, are well established up to
next-to-next-to-leading order (NNLO)~\cite{DY-theory, DYNNLO, DYNNLO1}.
Therefore, comparisons between calculations and precise experimental measurements
provide stringent tests of perturbative quantum chromodynamics (QCD) and significant
constraints on the evaluation of the parton distribution functions (PDFs).
Furthermore, the production of DY lepton pairs constitutes a major source of background
for \ttbar{} and diboson measurements, as well as for searches for new physics, such as
production of high mass dilepton resonances.
This paper presents a measurement of the differential DY cross section
in proton-proton collisions at $\sqrt{s} = 7\TeV$, based on dimuon and dielectron
data samples collected in 2010 by the Compact Muon Solenoid (CMS) experiment at the Large
Hadron Collider (LHC), corresponding to an integrated luminosity of $35.9\pm 1.4$\pbinv.
The results are given for the dilepton invariant mass range $15 < M(\ell\ell) < 600\GeV$, corresponding to the
Bjorken $x$ range 0.0003--0.633 for the interacting partons, and complement the observations
previously reported by the Tevatron collaborations~\cite{D0,CDF_a,CDF_b}.
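The quoted range of Bjorken $x$ follows from the leading-order kinematics $x_{1,2} = (M(\ell\ell)/\sqrt{s})\, e^{\pm y}$, where $y$ is the dilepton rapidity. Taking the mass endpoints with an illustrative rapidity reach of $|y| \approx 2$ reproduces the quoted interval:
\begin{verbatim}
import math

sqrt_s, y_max = 7000.0, 2.0   # GeV; |y| ~ 2 is an illustrative reach
for M in (15.0, 600.0):
    print(M, M / sqrt_s * math.exp(-y_max), M / sqrt_s * math.exp(y_max))
# x ranges from about 3e-4 (M = 15 GeV) to about 0.63 (M = 600 GeV)
\end{verbatim}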
To reduce systematic uncertainties, the results are normalized to the cross section in the
Z~region ($60 < M(\ell\ell) < 120\GeV$)
as determined in the same measurement.
The inclusive Z cross section in the full phase space was measured previously by
CMS~\cite{ZCrossSection}.
In the analysis presented, the cross sections are calculated as
\begin{equation}\label{eqn:fullCrossSection_intro}
\sigma = \frac{N_{\text{u}}}{A \, \epsilon \, \rho \, {\cal{L}}}\, ,
\end{equation}
where $N_{\text {u}}$ is the unfolded background-subtracted yield, corrected for detector
resolution. The values of the acceptance $A$ and the efficiency $\epsilon$ are
estimated from simulation, while $\rho$ is a factor that accounts for differences
in the detection efficiency between data and simulation.
Knowledge of the integrated luminosity ${\cal{L}}$ is
not required for the measurements described in this paper, since the cross
sections are normalized to the Z region.
This paper is organized as follows: in Section \ref{sec:CMS-detector} the CMS detector is described, with particular
attention to the subdetectors used to identify charged leptons.
Section \ref{sec:Event-Selection} describes
the data and Monte Carlo (MC) samples
used in the analysis and the selection applied to identify the DY candidates.
The signal extraction methods for the muon and electron channels,
as well as the background contributions to the candidate samples, are discussed in Section
\ref{sec:backgrounds}.
The calculation of the geometrical and kinematic acceptances, together with
the methods applied to determine
the reconstruction, selection, and trigger efficiencies of the leptons within the experimental acceptance, is presented in Section \ref{sec:AccepEff}.
Section \ref{sec:unfolding} describes the analysis techniques used to unfold the detector resolution effects
from the measurements.
Systematic uncertainties are discussed in Section \ref{sec:systematics}.
The measurements of the shapes of the DY invariant mass distributions
are summarized in
Section \ref{sec:results}. In that section we report results not only in
the full phase space but also within the fiducial
and kinematic acceptance (both before and after final-state QED radiation
corrections); the latter are free of PDF uncertainties.
\section{The CMS Detector}
\label{sec:CMS-detector}
A detailed description of the CMS detector and its performance can be found
in Ref.~\cite{ref:CMS}. The central feature of the CMS apparatus is a
superconducting solenoid 13~m in length and 6~m in diameter, which
provides an axial magnetic field of 3.8~T. Within the field volume are
the silicon pixel and strip tracker, the crystal electromagnetic
calorimeter (ECAL), and the brass/scintillator hadron calorimeter.
Charged particle trajectories are measured by the
tracker, covering
the full azimuthal angle
and pseudorapidity interval $|\eta| < 2.5$,
where the pseudorapidity is defined as
$\eta = -\ln \tan (\theta/2)$,
with $\theta$ being the polar angle of the trajectory of the particle
with respect to the counterclockwise beam direction.
Muons are measured in the pseudorapidity range $|\eta|< 2.4$,
with detection planes made using three technologies: drift tubes, cathode
strip chambers, and resistive plate chambers. The muons associated with the
tracks measured in the silicon tracker have a transverse momentum (\pt)
resolution of about 2\% in the muon \pt range relevant for the analysis
presented in this paper.
The ECAL consists of nearly 76\,000 lead tungstate
crystals, distributed in a barrel region ($|\eta| < 1.479$) and
two endcap regions ($1.479 < |\eta| < 3$), and
has an ultimate energy resolution better than 0.5\%
for unconverted photons with transverse energies (\ET) above 100~\GeV. The
electron energy resolution is better than 3\% for the range of
energies relevant for the measurement reported in this paper.
A two-level trigger system selects the
events for use in
offline physics analysis.
\section{Event Selection}
\label{sec:Event-Selection}
The basic signature of the DY process is straightforward:
two oppositely charged isolated leptons originating from the same primary vertex.
The analysis presented in this paper is based on dilepton data samples
selected by inclusive single-lepton triggers.
The dimuon data sample was selected by a single-muon trigger with a \pt
threshold ranging from 9 to 15\GeV, depending on the beam conditions.
In the offline selection, one of the muons is required to match,
in three-dimensional momentum space, a muon trigger candidate, and must have
$|\eta| < 2.1$ and $\pt > 16$\GeV, to ensure that it is on the plateau of the trigger
efficiency curve. The second muon is required to have $|\eta| < 2.4$ and $\pt > 7$\GeV.
No muon isolation is required at the trigger level.
Muons are required to pass the standard CMS muon
identification and quality criteria, based on the number of hits found
in the tracker, the response of the muon chambers, and a set of
matching criteria between the muon track parameters as determined by
the inner tracker section of the detector and as measured in the muon
chambers~\cite{tag-and-probe, ref:CRAFT}.
These criteria ensure
that only muons with well-measured parameters are selected for the
analysis. To eliminate cosmic-ray muons, each muon is
required to have an impact parameter in the transverse plane
less than 2~mm with respect
to the center of the interaction region, and the opening
angle between the two muons must differ from $\pi$ by more than 5~mrad.
In order to reduce the fraction of muon pairs from (different) light-meson decays,
a common vertex for the two muons is fitted
and the event is rejected if the
dimuon vertex $\chi^2$ probability is smaller than 2\%.
Finally, an isolation requirement is imposed on both muons,
$I_{\mathrm{rel}} = (\sum \pt({\mathrm{tracks}}) + \sum \ET({\mathrm{had}})) /\pt(\mu) < 0.15,$
where $\sum \pt({\mathrm{tracks}})$ is the sum of the transverse
momenta of all the additional tracker tracks and $\sum \ET({\mathrm{had}})$ is the sum of all
transverse energies of hadronic deposits in a cone
$\Delta R = \sqrt{(\Delta\phi)^2+(\Delta\eta)^2} < 0.3$
centered on the muon direction and excluding the muon itself.
Given that muons can radiate nearly collinear photons
in a process referred to as final state electromagnetic radiation (FSR),
deposits in the ECAL are not included in the definition.
Otherwise an inefficiency would be
introduced in the analysis.
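As an illustration of this definition, a minimal sketch of the cone-based relative isolation
computation is given below; the event model (plain lists of track and hadronic-deposit
kinematics) is hypothetical and not part of the CMS software:

\begin{verbatim}
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    # dR = sqrt(deta^2 + dphi^2), with dphi wrapped into [-pi, pi]
    dphi = np.mod(phi1 - phi2 + np.pi, 2.0 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)

def relative_isolation(mu, tracks, had_deposits, cone=0.3):
    # I_rel = (sum pt(tracks) + sum ET(had)) / pt(mu) in a cone dR < 0.3.
    # The muon's own track is assumed to be excluded from `tracks`, and
    # ECAL deposits are deliberately not used, mirroring the FSR argument.
    sum_pt = sum(t["pt"] for t in tracks
                 if delta_r(t["eta"], t["phi"], mu["eta"], mu["phi"]) < cone)
    sum_et = sum(h["et"] for h in had_deposits
                 if delta_r(h["eta"], h["phi"], mu["eta"], mu["phi"]) < cone)
    return (sum_pt + sum_et) / mu["pt"]

# a muon is accepted if relative_isolation(...) < 0.15
\end{verbatim}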
For the electron analysis, events are selected with a trigger requiring at least one electron,
with a minimum \ET ranging from 15 to 17~\GeV,
depending on the
beam conditions.
Electron reconstruction starts from clusters of energy deposited in
the ECAL, and associates with them hits
in the CMS
tracker~\cite{EGMPAS}.
Energy-scale corrections are
applied to individual electrons as described in Ref.~\cite{WAsymmetry}.
The electron candidate is required to be consistent with
a particle originating from the primary vertex in the event. Electron
identification criteria based on shower shape and track-cluster
matching are applied to the reconstructed candidates.
Electrons originating from photon conversions are rejected by
eliminating those electrons for which a partner track consistent with
a conversion hypothesis is found, and requiring no missing hits
in the pixel detector,
as discussed in Ref.~\cite{ZCrossSection}.
Isolation requirements are imposed on each electron, according to
$(\sum \pt({\mathrm{tracks}}) + \sum \ET({\mathrm{had}}) + \sum \ET({\mathrm{em}}) ) /\pt({\mathrm{e}}) < 0.1,$
where $\sum \pt({\mathrm{tracks}})$ and $\sum \ET({\mathrm{had}})$ are
defined as explained for muons,
and $\sum \ET({\mathrm{em}})$ is the sum of the transverse energies of electromagnetic deposits
in $\Delta R < 0.3$, excluding the electron candidate itself.
The standard CMS isolation calculation for electrons also excludes
ECAL energy deposits that are potentially created by FSR photons, while
absorbing some of these deposits into electron objects. Thus,
the FSR-related inefficiencies, present for muons, are avoided for
electrons and ECAL information is used in the total isolation calculation.
The criteria were optimized to maximize the rejection of misidentified
electrons from QCD multijet production and nonisolated electrons
from heavy-quark decays, while maintaining at least 80\% efficiency
for electrons from the DY process. More details are
found in Ref.~\cite{ZCrossSection}.
Electrons must be reconstructed in the ECAL barrel
with $|\eta| < 1.44$ or in the ECAL endcaps with
$1.57 < |\eta| < 2.5$.
The leading electron is required to have $\ET > 20$\GeV,
while the second electron must have $\ET > 10$\GeV.
The leading electron in a candidate pair is required
to match, in $\eta$ and $\phi$, a trigger electron candidate.
Event samples for simulation studies of electroweak
processes involving W and Z production are produced with the NLO
MC
generator
{\sc POWHEG}~\cite{Alioli:2008gx, Nason:2004rx, Frixione:2007vw} interfaced
with the {\sc PYTHIA} (v.~6.422)~\cite{Sjostrand:2006za} parton-shower event generator, using
the CT10~\cite{CT10} parametrization of the PDFs. {\sc PYTHIA} is also used for the FSR simulation.
The QCD multijet background is
generated with {\sc PYTHIA}, and
the \ttbar\ background is simulated using {\sc MadGraph} (v.~4.4.12)~\cite{MadGraph} interfaced with {\sc PYTHIA};
both samples are produced at leading order with the CTEQ\,6L PDF set~\cite{CTEQ}.
Generated events are processed through the full {\sc GEANT4}~\cite{GEANT4}
detector simulation, trigger emulation, and event reconstruction chain.
The observed invariant mass distributions in the dimuon and dielectron channels are shown in
Fig.~\ref{fig:mass-observed}. Thirteen mass bins of unequal widths are used to cover the observable dilepton mass
spectrum. They are chosen to be wide enough both to minimize the influence of the mass resolution and
to provide good statistical power; the mass resolution varies between a few hundred MeV at the
low invariant masses covered and several tens of GeV at the high end of the spectrum.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.47\textwidth, angle=90]{figures/2_Sec/Fig_1_a.pdf}
\includegraphics[width=0.47\textwidth, angle=90]{figures/2_Sec/Fig_1_b.pdf}
\caption{The observed dimuon (left) and dielectron (right) invariant mass spectra. No corrections are applied to the distributions. The points with error bars represent the data, while the
various contributions from simulated events are shown as
stacked histograms. By ``EWK'' we denote $Z/\gamma^* \rightarrow \tau \tau$, $\text{W} \rightarrow \ell\nu$, and diboson production. The ``QCD''
contribution results from QCD multijet processes and can involve genuine or misidentified leptons.
The lower panels show the ratios between the measured and the
simulated distributions including the statistical uncertainties from both.
\label{fig:mass-observed}
}
\end{center}
\end{figure}
\section{Backgrounds}
\label{sec:backgrounds}
Several physical and instrumental backgrounds contribute to both the
dimuon and dielectron analyses.
The main backgrounds at
high dilepton invariant masses are caused by \ttbar\ and diboson production, while at
invariant masses below the $Z$ peak, DY production of $\tau^+\tau^-$ pairs
becomes the dominant background. At low dimuon invariant masses, most background events
are QCD multijet events.
The expected shapes and
relative yields of these several dilepton sources can be seen in
Fig.~\ref{fig:mass-observed}.
For the dimuon channel, the electroweak and \ttbar\ backgrounds are evaluated through
simulation studies, expected to provide a good description of the real contributions.
This is also verified by related studies in the electron channel presented below.
In contrast, the QCD background is evaluated from data by two independent
methods.
The first estimates the yield of opposite-sign (OS) background muon pairs by scaling
the yield of same-sign (SS) pairs. The scaling is based on information from the ratio of
OS/SS events when one of the muons is not isolated (a sample dominated by background),
and the MC prediction that the same ratio holds when both muons are isolated.
Statistical uncertainties in all the cases are propagated to the final background
estimate.
The second method, which is more precise, is based on
the signal/background discriminating variable $I_{\mathrm{rel}}$.
We obtain $\pt$-dependent isolation distributions (templates) from almost pure samples of background
and signal events, respectively composed of SS and OS muon pairs. The latter consist of events in
the Z mass peak surviving tight quality selection criteria. A superposition of these two shape distributions
is fitted to the observed isolation distributions of the two muons, for each invariant mass bin. The dimuon invariant mass
distribution of the QCD background is obtained as the weighted average of the estimates from the two methods.
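The relative weights of the two estimates are not spelled out above; a minimal sketch of the
combination, assuming inverse-variance weights (an assumption, as only a ``weighted average''
is specified), reads:

\begin{verbatim}
import numpy as np

def combine_estimates(n1, s1, n2, s2):
    # Inverse-variance weighted average of two background estimates:
    # (n1, s1): yield and uncertainty from the SS-scaling method,
    # (n2, s2): yield and uncertainty from the isolation-template fit.
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    combined = (w1 * n1 + w2 * n2) / (w1 + w2)
    uncertainty = 1.0 / np.sqrt(w1 + w2)
    return combined, uncertainty
\end{verbatim}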
There are two categories of dielectron backgrounds: the first category
contributes candidates composed of two genuine electrons and the second
contributes candidates
in which at least one particle
is a misidentified electron.
Most of the genuine dielectron background is due to \ttbar, WW, and tW
production, as well as DY production of $\tau^+\tau^-$ pairs.
We estimate the contribution from these processes with a sample of
${\mathrm{e}}^{\pm}\mu^{\mp}$ events having the same
physical origin.
This signal-free sample contains approximately twice the estimated number of
background events contaminating the ${\mathrm{e}}^+{\mathrm{e}}^-$ sample, and provides an evaluation
of the background level that agrees with the estimate based on simulation studies.
The genuine dielectron background
from WZ and ZZ production is estimated from simulation.
The misidentified electron backgrounds originate from QCD multijet and
W+jet events. These sources of background are relatively small because of the tight electron
identification and kinematic requirements, and are estimated from data
based on the probability that jets
or random energy deposits in the calorimeters emulate electron candidates~\cite{Zprime}.
The background estimates in the dimuon and dielectron channels are tabulated in
Section~\ref{sec:unfolding} (Tables~\ref{tab:mumu-yields} and~\ref{tab:ee-yields}, respectively).
\section{Acceptance and Efficiency}
\label{sec:AccepEff}
The reconstructed dilepton invariant mass distributions cannot be directly compared to the spectra
provided by the theoretical models, not only because of the limited acceptance coverage of the
detector but also because the observed spectra are affected by FSR, a process usually not
included in the calculations. We use the labels ``pre-FSR'' and ``post-FSR'' for quantities
evaluated before and after the FSR effects occur, respectively.
The measurement of $d\sigma/dM(\ell\ell)$ therefore requires a two-step correction procedure. First,
the measured, post-FSR spectra are corrected for acceptance, when applicable, and detector efficiencies.
Then the (acceptance and) efficiency corrected spectra are themselves altered by a bin-by-bin FSR
correction factor which relates the yields before and after the FSR takes place.
These spectra can be compared to the calculations.
The geometrical and kinematic acceptance $A$ is defined, using the simulated leptons after the
FSR simulation, as $A \equiv N_{\mathrm{acc}}/N_{\mathrm{gen}}$, where $N_{\mathrm{gen}}$ is the number of generated events and
$N_{\mathrm{acc}}$ is the corresponding number of events passing the standard $\pt$ and $\eta$
lepton requirements, in each dilepton invariant mass bin.
The efficiency $\epsilon$ is the fraction of events within the acceptance that pass the full
selection, so that
\begin{equation}\label{eqn:AccEff}
A \cdot \epsilon \equiv \frac{N_{\mathrm{acc}}}{N_{\mathrm{gen}}} \cdot
\frac{N_{\epsilon}}{N_{\mathrm{acc}}} = \frac{N_{\epsilon}}{N_{\mathrm{gen}}} ,
\end{equation}
where $N_{\epsilon}$ is the number of events surviving the reconstruction, selection, and identification requirements.
The values of the product of acceptance and efficiency are obtained from simulation.
A separate
correction factor is determined from data and applied to the product, following the procedure used in the inclusive W
and Z cross section measurements in CMS~\cite{ZCrossSection}. This factor, the efficiency correction,
describes the difference
between data and simulation
in the efficiency to observe single leptons or dileptons.
The {\sc POWHEG} simulation combines the next-to-leading-order (NLO) calculations with parton showering, which
is insufficient to fully model the low invariant mass region of the dilepton spectra.
The two high-\pt leptons required in the analysis must form a small opening angle at low mass, so the dilepton
system is significantly boosted, which must be balanced by hard gluon radiation in the transverse plane. This means
that these low-mass events are effectively of the type ``$\gamma^*$ + hard jet'' at leading order, and therefore the next order of
correction (NNLO) becomes essential for a reliable estimate of the acceptance corrections.
To account for this, a
correction is applied, determined from the ratio between the differential cross sections calculated
at NNLO with {\sc FEWZ}~\cite{FEWZ} and at NLO with {\sc POWHEG}, both at pre-FSR level. These correction weights, obtained
in bins of dilepton rapidity, \pt, and invariant mass, are applied on an event-by-event basis.
The distributions obtained are used for all the simulation-based estimates (acceptance, efficiency, FSR corrections) for DY, and this sample is
referred to as ``{\sc POWHEG} matched to {\sc FEWZ} (NNLO) distributions''.
This procedure changes the acceptance in the lowest invariant mass bin significantly (by about 50\%),
but has a small effect, not exceeding 3\%, on the rest of the bins.
Figure~\ref{Acc} shows the variables $A$, $\epsilon$, and
$A\cdot\epsilon$ as functions of $M(\ell\ell)$ for dimuons (left) and
dielectrons (right), the values being listed in
Tables~\ref{tab_accEff} and~\ref{tab_accEff_electrons}, respectively.
The FSR correction factors listed in
Tables~\ref{tab_accEff} and~\ref{tab_accEff_electrons} for
a given invariant mass range are obtained
from simulation by dividing the post-FSR cross sections by the corresponding pre-FSR
quantities. They are applied to the corrected data in an additional step, as described earlier
in this section.
The factors obtained within the detector acceptance and in the full phase space
(as shown in the tables) are applied to the corresponding measurements.
Systematic uncertainties related to the FSR simulation are discussed in
Section~\ref{sec:systematics}.
\begin{figure}[h]
{\centering
\includegraphics[width=0.365\textwidth, angle = 90]{figures/4_Sec/Fig_2.pdf}
\includegraphics[width=0.365\textwidth, angle = 90]{figures/4_Sec/Fig_3.pdf}
\caption{\label{Acc}
DY acceptance (blue, filled circles), efficiency (red, open triangles), and their product (black, open squares) per invariant mass bin,
for the $\mu^+\mu^-$ (left) and $\ee$ (right) channels.}
}
\end{figure}
\begin{table}[h]
\begin{center}
\caption{DY acceptance and acceptance times efficiency per invariant mass bin
for the $\mu^+\mu^-$ channel.
In addition, the FSR correction factors are given. All uncertainties are statistical.\label{tab_accEff} }
\begin{tabular}{|l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|}
\hline
Invariant mass
& \multicolumn{2}{c|}{Acceptance (\%)}
& \multicolumn{2}{c|}{Acc $\times$ Eff (\%)}
& \multicolumn{2}{c|}{FSR correction (\%)}
& \multicolumn{2}{c|}{FSR correction in}\\
bin (\!\GeV)
& \multicolumn{2}{c|}{}
& \multicolumn{2}{c|}{}
& \multicolumn{2}{c|}{}
& \multicolumn{2}{c|}{the acceptance (\%)} \\
\hline
15--20 & 1.23 & 0.01 & 1.00 & 0.01 & 97.28 & 0.02 & 96.30 & 0.02 \\
\hline
20--30 & 5.69 & 0.03 & 4.44 & 0.03 & 97.28 & 0.02 & 97.99 & 0.02 \\
\hline
30--40 & 23.5 & 0.1 & 19.6 & 0.1 & 98.43 & 0.03 & 98.77 & 0.03 \\
\hline
40--50 & 34.8 & 0.2 & 30.1 & 0.2 & 104.0 & 0.1 & 105.9 & 0.1 \\
\hline
50--60 & 41.2 & 0.2 & 36.2 & 0.2 & 120.2 & 0.3 & 125.1 & 0.3 \\
\hline
60--76 & 47.4 & 0.2 & 41.9 & 0.2 & 166.4 & 0.5 & 175.1 & 0.6 \\
\hline
76--86 & 50.6 & 0.1 & 45.5 & 0.1 & 167.1 & 0.4 & 169.8 & 0.4 \\
\hline
86--96 & 51.8 & 0.1 & 47.1 & 0.1 & 91.63 & 0.03 & 91.62 & 0.03 \\
\hline
96--106 & 53.1 & 0.2 & 48.5 & 0.2 & 88.0 & 0.1 & 88.1 & 0.1 \\
\hline
106--120 & 54.6 & 0.4 & 49.6 & 0.4 & 91.3 & 0.2 & 91.2 & 0.2 \\
\hline
120--150 & 56.6 & 0.6 & 51.8 & 0.6 & 93.2 & 0.3 & 93.1 & 0.3 \\
\hline
150--200 & 60.8 & 0.9 & 55.0 & 0.9 & 94.3 & 0.4 & 95.0 & 0.4 \\
\hline
200--600 & 67.7 & 1.2 & 60.9 & 1.3 & 92.8 & 0.7 & 93.1 & 0.6 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\caption{DY acceptance and acceptance times efficiency per invariant mass bin
for the $\ee$ channel.
In addition, the FSR correction factors are given. All uncertainties are statistical.\label{tab_accEff_electrons} }
\begin{tabular}{|l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|}
\hline
Invariant mass
& \multicolumn{2}{c|}{Acceptance (\%)}
& \multicolumn{2}{c|}{Acc $\times$ Eff (\%)}
& \multicolumn{2}{c|}{FSR correction (\%)}
& \multicolumn{2}{c|}{FSR correction in}\\
bin (\!\GeV)
& \multicolumn{2}{c|}{}
& \multicolumn{2}{c|}{}
& \multicolumn{2}{c|}{}
& \multicolumn{2}{c|}{the acceptance (\%)} \\
\hline
15--20 & 0.56 & 0.01 & 0.10 & 0.01 & 93.8 & 0.1 & 98.7 & 1.9 \\ \hline
20--30 & 1.19 & 0.01 & 0.54 & 0.01 & 93.9 & 0.2 & 102.9 & 1.8 \\ \hline
30--40 & 9.1 & 0.1 & 3.6 & 0.1 & 96.8 & 0.3 & 109.5 & 1.3 \\ \hline
40--50 & 26.5 & 0.2 & 12.8 & 0.2 & 107.7 & 0.6 & 117.8 & 1.2 \\ \hline
50--60 & 37.5 & 0.2 & 18.3 & 0.2 & 139.3 & 1.0 & 156.2 & 1.8 \\ \hline
60--76 & 45.1 & 0.2 & 22.4 & 0.2 & 230.7 & 1.4 & 256.3 & 2.4 \\ \hline
76--86 & 47.9 & 0.1 & 26.4 & 0.1 & 224.1 & 1.0 & 235.0 & 1.5 \\ \hline
86--96 & 49.3 & 0.1 & 30.6 & 0.1 & 83.9 & 0.1 & 85.6 & 0.2 \\ \hline
96--106 & 50.8 & 0.2 & 31.6 & 0.2 & 78.5 & 0.5 & 80.1 & 0.7 \\ \hline
106--120 & 52.6 & 0.4 & 33.3 & 0.4 & 83.9 & 1.0 & 85.2 & 1.4 \\ \hline
120--150 & 54.2 & 0.6 & 33.4 & 0.6 & 87.9 & 1.4 & 88.5 & 1.9 \\ \hline
150--200 & 58.1 & 0.9 & 36.1 & 0.9 & 89.1 & 2.2 & 90.3 & 3.0 \\ \hline
200--600 & 67.2 & 1.3 & 42.4 & 1.3 & 87.5 & 3.2 & 88.9 & 4.0 \\ \hline
\end{tabular}
\end{center}
\end{table}
The total dimuon event selection efficiency is factorized as
\begin{equation}
\varepsilon(\text{event}) =
\varepsilon(\mu_1)
\cdot \varepsilon(\mu_2)
\cdot \varepsilon[\mu\mu|(\mu_1) \& (\mu_2)]
\cdot \varepsilon(\text{event},\text{trig}|\mu\mu) ,
\label{eq:muon-event-eff}
\end{equation}
where $\varepsilon(\mu)$ is the single muon selection efficiency;
$\varepsilon[\mu\mu|(\mu_1) \& (\mu_2)]$ is the dimuon
selection efficiency, which includes
the requirement that
the two muon tracks be consistent with originating from a common vertex and that they satisfy the angular criteria;
and
$\varepsilon(\text{event},\text{trig}|\mu\mu)$ is the efficiency of triggering an event
including the efficiency that an identified muon is matched to a trigger object.
The single muon efficiency is factorized as
\begin{equation}
\varepsilon(\mu)= \varepsilon(\text{track}|\text{accepted})
\cdot\varepsilon(\text{reco}+\text{id}|\text{track})
\cdot\varepsilon(\text{iso}|\text{reco}+\text{id}) ,
\label{eq:single-muon-eff}
\end{equation}
where
$\varepsilon(\text{track}|\text{accepted})$ is the offline track reconstruction efficiency in the tracker detector;
$\varepsilon(\text{reco}+\text{id}|\text{track})$ is the muon reconstruction and identification efficiency; and
$\varepsilon(\text{iso}|\text{reco}+\text{id})$ is the muon isolation efficiency.
The trigger efficiency $\varepsilon(\text{event},\text{trig}|\mu\mu)$ is given by
\begin{equation}
\varepsilon(\text{event},\text{trig}|\mu\mu)
= \varepsilon(\mu_1,\text{trig}|\mu_1)
+ \varepsilon(\mu_2,\text{trig}|\mu_2)
- \varepsilon(\mu_1,\text{trig}|\mu_1)
\cdot\varepsilon(\mu_2,\text{trig}|\mu_2) ,
\end{equation}
where $\varepsilon(\mu,\text{trig}|\mu)$
is the efficiency for an offline-selected muon to fire the trigger.
The track reconstruction efficiency is very high ($99.5\%$). The angular
criterion is nearly $100\%$ efficient for signal DY events, and the vertex
probability requirement is more than $98\%$ efficient and has a negligible
($< 0.3\%$) dependence on~$M(\ell\ell)$.
The muon reconstruction and identification efficiency is estimated using
clean samples of muon pairs in the Z peak (tag and probe, T\&P, method~\cite{ZCrossSection}).
The properties of one muon are probed, after imposing tight requirements
on the other one.
To determine the isolation efficiency, the Lepton Kinematic Template Cones (LKTC) method~\cite{tag-and-probe} is applied.
The essence of the LKTC method is to choose predefined directions in events
with
an underlying event environment similar to that
of the signal sample. The isolation variable is defined as if these directions
represented signal leptons, and the chosen isolation-based criteria are subsequently studied.
To describe the observed efficiency variations between data and simulation, efficiency correction factors are obtained in bins of $\pt$ and $\eta$
as the ratio of the efficiencies measured with
data and with the simulated events:
\begin{equation}
\rho_{\mathrm{eff}}(\pt,\eta) = \frac{\varepsilon_{\mathrm{data}}(\pt,\eta)}{\varepsilon_{\mathrm{sim}}(\pt,\eta)} .
\label{eqn:rho-definition}
\end{equation}
The corrections to the efficiencies in simulation are implemented by reweighting simulated events, with
weights computed as
$W =\rho_1^{\text{reco}} \rho_2^{\text{reco}} \rho_1^{\text{iso}} \rho_2^{\text{iso}} \rho^{\text{trig}}$ where
$\rho^{\text{trig}} = (\epsilon_{\text{data},1}^{\text{trig}} + \epsilon_{\text{data},2}^{\text{trig}} - \epsilon_{\text{data},1}^{\text{trig}} \epsilon_{\text{data},2}^{\text{trig}})/(\epsilon_{\text{MC},1}^{\text{trig}} + \epsilon_{\text{MC},2}^{\text{trig}} - \epsilon_{\text{MC},1}^{\text{trig}} \epsilon_{\text{MC},2}^{\text{trig}})$.
If $\pt < 16$~\GeV or $|\eta| > 2.1$ for a given muon $i$ ($i=1,2$), its
trigger efficiency
is set to zero.
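For concreteness, a small sketch of this per-event reweighting is shown below; the per-muon
dictionary interface is illustrative only, while the formulas are those given above:

\begin{verbatim}
def trig_or(e1, e2):
    # probability that at least one of the two muons fires the trigger
    return e1 + e2 - e1 * e2

def event_weight(mu1, mu2):
    # W = rho_reco(1) rho_reco(2) rho_iso(1) rho_iso(2) rho_trig, with
    # rho_trig built from per-muon data and MC trigger efficiencies.
    for mu in (mu1, mu2):
        if mu["pt"] < 16.0 or abs(mu["eta"]) > 2.1:
            mu["trig_data"] = mu["trig_mc"] = 0.0  # outside trigger window
    # the offline selection guarantees one triggerable muon, so the
    # denominator below cannot vanish
    rho_trig = (trig_or(mu1["trig_data"], mu2["trig_data"])
                / trig_or(mu1["trig_mc"], mu2["trig_mc"]))
    return (mu1["rho_reco"] * mu2["rho_reco"]
            * mu1["rho_iso"] * mu2["rho_iso"] * rho_trig)
\end{verbatim}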
The systematic uncertainty related to the efficiency correction is
evaluated by generating one hundred variations of the $(\pt,\eta)$
correction maps,
where the weight in each $(\pt,\eta)$ bin is
obtained by adding to the original value a Gaussian-distributed shift of
mean zero and width equal to the statistical uncertainty of the
original correction factor (Eq.~(\ref{eqn:rho-definition})).
Signal corrected yields are evaluated using event weights
obtained from each of the alternative correction maps and the RMS
spread of the resulting values is taken as the systematic
uncertainty.
The systematic uncertainty computed with this procedure includes an irreducible statistical component, yielding a conservative estimate
that also covers
generous variations in the shape of the efficiency correction.
The resulting uncertainties are shown in
Table~\ref{tab_effCorr}.
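A minimal sketch of this toy-variation procedure is given below; the function mapping a
correction map to a corrected signal yield is a hypothetical interface:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)

def toy_systematic(rho, rho_err, corrected_yield, n_toys=100):
    # rho, rho_err: (pt, eta) correction map and its statistical errors;
    # corrected_yield: user-supplied function (hypothetical interface)
    # returning the signal yield obtained with a given correction map.
    yields = []
    for _ in range(n_toys):
        shifted = rho + rng.normal(0.0, 1.0, size=rho.shape) * rho_err
        yields.append(corrected_yield(shifted))
    return np.std(yields)  # RMS spread = systematic uncertainty
\end{verbatim}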
The total event efficiency in the dielectron channel analysis is defined
as the product of the two single electron efficiencies, which incorporate three
factors: 1)~the efficiency $\varepsilon_{\mathrm{reco}}$ to reconstruct an electron candidate from an energy
deposit in the ECAL; 2)~the efficiency $\varepsilon_{\mathrm{id}}$ for that candidate to pass the selection
criteria, including identification, isolation, and conversion rejection; 3)~the efficiency
$\varepsilon_{\mathrm{trig}}$ for the leading electron to pass the trigger
requirements. Each of these efficiencies is obtained from simulation and corrected by
$\rho_{\mathrm{eff}}(\pt,\eta)$, as for the muon channel (Eq.~(\ref{eqn:rho-definition})). The
T\&P method is used for all efficiency components. The event efficiency correction
and its uncertainty are derived as for the muon channel by reweighting simulated events.
The correction factors are listed in Table~\ref{tab_effCorr}.
\begin{table}[h]
\begin{center}
\caption{Combined efficiency corrections for the muon and electron channels per mass bin.
They account for the data vs.\ simulation differences in the reconstruction, identification, isolation, and trigger efficiencies.
\label{tab_effCorr} }
\begin{tabular}{| l | l | l |}
\hline
Invariant mass & \multicolumn{2}{c|}{ Combined efficiency correction} \\
bin (\!\GeV) & Muon channel & Electron channel\\
\hline
15--20 & $0.917\pm 0.010$ & $1.098\pm 0.087$ \\
\hline
20--30 & $0.915\pm 0.010$ & $1.089\pm 0.091$\\
\hline
30--40 & $0.918\pm 0.011$ & $1.107\pm 0.103$\\
\hline
40--50 & $0.931\pm 0.011$ & $1.076\pm 0.081$\\
\hline
50--60 & $0.943\pm 0.008$ & $1.034\pm 0.053$\\
\hline
60--76 & $0.952\pm 0.006$ & $1.008\pm 0.033$\\
\hline
76--86 & $0.958\pm 0.004$ & $0.995\pm 0.024$\\
\hline
86--96 & $0.960\pm 0.003$ & $0.979\pm 0.019$\\
\hline
96--106 & $0.961\pm 0.003$ & $0.973\pm 0.018$\\
\hline
106--120 & $0.961\pm 0.003$ & $0.960\pm 0.018$\\
\hline
120--150& $0.956\pm 0.010$ & $0.953\pm 0.019$\\
\hline
150--200& $0.957\pm 0.021$ & $0.945\pm 0.020$\\
\hline
200--600& $0.957\pm 0.021$ & $0.940\pm 0.020$\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Detector Resolution Effects and Unfolding}
\label{sec:unfolding}
The effects of the detector
resolution on the observed dilepton spectra are corrected through an
unfolding procedure. The original invariant mass spectrum is
related to the observed one (in the limit of no background) by
\begin{equation}
N_{{\mathrm{obs},i}} = \sum_k \, T_{ik} \, N_{\mathrm{true},k} ,
\end{equation}
where $N_{\mathrm{obs},i}$ and $N_{\mathrm{true},k}$ denote the observed and original event counts in the invariant mass bins $i$ and $k$, respectively.
The element $T_{ik}$ of the ``response matrix'' $T$ is the probability
that an event with an original invariant mass in the bin $k$ is reconstructed
with an invariant mass in the bin $i$.
The original invariant mass spectrum is obtained by inverting the response
matrix and calculating~\cite{Cowan-unfolding,Bohm-unfolding}
\begin{equation}
N_{\text{u},k} \equiv N_{\mathrm{true},k} = \sum_i \, (T^{-1})_{ki} \, N_{{\mathrm{obs}},i} .
\label{eq:invResponse}
\end{equation}
This procedure is sufficient in the analysis reported in
this paper
because the response matrix is nonsingular and nearly diagonal.
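As an illustration, a minimal numerical sketch of this inversion, with a small, nearly
diagonal example matrix (not the analysis response matrix), is:

\begin{verbatim}
import numpy as np

def unfold(T, n_obs):
    # Solve T N_true = N_obs; equivalent to N_true = T^{-1} N_obs but
    # numerically preferable to forming the inverse explicitly.
    return np.linalg.solve(T, n_obs)

T = np.array([[0.95, 0.04, 0.00],    # T[i, k]: probability that an event
              [0.04, 0.92, 0.05],    # generated in bin k is reconstructed
              [0.00, 0.03, 0.94]])   # in bin i (columns need not sum to 1;
n_obs = np.array([240.0, 730.0, 900.0])  # the rest feeds under/overflow)
print(unfold(T, n_obs))
\end{verbatim}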
Two extra dilepton invariant mass bins are included in the unfolding procedure,
to account for events observed with $M(\ell\ell) < 15$~\GeV or
$M(\ell\ell) > 600$~\GeV.
The response matrix is calculated using the simulated sample of DY
events, defining the ``true mass'' as the ``generator level'' dilepton invariant mass,
after
FSR. Only the
selected
events in the sample are used to calculate the response matrix. The loss of events
caused by reconstruction inefficiencies or limited acceptance is
factored out from the unfolding procedure and taken into account
by means of efficiency and acceptance factors in a
subsequent step. Events generated with a dilepton invariant mass in the window
of the analysis but reconstructed with an invariant mass too small (below
15~\GeV) or too large (above 600~\GeV) contribute to the response
matrix. Events generated outside this window but reconstructed inside it
are also accounted for. The sum of probabilities in the columns of the response
matrix plus the probabilities of the bins with too small and too large invariant
masses is constrained to be 100\%.
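A compact sketch of such a construction from simulated pairs of true and reconstructed
masses, with the under/overflow implemented as extra edge bins, could read:

\begin{verbatim}
import numpy as np

def response_matrix(m_true, m_reco, edges):
    # edges: analysis bin edges extended below 15 GeV and above 600 GeV
    # so that badly migrated events populate the two extra bins.
    counts, _, _ = np.histogram2d(m_reco, m_true, bins=[edges, edges])
    col_sums = counts.sum(axis=0)          # per-true-bin totals
    # each column is normalized so that the probabilities, including the
    # under/overflow rows, sum to one
    return counts / np.where(col_sums > 0.0, col_sums, 1.0)
\end{verbatim}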
The response matrices are nearly diagonal. The few significant
off-diagonal elements present are found immediately next to the
diagonal elements. Almost all off-diagonal elements are less than 0.1
for the muon channel and less than 0.3 for the electron channel, as
shown in Fig.~\ref{fig:response-matrix}. The response matrices in
both lepton channels are invertible.
The larger off-diagonal elements in the response matrix for the
electron channel reflect a larger crossfeed among neighboring bins
due to the following two factors. First, the detector resolution is
worse for electrons than for muons.
Second, the electron reconstruction algorithm attributes the four-momenta of some
FSR photons to the electrons. Thus, for electrons, unfolding removes not only
the effect of detector resolution on the invariant mass but also the effect of FSR
photons in the electron reconstruction, yielding the original mass spectrum after FSR.
The calculation of the
original mass spectrum before FSR from the spectrum resulting from the
unfolding procedure is done in a separate step through FSR corrections
and is described in the next section.
\begin{figure}[hbtp]
\begin{center}
\includegraphics[width=0.49\textwidth]{figures/6_Sec/Fig_6_a_log_gray}
\includegraphics[width=0.49\textwidth]{figures/6_Sec/Fig_6_b_log_gray}
\caption{The response matrices for the muon (left) and electron (right)
channels from simulation.
\label{fig:response-matrix}
}
\end{center}
\end{figure}
The yields before and after background subtraction and the unfolding corrections
are given in Tables~\ref{tab:mumu-yields} and~\ref{tab:ee-yields}.
\begin{table}[htbH]
\begin{center}
\caption{Observed data yields, estimated backgrounds, and
background-corrected and unfolded signal yields for
DY production in the $\mu^+\mu^-$ channel. The QCD background is estimated from data
whereas the ``Other'' background contributions (as indicated in Fig.~\ref{fig:mass-observed}) are based on simulation.
\label{tab:mumu-yields}}
\begin{tabular}{|l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|}
\hline
Invariant mass
& \multicolumn{2}{c|}{$N_{\text{obs}}$}
& \multicolumn{4}{c|}{Backgrounds}
& \multicolumn{2}{c|}{$N_{\text{obs}}-N_{\text{bg}}$}
& \multicolumn{2}{c|}{$N_{\text{u}}$}\\
bin (\!\GeV)
& \multicolumn{2}{c|}{}
& \multicolumn{2}{c|}{QCD}
& \multicolumn{2}{c|}{Other}
& \multicolumn{2}{c|}{}
& \multicolumn{2}{c|}{} \\
\hline
15--20 & 253 & 16 & 11 & 8 & 1 & 1 & 241 & 18 & 243 & 19\\
20--30 & 809 & 28 & 59 & 21& 15 & 4& 735 & 36 & 736 & 37\\
30--40 & 986 & 31 & 46 & 15& 30 & 6& 910 & 36 & 907 & 37\\
40--50 & 684 & 26 & 22 & 8 & 30 & 6& 632 & 29 & 631 & 30 \\
50--60 & 471 & 22 & 11 & 7 & 25 & 6& 435 & 24 & 436 & 26 \\
60--76 & 797& 28 & 7 & 6 & 22 & 5 & 768& 29 & 752& 31\\
76--86 & 1761& 42 &\multicolumn{2}{l|}{}& 6 & 3 & 1755& 42 & 1471& 49 \\
86--96 & 11786& 109 &\multicolumn{2}{l|}{}& 25 & 6 & 11761& 109 & 12389& 119 \\
96--106 & 909& 30 &\multicolumn{2}{l|}{}& 5 & 3 & 904& 30 & 591& 38\\
106--120 & 194& 14 &\multicolumn{2}{l|}{}& 3 & 2 & 191& 14 & 178& 17\\
120--150 & 145& 12 &\multicolumn{2}{l|}{}& 4 & 3 & 141& 12 & 142& 13\\
150--200 & 53& 7 &\multicolumn{2}{l|}{}& 4 & 3 & 49& 8 & 47& 9 \\
200--600 & 30& 6 &\multicolumn{2}{l|}{}& 3 & 2 & 27& 6 & 28& 6\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[htbH]
\begin{center}
\caption{Observed data yields, estimated backgrounds, and
background-corrected and unfolded signal yields for
DY production in the $\ee$ channel.
\label{tab:ee-yields}}
\begin{tabular}{|l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|}
\hline
Invariant mass
& \multicolumn{2}{c|}{$N_{\text{obs}}$}
& \multicolumn{4}{c|}{Backgrounds}
& \multicolumn{2}{c|}{$N_{\text{obs}}-N_{\text{bg}}$}
& \multicolumn{2}{c|}{$N_{\text{u}}$}\\
bin (\!\GeV)
& \multicolumn{2}{c|}{}
& \multicolumn{2}{c|}{genuine $\ee$}
& \multicolumn{2}{c|}{misidentified $\ee$}
& \multicolumn{2}{c|}{}
& \multicolumn{2}{c|}{} \\
\hline
15--20 & 16 & 4 & 0.0 & 0.2 & 0.4& 0.7 & 16 & 4 & 16 & 6\\
20--30 & 91 & 10 & 2.5 & 1.7 & 0.9& 1.1 & 88 & 10 & 94 & 12\\
30--40 & 179 & 13 & 14.3 & 4.6 & 1.5& 1.4 & 163 & 14 & 164 & 17 \\
40--50 & 243 & 16 & 31.4 & 6.9 & 3.7& 2.7 & 208 & 18 & 219 & 22 \\
50--60 & 211 & 15 & 19.9 & 5.2 & 3.9& 2.8 & 187 & 16 & 234 & 25\\
60--76 & 455 & 21 & 22.4 & 5.3 & 4.9& 3.3 & 428 & 22 & 620 & 45\\
76--86 & 1599 & 40 & 8.5 & 2.8 & 2.5& 2.1 & 1588 & 40 & 1277 & 89\\
86--96 & 6998 & 84 & 12.5 & 1.8 & 4.4& 3.1 & 6981 & 84 & 7182 & 117\\
96--106 & 587 & 24 & 3.5 & 1.8 & 2.1& 1.8 & 581 & 24 & 441 & 36\\
106--120 & 132 & 11 & 3.2 & 1.9 & 1.5& 1.4 & 127 & 12 & 127 & 15\\
120--150 & 67 & 8 & 7.8 & 3.1 & 2.0& 1.7 & 57 & 9 & 53 & 10 \\
150--200 & 34 & 6 & 5.5 & 2.5 & 1.6& 1.4 & 27 & 7 & 25 & 7\\
200--600 & 26 & 5 & 3.0 & 1.9 & 1.4& 1.4 & 22 & 6 & 21 & 5 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Systematic Uncertainties}
\label{sec:systematics}
Systematic uncertainties have been evaluated for each step in the determination of the dilepton invariant mass
spectrum. The acceptance-related uncertainties are a special case, as they
only apply to the acceptance-corrected results, i.e., results in the full phase space,
and are approximately the same for the dimuon and dielectron channels
(the FSR uncertainties are treated separately).
The acceptance uncertainty resulting from the knowledge of the PDFs is estimated using {\sc PYTHIA}
with the CTEQ6.1 PDF set by a reweighting technique~\cite{Bourilkov:2006cj}, with a negligible statistical uncertainty given the very large
simulated sample.
Since we are making a shape measurement, normalizing the DY cross
section to the dilepton cross section in the $\text{Z}$ region, the analysis only
depends on the uncertainty of the \emph{ratio} of acceptances,
$A_i/A_{\mathrm{norm}}$, where $A_i$ is the acceptance for
the invariant mass bin $i$ and $A_{\mathrm{norm}}$ is the acceptance for the
invariant mass region of the $\text {Z}$.
The uncertainty of the acceptance is estimated, for each dilepton invariant mass bin,
using {\sc FEWZ}, at NLO and NNLO accuracy
in perturbative QCD. Variations of the factorization and renormalization
scales lead to a systematic uncertainty smaller than 1\% (at NNLO)
for most of the invariant mass range used in the analysis presented here.
Special care is needed to calculate the acceptance of low invariant mass dileptons,
where differences between NLO and NNLO values can be significant,
given the relatively high thresholds imposed on the transverse momentum of
the leptons.
Since the {\sc POWHEG} MC (NLO) simulation, modified to match the {\sc FEWZ} (NNLO) calculations, is used
to calculate the acceptance
corrections used in the analysis,
an additional (model-dependent) systematic uncertainty on the acceptance
calculation is determined from the
observed differences in acceptances based on
{\sc FEWZ} spectra and {\sc POWHEG} distributions matched to {\sc FEWZ}.
These differences are caused by variations in the kinematic
distributions within the bins
where bin sizes are chosen to take into account the limited reliability of perturbative
QCD calculations in parts of the phase space.
This systematic uncertainty reaches up to 10\% in the dilepton
invariant mass range considered in the analysis and is included in the comparison
between the measurements and the theoretical expectations.
The dominant systematic uncertainty on the cross section measurement in the
dimuon channel is the uncertainty on the background estimation, which is,
however, relatively small given the low background levels. This uncertainty is
evaluated from data using two independent background subtraction methods, as described in Section~\ref{sec:backgrounds}.
The next most important uncertainties are related to
the muon efficiency and to the muon momentum scale and resolution.
The former is determined using the large sample of Z events decaying to
dimuons.
Uncertainties in the latter are mostly caused by
residual misalignment between the muon chambers and the silicon tracker,
potentially not reproduced in the simulation. The Z line shape is used to
constrain the level of such possible limitations in the simulation.
The momentum resolution and the momentum scale uncertainties are included
in the unfolding procedure and, hence, the resulting shape is affected
by these systematic effects.
The level of the momentum scale uncertainty is evaluated by introducing a bias in the MC reconstruction and unfolding the resulting dimuon mass distribution with the response matrix determined from the nominal (unbiased) MC sample. The bias is applied to the reconstructed invariant mass and is based on the maximal difference between the MC and data Z peak positions obtained with variations of the \pt and $\eta$ requirements.
Studies of photons reconstructed near a muon in a DY event indicate that
the FSR simulation is remarkably accurate. A corresponding systematic uncertainty
is evaluated by examining how
the results change when the fraction of FSR events, as well as the energy and
angular distributions of the radiated photons, are
varied within the statistical precision of these studies.
Other systematic effects that could affect the dimuon yield have been considered,
such as the impact of
additional soft pp collisions (pileup) that occur in the same bunch crossing as
the studied interaction
and the effects of the dimuon vertex
probability requirement and of residual data-simulation discrepancies. A combined uncertainty is
reported for these ``other'' sources in Table~\ref{tab_syst}, where all systematic
uncertainties in the dimuon channel are listed.
\begin{table}[h]
\begin{center}
\caption{Summary of systematic uncertainties in the muon channel (in percent). The ``Total'' is
a quadratic sum of all sources without ``Acceptance''. With the exception
of ``Acceptance'', the numbers correspond to the individual measurements per bin and not the
ratio to the Z region.
\label{tab_syst} }
\begin{tabular}{| l | l | l | l | l | l | l || l |}
\hline
Invariant mass&Efficiency& Background& Unfolding & FSR &Other & Total&Acceptance\\
bin (\!\GeV) & correction && & & & & \\
\hline
15--20 & $1.1 $ & $3.6$ & $0.4$ & $1.5 $ &$1.0$ &$4.2$ & $+2.2$/$-3.0$\\
20--30 & $1.1 $ & $3.1$ & $0.2$ & $1.1 $ &$1.0$ &$3.6$ & $+1.9$/$-3.2$\\
30--40 & $1.2 $ & $1.9$ & $0.1$ & $0.7 $ &$1.0$ &$2.6$ & $+1.7$/$-3.0$\\
40--50 & $1.2$ & $1.7$ & $0.2$ & $0.7 $ &$1.0$ &$2.4$& $+1.7$/$-2.9$\\
50--60 & $0.8$ & $2.1$ & $0.2$ & $0.5 $ &$0.5$ &$2.4$& $+1.7$/$-2.8$\\
60--76 & $0.6$ & $1.0$ & $0.2$ & $1.4 $ &$0.5$ &$1.9$& $+1.6$/$-2.6$\\
76--86 & $0.4$ & $0.2$ & $1.7$ & $2.0 $ &$0.5$ &$2.7$& $+1.5$/$-2.5$\\
86--96 & $0.3$ & $0.05$ & $0.2$ & $0.5 $ &$0.5$ &$0.8$& $+1.5$/$-2.4$\\
96--106 & $0.3$ & $0.4$ & $3.8$ & $0.5 $ &$0.5$ &$3.9$& $+1.5$/$-2.4$\\
106--120 & $0.3$ & $1.4$ & $0.7$ & $0.5 $ &$3.0$ &$3.4$& $+1.5$/$-2.3$\\
120--150& $1.1$ & $2$ & $0.4$ & $0.5 $ &$1.0$ &$2.6$& $+1.5$/$-2.1$\\
150--200& $2.1$ & $6$ & $0.9$ & $0.5 $ &$1.0$ &$6.5$ & $+1.4$/$-1.8$\\
200--600& $2.1$ & $10$ & $0.1$ & $0.5 $ &$1.0$ &$10.3$ & $+1.2$/$-1.4$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\caption{Summary of systematic uncertainties in the electron channel (in percent). The ``Total'' is
a quadratic sum of all sources without ``Acceptance''. With the exception
of ``Acceptance'', the numbers correspond to the individual measurements per bin and not the
ratio to the Z region.
\label{tab_syst_ee} }
\begin{tabular}{| l | l | l | l | l | l || l |}
\hline
Invariant mass& Energy & Efficiency& Background& Unfolding & Total &Acceptance\\
bin (\!\GeV) & scale &correction & & & & \\
\hline
15--20 & $ 23.4$ & $ 9.2$ & $ 6.2$ & $ 8.7$ & $ 27.3$ & $+2.1$/$-2.9$\\
20--30 & $ 3.6$ & $ 8.5$ & $ 2.8$ & $ 2.1$ & $ 9.9$ & $+1.7$/$-2.8$\\
30--40 & $ 2.7$ & $ 9.4$ & $ 4.0$ & $ 1.5$ & $ 10.6$ & $+1.5$/$-2.7$\\
40--50 & $ 3.3$ & $ 7.5$ & $ 5.2$ & $ 1.4$ & $ 9.9$ & $+1.5$/$-2.5$\\
50--60 & $ 3.3$ & $ 5.2$ & $ 4.6$ & $ 1.9$ & $ 7.9$ & $+1.5$/$-2.4$\\
60--76 & $ 10.3$ & $ 3.3$ & $ 2.2$ & $ 2.0$ & $ 11.2$ & $+1.4$/$-2.3$\\
76--86 & $ 39.5$ & $ 2.5$ & $ 0.8$ & $ 3.1$ & $ 39.7$ & $+1.3$/$-2.2$\\
86--96 & $ 3.9$ & $ 1.9$ & $ 0.2$ & $ 0.6$ & $ 4.4$ & $+1.2$/$-2.1$\\
96--106 & $ 45.6$ & $ 2.0$ & $ 0.9$ & $ 3.6$ & $ 45.8$ & $+1.3$/$-2.0$\\
106--120& $ 13.2$ & $ 2.1$ & $ 2.6$ & $ 2.4$ & $ 13.9$ & $+1.3$/$-1.9$\\
120--150& $ 6.0$ & $ 2.4$ & $ 8.2$ & $ 2.6$ & $ 10.8$ & $+1.3$/$-1.8$\\
150--200& $ 5.7$ & $ 2.8$ & $ 12.9$ & $ 2.4$ & $ 14.5$ & $+1.2$/$-1.5$\\
200--600& $ 4.6$ & $ 3.2$ & $ 11.8$ & $ 1.6$ & $ 13.1$ & $+1.0$/$-1.1$\\
\hline
\end{tabular}
\end{center}
\end{table}
In the electron channel, the leading systematic uncertainty is
associated with the energy scale corrections of individual electrons.
The corrections affect both the placement of a given candidate in a particular
invariant mass bin and the likelihood of surviving the kinematic
selection. The energy scale correction itself is calibrated to 2\%
precision for the dataset used. The associated error on signal event
yields is calculated by varying the energy scale correction value
within this amount and remeasuring the yields. This uncertainty takes
its largest values for the bins just below and above the central
Z peak bin because of bin migration. The energy scale uncertainty for the
electron channel is on the order of 20 times larger than the momentum scale
uncertainty for muons, for which the associated systematic uncertainties on the
cross section are rather small.
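A simplified sketch of this variation is given below; it uses the fact that scaling both
electron energies by a common factor scales the pair invariant mass by the same factor, and
it neglects the effect on the kinematic selection:

\begin{verbatim}
import numpy as np

def scale_variation(masses, edges, delta=0.02):
    # masses: reconstructed dielectron invariant masses (illustrative input)
    # returns the relative per-bin yield change for the +/- scale shifts
    nominal, _ = np.histogram(masses, bins=edges)
    shifts = []
    for sign in (+1.0, -1.0):
        varied, _ = np.histogram(masses * (1.0 + sign * delta), bins=edges)
        shifts.append((varied - nominal) / np.maximum(nominal, 1))
    return shifts
\end{verbatim}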
The second leading uncertainty for electrons is caused by the uncertainty on the
efficiency scale factors. The precision of the scale factor calibration is
limited by the size of the data sample available for the T\&P procedure.
The systematic uncertainty on the scale factors as well as the
resulting error on the normalized cross section are found with the
same procedure as for the muon channel.
The dielectron background uncertainties are evaluated by comparing the
background yields calculated as described in
Section~\ref{sec:backgrounds} with predictions from simulation. These
uncertainties are only dominant at the highest invariant masses
considered. The uncertainty associated with the unfolding procedure in
the electron channel comes primarily from the uncertainty on the
unfolding matrix elements due to imperfect simulation of detector
resolution. This simulation uncertainty for electrons is significantly
larger than for muons, leading to a larger systematic uncertainty on
the normalized cross section. The uncertainties due to FSR effects
are estimated with a method similar to that discussed above for the muon channel,
and are of similar size. Because the total
systematic uncertainty is significantly higher in all mass bins for the electron channel than
for the muon channel, the FSR-related contribution to the electron channel
systematic uncertainty is neglected.
The systematic uncertainties for the electron channel are summarized in
Table~\ref{tab_syst_ee}. At present the dominant systematic uncertainties
are driven by the limited size of calibration samples available for energy
scale and efficiency scale factor calculations, and therefore the
uncertainties could be reduced significantly with larger data samples.
\section{Results}
\label{sec:results}
The DY cross section per invariant mass bin $i$, $\sigma_i$, is calculated according to Eq. (\ref{eqn:fullCrossSection_intro}).
In order to provide a measurement independent of the luminosity uncertainty and to reduce
many systematic uncertainties, $\sigma_i$ is normalized to
the cross section in the Z region, $\sigma_{\mathrm{\ell\ell}}$, defined as the DY
cross section in the invariant mass region $60 < M(\ell\ell) < 120~\GeV$.
The result of the analysis is presented as the ratio
\begin{equation}
\label{eqn:fullCrossSectionRatio}
R^i_{\text{post-FSR}} = \frac{N_{\text{u},i}}{A_i\,\varepsilon_i\,\rho_i} \big/
\frac{N_{\text{u},\mathrm{norm}}}{A_{\mathrm{norm}}\,\varepsilon_{\mathrm{norm}}\,\rho_{\mathrm{norm}}},
\end{equation}
where $N_{\text{u},i}$ is the number of events after the unfolding procedure, and the
acceptances $A_i$, the efficiencies $\epsilon_i$, and the corrections estimated from data,
$\rho_i$, were defined earlier; $N_{\text{u},\mathrm{norm}}$, $A_{\mathrm{norm}}$, $\varepsilon_{\mathrm{norm}}$, and $\rho_{\mathrm{norm}}$
refer to the Z region.
For both lepton channels, the cross sections
in the Z region measured in this analysis are in excellent agreement with the
previous CMS measurement~\cite{ZCrossSection}.
In order to allow a more direct and precise comparison with theory predictions, the
shape measured before the acceptance correction is also reported, thus eliminating PDF and theory
uncertainties from the experimental results:
\begin{equation}
\label{eqn:fullCrossSectionRatio_DET}
R_{\text{det,post-FSR}}^i = \frac{N_{\text{u},i}}{\varepsilon_i\,\rho_i} \big/
\frac{N_{\text{u},\mathrm{norm}}}{\varepsilon_{\mathrm{norm}}\,\rho_{\mathrm{norm}}} .
\end{equation}
The post-FSR shapes, $R_{\text{post-FSR}}$ and $R_{\text{det,post-FSR}}$, are modified by the FSR correction factors from Tables~\ref{tab_accEff} and~\ref{tab_accEff_electrons}
to obtain the pre-FSR shapes, $R$ and $R_{\mathrm{det}}$, respectively.
The shapes integrated in the normalization region are equal to one by construction.
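Since the tabulated factors are $f_i = \sigma_{\text{post-FSR},i}/\sigma_{\text{pre-FSR},i}$,
the normalized shape transforms bin by bin as $R_i \to R_i\, f_{\mathrm{norm}}/f_i$, where
$f_{\mathrm{norm}}$ is the effective factor of the normalization region. A minimal sketch of
this step (the interface is illustrative):

\begin{verbatim}
import numpy as np

def pre_fsr_shape(r_post, f_bins, f_norm):
    # r_post: post-FSR normalized shape per bin
    # f_bins: FSR factors f_i = sigma_post / sigma_pre (fractions, not %)
    # f_norm: effective FSR factor of the 60-120 GeV normalization region
    return np.asarray(r_post) * f_norm / np.asarray(f_bins)
\end{verbatim}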
The results are presented in Tables~\ref{tab_result_single}
and~\ref{tab_result_electrons}, respectively, for the dimuon and dielectron
channels.
The two shape measurements,
shown in the last column of the tables,
are in good agreement for 11 out of 13 invariant
mass bins and remain statistically consistent (although marginally)
for the remaining two bins, 40--50\GeV and 120--150\GeV.
As a semi-independent check, a measurement was performed using a data
sample collected with a double-muon trigger with a lower \pt requirement of 7\GeV on each
muon. The signal yield is increased tenfold at the lowest invariant masses at the expense of
larger systematic uncertainties on the background. The result agrees with
the measurement made with the single muon trigger, having a similar precision in
the two lowest invariant mass bins.
\begin{table}[h!]
\begin{center}
\caption{Results for the DY spectrum normalized to the Z region in the
dimuon channel. The statistical and systematic uncertainties
are summed in quadrature. $R_{\text{post-FSR}}$ and $R_{\text{det,post-FSR}}$ are
calculated using Eqs.~(\ref{eqn:fullCrossSectionRatio})
and~(\ref{eqn:fullCrossSectionRatio_DET}), respectively. The $R_{\mathrm{det}}$ and $R$ are
calculated using the FSR corrections given in Table~\ref{tab_accEff}.
\label{tab_result_single} }
\begin{tabular}{|l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|}
\hline
Invariant mass bin (\!\GeV)
& \multicolumn{2}{c|}{$R_{\text{det,post-FSR}}~(10^{-3})$}
& \multicolumn{2}{c|}{$R_{\mathrm{det}}~(10^{-3}) $}
& \multicolumn{2}{c|}{$R_{\text{post-FSR}}~(10^{-3}) $}
& \multicolumn{2}{c|}{$R~(10^{-3}) $}\\
\hline
15--20 & 18 & 2 & 19 & 2 & 772 & 67 & 780 & 69 \\
20--30 & 58 & 3 & 58 & 3 & 528 & 33 & 533 & 34 \\
30--40 & 67 & 3 & 67 & 3 & 147 & 8 & 147 & 8 \\
40--50 & 44 & 2 & 41 & 2 & 66 & 4 & 62 & 4 \\
50--60 & 30 & 2 & 23 & 2 & 37 & 3 & 30 & 2 \\
60--76 & 51 & 2 & 28 & 1 & 55 & 3 & 32 & 2 \\
76--86 & 97 & 4 & 56 & 3 & 98 & 5 & 58 & 3 \\
86--96 & 803& 14& 861& 15& 799 & 23 & 857 & 26\\
96--106 & 38 & 3 & 43 & 3 & 37 & 3 & 41 & 3 \\
106--120 & 12 & 1 & 12 & 1 & 11 & 1 & 12 & 1 \\
120--150 & 9.2 & 0.9& 9.7 & 1.0 & 8.4 & 0.8 & 8.8 & 0.9\\
150--200 & 3.1 & 0.6& 3.2 & 0.7 & 2.6 & 0.5 & 2.7 & 0.6\\
200--600 & 1.8 & 0.4& 1.9 & 0.5 & 1.4 & 0.3 & 1.5 & 0.4\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\caption{Results for the DY spectrum normalized to the Z region in the
dielectron channel. The statistical and systematic uncertainties
are summed in quadrature. $R_{\text{post-FSR}}$ and $R_{\text{det,post-FSR}}$ are
calculated using Eqs.~(\ref{eqn:fullCrossSectionRatio}) and~(\ref{eqn:fullCrossSectionRatio_DET}), respectively. The $R_{\mathrm{det}}$ and $R$ are
calculated using the FSR corrections given in Table~\ref{tab_accEff_electrons}.
\label{tab_result_electrons} }
\begin{tabular}{|l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|}
\hline
Invariant mass bin (\!\GeV)
& \multicolumn{2}{c|}{$R_{\text{det,post-FSR}}~(10^{-3})$}
& \multicolumn{2}{c|}{$R_{\mathrm{det}}~(10^{-3}) $}
& \multicolumn{2}{c|}{$R_{\text{post-FSR}}~(10^{-3}) $}
& \multicolumn{2}{c|}{$R~(10^{-3}) $}\\
\hline
15--20 & 6 & 3 & 6 & 3 & 487 & 230 & 508 & 238\\
20--30 & 13 & 2 & 13 & 2 & 536 & 96 & 559 & 97\\
30--40 & 24 & 4 & 22 & 4 & 129 & 22 & 131 & 21\\
40--50 & 28 & 4 & 24 & 4 & 52 & 8 & 47 & 7\\
50--60 & 30 & 5 & 19 & 3 & 39 & 6 & 27 & 4\\
60--76 & 78 & 12 & 30 & 4 & 84 & 13 & 36 & 5\\
76--86 & 144 & 60 & 61 & 25 & 147 & 60 & 64 & 26\\
86--96 & 722 & 62 & 839 & 60 & 715 & 62 & 834 & 60\\
96--106 & 44 & 21 & 55 & 26 & 43 & 20 & 53 & 25\\
106--120& 13 & 3 & 15 & 3 & 12 & 2 & 14 & 3\\
120--150& 5.4 & 1.2& 6.0 & 1.3& 4.8 & 1.1 & 5.4 & 1.2 \\
150--200& 2.5 & 0.8& 2.8 & 0.8& 2.1 & 0.6 & 2.3 & 0.7 \\
200--600& 2.1 & 0.6& 2.4 & 0.7& 1.5 & 0.5 & 1.7 & 0.5 \\ \hline
\end{tabular}
\end{center}
\end{table}
The theoretical cross section is calculated with {\sc FEWZ}
and three sets of PDFs: CT10, CTEQ66~\cite{CTEQ66}, and MSTW2008~\cite{MSTW2008}.
The calculations include leptonic decays of Z bosons
with full spin correlations as well as nonzero width effects and
$\gamma^*$-Z interference. However, they do not simulate FSR effects.
The calculations are cross-checked with the program {\sc DYNNLO}, based on~\cite{DYNNLO,DYNNLO1},
which offers features similar to {\sc FEWZ}. The predictions for the shape of
the DY spectrum agree well between the two programs, typically within 1\%.
The uncertainties on the theoretical predictions due to the imprecise knowledge of the PDFs are calculated
with the LHAGLUE
interface to the PDF library
LHAPDF~\cite{Bourilkov:2003kk,Whalley:2005nh}, using
a reweighting technique with asymmetric uncertainties~\cite{Bourilkov:2006cj}.
Since this is a shape measurement, and the normalization of the spectrum
is defined by the number of events in the Z region, the uncertainty is
calculated for the yield ratio, $Y_i/Y_{\mathrm{norm}}$, where $Y_i$ is the
predicted yield in the invariant mass bin~$i$ and $Y_{\mathrm{norm}}$ is the yield in the Z region.
The uncertainties for these ratios are much smaller than those for the individual yields
because of the correlations between $Y_i$ and $Y_{\mathrm{norm}}$, especially in
the dilepton invariant mass region close to the Z mass.
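A generic sketch of the asymmetric Hessian ``master formula'' applied to the ratio
$Y_i/Y_{\mathrm{norm}}$ is given below; the cited reweighting implementation may differ in
detail:

\begin{verbatim}
import numpy as np

def pdf_uncertainty(r0, r_plus, r_minus):
    # r0: ratio Y_i / Y_norm for the central PDF member;
    # r_plus, r_minus: the same ratio for the +/- error eigenvectors.
    zero = np.zeros_like(r_plus)
    up = np.sqrt(np.sum(np.maximum.reduce([r_plus - r0,
                                           r_minus - r0, zero])**2))
    down = np.sqrt(np.sum(np.maximum.reduce([r0 - r_plus,
                                             r0 - r_minus, zero])**2))
    return up, down
\end{verbatim}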
The factorization and renormalization scales were varied between 0.5 and 2
times the dilepton invariant mass. The resulting variations of the cross sections at
NNLO are much smaller than at NLO, and are less than 1.4\% around the Z peak.
The dependence of the DY cross section on the strong coupling constant
$\alpha_s$ was evaluated by varying $\alpha_s$ between $0.116$ and $0.120$,
using {\sc FEWZ} and the CT10 PDF set. The cross section variations are at the percent level.
Higher-order electroweak corrections for DY, evaluated with {\sc HORACE}~\cite{HORACE},
showed a negligible influence (typically well below 1\%) on the shape measurements in
the investigated invariant mass range.
The theoretical predictions from {\sc FEWZ} at NNLO are presented in
Table~\ref{tab_theory_fullacc}.
\begin{table}[h!]
\begin{center}
\caption{Theoretical predictions at NNLO with {\sc FEWZ} and three sets of PDFs.
The cross sections in this table are calculated in the full phase space with
1\% statistical precision. The
theoretical predictions of the ratio $R$ and its uncertainties are also given. ``Other''
contains the uncertainties from the EWK corrections, the scale dependence, and $\alpha_s$.
\label{tab_theory_fullacc} }
\begin{tabular}{| l | l | l | l || l | l | l |}
\hline
Invariant mass & \multicolumn{3}{c||}{Cross section (pb)}& $R~(10^{-3})$ & \multicolumn{2}{c|}{Uncertainties on $R$ (\%)} \\
\cline{2-7}
bin (\!\GeV)& CT10 & CTEQ66 & MSTW2008 &MSTW2008 & PDF & Other \\
\hline
15--20 & $787~~$ & $811~~$ & $819~~$ & $812~~$ & $+4.3$/$-3.3$ & $+2.5$/$-2.7$\\
20--30 & $476~~$ & $483~~$ & $499~~$ & $494~~$ & $+3.6$/$-2.8$ & $+1.9$/$-3.6$\\
30--40 & $135~~$ & $137~~$ & $142~~$ & $141~~$ & $+2.7$/$-2.3$ & $+3.1$/$-2.1$\\
40--50 & $53~~$ & $54~~$ & $56~~$ & $55~~$ & $+2.1$/$-1.9$ & $+2.4$/$-2.5$\\
50--60 & $27~~$ & $27~~$ & $29~~$ & $28~~$ & $+1.6$/$-1.5$ & $+2.6$/$-2.0$\\
60--76 & $32~~$ & $32~~$ & $33~~$ & $33~~$ & $+0.9$/$-0.9$ & $+2.0$/$-2.4$\\
76--86 & $56~~$ & $57~~$ & $58~~$ & $58~~$ & $+0.2$/$-0.2$ & $+2.1$/$-2.5$\\
86--96 & $822~~$ & $825~~$ & $852~~$ & $844~~$ & $+0.1$/$-0.1$ & $+1.8$/$-2.2$\\
96--106 & $51~~$ & $51~~$ & $53~~$ & $52~~$ & $+0.2$/$-0.2$ & $+2.8$/$-2.0$\\
106--120 & $12~~$ & $12~~$ & $13~~$ & $13~~$ & $+0.5$/$-0.5$ & $+2.6$/$-2.2$\\
120--150 & $6.7$ & $6.7$ & $7.0$ & $6.9$ & $+0.9$/$-0.9$ & $+2.5$/$-1.7$\\
150--200 & $2.6$ & $2.6$ & $2.7$ & $2.7$ & $+1.5$/$-1.6$ & $+2.0$/$-1.8$\\
200--600 & $1.3$& $1.3$ & $1.3$ & $1.3$ & $+2.8$/$-2.9$ & $+1.8$/$-2.1$\\
\hline
\end{tabular}
\end{center}
\end{table}
The results are also normalized to the invariant mass bin widths, $\Delta M_i$, defining
\begin{equation}\label{eqn:shape_r}
r_i = \frac{R_i}{\Delta M_i} .
\end{equation}
Assuming lepton universality, the dimuon and dielectron results for $r_i$ are
combined in a weighted average, using as weights the inverse of the
respective squared total uncertainties, where the statistical and
systematic uncertainties are added in quadrature.
The only expected source of correlation between the dimuon and dielectron results is the use
of the same MC model for the acceptance and FSR corrections.
Given that the uncertainties on these corrections are much smaller than most other uncertainties,
especially in the dielectron channel, this correlation has a negligible influence on the
combined results.
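A minimal sketch of this combination follows; the numerical example uses the first bin of
Table~\ref{tab_result_combined} and reproduces the tabulated combined value:

\begin{verbatim}
import numpy as np

def combine_channels(r_mu, e_mu, r_el, e_el):
    # inverse squared total uncertainties as weights, channels uncorrelated
    w_mu, w_el = 1.0 / e_mu**2, 1.0 / e_el**2
    r = (w_mu * r_mu + w_el * r_el) / (w_mu + w_el)
    return r, 1.0 / np.sqrt(w_mu + w_el)

# 15-20 GeV bin: r(mu) = 0.156 +- 0.014, r(e) = 0.102 +- 0.048
print(combine_channels(0.156, 0.014, 0.102, 0.048))
# -> (0.152, 0.013), as in the combined column
\end{verbatim}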
There are correlations between the invariant mass bins, induced by the various corrections
applied in the analysis, especially those related to the efficiencies and resolutions.
The efficiency corrections are highly correlated between adjacent invariant mass bins, since
they tend to use the same T\&P factors, derived from the same single-lepton
\pt bins. Nevertheless, for the dimuon channel the efficiency uncertainty is at most 20\%
of the total uncertainty, significantly diluting the effect of these correlations in the final results.
The resolution correlations, introduced through the unfolding procedure,
only have a visible effect around the Z peak. In summary, the level of correlations
does not affect the combination of results in a significant way.
\begin{table}[h]
\begin{center}
\caption{Results for the DY spectrum normalized to the Z region and to the invariant mass bin width,
using Eq.~(\ref{eqn:shape_r}), before and after combining the two channels. The results presented are
in {Ge\hspace{-.08em}V}$^{-1}$ units.
\label{tab_result_combined} }
\begin{tabular}{|l|r@{$~\pm~$}l|r@{$~\pm~$}l|r@{$~\pm~$}l|}
\hline
Invariant mass bin (\!\GeV)
& \multicolumn{2}{c|}{$r$ (muons)}
& \multicolumn{2}{c|}{$r$ (electrons)}
& \multicolumn{2}{c|}{$r$ (combined)}\\
\hline
15--20 & $(15.6 $ & $1.4) \times 10^{-2}$ & $(10.2 $ & $4.8) \times 10^{-2}$ & $(15.2 $ & $1.3) \times 10^{-2}$ \\
20--30 & $(5.3 $ & $0.3) \times 10^{-2}$ & $(5.6 $ & $1.0) \times 10^{-2}$ & $(5.4 $ & $0.3) \times 10^{-2}$ \\
30--40 & $(1.5 $ & $ 0.1) \times 10^{-2}$ & $(1.3 $ & $ 0.2) \times 10^{-2}$ & $(1.5 $ & $ 0.1) \times 10^{-2}$ \\
40--50 & $(6.2 $ & $ 0.4) \times 10^{-3}$ & $(4.7 $ & $ 0.7) \times 10^{-3}$ & $(5.9 $ & $ 0.3) \times 10^{-3}$ \\
50--60 & $(3.0 $ & $ 0.2) \times 10^{-3}$ & $(2.7 $ & $ 0.4) \times 10^{-3}$ & $(3.0 $ & $ 0.2) \times 10^{-3}$ \\
60--76 & $(2.0 $ & $ 0.1) \times 10^{-3}$ & $(2.2 $ & $ 0.3) \times 10^{-3}$ & $(2.1 $ & $ 0.1) \times 10^{-3}$ \\
76--86 & $(5.8 $ & $ 0.3) \times 10^{-3}$ & $(6.4 $ & $ 2.6) \times 10^{-3}$ & $(5.8 $ & $ 0.3) \times 10^{-3}$ \\
86--96 & $(85.7 $ & $ 2.6) \times 10^{-3}$ & $(83.4 $ & $ 6.0) \times 10^{-3}$ & $(85.6 $ & $ 2.4) \times 10^{-3}$ \\
96--106 & $(4.1 $ & $ 0.3) \times 10^{-3}$ & $(5.3 $ & $ 2.5) \times 10^{-3}$ & $(4.2 $ & $ 0.3) \times 10^{-3}$ \\
106--120 & $(8.4 $ & $ 0.9) \times 10^{-4}$ & $(9.6 $ & $ 1.9) \times 10^{-4}$ & $(8.6 $ & $ 0.8) \times 10^{-4}$ \\
120--150 & $(2.9 $ & $ 0.3) \times 10^{-4}$ & $(1.8 $ & $ 0.4) \times 10^{-4}$ & $(2.5 $ & $ 0.2) \times 10^{-4}$ \\
150--200 & $(5.4 $ & $ 1.2) \times 10^{-5}$ & $(4.6 $ & $ 1.4) \times 10^{-5}$ & $(5.1 $ & $ 0.9) \times 10^{-5}$ \\
200--600 & $(3.7 $ & $ 1.0) \times 10^{-6}$ & $(4.3 $ & $ 1.3) \times 10^{-6}$ & $(3.9 $ & $ 0.8) \times 10^{-6}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
Table~\ref{tab_result_combined} gives the measured shape $r$, defined in Eq.~(\ref{eqn:shape_r}),
both in the dimuon and dielectron channels and also the combined result.
Figure~\ref{results} compares the measured (combined)
results with the prediction from the FEWZ NNLO calculations, performed with the MSTW2008 PDF set. To provide
a meaningful comparison,
each data point is located on the horizontal axis at the position where
the theoretical function has a value equal to its mean value over the
corresponding bin, following
the procedure described
in Ref.~\cite{binCorrection}. The measurements are very well reproduced by the theoretical calculations.
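For illustration, this placement rule can be sketched numerically as follows (our own minimal example, assuming the theory curve is monotonic within the bin; the function name is ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def bin_abscissa(f, lo, hi):
    """Abscissa in [lo, hi] where the (monotonic) theory curve f
    equals its mean value over the bin."""
    mean = quad(f, lo, hi)[0] / (hi - lo)
    return brentq(lambda x: f(x) - mean, lo, hi)

# toy example: steeply falling spectrum on the widest bin
print(bin_abscissa(lambda m: m**-3, 200.0, 600.0))  # ~330, not 400
\end{verbatim}
For a steeply falling spectrum the point moves well below the bin center, which is the visual effect in Fig.~\ref{results}.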
\begin{figure}[h!]
{\centering
\includegraphics[width=0.80\textwidth, angle=90]{figures/8_Sec/Fig_8.pdf}
\caption{\label{results}
DY invariant mass spectrum, normalized to the Z resonance region,
$r = (1/\sigma_{\mathrm{\ell\ell}}) d\sigma/dM(\ell\ell)$, as measured
and as predicted by
NNLO calculations, for the full phase space. The vertical error bars indicate
the experimental (statistical and systematic) uncertainties summed in quadrature with
the theory uncertainty
resulting from the model-dependent kinematic distributions inside each bin.
The horizontal bars indicate the bin sizes; the data points are placed within the bins according to Ref.~\cite{binCorrection}. The width of the theory curve represents the uncertainties from Table~\ref{tab_theory_fullacc}.}}
\end{figure}
\section{Summary}
\label{sec:conclusions}
The Drell--Yan differential cross section normalized to the cross section in the Z region
has been measured in $\pp$ collisions at $\sqrt{s} = 7\TeV$,
in the dimuon and dielectron channels in the invariant mass range $15 < M(\ell\ell) < 600 \GeV$.
The measurement is based on event samples collected by the CMS experiment,
corresponding to an integrated luminosity of $35.9\pm 1.4~{\mathrm{pb}}^{-1}$.
Results are presented both inside the detector acceptance and in the full phase space,
and the effect of final state QED radiation on the results is reported as well.
An accurate description of the measurements requires NNLO modeling
for dilepton invariant masses below about 30\GeV.
The measurements are in good agreement with the NNLO theoretical predictions,
as computed with {\sc FEWZ}.
\section*{Acknowledgments}
\hyphenation{Bundes-ministerium Forschungs-gemeinschaft Forschungs-zentren}
We would like to thank the authors of {\sc FEWZ} and {\sc POWHEG} for the fruitful discussions, co-operation, and cross-checks in performing the theoretical calculations for our analysis.
We wish to congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC machine. We thank the technical and administrative staff at CERN and other CMS institutes. This work was supported by the Austrian Federal Ministry of Science and Research; the Belgium Fonds de la Recherche Scientifique, and Fonds voor Wetenschappelijk Onderzoek; the Brazilian Funding Agencies (CNPq, CAPES, FAPERJ, and FAPESP); the Bulgarian Ministry of Education and Science; CERN; the Chinese Academy of Sciences, Ministry of Science and Technology, and National Natural Science Foundation of China; the Colombian Funding Agency (COLCIENCIAS); the Croatian Ministry of Science, Education and Sport; the Research Promotion Foundation, Cyprus; the Estonian Academy of Sciences and NICPB; the Academy of Finland, Finnish Ministry of Education and Culture, and Helsinki Institute of Physics; the Institut National de Physique Nucl\'eaire et de Physique des Particules~/~CNRS, and Commissariat \`a l'\'Energie Atomique et aux \'Energies Alternatives~/~CEA, France; the Bundesministerium f\"ur Bildung und Forschung, Deutsche Forschungsgemeinschaft, and Helmholtz-Gemeinschaft Deutscher Forschungszentren, Germany; the General Secretariat for Research and Technology, Greece; the National Scientific Research Foundation, and National Office for Research and Technology, Hungary; the Department of Atomic Energy and the Department of Science and Technology, India; the Institute for Studies in Theoretical Physics and Mathematics, Iran; the Science Foundation, Ireland; the Istituto Nazionale di Fisica Nucleare, Italy; the Korean Ministry of Education, Science and Technology and the World Class University program of NRF, Korea; the Lithuanian Academy of Sciences; the Mexican Funding Agencies (CINVESTAV, CONACYT, SEP, and UASLP-FAI); the Ministry of Science and Innovation, New Zealand; the Pakistan Atomic Energy Commission; the State Commission for Scientific Research, Poland; the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, Portugal; JINR (Armenia, Belarus, Georgia, Ukraine, Uzbekistan); the Ministry of Science and Technologies of the Russian Federation, the Russian Ministry of Atomic Energy and the Russian Foundation for Basic Research; the Ministry of Science and Technological Development of Serbia; the Ministerio de Ciencia e Innovaci\'on, and Programa Consolider-Ingenio 2010, Spain; the Swiss Funding Agencies (ETH Board, ETH Zurich, PSI, SNF, UniZH, Canton Zurich, and SER); the National Science Council, Taipei; the Scientific and Technical Research Council of Turkey, and Turkish Atomic Energy Authority; the Science and Technology Facilities Council, UK; the US Department of Energy, and the US National Science Foundation.
Individuals have received support from the Marie-Curie programme and the European Research Council (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Associazione per lo Sviluppo Scientifico e Tecnologico del Piemonte (Italy); the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); and the Council of Science and Industrial Research, India; the European Union Structural Funds project `Postdoctoral Fellowship Implementation in Lithuania'.
\section{The CMS Collaboration \label{app:collab}}\begin{sloppypar}\hyphenpenalty=5000\widowpenalty=500\clubpenalty=5000\input{EWK-10-007-authorlist.tex}\end{sloppypar}
\end{document}
\section{Introduction}
Online shopping has become popular in our daily lives. Hundreds of millions of users visit e-commerce platforms (such as Amazon, eBay, Taobao and JD) every day. The product search service, which displays relevant products based on user queries, is of great importance for user experience and transaction efficiency.
Taobao Search consists of two phases: the retrieval phase and the ranking phase. The retrieval phase aims to select a candidate set (tens of thousands) from a large pool of products (at the billion level), while the ranking phase determines the display order. Hence, the retrieval phase plays an important role in the quality of search results. In Taobao\footnote{https://www.taobao.com/}, a product post is composed of a title and several images, while user queries are plain text. Therefore, the retrieval phase is formulated as a problem of text-to-multimodal matching.
There are many works \cite{li2021embedding,nigam2019semantic,xiao2019weakly,chang2021extreme,zobel2006inverted,robertson2009probabilistic} proposed for the product retrieval task, which fall into two categories:
lexical matching approaches and embedding-based learning approaches. Lexical matching approaches \cite{zobel2006inverted,robertson2009probabilistic} typically build inverted indexes for products and conduct exact matching between the indexes and user queries.
Embedding learning approaches \cite{li2021embedding,nigam2019semantic,xiao2019weakly,chang2021extreme} learn semantic representations (i.e., embeddings) of queries and products, and then retrieve products by measuring the similarity between the query and product embeddings.
\begin{table}[htbp]
\centering
\begin{tabular}{p{0.14\linewidth} | p{0.79\linewidth}}
\toprule
Query & Title \\\midrule
White shirt & White chiffon shirt, women's long-sleeved top, 2020 spring and autumn new western style professional wear, light mature temperament \\
\bottomrule
\end{tabular}
\caption{An example of query-title pair collected from online logs. There exists significant imbalance between user queries and product titles.
}
\label{tab:query-title}
\end{table}
Recently, the success of transformer \cite{vaswani2017attention} structure and vision-language representation learning \cite{zhang2021vinvl,qi2020imagebert,li2021align} motivates people to study pre-training on e-commerce tasks \cite{gao2020fashionbert,zhuge2021kaleido,yu2022commercemm}. These models, composed of a text encoder and an image encoder based on transformers, are pre-trained on text-image pairs and fine-tuned on image captioning, category recognition, text-to-image retrieval, etc.
Intuitively, to solve the text-to-multimodal retrieval task in Taobao Search, we apply the text encoder to user queries and product titles, and the image encoder to product images. The representations of user queries and products are then used in an embedding retrieval framework. However, we observe sub-optimal performance due to the following two key problems.
\textbf{First}, existing methods neglect the fact that in e-commerce search, users' attention to titles or images varies across products. For example, users pay more attention to images for clothes, whereas for electronic products they care more about key properties described in titles, such as memory size.
\textbf{Second}, sellers often apply search engine optimization (SEO) techniques in order to improve the matching probabilities and rankings of their products. As shown in Table \ref{tab:query-title}, user queries are usually short and brief, while product titles tend to be long and concrete. This semantic imbalance between user queries and products is a big challenge for the retrieval task.
To handle the \textbf{first} problem, we propose a \textbf{M}odal \textbf{A}daptation module that performs cross-modal fusion by introducing user queries as contextual information and assigns reasonable attention to product titles and images.
To address the \textbf{second} issue, we design an independent text encoder to process user queries. We further design a \textbf{K}eyword \textbf{E}nhancement mechanism by jointly optimizing similar positive samples for user queries, in order to enrich the semantic information and learn better user query embeddings.
To summarize, our main contributions are as follows:
\begin{itemize}
\item We propose a novel vision-language pre-training method (referred as \textbf{MAKE}) tailored for the text-to-multimodal retrieval task in e-commerce search. Trained on a large-scale (\textit{query}, \textit{title}, \textit{image}) triplet dataset from online logs of Taobao Search, \textbf{MAKE} is capable of effective and efficient text-to-multimodal retrieval.
\item We propose a \textbf{M}odal \textbf{A}daptation module to learn appropriate attentions on product titles and images by introducing user queries as the context. The module leads to stronger representation power of product embeddings.
\item We propose a \textbf{K}eyword \textbf{E}nhancement mechanism to enhance the query embeddings by jointly training similar user queries. The mechanism significantly alleviates the semantic imbalance between user queries and product titles.
\item
Extensive experiments on offline datasets and online A/B tests demonstrate that \textbf{MAKE} outperforms existing V+L pre-training methods on the e-commerce text-to-multimodal retrieval task. Our method has been deployed on Taobao Search and served hundreds of millions of users every day.
\end{itemize}
\section{The Proposed Approach}
The key of our method is the vision-language pre-training model with a Modal Adaptation module and a Keyword Enhancement mechanism tailored for the e-commerce text-to-multimodal retrieval task, as introduced below.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{fig/network2.pdf}
\caption{Overview of our pre-trained model MAKE.}
\label{fig:network}
\end{figure}
\subsection{Vision-Language Pre-training Model}
\label{sec:pretrain}
\paragraph{\textbf{Model Structure.}} Different from existing V+L methods with a text encoder and an image encoder, our pre-training model has a three-tower structure, with a query encoder, a title encoder and an image encoder, as shown in Figure \ref{fig:network}.
Each of them consists of 6 transformer blocks (each block containing a multi-head self-attention layer and a feed-forward layer)~\cite{vaswani2017attention}. The two text encoders are initialized with the Structbert \cite{wang2019structbert} model (pre-trained on a Chinese e-commerce corpus), and the image encoder is initialized with the ImageBert \cite{qi2020imagebert} model (pre-trained on a Chinese corpus and corresponding images \cite{qiu2021easytransfer}). The embeddings of products are the combination of outputs from the title encoder and the image encoder.
\paragraph{\textbf{Query Encoder.}} To start with, we pre-train ALIGN \cite{li2021align} on image-text pairs from Taobao, applying the same text encoder to user queries and product titles. Nevertheless, we observe sub-optimal performance on the query-to-product retrieval task. We find that in e-commerce search, it is common for sellers to apply search engine optimization (SEO) techniques to improve the rankings of their products.
As a result, product titles usually consist of many keywords that are grammatically meaningless and contain grammatical errors. On the contrary, users tend to type short terms into the search engine. Hence, there are non-trivial imbalances between queries and titles, and we need a separate text encoder for user queries. Besides, in V+L pre-training models \cite{li2021align,qi2020imagebert}, Image-Text Matching (ITM) is widely used to improve the performance on downstream retrieval tasks. Similar to ITM, we adopt a Query-Product Matching (QPM) loss with in-batch negative sampling to optimize the embeddings of queries and products.
\begin{align}
\boldsymbol{p}_{q2p}&=\frac{\exp(\boldsymbol{u}^T\boldsymbol{v}/\tau)}{\sum_{j=1}^N\exp(\boldsymbol{u}^T\boldsymbol{v_j}/\tau)}, \\
\mathcal{L}_{QPM}&=\mathbb{E}_{(Q, T, I) \sim D} \mathrm{H}(\boldsymbol{p}_{q2p},\boldsymbol{y}_{q2p}), \label{eq:qpm}
\end{align}
where $\boldsymbol{u}$ and $\boldsymbol{v}$ denote the embeddings of the query and of the products within the batch, $\boldsymbol{y}_{q2p}$ is the one-hot ground-truth label, $N$ is the batch size, and $\tau$ is the temperature parameter.
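As an illustration, a minimal NumPy sketch of this in-batch objective follows (the L2 normalization of the embeddings and the value $\tau=0.07$ are our assumptions, not specifications from the model):
\begin{verbatim}
import numpy as np

def qpm_loss(U, V, tau=0.07):
    """In-batch query-to-product contrastive loss (sketch).
    U, V: L2-normalized query/product embeddings of shape (N, d);
    row i of U matches row i of V, other rows act as negatives."""
    logits = U @ V.T / tau                  # (N, N) similarities
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))         # cross-entropy on diagonal

rng = np.random.default_rng(0)
U = rng.normal(size=(4, 8)); U /= np.linalg.norm(U, axis=1, keepdims=True)
V = rng.normal(size=(4, 8)); V /= np.linalg.norm(V, axis=1, keepdims=True)
print(qpm_loss(U, V))
\end{verbatim}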
\paragraph{\textbf{Model Inputs.}} The text modalities (user queries and product titles) are preprocessed in the same way as in BERT \cite{devlin2019bert}, using the Chinese vocabulary provided by EasyTransfer \cite{qiu2021easytransfer}. The product images are preprocessed into $4\times4$ patches, and ResNet \cite{he2016deep} is applied as a backbone network to extract image sequences of 2048-D features. Segmentation marks \textit{Q}, \textit{T} and \textit{I} are used to distinguish the token sequences of user queries, product titles, and images.
\paragraph{\textbf{Self-Supervised Pre-training Objective.}} Pre-training models with self-supervised tasks \cite{devlin2019bert,yu2022commercemm} have proven effective on many downstream tasks. Following FashionBERT \cite{gao2020fashionbert}, we apply two self-supervised tasks: Masked Language Modeling (MLM) and Masked Patch Modeling (MPM); we refer the reader to that paper for details.
\subsection{Modal Adaptation Module}
\label{sec:ma}
In e-commerce, the relative importance of titles and images obviously varies across products.
For instance, users pay more attention to images for clothes, whereas for electronic products they care more about key properties described in titles, such as memory size. Hence, the two modalities should be fused with proper weights for different products. However, we observe that with separate text/image encoders, the pre-training model attends evenly to the text and image modalities. We believe that the lack of cross-modal fusion prevents the network from learning better representations.
Although Yu \etal\cite{yu2022commercemm} design a multimodal fusion encoder on top of the text/image encoders, they ignore user intentions. Therefore, by introducing user queries as contextual information, we propose a novel \textbf{M}odal \textbf{A}daptation module that conducts modal fusion and optimizes the overall representations, as shown in Figure \ref{fig:network}. The module stacks two identical sub-modules, each composed of a self-attention layer, a cross-attention layer, and a feed-forward layer, and takes the outputs of the query, title, and image encoders as inputs.
For the self-attention layer, the \textit{K} and \textit{V} inputs are the outputs of the title encoder and the image encoder, while for the cross-attention layer, the \textit{Q} inputs are the outputs of the query encoder.
With the Modal Adaptation module, the product embeddings not only fuse title and image information with product-specific weights, but also take the influence of user queries into account.
We also design a Query-Product Classification (QPC) loss for the \textbf{MA} module. Different from optimizing similarity among separated embeddings in the QPM loss, the QPC loss refines the joint representation of query-product pairs.
The [CLS] outputs of the \textbf{MA} module, followed by a fully-connected layer and a sigmoid function, are used in a two-class classification task.
We construct negative query-product pairs by choosing the in-batch negative sample with the maximum similarity. The similarity is pre-computed for the QPM loss and thus brings little additional computational cost. The QPC loss is presented as:
\begin{align}
\mathcal{L}_{\mathrm{QPC}}&=\mathbb{E}_{(Q, T, I) \sim D} \mathrm{H}(\boldsymbol{p}_{\mathrm{QPC}}(Q, T, I), \boldsymbol{y}_{\mathrm{QPC}}),
\end{align}
where $\boldsymbol{p}_{QPC}$ is the probability of classification and $\boldsymbol{y}_{\mathrm{QPC}}$ indicates the ground-truth label.
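The hard-negative selection described above can be sketched as follows (a NumPy illustration; the function name is ours):
\begin{verbatim}
import numpy as np

def hardest_negatives(sim):
    """For each query i, pick the most similar non-matching product
    in the mini-batch; sim is the N x N query-product similarity
    matrix already computed for the QPM loss."""
    masked = sim.copy()
    np.fill_diagonal(masked, -np.inf)   # exclude the positive pair
    return masked.argmax(axis=1)

sim = np.array([[0.9, 0.4, 0.7],
                [0.2, 0.8, 0.6],
                [0.3, 0.5, 0.9]])
print(hardest_negatives(sim))           # -> [2 2 1]
\end{verbatim}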
\subsection{Keyword Enhancement Mechanism}
\label{sec:ke}
As mentioned above, the QPM loss (Eq.~\ref{eq:qpm}) relies on the in-batch negative sampling (IBNS) adopted in ALIGN \cite{li2021align}, which brings another significant issue to our model. Unlike in standard academic tasks, multiple user queries may match the same product. With the IBNS mechanism, those similar user queries are mistakenly treated as negative samples and thus compromise the query embeddings. Therefore, to solve this false-negative issue, we propose a Keyword Enhancement mechanism to replace the IBNS mechanism. The proposed mechanism improves the representation learning of user queries by jointly optimizing queries related to the same product.
Instead of \textit{query-product} pairs, a product with several related queries ($\textit{product}$, $\textit{query}_1$, $\ldots$, $\textit{query}_M$), collected from Taobao Search logs, is grouped as one training sample. $M$ is the number of enhanced queries and is set to 5 in this paper.
In addition, we design a new QPM loss based on contrastive learning and \textbf{KE} mechanism:
\begin{equation}
\mathcal{L}_{QPM}^{KE} = \log(1+\sum_{j=1}^N\exp(\gamma(s_{neg}^j+\theta))\sum_{m=1}^M\exp(-\gamma s_{pos}^m)),
\end{equation}
where $s(\cdot)=\boldsymbol{v}^T \boldsymbol{u}-\log \boldsymbol{p}$ measures the inner-product similarity between the embeddings of queries and products. Following sampled softmax \cite{jean2014using}, the $-\log\boldsymbol{p}$ term accounts for the expected sampling frequency of products, which prevents the model from focusing too much on popular products. $N$ is the batch size.
$\gamma$ is the scaling factor. The hyper-parameter $\theta$ constrains the lower bound of the similarity difference between positive pairs and negative pairs. With the Keyword Enhancement mechanism, we solve the false-negative issue and narrow the distance between embeddings of similar queries.
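For concreteness, the loss can be evaluated as in the following sketch (the values $\gamma=32$ and $\theta=0.1$ are illustrative only, and the $-\log\boldsymbol{p}$ frequency term is omitted here):
\begin{verbatim}
import numpy as np

def ke_qpm_loss(s_pos, s_neg, gamma=32.0, theta=0.1):
    """Keyword-Enhancement QPM loss (sketch). s_pos: similarities of
    the M related queries to the product; s_neg: similarities of the
    N in-batch negatives."""
    neg_term = np.exp(gamma * (s_neg + theta)).sum()
    pos_term = np.exp(-gamma * s_pos).sum()
    return np.log1p(neg_term * pos_term)

s_pos = np.array([0.80, 0.75, 0.70, 0.72, 0.78])  # M = 5
s_neg = np.array([0.20, 0.10, 0.30])
print(ke_qpm_loss(s_pos, s_neg))
\end{verbatim}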
Finally, the pre-trained model is optimized as below:
\begin{equation}
\mathcal{L}=\mathcal{L}^Q_{MLM}+\mathcal{L}^T_{MLM}+\mathcal{L}^{I}_{MPM}+\mathcal{L}_{QPC}+\mathcal{L}_{QPM}^{KE}.
\end{equation}
\section{Experiments}
\subsection{Datasets, Implementations and Metrics}
\textbf{Large-scale Industrial Dataset.} We collect online click logs with user queries, product titles, and images from Taobao Search. The training set contains on the order of a billion samples, and we randomly choose 1.5 million search logs as the evaluation set.
\textbf{Model Implementation.}
The pre-training model is composed of three encoders with 6 layers of transformers \cite{devlin2019bert}. Each layer has $768$ hidden units and 12 self-attention heads. We pre-train the model for 10 epochs with a batch size of $1280$ on $50$ NVIDIA P100 GPUs, using an Adam optimizer with $\beta_1=0.9$ and $\beta_2=0.98$. The learning rate is warmed up to $10^{-4}$ in the first $2000$ iterations and decays to $0$ following a linear schedule.
\textbf{Online Serving.} We predict the embeddings of all products in Taobao with the title encoder and the image encoder. Then we adopt Proxima \cite{proxima}, an ANN (approximate nearest neighbor) framework, to build indexes of the product embeddings with the HC (hierarchical clustering) algorithm. Once a user request is received, the online query encoder encodes the user query and returns its embedding, which is used to retrieve the top-K relevant products from the ANN index. The model is updated on a weekly basis.
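Conceptually, the online lookup reduces to a top-$K$ similarity search. The following brute-force NumPy stand-in illustrates the retrieval step (Proxima's actual API is not shown, and all names here are ours):
\begin{verbatim}
import numpy as np

def top_k(query_emb, product_embs, k=10):
    """Brute-force stand-in for the ANN lookup: return the indices
    of the k products with the largest inner-product similarity
    (unordered)."""
    scores = product_embs @ query_emb
    return np.argpartition(-scores, k)[:k]

rng = np.random.default_rng(1)
products = rng.normal(size=(100000, 64)).astype(np.float32)
query = rng.normal(size=64).astype(np.float32)
print(top_k(query, products, k=5))
\end{verbatim}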
\textbf{Offline Evaluation Metrics.} The retrieval set is denoted as $R=\{p_1,\ldots,p_K\}$. The clicked products from the evaluation set are denoted as the target set $T$.
We use the metrics $P_{rel}$ and $P_{cate}$, which measure the relevance of the retrieval set $R$ according to a well-trained relevance model \cite{DBLP:conf/www/YaoTCYXD021} (with an AUC of $0.92$ on human-labeled data). The first focuses on the overall relevance, while the second compares the category predicted from the user query with the categories of the retrieved products.
\begin{equation}
\small
P_{rel}=\frac{1}{NN_R}\sum_{i=1}^N\sum_{j=1}^{N_R} f(q_i,p_{i,j}), \quad P_{cate}=\frac{1}{NN_R}\sum_{i=1}^N\sum_{j=1}^{N_R} \mathbb{I}(f_c(q_i)=c_{i,j}),
\end{equation}
where $f(\cdot,\cdot) \in[0, 1]$ denotes the prediction of the relevance model and $f_c(\cdot)$ returns the category predicted from the query. $N$ is the size of the evaluation dataset and $N_{R}$ is the size of the retrieval set $R_i$. $c_{i,j}$ is the category of the retrieved product $p_{i,j}$.
We also apply a Recall@K metric to evaluate the retrieval performance, computed as:
\begin{equation}
\text{Recall@K}=\frac{1}{N}\sum_{i=1}^{N}\mathbb{I}(\exists t | t\in R_{i,K} \wedge t\in T_i),
\end{equation}
where $\mathbb{I}(\cdot)$ is an indicator function.
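For clarity, the Recall@K computation can be sketched as follows (variable names are ours):
\begin{verbatim}
def recall_at_k(retrieved, targets):
    """retrieved: one list of top-K product ids per query;
    targets: one set of clicked-product ids per query. A query
    counts as a hit if the two sets intersect."""
    hits = sum(bool(set(r) & t) for r, t in zip(retrieved, targets))
    return hits / len(retrieved)

retrieved = [[3, 7, 9], [1, 2, 5], [4, 8, 6]]
targets = [{9, 11}, {6}, {0}]
print(recall_at_k(retrieved, targets))   # -> 0.333...
\end{verbatim}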
\textbf{Online Evaluation Metrics.} We use the number of transactions (denoted as \#Trans) and GMV (Gross Merchandise Volume, i.e., the total value of sales) as online evaluation metrics.
For users with few recorded consuming behaviors, these two metrics are denoted as $\text{\#Trans}_n$ and $\text{GMV}_n$, respectively.
\subsection{Offline Experimental Results}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.43\textwidth]{fig/weight2.pdf}
\caption{Attention weights on titles and images across different product categories. MAKE vs. MAKE w/o MA.
}
\label{fig:weight}
\end{figure}
\subsubsection{Comparison with Baseline Methods}
We adopt CLIP \cite{radford2021learning}, FashionBERT \cite{gao2020fashionbert} and CommerceMM \cite{yu2022commercemm} as strong baseline pre-training models; the latter two are designed for downstream tasks in e-commerce scenarios. FashionBERT \cite{gao2020fashionbert} consists of a single encoder, while CLIP \cite{radford2021learning} and CommerceMM \cite{yu2022commercemm} have a two-tower structure.
All baseline methods are pre-trained on the same training dataset.
As shown in Table \ref{tab:ablation}, our proposed method \textbf{MAKE} outperforms all baseline methods on the text-to-multimodal retrieval task of Taobao Search.
\subsubsection{Modal Adaptation Module. } The \textbf{MA} module, associated with the Query-Product Classification (QPC) task, is proposed to conduct modal fusion and learn appropriate attention on text/image modals across different products.
The comparison from \textbf{MAKE w/o MA} to \textbf{MAKE} reveals that the \textbf{MA} module significantly improves the relevance and the recall hitrate by 1.88\% and 4.59\%.
To further evaluate the effect of \textbf{MA} module, we collect attention weights on titles/images across different product categories from \textbf{MAKE} and \textbf{MAKE w/o MA}, as shown in Figure \ref{fig:weight}.
After introducing the \textbf{MA} module, for vision-dominant categories the network pays more attention to images ($79.4\%$ for ornaments), while for text-dominant categories it focuses more on titles ($67.5\%$ for laptops). With the \textbf{MA} module, the model assigns proper attention weights to the text and image modalities.
\begin{table}[htbp]
\centering
\begin{tabular}{lccc}
\hline
Methods & $P_{rel}\uparrow$ & $P_{cate}\uparrow$ & Recall@$K\uparrow$ \\\hline
FashionBERT \cite{gao2020fashionbert} & 0.8385 & 0.8190 & 0.3867 \\\hline
CLIP \cite{radford2021learning} & 0.8648 & 0.8423 & 0.4675\\\hline
CommerceMM \cite{yu2022commercemm} & 0.8710 & 0.8653 & 0.4937 \\\hline
MAKE & \textbf{0.9014} & \textbf{0.9295} & \textbf{0.6088} \\\hline
MAKE w/o MA & 0.8826 & 0.8910 & 0.5629 \\\hline
MAKE w/o KE & 0.8922 & 0.8803 & 0.5781 \\\hline
\end{tabular}
\caption{Offline experimental results and ablation studies.
}
\label{tab:ablation}
\end{table}
\subsubsection{Keyword Enhancement Mechanism}
The \textbf{KE} mechanism is a modified negative-sampling scheme paired with the QPM loss.
It enforces the model to jointly optimize similar queries and avoids the false-negative sampling of the popular IBNS mechanism. Comparing \textbf{MAKE} to \textbf{MAKE w/o KE} shows that the \textbf{KE} mechanism effectively strengthens the query representations by reducing the distance between similar queries and the same product. The \textbf{KE} mechanism also helps alleviate the semantic imbalance between user queries and product titles.
\subsection{Online A/B Tests}
We deploy our pre-training method \textbf{MAKE} on Taobao Search, where it provides relevant candidates in addition to the prior three-channel retrieval system, which includes embedding-based learning, collaborative filtering, and inverted-index matching.
As shown in Table \ref{tab:AB}, our method outperforms the prior retrieval system
by improving the overall relevance ($+2.20\%$) of the product candidates. We also report 14-day average online improvements of \textbf{MAKE} on GMV and \#Trans. As shown in Table \ref{tab:AB}, our proposed method improves GMV and \#Trans by $0.79\%$ and $0.37\%$, respectively. Considering the large number of transactions in Taobao Search, \textbf{MAKE} facilitates hundreds of thousands of transactions per day. Besides, \textbf{MAKE} obtains larger performance gains ($2.01\%$ on GMV and $1.58\%$ on \#Trans) on inactive and new users, amounting to tens of millions of users per day. Compared to these significant gains, the additional computational cost is negligible (\textbf{2 ms}). These results demonstrate that \textbf{MAKE} significantly improves the overall efficiency of Taobao Search.
\begin{table}[htbp]
\centering
\small
\begin{tabular}{l|cccccc}
\hline
Methods & GMV & \#Trans & $\text{GMV}_n$ & $\text{\#Trans}_n$ & $P_{rel}$ & Time Cost \\\hline
MAKE & +0.79\% & +0.37\% & +2.01\% & +1.58\% & +2.20\% & +2 ms \\\hline
\end{tabular}
\caption{Online A/B tests of MAKE.
}
\label{tab:AB}
\end{table}
\section{Conclusion}
In this paper, we propose a novel vision-language pre-training method (\textbf{MAKE}) with a three-encoder structure tailored for the text-to-multimodal retrieval task of Taobao Search. We propose a Modal Adaptation module to perform cross-modal fusion and learn effective product representations. We further design a Keyword Enhancement mechanism to solve the semantic imbalance and false-negative sampling issue to improve query representations.
Offline ablation studies and online A/B tests verify the effectiveness of our method \textbf{MAKE}. We have deployed \textbf{MAKE} online, where it serves hundreds of millions of users every day and greatly improves transaction efficiency in e-commerce.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
It is commonly accepted that the parent compounds of the
superconducting high-$T_c$ cuprates are antiferromagnetic
charge-transfer insulators and that superconductivity emerges upon
doping either with holes or electrons.\cite{Imada, ARM} There are
some similarities but also differences between hole- and
electron-doped cuprates. One similarity is that all cuprate
superconductors have a perovskite structure with the common
feature of square planar copper-oxygen planes separated by
rare-earth oxide (charge-reservoir) layers. On the other hand,
they differ in that the hole-doped cuprates have a $T$ structure
characterized by the presence of apical oxygen above and below the
CuO$_{2}$ planes, while the electron-doped cuprates have a
$T^\prime$ structure, where two sites are occupied by oxygen: O(1)
in the CuO$_{2}$ planes and O(2) within the rare-earth oxide
layers, with no apical oxygen located directly above the copper in
the CuO$_{2}$ plane, as shown in the inset of Fig. \ref{rho}. This
implies that the $T$ structure has six oxygen atoms, two of which
are in the apical positions, surrounding each copper (octahedrally
coordinated), while in the $T^\prime$ structure only four oxygens
surround each copper (square-planar coordinated).
There is also a large difference between the phase diagrams of hole-
and electron-doped cuprates. Whereas the antiferromagnetic phase
exists only over a small doping range (0 -- 4\,\%) in hole-doped
cuprates, it is more robust in electron-doped cuprates and persists
to higher doping levels (0 -- 11\,\%). Superconductivity occurs in
a doping range that is almost five times narrower for electron-doped
cuprates (11 -- 17\,\%) as compared to the hole-doped counterparts
(4 -- 32\,\%). While consensus on the phase diagram exists for the
hole-doped side, the situation for the electron-doped cuprates is
less obvious.
As early as in 1995, Brinkmann \textit{et al.}\cite{Brinkmann}
demonstrated that the superconductivity window in
Pr$_{2-x}$Ce$_x$CuO$_4$ single crystals can be extended down to a
doping level of 4\,\% by a special oxygen reduction and annealing
technique. Improved deposition and annealing techniques have
recently made it possible to produce thin films of electron-doped
parent compounds ($R_2$CuO$_4$, $R$ = Pr, Sm, Nd, Eu, and Gd) with
$T^\prime$ structure that, in fact, are metallic and
superconducting at low temperatures.\cite{Matsumoto1, Matsumoto2,
Matsumoto3, Matsumoto4, Yamamoto, Ikeda, YKrock1}
This sharp contradiction to earlier results is explained as being
due to the fact that although apical oxygen should not exist in the
ideal $T^\prime$ structure, in practice (especially in bulk samples)
it is usually not completely removed.\cite{YKrock1} This apical
oxygen in the $T^\prime$ structure acts as a very strong scatterer
and pair breaker.\cite{Sekitani} In contrast to bulk samples, the
large surface-to-volume ratio of thin films along with their tenuity
itself is advantageous in achieving the proper $T^\prime$ structure
with no apical oxygen.
The reported superconductivity in undoped cuprates puts a question
mark on the applicability of the charge-transfer-insulator picture
to electron-doped cuprates.\cite{Naito} Remarkably, recent
calculations on the basis of a newly developed first-principles
method show a radical difference between the parent compounds with
$T$ and $T^\prime$ structures.\cite{Das, Weber1, Weber3} The first
are found to be charge-transfer insulators, while the latter,
e.g., Pr$_{2}$CuO$_{4}$, are essentially metallic and their
apparent insulating nature may originate from magnetic long-range
order (Slater transition) which is competing with the metallic
ground state.\cite{Calder}
One should note, however, that it is still a question whether or
not $T^\prime$ superconductors are truly undoped or are still
doped by possible oxygen vacancies in the $R$O layers during the
reduction process. Since bulk $T^\prime$-$R_2$CuO$_{4}$
superconducting samples have not yet been synthesized, direct
measurements of the oxygen distribution are not available so far.
Nevertheless, neutron diffraction on Nd$_{2-x}$Ce$_{x}$CuO$_{4+y}$
single crystals shows that it is mostly apical oxygen which is
removed during reduction.\cite{Schultz, Radaelli} The synthesis of
bulk samples of a nominally undoped
$T^\prime$-(La,Sm)$_2$CuO$_{4}$,\cite{Ueda, Asai} and of heavily
underdoped
Pr$_{1.3-x}$La$_{0.7}$Ce$_{x}$CuO$_{4+\delta}$\cite{TAdachi} gives
hope that the oxygen stoichiometry might be determined in the near
future for this class of superconductors.
\begin{figure}[t]
\centering
\includegraphics[width=7 cm,clip]{rho.eps}
\caption{(Color online) Temperature dependence of the in-plane dc
resistivity $\rho_{dc}$ of a MBE-grown $T^\prime$-PCO film [open
(black) circles] together with fits (lines) discussed in
Sec.~\ref{sec:res}. Schematic diagrams of the $T$ and $T^\prime$
structures are shown as an inset.} \label{rho}
\end{figure}
In this paper, we do not touch the issue of oxygen stoichiometry;
instead we present a comprehensive broadband optical investigation
of Pr$_2$CuO$_x$ (PCO) films with $x\simeq 4$. As argued above, it
is impossible to rule out doping by oxygen vacancies (if this is
the case, $x$ differs from 4 in our films). However, we will show
that our findings can also be consistently understood within the
picture, where superconductivity develops in undoped PCO (i.e.,
$x=4$). We demonstrate that the available PCO samples do show a
metallic as well as a superconducting optical response. We find
that this response can be reconciled with $d$-wave
superconductivity and the density of the superconducting
condensate is rather low. We do not observe any indication of a
normal-state pseudogap. All this supports ideas that the standard
charge-transfer-insulator picture might not be applicable to PCO.
\section{Experiment}
PCO films were grown by molecular beam epitaxy (MBE)
\cite{Yamamoto} on a (110)-oriented 0.35 mm thick DyScO$_{3}$
substrate. The phase purity of these films was confirmed by x-ray
diffraction. The films were 100 nm thick with the $c$ axis
oriented perpendicular to the film's surface. Direct-current (dc)
resistivity was measured from 4 to 300 K by a standard four-probe
method.
Near-normal reflectivity from 40 to 55000 cm$^{-1}$ (5 -- 6800 meV)
was measured using a combination of two Fourier-transform
spectrometers (Bruker IFS 113v and Bruker IFS 66v/S) covering
frequencies from 40 to 22000 cm$^{-1}$ and a grating spectrometer
for room-temperature reflectivity measurements from 8000 to 55000
cm$^{-1}$. In order to obtain the absolute reflectivity of the sample,
we used an \textit{in situ} gold (for the infrared) or silver (for
the visible) overfilling technique.\cite{gold} With this
technique, we achieved an absolute accuracy in the reflectivity
better than 3 \% and the relative error between different
temperatures was of the order of 0.5 \%. The room temperature
reflectivity in the ultraviolet was measured against an aluminum
mirror and then corrected for the absolute reflectivity of
aluminum.
Normal-incident phase-sensitive transmission at 210 and 250 GHz (7
and 8.3 cm$^{-1}$) was measured as a function of temperature with a
spectrometer employing backward-wave oscillators (BWOs) as sources
of coherent radiation.\cite{Kozlov} A Mach-Zehnder interferometer
arrangement of the spectrometer allows measurements of both the
intensity and the phase shift of the wave transmitted through the
sample. Using the Fresnel optical formulas for the complex
transmission coefficient of the two-layer system, the film's
complex conductivity as well as the penetration depth were
directly obtained from these measurements. This experimental
method has been previously applied to a large number of different
superconductors.\cite{Dressel} Technical details of our
experimental procedure can be found in Ref.~\onlinecite{Fischer}.
Optical properties of bare substrates were obtained from
measurements performed in the same frequency and temperature
windows as for the thin-film samples.
We investigated two thin films of PCO. The results obtained on
the two films do not show any significant differences.
Hereafter we present results for one of them.
\section{Resistivity}
\label{sec:res}
Figure~\ref{rho} shows the temperature dependence of the
resistivity of the PCO film. The resistivity decreases
monotonically with decreasing temperature down to $T_c =$ 27 K.
The width of the superconducting transition is 0.8 K. The
temperature dependence of the resistivity can be described by the
power law
\begin{equation}
\rho(T) = \rho_{0} + AT^{n},
\end{equation}
with $\rho_{0}$ = 0.151 m$\Omega$cm, $A = 10^{-5}$
m$\Omega$cmK$^{-n}$, and $n = 2$ from $T_c$ up to 150 K. The
quadratic temperature dependence is in agreement with earlier
reports on superconducting Nd$_{2-x}$Ce$_x$CuO$_4$ films and
single crystals for temperatures below 200 K.\cite{Tsuei, Onose}
But, unlike Nd$_{2-x}$Ce$_x$CuO$_4$, where a slightly reduced
power law with $n$ ranging from 1.5 to 1.7 is observed above 200
K, we find a linear temperature dependence in PCO above 210 K. A
quadratic temperature dependence is often taken as evidence for
Fermi-liquid behavior.\cite{Abrikosov, Pines}
\section{Optical properties}
\subsection{Raw experimental data}
\label{subsec:a}
\begin{figure}[t]
\centering
\includegraphics[width=8 cm,clip]{reflectivity.eps}
\caption{(Color online) Reflectivity of the PCO thin film on a
DyScO$_{3}$ substrate as a function of frequency at various
temperatures listed in the legend. The \textbf{E} vector of the
probing radiation lies in the $ab$ plane of the film (and parallel
to the [001] axis of the substrate). The inset shows the
reflectivity of the bare substrate at 4 K.} \label{reflectivity}
\end{figure}
Figure~\ref{reflectivity} shows the as-measured in-plane
($ab$-plane) reflectivity of the PCO film on a DyScO$_{3}$
substrate versus frequency at various temperatures. At low
frequencies, the reflectivity is quite high and increases with
decreasing temperature, typical for metals. A number of phonon
modes from the substrate and the film appear at frequencies below
700 cm$^{-1}$. The maxima seen above some 10000 cm$^{-1}$ can be
attributed to interband transitions.
\begin{figure}[b]
\centering
\includegraphics[width=4 cm,clip]{submm_raw.eps}
\caption{(Color online) Examples of raw (i.e., not normalized to
the empty-channel measurements) phase-sensitive transmission
measurements at 8.3 cm$^{-1}$. Power transmission $Tr$ (top panel)
and phase shift (middle panel) of the wave transmitted through the PCO
film on the DyScO$_{3}$ substrate are shown as a function of
temperature together with a close-up of the dc resistivity
measurements around the superconducting transition (bottom panel).
The dc resistivity measurements were performed twice: on the fresh
film [solid (red) symbols] and after completion of all optical
measurements [open (blue) symbols]. The thin vertical line
indicates $T_{c}$.} \label{submm_raw}
\end{figure}
The changes in the reflectivity spectra induced by the
superconducting transition are not well resolved within our
experimental accuracy, because of the relatively high
transparency of the film. Thus, the results obtained from the
reflectivity measurements are discussed only for the normal state
in the course of this article.
The formation of the superconducting condensate can instead be
directly seen by use of our low-frequency phase-sensitive
transmission measurements. In Fig.~\ref{submm_raw} we present
examples of these measurements. The onset of the transition into
the superconducting state reveals itself immediately as a
reduction of the temperature-dependent power transmission $Tr$ and
the phase shift.\cite{Tr_phase} The penetration depth and the
superfluid density, obtained from these measurements, are
discussed in Sec. \ref{subsec:f}.
\subsection{Normal-state optical conductivity}
\label{subsec:b}
By applying a thin-film fitting procedure, described in detail in
App.~\ref{sec:ThinFlims}, we extract the film's complex optical
conductivity, $\sigma = \sigma_{1} + i\sigma_{2}$, from our
reflectivity spectra. Neither BWO data nor values of the dc
conductivity in the normal state have been utilized within this
fitting procedure.
The real part of the PCO optical conductivity obtained by this
modeling is shown in Fig.~\ref{conductivity} for various
temperatures indicated in the legend. As the lowest frequency of
the reflectivity measurements was 40 cm$^{-1}$ the data obtained
from this analysis below this threshold frequency are to be
considered as extrapolations and, thus, are shown as dashed lines.
Nevertheless, the zero-frequency limit of $\sigma_{1}$ evolves in
accordance with $\sigma_{dc}$ at all temperatures in the normal
state (bold points on the vertical left-hand axis of
Fig.~\ref{conductivity}).
\begin{figure}[t]
\includegraphics[width=8 cm, clip]{conductivity.eps}
\caption{(Color online) Real part of the optical conductivity of
PCO as a function of frequency for various temperatures listed in
the legend. Dots on the left-hand axis of the main panel represent
the dc-conductivity values.} \label{conductivity}
\end{figure}
At all temperatures above $T_c$, the optical conductivity of PCO
can be disentangled into a Drude component and a set of Lorentz
oscillators, representing a broad far-infrared (FIR) band, narrow
FIR peaks, a mid-infrared (MIR) band, and interband-transition
bands at the highest frequencies:
\begin{eqnarray}
\sigma(\omega) &=& \textrm{Drude + FIR band + FIR peaks +} \nonumber\\
&&\textrm{MIR band + interband transitions.} \label{decomp}
\end{eqnarray}
This becomes particularly evident from Fig.~\ref{decomposition}
where all these contributions are shown for 30 and 300 K. (We used
the Drude-Lorentz fitting procedure as described in
App.~\ref{sec:ThinFlims}. The FIR and MIR absorption bands have
been modeled with two Lorentzians each and we used three
Lorentzians for the interband transitions.)
We attribute the narrow and relatively weak peaks at 130 cm$^{-1}$,
304 cm$^{-1}$, 343 cm$^{-1}$, and 500 cm$^{-1}$ to infrared-active
phonon modes. Their frequency positions agree well with the
positions of strong phonon modes reported for nonsuperconducting
Pr$_2$CuO$_{4}$ by Homes \textit{et al.}\cite{Homes}
It is worth noting here that in addition to the well-pronounced
phonons characteristic of the $T^\prime$ structure, other modes,
which are not allowed by the crystal structure in the $T^\prime$
phase, have been observed by Homes \textit{et al.}\cite{Homes} The
authors have elaborated on the possible origin of these additional
modes but concluded eventually that some impurities and/or
contributions from different phases may play a role. This
conclusion is perfectly in line with claims made in Refs.
\onlinecite{Matsumoto1, Matsumoto2, Matsumoto3, Matsumoto4,
Yamamoto, Ikeda, YKrock1} that the complete removal of all apical
oxygen is extremely challenging and absolutely necessary for
superconductivity in undoped $T^\prime$ cuprates.
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth, clip]{decomp_sw.eps}
\caption{(Color online) Decomposition of the real part of the
optical conductivity, $\sigma_1(\omega)$, at 30 K (left-hand
panel) and 300 K (right-hand panel). Inset: Spectral weight of
PCO as a function of the cutoff
frequency $\omega_c$ for various temperatures quoted in the
inset's legend.} \label{decomposition}
\end{figure}
The bump at about 300\,cm$^{-1}$\ in the $\sigma_{1}(\omega)$ spectra
can be attributed to electron localization. Such behavior is known
for the superconducting cuprates\cite{Basov2} and is typical for
so-called bad metals where a certain degree of disorder is
inherently present.\cite{Emery, Mutou}
Using dynamical mean-field theory (DMFT) and iterated perturbation
theory, Mutou and Kontani demonstrated\cite{Mutou} that in a
strongly correlated metallic state, realized by a large on-site
repulsion energy, the optical conductivity develops a Drude peak
centered at $\omega=0$ at low temperatures and a shift of this
peak to finite frequencies above the Ioffe-Regel-limit temperature
$T_{IR}$, although the resistivity increases monotonically even at
$T > T_{IR}$. A temperature evolution of far-infrared
$\sigma_{1}(\omega)$, similar to the one observed here, has been
reported for underdoped Nd$_{2-x}$Ce$_x$CuO$_4$,\cite{Onose}
underdoped La$_{2-x}$Sr$_x$CuO$_4$,\cite{Takenaka} and zinc-doped
YBCO [YBa$_{2}$(Cu$_{1-x}$Zn$_{y})_{4}$O$_{8}$].\cite{Basov3}
The two highest-frequency absorption peaks at around 30000 --
40000 cm$^{-1}$ (4 -- 5 eV) are typical for the cuprates and
represent transitions into a band formed mostly by oxygen $p$
orbitals (see, e.g., Ref.~\onlinecite{Weber1}). The band slightly
below 20000 cm$^{-1}$ (i.e., around 1.5 -- 2 eV) is very similar to
the upper Hubbard band. Such absorption bands have been observed,
for example, in optical-conductivity studies of insulating undoped
Pr$_{2}$CuO$_{4}$\cite{Arima} and Nd$_{2}$CuO$_{4}$.\cite{Onose}
It is important to realize that the presence of such a band does
not necessarily require a charge-transfer gap. Moreover, LDA +
DMFT calculations\cite{Weber1} demonstrated that such a band may
perfectly coexist with a quasiparticle absorption peak (i.e., with
a metallic state) in the case of undoped Nd$_{2}$CuO$_{4}$ with a
perfect $T^\prime$ structure.
\subsection{Spectral weight}
\label{subsec:c}
We can trace the temperature evolution of each term in
Eq.~\eqref{decomp}. The mid-infrared band is almost invariant with
temperature, while the shape of the lower-frequency spectrum
(consisting of the Drude contribution and the FIR band) changes.
While at temperatures below 150 K the peak in $\sigma_{1}$ is
centered at $\omega = 0$ (the Drude term dominates), it shifts to
finite frequencies at $T > 150$~K. This shift is an indication of
the breakdown of the simple Drude-metal picture. It suggests a
continuous change in the charge transport from the low-temperature
coherent (Drude) to high-temperature incoherent
regimes.\cite{Lobo}
In Fig.~\ref{sw_terms}, we plot the spectral weight (SW) of the
Drude term (panel a), the FIR band (panel b), the sum of the two
(panel c), and, for completeness, the MIR band (panel d). (The
spectral weight of each term in Eq.~\eqref{EpsDL} is just the
squared plasma frequency of the term.) As one can see from
Fig.~\ref{sw_terms}, the spectral weights plotted in panels (c)
and (d) are temperature independent, only the Drude and the
FIR-band spectral weights depend on temperature. It is obvious
that the spectral weight of the FIR band grows at the expense of
the Drude component with increasing temperature. We suggest that
this spectral weight transfer between the Drude and the FIR band
is related to the change in the transport properties, namely to
the change from the quadratic to the linear temperature dependence
of $\rho(T)$ which happens at comparable temperatures
(Fig.~\ref{rho}).
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth, clip]{sw_terms.eps}
\caption{(Color online) Temperature dependence of the spectral
weight: of the Drude term (a), the FIR band (b), the sum of the
two (c), and the MIR band (d) following the decomposition
according to Fig.~\ref{decomposition}.} \label{sw_terms}
\end{figure}
A qualitative picture of the spectral weight redistribution with
temperature in PCO can be studied by means of the total spectral
weight:
\begin{equation}
SW(\omega_{c}) =
8\int_{0}^{\omega_{c}}\!d\omega\,\sigma_{1}(\omega).
\end{equation}
It is plotted as a function of the cutoff frequency $\omega_c$ in
the inset of the right-hand panel of Fig.~\ref{decomposition}. At
low frequencies $\omega_c$, $SW(\omega_c)$ increases with
decreasing temperature, and it increases with $\omega_c$ for all
temperatures, finally developing an upturn around 10000 -- 15000
cm$^{-1}$. This upturn is due to interband transitions. Up to
15000 cm$^{-1}$, the spectral weight shows a temperature
dependence. Only at higher frequencies do the spectral-weight
curves merge, implying that above 15000 cm$^{-1}$ ($\sim1.9\,$eV)
the spectral weight is conserved as temperature changes. In other
correlated-electron materials, the spectral weight is known to be
conserved also only at frequency scales of a few eV.\cite{Imada,
Qazilbash} Thus, our results indicate the presence of electron
correlations in PCO.
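Numerically, $SW(\omega_c)$ is obtained by integrating the tabulated $\sigma_1(\omega)$ up to the cutoff. A sketch with a toy Drude-like spectrum (arbitrary units; not our measured data) reads:
\begin{verbatim}
import numpy as np

def spectral_weight(omega, sigma1, omega_c):
    """SW(omega_c) = 8 * int_0^omega_c sigma1 d(omega),
    trapezoidal rule on a tabulated spectrum."""
    m = omega <= omega_c
    w, s = omega[m], sigma1[m]
    return 8.0 * np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(w))

omega = np.linspace(0.0, 15000.0, 3001)      # cm^-1
sigma1 = 1.0 / (1.0 + (omega / 500.0)**2)    # toy Drude-like shape
print(spectral_weight(omega, sigma1, 9400.0))
\end{verbatim}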
In order to estimate the spectral weight and the plasma frequency
of the itinerant charge carriers only, we set $\omega_c / 2\pi=
9400$ cm$^{-1}$, thus cutting off the contribution from the
interband transitions. This gives a plasma frequency of 17700
cm$^{-1}$ (2.19 eV), a value comparable to those for other
high-$T_{c}$ cuprates. \cite{Onose, Uchida, Lee} Using the
relation between the charge-carrier density $n$ and the plasma
frequency ($\omega_{p}^{2} = 4\pi ne^2/m$), $n$ is estimated to
be $\sim 3.53 \times 10^{21}\,$cm$^{-3}$, assuming $m$ to be
equal to the free-electron mass $m_0$.
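This estimate can be cross-checked numerically; the following sketch uses the SI form of the plasma-frequency relation, $n=\varepsilon_0 m_0\omega_p^2/e^2$:
\begin{verbatim}
import numpy as np

eps0, m0, e = 8.854e-12, 9.109e-31, 1.602e-19   # SI units
c = 2.998e10                                    # cm/s
omega_p = 2.0 * np.pi * c * 17700.0             # rad/s
n = eps0 * m0 * omega_p**2 / e**2               # m^-3
print(n * 1e-6)                                 # ~3.5e21 cm^-3
\end{verbatim}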
\subsection{Extended-Drude analysis}
\label{subsec:d}
To get further insight into the physics behind the optical
response of Pr$_2$CuO$_{4}$, we analyze the optical conductivity data in
terms of the extended (or generalized) Drude model which is widely
used for analysis of the optical properties of correlated electron
systems.\cite{JWAllen, Puchkov} The complex conductivity in this
model is given by
\begin{equation}
\sigma(\omega) =
\frac{1}{4\pi}\frac{\omega_{p}^{2}}{\Gamma(\omega)
-i\omega[1+\lambda(\omega)]},
\label{ext_Drude1}
\end{equation}
where $[1+\lambda(\omega)] = m^{*}(\omega)/m$ and
$\tau_{op}^{-1}(\omega) \equiv\Gamma(\omega)$ are the
frequency-dependent mass renormalization factor and the optical
scattering rate, respectively. Inverting Eq.~\eqref{ext_Drude1}
gives
\begin{equation}
1+\lambda(\omega) =
\frac{\omega_{p}^{2}}{4\pi}\frac{\sigma_{2}(\omega)}{\omega|\sigma(\omega)|^{2}};
\quad
\Gamma(\omega) =
\frac{\omega_{p}^{2}}{4\pi}\frac{\sigma_{1}(\omega)}{|\sigma(\omega)|^{2}}.
\label{ext_Drude2}
\end{equation}
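For illustration, the inversion of Eq.~\eqref{ext_Drude2} is a pointwise operation on the complex conductivity; the following sketch (our own example) verifies it on a pure Drude response, for which $1+\lambda(\omega)=1$ and $\Gamma(\omega)=\mathrm{const}$:
\begin{verbatim}
import numpy as np

def extended_drude(omega, sigma, omega_p):
    """Return (1 + lambda, Gamma) from the complex conductivity,
    all quantities in consistent (Gaussian) units."""
    pref = omega_p**2 / (4.0 * np.pi)
    abs2 = np.abs(sigma)**2
    return (pref * sigma.imag / (omega * abs2),
            pref * sigma.real / abs2)

omega = np.linspace(10.0, 1000.0, 100)
omega_p, gamma0 = 2000.0, 50.0
sigma = (omega_p**2 / (4 * np.pi)) / (gamma0 - 1j * omega)
one_plus_lam, gamma = extended_drude(omega, sigma, omega_p)
print(one_plus_lam[:3], gamma[:3])   # -> ones and gamma0
\end{verbatim}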
The frequency-dependent optical scattering rate, obtained on the
basis of Eq.~\eqref{ext_Drude2} with $\omega_{p}$ = 2.19 eV, is
displayed in Fig. \ref{tau} as a function of frequency for various
temperatures listed in the legend. At $T < 150$~K, the general
trend in $\tau_{op}^{-1} (\omega)$ is to increase with frequency,
but this increase is nonmonotonic. This is due to phonons and the
localization mode discussed above. This mode reveals itself as a
bump at around 230 cm$^{-1}$ ($\sim$ 28 meV) in the optical
scattering rate. At $T > 150$~K, the scattering rate increases
rapidly as $\omega \rightarrow 0$. This is because at high
temperatures the localization mode dominates the Drude
contribution, as discussed above in relation to the
$\sigma(\omega)$ spectra.
\subsection{Eliashberg analysis and electron-boson spectral density}
\label{subsec:e}
The optical scattering rate is according to App.~\ref{sec:MaxEnt}
closely related to the electron-exchange boson interaction
spectral density $I^2\chi(\omega)$ [Eq.~\eqref{eq:InvTau}] which
is at the core of normal and superconducting state Eliashberg
theory.\cite{ESchach3} This theory can be applied to calculate
various normal and superconducting state properties and,
consequently, it is of quite some interest to gain knowledge on
$I^2\chi(\omega)$ by inverting $\tau^{-1}_{op}(\omega)$. This will
allow a more detailed analysis of our experimental results.
\begin{figure}[tb]
\centering
\includegraphics[width=8 cm,clip]{tau.eps}
\caption{(Color online) Panel a: The experimental optical
scattering rate in PCO for various temperatures listed in the
legend. Panel b: The experimental $\tau^{-1}_{op}(\omega)$ for
$T=30\,$K [solid (red) curve] and the Eliashberg-theory result
[dashed (black) curve] with an impurity parameter $t^+=15\,$meV,
see text. Inset in the panel: The electron-boson spectral density,
$I^2\chi(\omega)$, at $30\,$K as a result of a straightforward
inversion of the experimental $\tau^{-1}_{op}(\omega)$.}
\label{tau}
\end{figure}
It was also demonstrated by Schachinger {\it et
al.}\cite{ESchach1} that any nonzero contribution to
$I^2\chi(\omega)$ at some energy $\omega$ will result in an
increase of the optical scattering rate. Consequently the bump
observed in the optical scattering rate of PCO (Fig.~\ref{tau}) at
around $230\,$cm$^{-1}$\ ($\sim28\,$meV) cannot be caused by
electron-exchange boson interaction and is, therefore, not part of
the conducting-electron background. Nevertheless, we concentrate
on the normal-state $T=30\,$K data and perform a straightforward
inversion using the maximum-entropy procedure outlined in
App.~\ref{sec:MaxEnt} by inverting Eq.~\eqref{eq:InvTau} together
with the kernel Eq.~\eqref{eq:Shulga}. As the temperature and
frequency independent impurity scattering rate
$\tau^{-1}_{imp}=2\pi t^+$ is not known, this is an iterative
process which is performed by slowly increasing $t^+$ until a
smooth function $I^2\chi(\omega)$ with no pronounced spikes in the
immediate vicinity of $\omega=0$ has been found. This resulted in
$\tau^{-1}_{imp}\sim100\,$meV ($t^+=15\,$meV) which is quite
substantial but in good agreement with what has been reported for
the system PCCO.\cite{ESchach} Furthermore, we restricted the
frequency range of the inversion to $\omega\in[0,300]\,$meV
because between $100\le\omega\le 300\,$meV,
$\tau^{-1}_{op}(\omega)$ develops only a moderate increase with
energy.
It has to be pointed out, though, that Eq.~\eqref{eq:Shulga} is
only approximate. Therefore, we use the spectrum $I^2\chi(\omega)$
which resulted from the inversion process to calculate the
quasiparticle self-energy using the full normal-state
infinite-bandwidth Eliashberg equations. The complex infrared
conductivity $\sigma(\omega,T)$ is then calculated using a Kubo
formula\cite{lee} and the resulting optical scattering rate is
calculated from Eq.~\eqref{ext_Drude2}. A comparison of this
result with the data requires some adaptation of the original
$I^2\chi(\omega)$ spectrum in order to achieve the best possible
agreement with the data. This final spectrum is presented in the
inset of Fig.~\ref{tau}. It shows a double-peak structure which is
followed by a deep valley and a hump at higher frequencies. The
low-energy peak is at $\sim 11\,$meV and the high-energy peak can
be found at $\sim 50\,$meV. Similar double-peak spectra have been
reported for PCCO by Schachinger {\it et al.}\cite{ESchach} and
for La$_{1.83}$Sr$_{0.17}$CuO$_4$ (a hole-doped cuprate) by Hwang
{\it et al.}\cite{Hwang} both with a less pronounced low-energy
peak. It is most likely that the bump around $\sim 28\,$meV in the
PCO $\tau^{-1}_{op}(\omega)$ data is responsible for this
overemphasis of the low-energy peak in the PCO
$I^2\chi(\omega)$ spectrum.
We found that the mass renormalization factor, which can be
calculated as the first inverse moment of $I^2\chi(\omega)$, is
$\lambda=4.16$.
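Explicitly, in the standard Eliashberg convention this first inverse
moment reads
\begin{equation}
\lambda \, = \, 2\int_0^{\infty}d\Omega\,
\frac{I^2\chi(\Omega)}{\Omega}\,,
\end{equation}
so that the pronounced low-energy part of the spectrum contributes
substantially to the large value of $\lambda$.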
A comparison of theoretical and experimental
$\tau^{-1}_{op}(\omega)$ data for $T=30\,$K is presented in
Fig.~\ref{tau}(b). The solid (red) curve represents the data while
the dashed (black) curve presents the result of our theoretical
calculations on the basis of the $I^2\chi(\omega)$ spectrum shown
in the inset. (Of course, good agreement between theory and data
cannot be expected in the energy region around the bump at $\sim
28\,$meV.)
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth,clip]{lambda.eps}
\caption{(Color online). Panel (a): Superfluid density, $N_s(T) =
\lambda_L^2(0)/\lambda_L^2(T)$, as a function of temperature.
Panel (b): Low-temperature variation of the normalized London
penetration depth, $[\lambda_L(T)-\lambda_L(0)]/\lambda_L(0)$, as
a function of temperature squared. Data derived from the
millimeter-wave conductivity measurements at $7\,$cm$^{-1}$ and
$8.3\,$cm$^{-1}$ are presented by solid (red) circles and solid (blue)
triangles, respectively. Panel (a) contains for comparison the
temperature dependence $N_s(T) = 1-(T/T_c)^2$ [thin dashed
(purple) line] which is expected for a nodal superconductor and
$N_s=1-(T/T_c)^4$ [thin dashed-dotted (olive) line] for a fully
gapped superconductor. In panel (b) a quadratic power law of the
reduced penetration depth is indicated by a thin solid (black)
line. Inset: A universal relation between the zero-temperature
superfluid density, $N_{s0}$, and the product of normal-state dc
conductivity and $T_{c}$, as found in Ref.~\onlinecite{Homes1}
[straight solid (blue) line], reported ``error bars'' of this
relation [straight dashed (blue) lines], and the data obtained for
three PCO films investigated in this study and in
Ref.~\onlinecite{Pronin} [bold (red) circles]. The error bars are
shown for the least accurate data.} \label{lambda}
\end{figure}
\subsection{Penetration depth and Superfluid density}
\label{subsec:f}
The temperature dependence of the penetration depth of PCO was
obtained experimentally by means of phase-sensitive
millimeter-wave measurements.\cite{Kozlov} Using the Fresnel
optical formulas for the complex transmission coefficient, the
in-plane complex conductivity of the film was calculated directly
from the measured transmission coefficient and phase shift. The
penetration depth was then calculated from $\sigma_2$ by using
$\lambda_{L}=c/(4\pi\omega\sigma_{2})^{1/2}$, where $c$ is the
vacuum speed of light and $\omega$ is the frequency of the
incoming radiation. We found $\lambda_{L}(T\rightarrow0) \approx
1.6\pm 0.1$ $\mu$m.
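As an illustration of this last step (with a hypothetical value of
$\sigma_2$ of the right order of magnitude, not our measured
spectrum), in Gaussian units:
\begin{verbatim}
import numpy as np

c = 2.998e10                        # speed of light in cm/s

def lambda_L(nu_tilde, sigma2_cgs):
    """Penetration depth lambda_L = c/(4 pi omega sigma_2)^(1/2);
    nu_tilde in cm^-1, sigma_2 in Gaussian units (1/s)."""
    omega = 2*np.pi*c*nu_tilde      # angular frequency in rad/s
    return c/np.sqrt(4*np.pi*omega*sigma2_cgs)

sigma2 = 2.1e15                     # hypothetical sigma_2 in 1/s
                                    # (about 2300 Ohm^-1 cm^-1)
print(lambda_L(7.0, sigma2)*1e4)    # in microns; ~1.6 for this input
\end{verbatim}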
Figure~\ref{lambda}(a) presents the normalized superfluid density
$N_s(T) = n_s(T)/n_s(0) = \lambda_L^2(0)/\lambda_L^2(T)$ measured
at $7\,$cm$^{-1}$\ [solid (red) circles] and $8.3\,$cm$^{-1}$\ [solid (blue)
triangles] \textit{vs} temperature. We added curves for $N_s(T) =
1-(T/T_c)^2$ [thin dashed (purple) curve] and $N_s(T) =
1-(T/T_c)^4$ [thin dashed-dotted (olive) curve] for comparison.
They are supposed to mimic the temperature dependence of $N_s(T)$
for nodal ($d$-wave) and fully gapped ($s$-wave)
superconductivity, respectively. Obviously, the former curve
describes the data reasonably well, whereas the fully gapped
behavior can certainly be ruled out.
More sensitive is the low-temperature variation of the penetration
depth $[\lambda_L(T)-\lambda_L(0)]/\lambda_L(0)$ as a function of
the square of the temperature. The data are presented in
Fig.~\ref{lambda}(b) using the same symbols as in
Fig.~\ref{lambda}(a). For a nodal superconductor a quadratic power
law [thin solid (black) line] is to be expected and the data are
in agreement with this power law at lowest $T$.
In returning to the absolute value of the zero-temperature
penetration depth [$\lambda_{L}(T\rightarrow0) \approx 1.6\,\mu$m] we
conclude that the density of the superfluid condensate is very low
in PCO as compared to typical values for optimally doped cuprates,
where the penetration depth is smaller by a factor of 5 to
10.\cite{Basov1} For example, in optimally doped PCCO $\lambda_{L}
= 330$~nm.\cite{Zimmers1} This points toward a doping-related
nature of superconductivity in our PCO samples, as large values of
$\lambda_L(0)$ are typical for either underdoped or overdoped
regimes. However, the value of $\lambda_L(0)$ found here is so
large that, within this picture, our PCO sample must be far off
optimal doping. This is rather unlikely, because the critical
temperature of our film is definitely too high for a heavily
underdoped or overdoped sample. It is also important to note that
a possible degradation of $T_{c}$ during the optical measurements
can be excluded, because the dc resistivity
measured after the completion of all optical measurements does not
differ from the resistivity measured on the fresh film (see bottom
panel of Fig.~\ref{submm_raw}).
Furthermore, we would like to note that our second PCO sample had
a $\lambda_L(0) = 1.5 \pm 0.1$ $\mu$m and a very similar value of
$T_{c}$. Another PCO film prepared by a different method (metal
organic decomposition \textit{vs} MBE for the current films) was
reported by some of us to have $\lambda_L(0) = 1.55 \pm
0.25$~$\mu$m and $T_c = 27.5$~K.\cite{Pronin}
It is revealing that none of these samples fits the
supposedly universal relation reported by Homes \textit{et
al.},\cite{Homes1} which connects the superfluid density [or
$\lambda_L(0)$] of cuprates to the product of $T_{c}$ and
normal-state conductivity $\sigma_{dc}$. This relation was
obtained from an analysis of experimental data on doped samples,
e.g. on doped charge-transfer insulators, and works for all doping
levels. According to Zaanen\cite{Zaanen} the existence of this
universal relation reflects the fact that the normal state of the
doped cuprates is extremely viscous (dissipative).\cite{Zaanen1}
All the PCO samples studied here and in Ref.~\onlinecite{Pronin}
are far off this universal relation (see the inset in
Fig.~\ref{lambda}). It is tempting to explain this fact as a sign
of a possible departure from the charge-transfer-insulator picture
in PCO: while Homes' relation reflects the physics behind the
doped-insulator picture, it does not necessarily work any longer
whenever this picture loses its validity in cuprates.
Nevertheless, if our samples are indeed doped, it seems
reasonable to assume that they must be underdoped rather than
overdoped. This assumption is based on the method used to prepare
the samples (reduction of oxygen content), on the high value of
$T_{c}$, and on the low superfluid density.
\subsection{Absence of a pseudogap feature}
\label{subsec:g}
A well-known characteristic feature of the underdoped cuprates is
the occurrence of a pseudogap, i.e. a partial normal-state gap in
the electronic density of states. Such a pseudogap has been
observed in underdoped cuprates by many experimental
methods.\cite{Timusk} In optical experiments, the occurrence of a
pseudogap below a characteristic temperature can manifest itself
in different ways. In hole-doped cuprates, the pseudogap is seen
as a suppression of the low-frequency scattering rate.
\cite{Basov1} In electron-doped cuprates a suppression of the MIR
reflectivity is observed that corresponds to a reduced real-part
MIR optical conductivity and to a nonmonotonic behavior of the
so-called restricted spectral weight,
RSW$(\omega_{L},\omega_{H},T)$.\cite{Zimmers} Here, RSW is defined
as
\begin{equation}
RSW(\omega_{L},\omega_{H},T)=8\int_{\omega_{L}}^{\omega_{H}}
\!d\omega\,\sigma_{1} (\omega,T), \label{eq:4gc}
\end{equation}
where $[\omega_{L}$, $\omega_{H}]$ is the restricted frequency
range of interest.
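Numerically, the integral in Eq.~\eqref{eq:4gc} is a simple
quadrature; a minimal sketch on a hypothetical $\sigma_1$ grid reads:
\begin{verbatim}
import numpy as np

def rsw(omega, sigma1, w_lo, w_hi):
    """Restricted spectral weight, Eq. (eq:4gc):
    RSW = 8 * int_{w_lo}^{w_hi} dw sigma_1(w)."""
    m = (omega >= w_lo) & (omega <= w_hi)
    y, x = sigma1[m], omega[m]
    return 8.0*float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

omega  = np.linspace(0.0, 2000.0, 4001)    # cm^-1, hypothetical grid
sigma1 = 1.0e3/(1.0 + (omega/300.0)**2)    # toy Drude-like sigma_1
print(rsw(omega, sigma1, 100.0, 1000.0))
\end{verbatim}
In practice the result is normalized to its $T=300\,$K value, as in
Fig.~\ref{sw2}.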
It follows from the reflectivity and conductivity data presented
in Figs.~\ref{reflectivity} and \ref{conductivity} that such a
normal-state gap is not evident in the current system. The absence
of the normal-state pseudogap is further confirmed by the results
presented in Fig.~\ref{sw2}. In this figure, the restricted
spectral weight $\textrm{RSW}(\omega_L,\omega_H,T)$ normalized to
the spectral weight at $T=300\,$K is plotted as a function of
temperature for four frequency ranges $[\omega_L,\omega_H]$ as
quoted in the legend. It is evident that the normalized restricted
spectral weight displays in all four cases a monotonic temperature
dependence. This rules out the existence of a normal-state
pseudogap.\cite{Zimmers} In addition, the optical scattering rate
(see Fig.~\ref{tau}) shows (apart from the low-frequency features
due to localization, phonon modes, and MIR bands) no
temperature-dependent suppression that might (similarly to the
hole-doped cuprates) indicate a pseudogap-like
feature.\cite{Timusk}
\begin{figure}[t]
\centering
\includegraphics[width=7 cm,clip]{sw2.eps}
\caption{(Color online). Temperature dependence of the restricted
normalized spectral weight (RSW) with the integration boundaries
[see, Eq.~\eqref{eq:4gc}] indicated in the legend.} \label{sw2}
\end{figure}
Thus, we conclude that a pseudogap is absent in the PCO films, in
contrast to most underdoped high-$T_c$ cuprates. In our view, this
difference can be related to the absence of an antiferromagnetic
phase in PCO.\cite{YKrock1, Naito, YKrock, TAdachi} In
electron-doped cuprates, the magnetic order induces the pseudogap.
With doping, the N\'eel temperature decreases monotonically, leading
to a complete suppression of antiferromagnetic order at higher
doping accompanied by the disappearance of the pseudogap. The
absence of a pseudogap feature in PCO supports ideas, expressed in Refs.~%
\onlinecite{YKrock1, Sekitani, Naito}, about a strong suppression
or even absence of an antiferromagnetic insulating phase in
electron-doped cuprates, if the $T^\prime$ structure (i.e., no
apical oxygen) can be made to survive down to very low or even
zero doping levels.
\section{Conclusions}
In our broadband investigation of the optical response of thin PCO
films, we unveiled a low-frequency (FIR) collective charge
excitation attributed to localization effects. As a function of
temperature, the optical spectral weight is redistributed between
this mode and the zero-frequency-centered Drude peak: the weight
of the FIR mode grows with temperature at the expense of the
Drude-peak weight. Such a behavior has been reported in underdoped
cuprates and is typical for bad metals.
We found that the optical spectral weight remains
temperature-dependent up to 1.9 eV, which indicates strong
electron correlations in PCO.
We calculated the electron-boson spectral density and found the
mass renormalization factor, $\lambda=4.16$ at $30\,$K.
In the millimeter-wave data, we directly observed the formation of
the superconducting condensate at $T < T_{c}$. We obtained that
the temperature dependence of the London penetration depth at low
temperatures follows a quadratic power law. This indicates
$d$-wave symmetry which is typical for the cuprates.
Neither the experimental optical data nor their analysis reveal
any indication of normal-state gap-like features which could be
attributed to the existence of a normal-state pseudogap. This
observation is in line with a breakdown of the
charge-transfer-insulator picture in PCO.
\section{Acknowledgements}
We are very grateful to Dr. Hideki Yamamoto for his work on sample
preparation and for useful discussions.
\section{Introduction}
\label{sectionintroduction}
The discovery and exploration of the mechanism of mass generation and
electroweak symmetry breaking is one of the most important tasks of future
collider experiments. Within the Standard Model of elementary particle
physics (SM) electroweak symmetry breaking is realized by the Higgs
mechanism which postulates the existence of an electric neutral
elementary scalar field that interacts with all SM particles carrying
nonzero hypercharge and weak isospin. Through self-interactions this
Higgs field acquires a vacuum expectation value
$V=(\sqrt{2}\,G_F)^{-1/2}\approx 246$~GeV, $G_F$ being the Fermi
constant, which breaks the SU(2)$_L\times$U(1)$_Y$ symmetry at high
energies down to the electric U(1)$_{\rm em}$ below the symmetry
breaking scale and leads to nonzero masses of the elementary
particles. The Higgs mechanism also predicts that the Higgs field can
be produced as a massive boson in collider experiments
if sufficient energy is provided in the process. The mass of the
Higgs boson is expected to lie between the current experimental lower
limit of $114.4$~GeV~\cite{LEPlimits} and about 1~TeV. Current
analyses of electroweak precision observables yield a $95\%$~CL upper
indirect bound of $186$~GeV for the Higgs boson
mass~\cite{MHupperlimit}. While a Higgs boson with a mass up to
$1$~TeV can be found at the LHC, precise and model-independent
measurements of quantum numbers and couplings are likely to be
restricted to a future $e^+e^-$ Linear
Collider~\cite{TESLATDR,ALCphysics,ACFALCphysics} such as the
International Linear Collider (ILC) project.
The Higgs mechanism predicts that the quark masses $m_q$ are related
to the quark-Higgs Yukawa coupling $\lambda_q$ through the relation
$m_q=\lambda_qV$. This makes the measurement of the Yukawa coupling to
the top quark ($m_t=172.5\pm 2.3$~GeV~\cite{topmass}) particularly important
since it can be expected to be measured with high precision. At
a future $e^+e^-$ Linear Collider the top Yukawa coupling can be
measured from the process $e^+e^-\to t\bar t H$ since
the amplitudes describing Higgs radiation off the $t\bar t$
pair dominate the cross section.~\footnote{An indirect measurement through virtual Higgs effects
might also be possible at the $t\bar t$ threshold if the Higgs mass is
close to the present lower experimental limit~\cite{TESLATDR}.}
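For orientation, inserting the top quark mass quoted above into the
relation $m_t=\lambda_t V$ gives, at tree level,
\begin{equation}
\lambda_t \, = \, \frac{m_t}{V} \, \approx \,
\frac{172.5~\mbox{GeV}}{246~\mbox{GeV}} \, \approx \, 0.70\,,
\end{equation}
i.e.\ a coupling of order one.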
For the second phase of the ILC project with c.m.\,energies between
$500$~GeV and $1$~TeV and assuming a Higgs mass of around $120$~GeV
the total cross section $\sigma(e^+e^-\to t\bar t H)$ is at the level
of $1-2$~fb and measurements of $\lambda_t$ with experimental errors
of around five percent are
expected~\cite{JustetopYukawa,GaytopYukawa}. The precision
motivates the computation of radiative corrections. In the
approximation that the top
quark and the Higgs boson are stable particles\footnote{
For a light Higgs boson this is an excellent approximation.
For $m_H=115(150)$~GeV one finds
$\Gamma_H=0.003(0.017)$~GeV~\cite{Hdecay}.}
the tree level cross section was determined already some time ago in
Refs.~\cite{Borneetth}. The full set of one-loop QCD corrections was
obtained in Ref.~\cite{Dittmaier1}. Earlier studies using
approximations were given in Refs.~\cite{Dawson1,Dawson2}. One-loop
electroweak corrections were studied in Refs.~\cite{Belanger1,Denner1}
and also in Ref.~\cite{You1}.
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=9cm
\leavevmode
\epsffile[140 300 430 495]{figures/fig1.ps}
\vskip 0.0cm
\caption{
Typical constellation of momenta for the process $e^+e^-\to t\bar t H$ in the
large Higgs energy endpoint region.
\label{fig1} }
\end{center}
\end{figure}
The phase space region where the Higgs energy is close to its upper
endpoint,
\begin{equation}
E_H\approx E_H^0 \equiv (s+m_H^2-4m_t^2)/(2\sqrt s)\;,
\end{equation}
$\sqrt s$ being the
center of mass energy, was studied in detail in
Ref.~\cite{FarrellHoang1}. In the
large Higgs energy endpoint region the $t\bar t$ pair is forced to
become collinear and to move opposite to the Higgs direction
in order to maximize the momentum necessary to balance the large Higgs
momentum, see Fig.~\ref{fig1}. Thus the $t\bar t$ invariant mass is
close to $2m_t$. In this kinematic regime the $t\bar t$ pair is
nonrelativistic in its c.m.\,frame and fixed-order QCD perturbation
theory in powers of $\alpha_s$ leads to singular terms proportional to
$(\alpha_s/v)^n$ and $(\alpha_s\ln v)^n$ which have to be summed to
all orders. Here, $v=(1-4m_t^2/Q^2)^{1/2}$ is the top quark relative
velocity in the $t\bar t$ c.m.\,frame and $Q$ is the $t\bar t$
invariant mass. In Ref.~\cite{FarrellHoang1} these singularities were
summed at NLL order in a simultaneous expansion in $\alpha_s$ and $v$
and also accounting for the finite top quark width. The computations
were carried out using a nonrelativistic effective
theory~\cite{LMR,HoangStewartultra,hmst} originally developed for the
threshold region in the process $e^+e^-\to t\bar t$. Due to the large
top quark width, $\Gamma_t\approx 1.5$~GeV, the nonrelativistic $t\bar
t$ dynamics is protected from nonperturbative effects and the
summations can be carried out with perturbative methods. It was shown
in Ref.~\cite{FarrellHoang1} that the summation of the singular terms
leads to an enhancement of the total cross section that needs to be
accounted for up to c.m.\,energies of about $700$~GeV. The impact of
the summations increases with the fraction of the phase space
where the c.m.\,top velocity $v$ is nonrelativistic, i.e. it increases with
the Higgs and top
quark masses and decreases with the c.m.\ energy. A convenient measure
for the impact of the nonrelativistic summations on the total cross
section is the maximal relative velocity of the $t\bar t$ pair which is
achieved at the {\it low} Higgs energy endpoint $E_H=m_H$,
\begin{equation}
v^{\rm max} \, = \, \left(\,1-\frac{4m_t^2}{\;Q^2_{\rm max}}\right)^{1/2} \,
= \, \left(\,1-\frac{4m_t^2}{(\sqrt{s}-m_H)^2}\,\right)^{1/2}
\,.
\label{vmaxdef}
\end{equation}
For small $v^{\rm max}$ the summations have a large effect since
the available phase space is predominantly nonrelativistic.
As was already demonstrated in Ref.~\cite{FarrellHoang1}, the
fixed-order QCD predictions~\cite{Dittmaier1,Dawson1,Dawson2} become
unreliable for c.m.\,energies up to
$500$~GeV, which corresponds to the energy available during the first phase of
the ILC project. For $m_H=(120,130,140)$~GeV, $m_t=175$~GeV, and
$\sqrt{s}=500$~GeV one has $v^{\rm max}=(0.39,0.32,0.23)$ and consequently
the entire phase space is governed by the
nonrelativistic QCD dynamics. The nonrelativistic expansion
based on the parametric counting $\alpha_s\sim v\ll 1$ has to be employed
rather than the $\alpha_s$ expansion to make reliable theoretical
predictions for the cross section. Another consequence of small
$v^{\rm max}$ is that the cross section for
c.m.~energies up to $500$~GeV can be substantially smaller than
$1$~fb due to phase space suppression, which severely restricts
statistics. Since the singularities proportional to $(\alpha_s/v)^n$
and $(\alpha_s\ln v)^n$ are large in this case, only predictions where the
nonrelativistic summations are accounted for allow for a
realistic assessment of Yukawa coupling measurements during the first phase of
the ILC project~\cite{Justetalk,topQCDSnowmass}.
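These numbers follow directly from Eq.~(\ref{vmaxdef}); as a quick
numerical check:
\begin{verbatim}
import numpy as np

def v_max(sqrt_s, m_t, m_H):
    """Maximal t-tbar relative velocity, Eq. (vmaxdef); GeV inputs."""
    return np.sqrt(1.0 - 4.0*m_t**2/(sqrt_s - m_H)**2)

for m_H in (120.0, 130.0, 140.0):
    print(m_H, round(v_max(500.0, 175.0, m_H), 2))  # 0.39, 0.32, 0.23
\end{verbatim}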
In this work we give a detailed analysis of the total cross section
and the Higgs energy distribution for the process $e^+e^-\to t\bar t
H$ for c.m.\,energies up to $500$~GeV accounting for QCD effects at
NLL order in the nonrelativistic expansion. The approach
of Ref.~\cite{FarrellHoang1} developed for descriptions of the large
Higgs energy endpoint region is extended to the case
where the entire phase space is nonrelativistic. We show that our NLL order
predictions are substantially larger than the known tree level predictions,
which have in fact been used for experimental simulation
studies at $500$~GeV in the past~\cite{Justetalk}. We also account for the
possibility of electron-positron
beam polarization which can further enhance the cross section. Our results
significantly affect the prospects for top Yukawa coupling measurements
during the first phase of the ILC project.
The content of this paper is organized as follows:
In Sec.~\ref{sectionlargeHE} we review the ingredients of the
factorization formula derived in Ref.~\cite{FarrellHoang1} in the
large Higgs energy endpoint region valid for large c.m.\,energies.
We extend the presentation by also accounting for electron-positron
beam polarization and by giving a more detailed discussion of the
$t\bar t$ final state in the helicity basis. In
Sec.~\ref{sectionlowHE} we discuss the modifications that need to be
applied to the factorization formula for the case where the full
phase space is nonrelativistic. In Sec.~\ref{sectionanalysis} we analyze
our results numerically and
Sec.~\ref{sectionconclusion} contains the conclusion.
\section{The Large Higgs Energy Endpoint Region}
\label{sectionlargeHE}
In the large Higgs energy region $E_H\approx E_H^0$ the Higgs energy
distribution can be factorized into a hard part describing the production of
the $t\bar t$ pair and the Higgs boson and a low-energy part describing the
nonrelativistic dynamical QCD effects of the $t\bar t$ subsystem. The
latter are responsible for the singularities proportional to powers of
$\alpha_s/v$ and $\alpha_s\ln v$. The factorization formula, valid at
NLL order for unpolarized electron-positron beams and top quarks, was
derived in Ref.~\cite{FarrellHoang1}. Accounting for electron-positron
beam polarization and polarized top quarks the factorization formula
for fully polarized electrons and positrons has the form
\begin{eqnarray}
\lefteqn{
\left(\frac{d\sigma}{d E_H}(E_H\approx E_H^{0})\right)^\pm \, = \,
\frac{8\,N_c\,\left[(1+x_H-4x_t)^2-4x_H\right]^{1/2}}{s^{3/2}\,m_t^2}\,
}
\nonumber\\[2mm]& &\mbox{} \hspace{1cm}
\times\,\left(\,c^2_0(\nu)\, F^Z_{0,\pm} +
\sum_{i=-1,0,+1}c_{(1,i),\pm}^2(\nu)\,F_{(1,i),\pm}^{\gamma Z}\,\right)\,
\,\mbox{Im}\left[\, G^c(C_F\alpha_s(m_t \nu),v,m_t,\nu)\,\right]\,
\,,
\label{dsdEHEFT}
\end{eqnarray}
with
\begin{eqnarray}
x_t & \equiv & \frac{m_t^2}{s}\,,\qquad
x_H \, \equiv \, \frac{m_H^2}{s}\,,\qquad
x_Z \, \equiv \, \frac{m_Z^2}{s}
\,.
\label{const2}
\end{eqnarray}
Here, $c_0$ and $c_{(1,i)}$ are the hard singlet and triplet
QCD Wilson coefficients which depend on the effective theory renormalization
parameter $\nu$, $ F^Z_{0,\pm}$ and $F_{(1,i),\pm}^{\gamma Z}$ are the
hard electroweak tree-level matching conditions, and $G^c$ is the
Green's function of the NLL Schr\"odinger equation of the effective
theory for the top quarks. A detailed discussion of these quantities
will follow shortly.
The index denotes the helicity of the electrons, i.e. ``$-$'' refers
to right-handed positrons and left-handed electrons and the index
``$+$'' refers to left-handed positrons and right-handed
electrons. Since the electron mass is neglected, the cross section
vanishes if both electron and positron have the same helicity.
For arbitrary polarization $P_+$ of the positrons and $P_-$ of the
electrons the spectrum reads
\begin{eqnarray}
\left(\frac{d\sigma}{d E_H}\right)
& = &
\frac{1}{4}(1+P_-)(1-P_+)\, \left(\frac{d\sigma}{d E_H}\right)^+
\,+\,\frac{1}{4}(1-P_-)(1+P_+)\, \left(\frac{d\sigma}{d E_H}\right)^-
\,,
\end{eqnarray}
where the polarization of a beam with $N_+$ right-handed particles and
$N_-$ left-handed particles is defined as
\begin{equation}
P=\frac{N_+-N_-}{N_++N_-}\equiv\frac{N_+-N_-}{N_{\rm tot}}
\end{equation}
and can take on values between $-1$ and $1$.
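The combination of the two helicity cross sections is a one-line
weighting; the following sketch (with illustrative inputs, using the
typical ratio $\sigma^-/\sigma^+\approx 2.6$ of Tab.~\ref{tab3})
also reproduces the polarization enhancement discussed in
Sec.~\ref{sectionanalysis}:
\begin{verbatim}
def dsigma_pol(P_plus, P_minus, ds_plus, ds_minus):
    """Polarized combination of the two helicity cross sections."""
    return (0.25*(1 + P_minus)*(1 - P_plus)*ds_plus
            + 0.25*(1 - P_minus)*(1 + P_plus)*ds_minus)

ds_p, ds_m = 1.0, 2.6                      # illustrative, cf. Tab. 3
unpol = dsigma_pol(0.0, 0.0, ds_p, ds_m)   # 0.25*(ds_p + ds_m) = 0.9
pol   = dsigma_pol(0.6, -0.8, ds_p, ds_m)  # 0.02*ds_p + 0.72*ds_m
print(pol/unpol)                           # ~2.1, roughly a factor of two
\end{verbatim}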
The first two terms in Eq.~(\ref{dsdEHEFT}) are the hard factors and the third
term is the imaginary part of the zero-distance Green function of the NLL
Schr\"odinger equation that can be derived from the effective theory
Lagrangian. The Green function describes the effects of the low-energy
nonrelativistic dynamics on the $t\bar t$ production rate for the top pair
being in an S-wave state and does not depend on the polarization of
the electron-positron beams. It depends on the effective theory
renormalization scaling parameter $\nu$ and is proportional to the
time-ordered product of the effective theory operators describing the
nonrelativistic QCD dynamics for the
production and annihilation of the $t\bar t$ pair at leading
logarithmic (LL) and NLL order.~\footnote{
The renormalization scaling parameter $\nu$ has mass dimension zero and is
used in the effective theory to describe the correlated running
of soft and ultrasoft fluctuations~\cite{LMR}. The hard effective
theory matching scale (at the top quark mass) is at $\nu=1$ and
low-energy matrix elements are evaluated for $\nu\sim v\sim \alpha_s$ to
avoid the appearance of large logarithmic terms.
}
At LL
order (in dimensional regularization)
the Green function has the simple analytic form
\begin{eqnarray}
G^c_{\rm LL}(a,v,m_t,\nu) & = &
\frac{m_t^2}{4\pi}\left\{\,
i\,v - a\left[\,\ln\left(\frac{-i\,v}{\nu}\right)
-\frac{1}{2}+\ln 2+\gamma_E+\psi\left(1\!-\!\frac{i\,a}{2\,v}\right)\,\right]
\,\right\}
+ \,\frac{m_t^2\,a}{4 \pi}\,\,\frac{1}{4\,\epsilon}
\,.
\nonumber\\
\label{deltaGCoul}
\end{eqnarray}
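For orientation, a direct numerical transcription of this LL Green
function (dropping the $1/\epsilon$ pole, which is removed by
renormalization, and using the complex velocity of Eq.~(\ref{vdef})
below with $\delta m_t=0$) reads:
\begin{verbatim}
import mpmath as mp

def Gc_LL(a, v, m_t, nu):
    """LL Green function, Eq. (deltaGCoul), without the 1/eps pole."""
    return (m_t**2/(4*mp.pi))*(1j*v - a*(mp.log(-1j*v/nu) - mp.mpf(1)/2
            + mp.log(2) + mp.euler + mp.digamma(1 - 1j*a/(2*v))))

m_t, Gamma_t, nu = 175.0, 1.43, 0.2
a = (4.0/3.0)*0.14                         # C_F*alpha_s, illustrative
Q = 2*m_t - 1.0                            # just below threshold, GeV
v = mp.sqrt((Q - 2*m_t + 1j*Gamma_t)/m_t)  # Eq. (vdef), delta m_t = 0
print(mp.im(Gc_LL(a, v, m_t, nu)))         # enters via Im G^c
\end{verbatim}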
For the NLL order Green function we use the numerical techniques and codes of
the TOPPIC program developed in Ref.~\cite{Jezabek1} (see also
Ref.~\cite{Strassler1}) and determine an exact solution of the full NLL
Schr\"odinger equation employing the approach of Refs.~\cite{hmst}.
We estimate the QCD uncertainties in the normalization of the Higgs energy
spectrum from the NLL order Green function as 5\%~\cite{FarrellHoang1,HoangEpi}.
Note that
we account for the top quark finite lifetime by shifting the $t\bar t$
invariant mass $Q$ used in the Green function into the complex plane such
that the top quark relative velocity reads
\begin{eqnarray}
v & = &
\sqrt{\frac{Q-2 m_t-2\delta m_t(\nu)+i\Gamma_t}{m_t}}
\,,
\label{vdef}
\end{eqnarray}
where
\begin{eqnarray}
Q^2 \, = \, s + m_H^2 - 2\sqrt{s}\, E_H
\,.
\end{eqnarray}
This accounts for the top quark finite lifetime consistently at LL
order, see for example~\cite{HoangReisser1}. A consistent NLL description of
finite lifetime effects and electroweak corrections will be included in a
subsequent
publication. The term $\delta m_t$ in Eq.~(\ref{vdef}) is a residual mass term
that has to be specified perturbatively at each order to fix which top quark
mass definition is being employed. In the pole mass scheme the residual
mass term vanishes to all orders. We use the 1S mass
scheme~\cite{Hoangupsilon,HoangTeubner}. The corresponding expression for
$\delta m_t$ at NLL order
can also be found in Ref.~\cite{FarrellHoang1}. We use the 1S top quark mass and
implement the residual mass term in the soft factor of the factorization
formula because it avoids the pole mass renormalon problem~\cite{Vcrenormalon}
and leads to a $t\bar t$ resonance peak position that is stable under higher
order perturbative corrections~\cite{synopsis}. For the NLL order QCD
corrections to the hard factors, which are discussed below, we neglect the
corrections that arise from the residual mass term because the numerical
effects are at the 1\% level and substantially smaller than the uncertainties
from low-energy QCD effects. This approximation was also used in
Ref.~\cite{FarrellHoang1}.
Concerning the hard contributions in Eq.~(\ref{dsdEHEFT}),
the first term in the parenthesis gives the
contribution for the $t\bar t$ pair in an S-wave spin singlet state and the other
three terms give the contributions for the $t\bar t$ pair in the three
S-wave spin triplet
$(+1,0,-1)$ states. As described already in Ref.~\cite{FarrellHoang1} we use
the helicity basis for the top and antitop spinors in the endpoint where
$k_1=k_2$ (see Fig.~\ref{fig1}) to define the singlet and the triplet states.
In this basis there are additional $v$-suppressed (NLL) contributions to the
triplet $\pm 1$ contribution that arise from S-P wave interference
terms and originate from the interference of vector and axial-vector
contributions at the $t\bar t$ vertex. These additional order $v$
contributions cancel in the sum of the
triplet contributions and can also be avoided if a spin basis is used that
does not depend on the momenta of the top quarks~\cite{Parke1}. Since here we
are not interested in the phenomenology of top polarization these additional
NLL order contributions are not included in Eq.~(\ref{dsdEHEFT}).
The functions $F^{Z,\gamma Z}$ are the tree level (hard) matching conditions
for the contributions of the respective $t\bar t$ spin states. They read
\begin{eqnarray}
F_{(1,+1),\pm}^{\gamma,Z} & = & F_{(1,-1),\pm}^{\gamma,Z}
\nonumber\\[2mm] & = &
\frac{ 2\alpha^2 \lambda_t^2}{6}\,
\frac{(1 - x_H + 4x_t)^2}{(1 + x_H - 4x_t)^2}\,
\left (\, Q_e^2 Q_t^2 + \frac{v_t^2\left(v_e \mp a_e\right)^2}{(1 - x_Z)^2}
+ \frac{2 Q_e Q_t \left(v_e \mp a_e\right) v_t}{(1 - x_Z)}
\right)
\nonumber\\[2mm] & &
+ \, \frac{4 \alpha^2 g_Z \lambda_t}{3}\,
\frac{(x_t x_Z)^{1/2}(1 - x_H + 4x_t)}
{(1 + x_H - 4x_t)(4x_t - x_Z)(1 - x_Z)}\,
\left( \frac{v_t^2 \left(v_e \mp a_e\right)^2}{(1 - x_Z)}
+ Q_e Q_t (v_e \mp a_e)v_t \right)
\nonumber\\[2mm] & &
+ \frac{4\alpha^2 g_Z^2 v_t^2\left(v_e \mp a_e\right)^2}{3}\,
\frac{x_t x_Z}{(4x_t - x_Z)^2(1 - x_Z)^2}
\,,
\label{F1def}
\end{eqnarray}
\begin{eqnarray}
F_{(1,0),\pm}^{\gamma,Z} & = &
\frac{16 \alpha^2 \lambda_t^2}{3}\,
\frac{ x_t}{(1 + x_H - 4x_t)^2}\,
\left(\,
Q_e^2 Q_t^2 + \frac{v_t^2\left(v_e \mp a_e\right)^2}{(1 - x_Z)^2}
+ \frac{2 Q_e Q_t \left(v_e \mp a_e\right) v_t}{(1 - x_Z)}
\right)
\nonumber\\[2mm] & &
+ \frac{4 \alpha^2 g_Z \lambda_t}{3} \,
\frac{(x_t x_Z)^{1/2}(1 - x_H + 4x_t)}
{(1 + x_H - 4x_t)(4x_t - x_Z)(1 - x_Z)}\,
\left( \frac{v_t^2 \left(v_e \mp a_e\right)^2}{(1 - x_Z)}
+ Q_e Q_t \left(v_e \mp a_e\right) v_t
\right)
\nonumber\\[2mm] & & +
\frac{ \alpha^2 g_Z^2 v_t^2\left(v_e \mp a_e\right)^2}{12}\,
\frac{(1 - x_H + 4x_t)^2 x_Z}{(4x_t - x_Z)^2(1 - x_Z)^2}
\,,
\\[4mm]
F_{0,\pm}^{Z} & = &
\frac{\alpha^2 g_Z^2 a_t^2\left(v_e \mp a_e\right)^2}{12}\,
\frac{(1 - x_H + 4x_t)^2 - 16 x_t}{(1 - x_Z)^2\,x_Z}
\,,
\label{F0def}
\end{eqnarray}
where
\begin{eqnarray}
v_f = \frac{T_3^f-2 Q_f s_w^2}{2s_w c_w}\,,
\quad
a_f = \frac{T_3^f}{2s_w c_w} \,,
\quad
\lambda_t = \frac{e}{2 s_w}\frac{m_t}{M_W} \,,
\quad
g_Z = \frac{e}{2 s_w c_w} \,,
\quad
\alpha = \frac{e^2}{4\pi}\,.
\label{const1}
\end{eqnarray}
Here, $Q_f$ and $T_3^f$ are the fermion charge and weak isospin,
$e$ is the electric charge and $s_w$ ($c_w$) the sine (cosine) of the
Weinberg angle.
The functions $c_i(\nu)$ are the hard QCD Wilson coefficients and depend on
$m_t, m_H$ and the c.m.\,energy $\sqrt{s}$. They also depend on the
renormalization parameter $\nu$ which accounts for the
renormalization group running of the effective currents that produce and
annihilate the $t\bar t$
pair in the various S-wave spin states. To achieve reliable predictions the
renormalization scaling parameter $\nu$ has to be chosen of order $\alpha_s$,
i.e.\,of order of the average top velocity in the $t\bar t$ c.m.\,system. For
this choice the imaginary part of the
zero-distance Green function does not contain any large logarithms
from ratios of the hard scales and the small nonrelativistic scales, the top
three-momentum $\mathbf p_t\sim m_t v$ and the top kinetic energy $E_t\sim m_t v^2$
defined in the $t\bar t$ c.m.\,system. All large logarithms are summed into
the hard QCD coefficients. At NLL order the renormalization group
evolution of the hard QCD coefficients can be parameterized as
\begin{eqnarray}
c_{(1,i),\pm}(\nu) & = & c_{(1,i),\pm}(1)\,\exp\left(f(\nu,2) \right) \,,
\qquad (i=0,\pm 1)
\nonumber\\[4mm]
c_{0}(\nu) & = & c_{0}(1)\,\exp\left(f(\nu,0) \right)\,.
\label{currentWilson}
\end{eqnarray}
The function $f$ was given in Ref.~\cite{FarrellHoang1} using the results
obtained in Refs.~\cite{HoangStewartultra,Pineda1}. Whereas the
renormalization group running of the coefficients can be determined within the
effective theory, and is independent
of the short distance process, the matching conditions at $\nu=1$ are
process-dependent. We use the convention that the LL
matching conditions for the $c_i(\nu)$ are normalized to unity. At NLL order
the matching
conditions are obtained from matching the factorization formula expanded to
order $\alpha_s$ to the corresponding full theory Higgs energy
distribution at ${\cal O}(\alpha_s)$ in the endpoint region expanded to
${\cal O}(v)$ for stable top quarks and using $\nu=1$ ($\mu=m_t$) for the
renormalization scaling parameters. The full theory predictions are taken from the
numerical codes obtained in Ref.~\cite{Denner1}. More
information on the numerical matching procedure can be found in
Ref.~\cite{FarrellHoang1}. The NLL matching conditions can be parameterized in
the form
\begin{eqnarray}
c_{(1,i),\pm}(\nu=1) & = & 1 + \frac{C_F\alpha_s(m_t)}{2}\,\delta
c_{(1,i),\pm}(\sqrt{s},m_t,m_H) \,,
\qquad (i=0,\pm 1)
\nonumber\\[2mm]
c_{1,\pm}(\nu=1) & = & 1 + \frac{C_F\alpha_s(m_t)}{2}\,\delta
c_{1,\pm}(\sqrt{s},m_t,m_H) \,,
\nonumber\\[2mm]
c_{0}(\nu=1) & = & 1 + \frac{C_F\alpha_s(m_t)}{2}\,\delta
c_{0}(\sqrt{s},m_t,m_H)
\,,
\label{matchcond}
\end{eqnarray}
and numerical results for the NLL order contributions for various choices
of $\sqrt{s}$, $m_t$ and $m_H$ are given in Tab.~\ref{tab1}.
\tabcolsep1.5mm
\begin{table}
\begin{center}
\begin{tabular}{|c||c|c||l|l||l|l|l|l|l|}\hline
$\sqrt s$ & $m_t$ & $m_H$ & \multicolumn{1}{|c|}{$\delta c_{1,+}$} &
\multicolumn{1}{|c|}{$\delta c_{1,-}$} & \multicolumn{1}{|c|}{$\delta
c_{(1,\pm1),+}$} & \multicolumn{1}{|c|}{$\delta c_{(1,\pm1),-}$} &
\multicolumn{1}{|c|}{$\delta c_{(1,0),+}$} &
\multicolumn{1}{|c|}{$\delta c_{(1,0),-}$}&
\multicolumn{1}{|c|}{$\delta c_{0}$}\\ \hline \hline
500 & 170 & 115 & -2.3011(2) & -2.2703(2) & -2.2954(2) & -2.2654(2) &
-2.3134(2) & -2.2807(2) & -0.573(4) \\ \hline
490 & 170 & 115 & -2.2910(4) & -2.2618(4) & -2.2867(4) & -2.2581(4) &
-2.3001(4) & -2.2695(4) & -0.565(5) \\ \hline
480 & 170 & 115 & -2.2804(7) & -2.2528(7) & -2.2775(7) & -2.2503(7) &
-2.2866(7) & -2.2581(7) & -0.557(6) \\ \hline
470 & 170 & 115 & -2.2689(5) & -2.2430(5) & -2.2672(5) & -2.2415(5) &
-2.2724(5) & -2.2460(5) & -0.547(9) \\ \hline
460 & 170 & 115 & -2.257(1) & -2.232(1) & -2.256(1) & -2.232(1) &
-2.258(1) & -2.233(1) & -0.54(1) \\ \hline
\hline
500 & 170 & 120 & -2.2992(4) & -2.2681(4) & -2.2940(4) & -2.2637(4) &
-2.3105(4) & -2.2776(4) & -0.572(4) \\ \hline
490 & 170 & 120 & -2.2890(6) & -2.2596(6) & -2.2852(6) & -2.2563(6) &
-2.2971(6) & -2.2664(6) & -0.564(5) \\ \hline
480 & 170 & 120 & -2.2779(4) & -2.2501(4) & -2.2754(4) & -2.2479(4) &
-2.2830(4) & -2.2544(4) & -0.555(4) \\ \hline
470 & 170 & 120 & -2.2660(9) & -2.2399(9) & -2.2648(9) & -2.2389(9) &
-2.2684(9) & -2.2419(9) & -0.546(9) \\ \hline
\hline
500 & 170 & 140 & -2.2931(6) & -2.2610(6) & -2.2901(6) & -2.2584(6) &
-2.2994(6) & -2.2663(6) & -0.568(9) \\ \hline
490 & 170 & 140 & -2.2815(6) & -2.2510(6) & -2.2800(6) & -2.2498(6) &
-2.2845(6) & -2.2536(6) & -0.559(9) \\ \hline
\hline
500 & 175 & 115 & -2.2871(3) & -2.2605(3) & -2.2831(3) & -2.2571(3) &
-2.2956(3) & -2.2678(3) & -0.562(2) \\ \hline
490 & 175 & 115 & -2.2767(4) & -2.2516(4) & -2.2740(4) & -2.2492(4) &
-2.2824(4) & -2.2565(4) & -0.554(2) \\ \hline
480 & 175 & 115 & -2.2657(6) & -2.2421(6) & -2.2641(6) & -2.2407(6) &
-2.2689(6) & -2.2449(6) & -0.544(9) \\ \hline
470 & 175 & 115 & -2.2536(9) & -2.2315(9) & -2.2531(9) & -2.2311(9) &
-2.2546(9) & -2.2324(9) & -0.54(1) \\ \hline
\hline
500 & 175 & 120 & -2.2848(5) & -2.2580(5) & -2.2813(5) & -2.2550(4) &
-2.2923(4) & -2.2645(4) & -0.561(5) \\ \hline
490 & 175 & 120 & -2.2741(5) & -2.2488(5) & -2.2719(5) & -2.2469(5) &
-2.2789(5) & -2.2529(5) & -0.553(4) \\ \hline
480 & 175 & 120 & -2.263(1) & -2.2389(8) & -2.2616(8) & -2.2380(8) &
-2.265(1) & -2.2409(8) & -0.544(6) \\ \hline
\hline
500 & 175 & 140 & -2.2766(5) & -2.2489(5) & -2.2752(5) & -2.2477(5) &
-2.2793(5) & -2.2512(5) & -0.556(5) \\ \hline
\end{tabular}
\end{center}
\caption{Numerical values for the matching conditions
for the singlet and triplet hard QCD coefficients for typical values
$\sqrt{s}$, $m_t$ and $m_H$. The masses and energies are given in units of
GeV. Note that $c_{(1,+1),\pm}=c_{(1,-1),\pm}$ due to parity.}
\label{tab1}
\end{table}
The singlet matching conditions do not depend on the
electron-positron polarization because there is only one non-trivial
QCD form factor in the full theory
that can contribute to the hard QCD matching conditions for the
effective theory spin singlet $t\bar t$ current. In Feynman gauge it
originates from the pseudoscalar Goldstone-$t\bar t$ vertex.
For the triplet currents, on the other hand, several form
factors contribute to the full theory $t\bar t$ vertices; therefore the
matching conditions are polarization-dependent for the parameterization used in
Eq.~(\ref{dsdEHEFT}).
If the polarization of the $t\bar t$ final states is not
accounted for, the factorization formula can be written in a simpler form
using for the $t\bar t$ spin triplet contributions the definitions
\begin{eqnarray}
c_{1,\pm}^2(\nu)\,F_{1,\pm}^{\gamma Z}
& \equiv &
\sum_{i=-1,0,+1}c_{(1,i),\pm}^2(\nu)\,F_{(1,i),\pm}^{\gamma Z}
\,, \qquad
F_{1,\pm}^{\gamma Z} \, \equiv \,
\sum_{i=-1,0,+1}\,F_{(1,i),\pm}^{\gamma Z}
\,,
\nonumber\\[2mm]
c_{1,\pm}^2(\nu) & = &
\frac{\sum_{i=-1,0,+1}c_{(1,i),\pm}^2(\nu)\,F_{(1,i),\pm}^{\gamma Z}}
{F_{1,\pm}^{\gamma Z}}
\,.
\label{averagedef}
\end{eqnarray}
In Ref.~\cite{FarrellHoang1} the results for the triplet contributions were
presented in this form.
\section{The Low Higgs Energy Endpoint Region}
\label{sectionlowHE}
In Fig.~\ref{fig2} the prediction for the unpolarized Higgs energy spectrum
obtained from the factorization formula in Eq.~(\ref{dsdEHEFT}) has been
displayed at LL (dotted lines) and NLL (solid lines) order in the
nonrelativistic expansion for the effective theory renormalization parameters
$\nu=0.1,0.2,0.4$. The parameters are $\sqrt{s}=500$~GeV, $m_t^{\rm
1S}=175$~GeV, $m_H=120$~GeV, and
\begin{eqnarray}
\begin{array}{ll}
\Gamma_t=1.43~\mbox{GeV}\,, & \\
M_Z=91.1876~\mbox{GeV}\,, & \quad M_W=80.423~\mbox{GeV}, \\
\alpha^{-1}=137.036\,, & \quad c_w=M_W/M_Z\,.
\end{array}
\label{parameters}
\end{eqnarray}
\begin{figure}[t]
\begin{center}
\epsfig{file=figures/fig2.eps,height=6cm}
\caption{The unpolarized Higgs energy spectrum in the
nonrelativistic expansion at LL (dotted lines) and NLL (solid
lines) order for $\nu=0.1,\,0.2,\,0.4$. The fixed-order expansion
is also shown at Born level (lower dotted line) and at $\mathcal
O(\alpha_s)$ for $\mu=\sqrt s$ (lower dashed line) and for
$\mu=\sqrt s\,v$ (upper dashed line). The cross section at NLL
order fails to reproduce the correct physical behavior of the
fixed-order results from the loop expansion in the {\it low} Higgs
energy regime. At the 1S peak the upper (lower) NLL order
curve corresponds to the effective theory renormalization
parameter $\nu=0.2(0.1)$.
\label{fig2} }
\end{center}
\end{figure}
We have also plotted the tree level (lower dotted line) and the
${\cal O}(\alpha_s)$ Higgs energy spectrum for $\mu=\sqrt{s}$ (lower dashed
line) and for $\mu=\sqrt s\, v$ (upper dashed line) where $v$ is the
$t\bar t$ relative velocity defined in Eq.~\eqref{vdef}. Since the hard scale
as well as the relative momentum of the top quarks are scales that are relevant
for nonrelativistic $t\bar t$ production, the difference between the two scale
choices illustrates the ambiguity contained in the fixed-order calculation
close to the large Higgs energy
endpoint. A detailed discussion of the deficiencies of the fixed-order
predictions in the endpoint region, of the quality of the nonrelativistic
expansion, and of the theoretical normalization uncertainty of the NLL
order prediction has been given in Ref.~\cite{FarrellHoang1} and shall
not be repeated here. The issue
we want to point out in Fig.~\ref{fig2} is that the predictions obtained
from the factorization formula in Eq.~(\ref{dsdEHEFT}), which properly
accounts for the summation of all NLL order contributions in the
{\it large} Higgs energy region,
are not compatible with the correct physical behavior at the {\it low} Higgs
energy endpoint $E_H=m_H$. There the Higgs boson is produced at rest (in the
lab frame) and the Higgs energy spectrum has to go to zero as do
the tree level and ${\cal O}(\alpha_s)$ predictions. In particular, at
the low Higgs energy endpoint region there is no singular enhancement
from the matrix elements, and due to phase space suppression the
coefficient functions $G_i$ of e.g. the tree level Higgs energy spectrum
(see Appendix \ref{app1}) vanish like
$G_i \sim \hat\beta$ with
\begin{equation}
\hat\beta \, = \,
\left(\,
\frac{ m_H\,(\sqrt{s} - m_H)^2\,(\,(\sqrt{s} - m_H)^2 - 4m_t^2\,)}
{m_t^2\, s^{3/2}}
\,\right)^{1/2}\,\sqrt{ (v^{\rm max})^2-v^2 }
\, + \, {\cal O}\big( ((v^{\rm max})^2-v^2)^{3/2} \big)
\,.
\end{equation}
This endpoint behavior cannot be obtained within the nonrelativistic expansion
in small $v$, even if the endpoint is located at a velocity much smaller than
one, see Eq.~(\ref{vmaxdef}). Terms that are formally beyond NLL order
in $v$ thus need to be summed up to achieve the correct low Higgs energy
endpoint behavior.
For the construction of a factorization formula that can account for
the correct physical low Higgs energy behavior, it is useful that the
full theory tree level Higgs energy spectrum, both for the $t\bar t$
pair in the spin singlet and in the (combined) triplet configuration,
has no order $v$ (NLL) corrections to the leading behavior at the
large Higgs energy endpoint, i.e.
\begin{eqnarray}
\left(\frac{d\sigma}{d E_H}(E_H\approx E_H^{0})\right)^\pm_{1, {\rm Born}} & = &
\left[\frac{2\,N_c\,\left[(1+x_H-4x_t)^2-4x_H\right]^{1/2}}{s^{3/2}\,\pi}\,
F_{1,\pm}^{\gamma Z}\,\right] v \, + \, {\cal O}(v^3)
\,,
\nonumber \\[4mm]
\left(\frac{d\sigma}{d E_H}(E_H\approx E_H^{0})\right)^\pm_{0, {\rm Born}} & = &
\left[\frac{2\,N_c\,\left[(1+x_H-4x_t)^2-4x_H\right]^{1/2}}{s^{3/2}\,\pi}\,
F^Z_{0,\pm}\,\right] v \, + \, {\cal O}(v^3)
\,.
\end{eqnarray}
At NLL order it is thus consistent to use the full tree level $E_H$ spectrum
in the large Higgs energy endpoint
instead of the constant LL matching conditions $F^{Z,\gamma Z}$
given in Eqs.~(\ref{F1def})-(\ref{F0def}),
\begin{eqnarray}
F_{1,\pm}^{\gamma Z} & \to &
\left(\frac{d\sigma}{d E_H}\right)^\pm_{\rm Born}\,
\frac{F_{1,\pm}^{\gamma Z}}{F_{0,\pm}^{Z}+F_{1,\pm}^{\gamma Z}}\,
\left[\frac{2\,N_c\,\left[(1+x_H-4x_t)^2-4x_H\right]^{1/2}}{s^{3/2}\,\pi}\,v
\,\right]^{-1}
\,,
\nonumber\\[4mm]
F_{0,\pm}^{Z} & \to &
\left(\frac{d\sigma}{d E_H}\right)^\pm_{\rm Born}\,
\frac{F_{0,\pm}^{Z}}{F_{0,\pm}^{Z}+F_{1,\pm}^{\gamma Z}}\,
\left[\frac{2\,N_c\,\left[(1+x_H-4x_t)^2-4x_H\right]^{1/2}}{s^{3/2}\,\pi}\,v
\,\right]^{-1}
\,,
\label{modifiedrules}
\end{eqnarray}
where $(\frac{d\sigma}{d E_H})^\pm_{\rm Born}$ is the full tree level
Higgs energy spectrum for the polarized $e^+e^-$ initial state. Note that the
replacement prescription in Eq.~(\ref{modifiedrules}) can only be applied for
Higgs energies smaller than $E_H^0$; for larger Higgs energies
Eq.~\eqref{dsdEHEFT} is left unchanged.
For the convenience of the reader we have given the analytic expressions for
the tree level Higgs energy spectrum in the appendix, using, up to minor
modifications, the conventions of Ref.~\cite{Dawson2}. They also correct a
few typos that were contained in Ref.~\cite{Dawson2} and pointed out
before in Ref.~\cite{FarrellHoang1}. For the case of an unpolarized
$t\bar t$ final state, using the
prescription~(\ref{modifiedrules}) in the factorization
formula~(\ref{dsdEHEFT}) leads to a modified factorization formula that
resums correctly all NLL order terms. In addition it has
the correct physical behavior at the low Higgs energy endpoint
$E_H=m_H$.
The modified NLL factorization formula based on Eqs.~(\ref{dsdEHEFT}),
(\ref{averagedef}), and (\ref{modifiedrules}) is not unique,
alternative prescriptions to achieve the correct physical low Higgs energy
endpoint behavior are conceivable. However, different prescriptions will only
affect the low Higgs energy endpoint where the $E_H$ spectrum vanishes, and
they should therefore not have a large numerical impact. While the modified
NLL factorization formula contains the exact tree level contribution, its
${\cal O}(\alpha_s)$
contribution (in the expansion in powers of $\alpha_s$) differs from the exact
${\cal O}(\alpha_s)$ result obtained in Ref.~\cite{Denner1} since it includes
only the QCD corrections of the large Higgs energy endpoint. Thus an estimate
of the intrinsic uncertainty in our prescription can be gained by comparing
its ${\cal O}(\alpha_s)$ terms with the exact result from
Refs.~\cite{Denner1,Dittmaier1}. For stable and unpolarized top quarks
the first two terms in the $\alpha_s$ expansion of our modified
factorization formula read
\begin{eqnarray}
\left(\frac{d\sigma}{d E_H}(E_H)\right)^\pm_{\rm NLL} & = &
\left(\frac{d\sigma}{d E_H}(E_H)\right)^\pm_{\rm Born} +
\left(\frac{d\sigma}{d E_H}(E_H)\right)^\pm_{{\cal O}(\alpha_s)} +
{\cal O}(\alpha_s^2)
\,,
\label{Oasexpand}
\end{eqnarray}
where
\begin{eqnarray}
\left(\frac{d\sigma}{d E_H}(E_H)\right)^\pm_{{\cal O}(\alpha_s)} & = &
C_F\alpha_s\,
\bigg[\,
\frac{F_{0,\pm}^{Z}\delta c_0+F_{1,\pm}^{\gamma Z}\delta c_{1,\pm}}
{F_{0,\pm}^{Z}+F_{1,\pm}^{\gamma Z}}
+ \frac{\pi}{2}\bigg(1-\frac{4m_t^2}{Q^2}\bigg)^{-1/2}
\,\bigg]
\,\left(\frac{d\sigma}{d E_H}(E_H)\right)^\pm_{\rm Born}
\,.
\nonumber\\ &&
\label{Oasapprox}
\end{eqnarray}
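The structure of Eq.~(\ref{Oasapprox}) is simple enough to evaluate
directly; a sketch of the correction factor multiplying the Born
spectrum (with hypothetical electroweak weights $F_0$, $F_1$ and
$\delta c$ values of the size quoted in Tab.~\ref{tab1}) reads:
\begin{verbatim}
import math

def oas_factor(alphas, F0, F1, dc0, dc1, Q, m_t):
    """O(alpha_s) correction factor of Eq. (Oasapprox), C_F = 4/3."""
    CF = 4.0/3.0
    coul = 0.5*math.pi/math.sqrt(1.0 - 4.0*m_t**2/Q**2)  # Coulomb term
    return CF*alphas*((F0*dc0 + F1*dc1)/(F0 + F1) + coul)

# illustrative: Q = 360 GeV, m_t = 175 GeV, hypothetical weights 1:3
print(oas_factor(0.094, 1.0, 3.0, -0.561, -2.285, 360.0, 175.0))
\end{verbatim}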
In Tab.~\ref{tab2} numerical results for the exact total ${\cal O}(\alpha_s)$
unpolarized cross section, $\sigma_{\rm exact}^{{\cal O}(\alpha_s)}$~\cite{Denner1},
and for the ${\cal O}(\alpha_s)$ approximation
from Eq.~(\ref{Oasexpand}), $\sigma_{\rm NLL}^{{\cal O}(\alpha_s)}$,
are shown for various c.m.\,energies and
$m_t=175$~GeV, $m_H=120$~GeV, $\Gamma_t=0$, $\mu=\sqrt{s}$.
For c.m.\,energies below $500$~GeV the
deviation increases with the c.m.\,energy. It vanishes at the
three-body threshold $\sqrt{s}\approx 2m_t+m_H$ and reaches the level
of 1.5\% for $\sqrt{s}=500$~GeV.
\begin{table}
\begin{center}
\begin{tabular}{|c||c|c|c|}\hline $\sqrt s$ &
$\sigma_{\rm exact}^{{\cal O}(\alpha_s)}$ & $\sigma_{\rm NLL}^{{\cal O}(\alpha_s)}$ &
rel. dev. $(\%)$ \\ \hline \hline
475 & 0.0311 & 0.0309 & 0.6 \\ \hline
480 & 0.0908 & 0.0900 & 0.9 \\ \hline
490 & 0.254 & 0.251 & 1.2 \\ \hline
500 & 0.446 & 0.439 & 1.5 \\ \hline
550 & 1.366 & 1.343 & 1.7 \\ \hline
600 & 1.953 & 1.924 & 1.5 \\ \hline
700 & 2.356 & 2.348 & 0.4 \\ \hline
\end{tabular}
\end{center}
\caption{The total cross section using the exact $\mathcal
O(\alpha_s)$ result $\sigma_{\rm exact}^{\mathcal
O(\alpha_s)}$ from Ref.~\cite{Denner1} and the
approximation based on Eq.~(\ref{Oasexpand}), $\sigma_{\rm NLL}^{\mathcal
O(\alpha_s)}$. The third column
shows the relative deviation in percent.
The difference between the two calculations is
maximal for c.m.\,energies around $550$~GeV. }
\label{tab2}
\end{table}
In Fig.~\ref{fig3} the exact ${\cal O}(\alpha_s)$ unpolarized Higgs energy
spectrum (black lines) and the ${\cal O}(\alpha_s)$ approximation in
Eq.~(\ref{Oasexpand}) (gray lines) are displayed in $0.1$~GeV bins for
$\sqrt{s}=490, 500, 600$, and $700$~GeV, $m_t=175$~GeV,
$m_H=120$~GeV, $\Gamma_t=0$, and $\mu=\sqrt s$. Note that for the strong
coupling we use $\alpha_s(500~\mbox{GeV})=0.09396$. The other
parameters are chosen as in Eq.~(\ref{parameters}).
For $\sqrt s = 500$ GeV the relative deviation in the Higgs energy spectrum is
at most $2.8\%$. The difference is smaller for lower c.m.\,energies since the
maximal possible top relative velocity $v^{\rm max}$ is increasing
with the c.m.\,energy, see Eq.~(\ref{vmaxdef}). The results indicate
that the intrinsic uncertainty of our approach is substantially
smaller than the theoretical uncertainty of $5\%$ from uncalculated
higher order QCD effects~\cite{FarrellHoang1,HoangEpi}.
\begin{figure}[t]
\begin{center}
\epsfig{file=figures/fig3a.ps,height=4.5cm}\hspace{0.5cm}
\epsfig{file=figures/fig3b.ps,height=4.5cm}\\[1em]
\epsfig{file=figures/fig3c.ps,height=4.5cm}\hspace{0.5cm}
\epsfig{file=figures/fig3d.ps,height=4.5cm}
\caption{
The exact ${\cal O}(\alpha_s)$ unpolarized Higgs energy
spectrum from Ref.~\cite{Denner1} (black lines) and the
${\cal O}(\alpha_s)$ approximation in Eq.~(\ref{Oasexpand})
(gray lines) for different c.m.\,energies $\sqrt s$ for $m_t=175$~GeV,
$m_H=120$~GeV, and $\mu=\sqrt s$.
\label{fig3} }
\end{center}
\end{figure}
In Figs.~\ref{fig3}c and \ref{fig3}d and in Tab.~\ref{tab2} we have analyzed the
difference between the exact ${\cal O}(\alpha_s)$ results and the
${\cal O}(\alpha_s)$ approximation based on Eq.~(\ref{Oasexpand}) for larger
c.m.\,energies as well. It is a surprising fact that the fairly simple
expression
in Eq.~(\ref{Oasapprox}), which contains only tree level information and
the NLL QCD information from the large Higgs energy endpoint, can
also account very well for the exact ${\cal O}(\alpha_s)$ results at higher
energies, where real gluon radiation is non-negligible. For
c.m.\,energies between $500$ and $700$~GeV the approximation based on
Eq.~(\ref{Oasexpand}) deviates from the exact results
by at most $1.8\%$ for the unpolarized total cross section,
where the maximal deviation is reached for $\sqrt{s}\approx
550$~GeV. Since the numerical evaluation of Eq.~(\ref{Oasapprox}) is
substantially faster than for the exact ${\cal O}(\alpha_s)$
result~\cite{Denner1}, it can be useful as an efficient approximation
formula for higher c.m.\,energies.
\section{Numerical Analysis}
\label{sectionanalysis}
\begin{figure}[t]
\begin{center}
\epsfig{file=figures/fig4.eps,height=9.5cm}
\caption{The unpolarized Higgs energy spectrum for different
c.m.\,energies at NLL order (solid
lines) using the modified factorization formula
based on Eqs.~(\ref{dsdEHEFT}), (\ref{averagedef}), and
(\ref{modifiedrules}) for the renormalization
parameters $\nu=0.1,0.2,0.4$, at ${\cal O}(\alpha_s)$ (dashed
lines) from Ref.~\cite{Dittmaier1} with $\mu=\sqrt s$, and at
Born level (dotted line). At the 1S peak the upper (lower) NLL
order curve corresponds to the effective theory renormalization
parameter $\nu=0.2(0.1)$.
\label{fig4} }
\end{center}
\end{figure}
In Fig.~\ref{fig4} the unpolarized Higgs energy spectrum at NLL order (solid
lines) using the modified factorization formula
based on Eqs.~(\ref{dsdEHEFT}), (\ref{averagedef}), and (\ref{modifiedrules})
is displayed for the renormalization parameters $\nu=0.1,0.2,0.4$ for
the c.m.\,energies $\sqrt{s}=485,490,495,500$~GeV and
$m_t=m_t^{\rm 1S}=175$~GeV, $m_H=120$~GeV. The other parameters are chosen
as in Eq.~(\ref{parameters}). For comparison we also show the tree level
prediction (dotted lines) and the ${\cal O}(\alpha_s)$
results~\cite{Denner1} (dashed lines) with $\mu=\sqrt s$ for a stable
top quark. The nonrelativistic NLL order results show a substantial
enhancement compared to the tree level and one-loop QCD predictions.
The Higgs energy spectrum in the effective theory extends beyond the endpoint
$E_H^0$ that is obtained for the stable top quark case.
This is because the top quarks can be produced off-shell with
invariant masses smaller than $m_t$ if the top quark decay is accounted
for. With the present technology the finite top quark lifetime can only be
implemented systematically in an expansion in the top quark
off-shellness, which is naturally provided by the nonrelativistic
expansion we use here.
It is conspicuous that the spectrum above the endpoint $E_H^0$ in the
NLL prediction falls off quite slowly. Since the average c.m.\,top
quark velocity increases with the Higgs energy for $E_H>E_H^0$ we
define the total cross section by applying a cut on the Higgs energy above
$E_H^0$ such that the average c.m.\,top velocity remains below
$v_{\rm cut}=0.2$. We
fix the relation between the maximal Higgs energy and $v_{\rm cut}$ by
the relation
$E_H^{\rm cut} = (s+m_H^2-Q^2_{\rm cut})/(2\sqrt{s})$, which is
exact in the stable top case. Here,
$Q^2_{\rm cut} \equiv (4m_t^2)/(1+v_{\rm cut}^2)$
is the minimal $t\bar t$ invariant mass.
Note that $Q_{\rm cut}$ is smaller than $2 m_t$ because for
$E_H>E_H^0$ we are in the bound state regime. As mentioned before,
we plan a systematic treatment of finite lifetime and off-shell
effects at the NLL order level in a subsequent publication.
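Numerically, for $\sqrt{s}=500$~GeV, $m_t=175$~GeV, $m_H=120$~GeV, and
$v_{\rm cut}=0.2$ this prescription gives (as a quick check):
\begin{verbatim}
sqrt_s, m_t, m_H, v_cut = 500.0, 175.0, 120.0, 0.2    # GeV
s = sqrt_s**2
Q2_cut  = 4*m_t**2/(1 + v_cut**2)             # minimal (t-tbar mass)^2
EH_cut  = (s + m_H**2 - Q2_cut)/(2*sqrt_s)    # Higgs energy cut
EH_zero = (s + m_H**2 - 4*m_t**2)/(2*sqrt_s)  # stable-top endpoint
print(round(EH_zero, 1), round(EH_cut, 1))    # -> 141.9 146.6 GeV
\end{verbatim}
i.e.\ the cut extends the integration about $5$~GeV beyond the
stable-top endpoint $E_H^0$.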
In Tab.~\ref{tab3} the impact of the NLL order summations on the total cross
section for unpolarized $t\bar t$ pairs and polarized electron-positron beams
is analyzed numerically for various c.m.\,energies, top quark masses and Higgs
masses. The other parameters are chosen as in Eq.~(\ref{parameters})
except for the case $m_t=170$~GeV where we use $\Gamma_t=1.31$~GeV.
\tabcolsep2mm
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c||c|l|c||c|l|c|}\hline $\sqrt
s$ & $m_t$ & $m_H$ & $\sigma_{\rm Born}^+ \mbox{(fb)}$ &
\multicolumn{1}{|c|}{$\sigma_{\rm NLL}^+ \mbox{(fb)}$} & $\sigma_{\rm
NLL}^+/\sigma_{\rm Born}^+$ & $\sigma_{\rm Born}^- \mbox{(fb)}$&
\multicolumn{1}{|c|}{$\sigma_{\rm NLL}^- \mbox{(fb)}$} & $\sigma_{\rm
NLL}^-/\sigma_{\rm Born}^-$\\ \hline \hline
$500$ & $170$ & $115$ & $ 0.644$ & $ 0.989(49)$ & $ 1.54$ & $ 1.660$ & $ 2.568(128)$ & $ 1.55$\\\hline
$490$ & $170$ & $115$ & $ 0.444$ & $ 0.754(37)$ & $ 1.70$ & $ 1.149$ & $ 1.965(98)$ & $ 1.71$\\\hline
$480$ & $170$ & $115$ & $ 0.260$ & $ 0.516(25)$ & $ 1.98$ & $ 0.674$ & $ 1.347(67)$ & $ 2.00$\\\hline
$470$ & $170$ & $115$ & $ 0.108$ & $ 0.285(14)$ & $ 2.64$ & $ 0.281$ & $ 0.747(37)$ & $ 2.66$\\\hline
$460$ & $170$ & $115$ & $ 0.014$ & $ 0.086(4)$ & $ 6.17$ & $ 0.036$ & $ 0.226(11)$ & $ 6.21$\\\hline
$500$ & $170$ & $120$ & $ 0.486$ & $ 0.783(39)$ & $ 1.61$ & $ 1.258$ & $ 2.040(101)$ & $ 1.62$\\\hline
$490$ & $170$ & $120$ & $ 0.312$ & $ 0.568(28)$ & $ 1.82$ & $ 0.809$ & $ 1.483(74)$ & $ 1.83$\\\hline
$480$ & $170$ & $120$ & $ 0.159$ & $ 0.355(17)$ & $ 2.23$ & $ 0.413$ & $ 0.929(46)$ & $ 2.25$\\\hline
$470$ & $170$ & $120$ & $ 0.046$ & $ 0.159(7)$ & $ 3.48$ & $ 0.120$ & $ 0.418(20)$ & $ 3.50$\\\hline
$500$ & $170$ & $140$ & $ 0.102$ & $ 0.229(11)$ & $ 2.24$ & $ 0.268$ & $ 0.604(30)$ & $ 2.26$\\\hline
$490$ & $170$ & $140$ & $ 0.029$ & $ 0.101(5)$ & $ 3.48$ & $ 0.076$ & $ 0.268(13)$ & $ 3.51$\\\hline
$500$ & $175$ & $115$ & $ 0.459$ & $ 0.787(39)$ & $ 1.72$ & $ 1.181$ & $ 2.039(101)$ & $ 1.73$\\\hline
$490$ & $175$ & $115$ & $ 0.268$ & $ 0.538(26)$ & $ 2.01$ & $ 0.692$ & $ 1.399(69)$ & $ 2.02$\\\hline
$480$ & $175$ & $115$ & $ 0.111$ & $ 0.298(14)$ & $ 2.68$ & $ 0.288$ & $ 0.777(38)$ & $ 2.70$\\\hline
$470$ & $175$ & $115$ & $ 0.014$ & $ 0.091(4)$ & $ 6.32$ & $ 0.037$ & $ 0.236(11)$ & $ 6.35$\\\hline
$500$ & $175$ & $120$ & $ 0.322$ & $ 0.593(29)$ & $ 1.84$ & $ 0.832$ & $ 1.541(77)$ & $ 1.85$\\\hline
$490$ & $175$ & $120$ & $ 0.164$ & $ 0.371(18)$ & $ 2.26$ & $ 0.425$ & $ 0.967(48)$ & $ 2.28$\\\hline
$480$ & $175$ & $120$ & $ 0.047$ & $ 0.167(8)$ & $ 3.54$ & $ 0.123$ & $ 0.437(21)$ & $ 3.56$\\\hline
$500$ & $175$ & $140$ & $ 0.030$ & $ 0.107(5)$ & $ 3.55$ & $ 0.079$ & $ 0.281(14)$ & $ 3.57$\\\hline
\end{tabular}
\end{center}
{\caption{The total cross section in units of fb at Born level for
stable top quarks and at NLL order for unstable top quarks
using $\nu = 0.2$ for fully polarized electron-positron beams. The
index refers to the polarization of the electron beam. The masses
and $\sqrt s$ are given in units of GeV. For $m_t=(170,175)$~GeV
we use $\Gamma_t=(1.31,1.43)$~GeV.}
\label{tab3} }
\end{table}
In Tab.~\ref{tab3}, $\sigma_{\rm Born}$ refers to the tree level cross
section for stable top quarks
(see the appendix for explicit expressions) and $\sigma_{\rm NLL}$ to the NLL
total cross section as defined above and based on the modified factorization
formula discussed in Sec.~\ref{sectionlowHE}. The NLL order predictions were
obtained for the effective theory renormalization parameter
$\nu=0.2$. The uncertainties
given for $\sigma_{\rm NLL}$ reflect the 5\% theoretical error from higher
order QCD and relativistic corrections as discussed in
Ref.~\cite{FarrellHoang1}. The results in Tab.~\ref{tab3} demonstrate the
importance of the summation of the singular terms proportional to
$(\alpha_s/v)^n$ and $(\alpha_s\ln v)^n$ that arise in the endpoint region,
and of the off-shell effects that arise from the finite top quark lifetime.
Compared to the tree level predictions, the enhancement is more
pronounced for smaller c.m.\,energies and larger top or Higgs masses.
It is a realistic option for the ILC project to polarize the $e^+e^-$ beams
up to $(P_+,P_-)=(0.6,-0.8)$~\cite{TESLATDR}. Since this can
further enhance the cross section, we have also assessed its merits for the
process at hand.
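For orientation, the fully polarized results of Tab.~\ref{tab3} can be combined into predictions for arbitrary beam polarizations. The following short script is an illustrative sketch rather than part of our numerical setup; it assumes that $\sigma^+$ and $\sigma^-$ correspond to the two contributing helicity configurations ($e^-_R e^+_L$ and $e^-_L e^+_R$, respectively) and that the like-helicity configurations do not contribute to this $s$-channel process:
\begin{verbatim}
def sigma_pol(sig_plus, sig_minus, P_electron, P_positron):
    # Weight the fully polarized cross sections by the probabilities
    # of the two contributing helicity configurations.
    return (0.25 * (1 + P_electron) * (1 - P_positron) * sig_plus
            + 0.25 * (1 - P_electron) * (1 + P_positron) * sig_minus)

# NLL values (fb) from the first row of Tab. 3:
# sqrt(s) = 500 GeV, m_t = 170 GeV, m_H = 115 GeV
sig_p, sig_m = 0.989, 2.568
unpol = sigma_pol(sig_p, sig_m, 0.0, 0.0)   # ~0.89 fb
pol = sigma_pol(sig_p, sig_m, -0.8, 0.6)    # (P+,P-)=(0.6,-0.8), ~1.87 fb
print(pol / unpol)                          # enhancement of ~2.1
\end{verbatim}
which reproduces the roughly factor-of-two enhancement discussed below.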
In Fig.~\ref{fig5} the total cross section for unpolarized top quarks at the
tree level (dashed lines) and at NLL order (solid lines) is shown as a
function of $\sqrt{s}$ and $m_H$ for unpolarized electron-positron beams
$(P_+,P_-)=(0,0)$ and for $(P_+,P_-)=(0.6,-0.8)$. The other parameters are
chosen as in
Eq.~(\ref{parameters}), see also the figure caption for more details. For the
NLL cross section the predictions for the three choices $\nu=0.1,0.2,0.4$ for
the renormalization scaling parameter are shown.
\begin{figure}[t]
\begin{center}
\epsfig{file=figures/fig5a.ps,bb=80 450 560 730,height=4.7cm}
\vspace{0.8cm}
\epsfig{file=figures/fig5b.ps,bb=80 450 560 730,height=4.7cm}\\[-2em]
\caption{
The total cross section for unpolarized top quarks at
tree level (dashed lines) and at NLL order (solid lines) as a
function of $\sqrt{s}$ (left panel) and as a function of $m_H$ (right
panel) for unpolarized electron-positron beams $(P_+,P_-)=(0,0)$
(respective lower curves) and for $(P_+,P_-)=(0.6,-0.8)$
(respective upper curves).
\label{fig5} }
\end{center}
\end{figure}
The results demonstrate that using electron-positron polarization the cross
section can be enhanced by roughly a factor of two over the unpolarized cross
section. Compared to the tree level predictions
for unpolarized electron-positron beams, which were the basis of previous
experimental analyses~\cite{Justetalk}, QCD effects and beam polarization
$(P_+,P_-)=(0.6,-0.8)$ can enhance the cross section by about a
factor of $4$ or even more for $\sqrt{s}=500$~GeV, depending on the
Higgs mass. Because of the limited statistics
for $t\bar t H$ production during the first phase of the ILC project,
these results are important for realistic experimental simulations of
Yukawa coupling measurements.
\section{Conclusion}
\label{sectionconclusion}
We have analyzed the impact of summing the QCD singularities proportional to
$(\alpha_s/v)^n$ and $(\alpha_s\ln v)^n$ that arise in the large Higgs energy
endpoint region for the process $e^+e^-\to t\bar t H$ for c.m.\,energies up to
$500$~GeV, i.e. energies which can be achieved during the first phase
of the ILC project. The singularities cause the breakdown of usual
multi-loop perturbation theory in powers of $\alpha_s$ and
originate from nonrelativistic dynamical QCD effects that arise because the
relative velocity of the $t\bar t$ pair is small. A consistent theoretical
treatment requires the use of nonrelativistic effective theory methods and
includes a systematic treatment of off-shell effects caused by the finite
top quark lifetime. In Ref.~\cite{FarrellHoang1} we derived a
factorization formula for the large Higgs energy endpoint region for large
c.m.\,energies above $500$~GeV. In the present work we have extended
the approach to
c.m.\,energies below $500$~GeV, where the top quark pair is nonrelativistic in
the entire phase space, and we have also accounted for the effects of
electron-positron beam polarization. We have determined the predictions for
the Higgs energy spectrum and the total cross section at NLL order for the QCD
effects and at LL order for the top quark finite lifetime and for off-shell
effects. The QCD effects enhance the total cross section by roughly a
factor of two relative to the Born prediction for
$\sqrt{s}=500$~GeV. Using polarized electron-positron beams the cross
section can be further enhanced over the unpolarized case by another
factor of approximately two. Our results are
important for realistic simulation studies for Yukawa coupling measurements in
the first phase of the ILC project.
\begin{acknowledgments}
We would like to thank S.~Dittmaier and M.~Roth for
useful discussions and for providing us their numerical codes
from Ref.~\cite{Denner1}, and T.~Teubner for providing the TOPPIC
code. A.H. thanks A.~Juste for useful discussions and suggestions.
\end{acknowledgments}
\vskip 1cm
\begin{appendix}
\section{Tree Level Higgs Energy Spectrum}
\label{app1}
Correcting the typos of Ref.~\cite{Dawson2} the tree level Higgs energy
spectrum in the process $e^+e^-\to t\bar t H$ for polarized electron-positron
beams reads ($x_E\equiv 2 E_H/\sqrt{s}$,
$\sigma_{\rm pt}\equiv 4\pi \alpha^2/(3s)$)
\begin{eqnarray}
\left(\frac{d\sigma(E_H)}{d x_E}\right)_{\rm Born}^\pm & = &
\sigma_{\rm pt}\,\frac{N_c}{8\pi^2}\bigg\{\,
\bigg[\,
Q_e^2 Q_t^2
+ \frac{2 Q_e Q_t (v_e \mp a_e) v_t}{1 - x_Z}
+ \frac{(v_e \mp a_e)^2(v_t^2 + a_t^2)}{(1 - x_Z)^2}
\,\bigg]\,G_1
\nonumber\\[.5em] & &
{} + \frac{(v_e \mp a_e)^2}{(1 - x_Z)^2}\, \bigg[\,
a_t^2\sum_{i=2}^6\,G_i+v_t^2(G_4+G_6) \,\bigg]
+ \frac{Q_e Q_t (v_e \mp a_e) v_t}{1 - x_Z}\,G_6\,\bigg\}
\,,
\nonumber\\
\end{eqnarray}
where the coefficient functions are given by
\begin{eqnarray}
G_1 & = &
\frac{ 2\lambda_t^2}{(\hat\beta^2 - x_E^2)x_E}\,
\bigg\{
-4 \hat\beta (4x_t - x_H)(2x_t + 1)x_E
\nonumber\\[2mm] & &
{}+
(\hat\beta^2 - x_E^2)\big[16 x_t^2 + 2x_H^2 - 2x_H x_E + x_E^2
- 4x_t(3x_H - 2 - 2x_E)\big]\,
\ln\bigg(\frac{x_E + \hat\beta}{x_E - \hat\beta}\bigg) \bigg\}
,\qquad
\\[4mm]
G_2 & = &
\frac{-2\lambda_t^2}{(\hat\beta^2 - x_E^2)x_E}\,
\bigg\{
\hat\beta x_E\,\big[-96 x_t^2 + 24 x_t x_H
- (-x_H + 1 + x_E)(x_E^2 - \hat\beta^2)\big]
\nonumber\\[2mm] & &
{} +
2(\hat\beta^2 - x_E^2)\,\big[ 24 x_t^2 + 2(x_H^2 - x_H x_E)
+ x_t(-14 x_H + 12 x_E + x_E^2)\big]\,
\ln\bigg(\frac{x_E + \hat\beta}{x_E - \hat\beta}\bigg) \bigg\}
.
\end{eqnarray}
These first two coefficient functions describe the $s$-channel exchange of the
photon and the Z boson, where the Higgs boson is radiated
off one of the top quarks~\cite{Dawson2}. Relative to Ref.~\cite{Dawson2},
a factor of $s$ that was missing in the first line of the formula for $G_2$
has been restored.
The coefficient functions $G_3$ to $G_6$ describe the emission of the Higgs
boson from the Z-boson,
\begin{eqnarray}
G_3 & = &
\frac{ -2\hat\beta g_Z^2 x_t}{x_Z(x_H - x_Z + 1 - x_E)^2}
\,\bigg\{
4x_H^2 + 12 x_Z^2 + 2x_Z x_E^2
\nonumber\\[2mm] & &
{} + (-1 + x_E)x_E^2
- x_H\big[ 8x_Z + (-4 + 4x_E + x_E^2)\big] \,\bigg\}
\,, \\[4mm]
G_4 & = &
\frac{ \hat\beta g_Z^2 x_Z}{6(x_H - x_Z + 1 - x_E)^2}
\,\bigg\{
48 x_t + 12 x_H - (-24 + \hat\beta^2 + 24 x_E - 3x_E^2) \,\bigg\}
\,, \\[4mm]
G_5 & = &
\frac{4 \lambda_t g_Z x_t^{1/2}}{x_Z^{1/2}(-x_H + x_Z -1 + x_E)}
\,\bigg\{
\hat\beta \big[ 6x_Z + x_E(-x_H - 1 + x_E)\big]
\nonumber\\[2mm] & &
{} +
2\big[ x_H(x_H - 3x_Z + 1 - x_E) +
x_t(-4x_H + 12 x_Z + x_E^2)\big]\,
\ln\bigg(\frac{x_E + \hat\beta}{x_E - \hat\beta}\bigg) \bigg\}
\,, \\[4mm]
G_6 & = &
\frac{-8 \lambda_t g_Z (x_t x_Z)^{1/2}}{-x_H + x_Z - 1 + x_E}
\,\bigg\{
\hat\beta + (4x_t - x_H + 2 - x_E) \,
\ln\bigg(\frac{x_E + \hat\beta}{x_E - \hat\beta}\bigg) \bigg\}
\,.
\end{eqnarray}
These terms contribute less than a few percent to the Higgs energy
spectrum for c.m.\,energies between 500~GeV and 1~TeV.
The overall signs of $G_5$ and $G_6$ are changed relative to
Ref.~\cite{Dawson2}. The couplings and constants are defined in
Eqs.~(\ref{const1},\ref{const2}) and the term $\hat{\beta}$ is given by
\begin{equation}
\hat{\beta} \, = \, \left(\,
\frac{ 4\,(E_H^2 - m_H^2)\,(E_H^0 - E_H)}{\sqrt{s}\,
(\,(E_H^0 - E_H)\,\sqrt{s} + 2m_t^2\,)} \,\right)^{1/2}
\,,
\label{betahut}
\end{equation}
with the large Higgs energy endpoint being defined as
\begin{equation}
E_H^0 \, \equiv \, \frac{ s+m_H^2-4 m_t^2}{2\sqrt{s}}
\,.
\end{equation}
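As a simple numerical check of these kinematic relations, $E_H^0$ and $\hat{\beta}$ can be evaluated directly. The following sketch (in Python, all quantities in GeV) is included for illustration only:
\begin{verbatim}
import math

def e_h0(sqrt_s, m_t, m_h):
    # large Higgs energy endpoint
    s = sqrt_s ** 2
    return (s + m_h ** 2 - 4.0 * m_t ** 2) / (2.0 * sqrt_s)

def beta_hat(e_h, sqrt_s, m_t, m_h):
    # Eq. (betahut); valid for m_H <= E_H <= E_H^0
    e0 = e_h0(sqrt_s, m_t, m_h)
    num = 4.0 * (e_h ** 2 - m_h ** 2) * (e0 - e_h)
    den = sqrt_s * ((e0 - e_h) * sqrt_s + 2.0 * m_t ** 2)
    return math.sqrt(num / den)

print(e_h0(500.0, 175.0, 120.0))             # ~141.9
print(beta_hat(130.0, 500.0, 175.0, 120.0))  # ~0.06
\end{verbatim}
The smallness of $\hat{\beta}$ close to the endpoint illustrates the nonrelativistic character of the $t\bar t$ pair in the large Higgs energy region.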
\end{appendix}
\section*{Appendix}
\noindent {\it Microscopics of the model.---} ${\cal N} = 2^{*}$ gauge theory is obtained as a deformation of the maximally supersymmetric $SU(N)$ Yang-Mills theory. The field content of ${\cal N}=4$ SYM theory includes the
gauge field $A_\mu$, four Majorana fermions $\psi_a$, and three complex
scalars $\phi_i$, all in the adjoint
representation. SYM theory can be deformed by adding two
independent `mass' terms \cite{Hoyos:2011uh}
\begin{equation}\label{lagdef}
\delta {\cal L}= -2\,\int d^4x\,\left[ \,m_b^2\,{\cal O}_b
+m_f\,{\cal O}_f\,\right]
\end{equation}
where
\begin{eqnarray}
{\cal O}_b&=&\frac13 {\mathop{\rm Tr}}\left(\, |\phi_1|^2 + |\phi_2|^2 - 2\,|\phi_3|^2
\,\right)\,,
\nonumber\\
{\cal O}_f&=& -{\mathop{\rm Tr}}\biggl( i\,\psi_1\psi_2 -\sqrt{2}g_\mt{YM}\,\phi_3
[\phi_1,\phi_1^\dagger] +\sqrt{2}g_\mt{YM}\,\phi_3
[\phi_2^\dagger,\phi_2] \nonumber\\
&&+ {\rm h.c.}\biggr)
+\frac23 m_f\, {\mathop{\rm Tr}}\left(\, |\phi_1|^2 + |\phi_2|^2 +
|\phi_3|^2\, \right)\,.
\label{obof}
\end{eqnarray}
The relevant
deformation \eqref{lagdef} breaks scale invariance and, when $m_b=m_f$, half of the supersymmetries of the parent SYM;
for general mass parameters $m_b\ne m_f$ supersymmetry is completely broken.
In the planar limit and for large 't Hooft coupling, ${\cal N}=2^*$ gauge theory possesses a holographically
dual gravitational description \cite{Pilch:2000ue,Buchel:2000cn} which provides an opportunity
to study its thermodynamic \cite{Buchel:2003ah,Buchel:2007vy},
hydrodynamic \cite{Buchel:2004hw,Benincasa:2005iv,Buchel:2007mf,Buchel:2008uu}, and far from equilibrium properties
\cite{Buchel:2012gw,Buchel:2015saa}. The duality between ${\cal N}=2^*$ gauge theory at strong coupling
and the gravitational Pilch-Warner effective action \cite{Pilch:2000ue} (PW) allows for precision tests of the holographic correspondence in a nonconformal setting \cite{Buchel:2013id,Bobev:2013cja}.
\vspace{10 pt}
\noindent {\it Holographic equations of motion.---} Under the assumptions of homogeneity and isotropy, we obtain the following
equations of motion, describing the dynamics of ${\cal N}=2^*$ gauge theory at strong coupling within the dual gravitational framework:
\begin{eqnarray}\nonumber
&&0=d_+'\Sigma+2 {\Sigma'}\ d_+\ln\Sigma+
\frac \Sigma6\ V,\\ \nonumber
&&0=A''-6(\ln\Sigma)'\ d_+\ln\Sigma +4\chi' d_+\chi +12\alpha' d_+\alpha
-\frac V6\,,\\ \nonumber
&&0=d_+'\alpha+\frac{3}{2}\ \left((\ln\Sigma)'d_+\alpha+\alpha' d_+\ln\Sigma \right)
-\frac {1}{48} \partial_\alpha V, \\ \label{ev}
&&0=d_+'\chi+\frac{3}{2}\
\left((\ln\Sigma)'d_+\chi+\chi' d_+\ln\Sigma \right)-\frac{1}{16}\partial_\chi V,
\label{ev1}
\end{eqnarray}
as well as the Hamiltonian constraint equation:
\begin{equation}\label{ham}
0=\Sigma''+\left(4 (\alpha')^2+\frac 43 (\chi')^2\right) \Sigma
\end{equation}
and the momentum constraint equation:
\begin{eqnarray}\nonumber
0&=&d^2_+\Sigma -2 A\Sigma' -(4 A \Sigma'+A' \Sigma)d_+\ln\Sigma
\\
&&+\left(4 (d_+\alpha)^2+\frac 43 (d_+\chi)^2\right)\Sigma
-\frac 13 \Sigma A V.
\label{mom}
\end{eqnarray}
In Eqs. \eqref{ev}-\eqref{mom}
we denoted $'= \frac{\partial}{\partial r}$ and $\dot{\ }=\frac{\partial}{\partial t}$.
The initial state of the gauge theory is specified by providing the scalar profiles
$\alpha(0,r)$ and $\chi(0,r)$ and solving the constraint \eqref{ham}
subject to the boundary conditions given by Eq.~(9) from the article. Eqs.~\eqref{ev} can then be used
to dynamically evolve the state.
\vspace{10 pt}
\noindent {\it ${\cal N}=2^*$ in the conformal limit.---}
When $m_b=m_f=0$, the case of the parent conformal SYM,
equations of motion
\eqref{ev1}-\eqref{mom}
yield the \emph{exact} solution
\begin{equation}
\alpha=\chi=0\,,\ \Sigma=\frac {ar}{2}\,,\ A=\frac{r^2}{8}\left(1-\frac{\mu^4}{r^4a^4}\right)-\frac {\dot a}{a}\ r,
\label{sym}
\end{equation}
where the \emph{dimensionful} constant $\mu$ is related to the local temperature~via
\begin{equation}
T= \frac{\mu}{4 \pi\,a}.
\label{eq.tempexpand}
\end{equation}
We find in this case, in agreement with \cite{Apostolopoulos:2008ru},
\begin{equation}\label{cftres}
\epsilon=\frac 38\pi^2 N^2 T^4+\frac{3N^2(\dot a)^4}{32\pi^2a^4}\,,\ P=\frac 13\epsilon
-\frac{N^2(\dot a)^2\ddot a}{8\pi^2 a^3}.
\end{equation}
It is clear (see \cite{Apostolopoulos:2008ru}) that the stress tensor
\eqref{cftres} arises from a conformal transformation of an equilibrium state in Minkowski space, combined with a redefinition of the time variable that brings the background metric to the FLRW form.
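As a quick algebraic cross-check of Eq.~\eqref{cftres}, one may verify that the trace $\epsilon-3P$ is controlled entirely by the deceleration term proportional to $\ddot a$ and vanishes for constant $\dot a$. A short symbolic computation (a sketch using the sympy package) reads:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
N, mu = sp.symbols('N mu', positive=True)
a = sp.Function('a', positive=True)(t)

T = mu / (4 * sp.pi * a)
eps = sp.Rational(3, 8) * sp.pi**2 * N**2 * T**4 \
      + 3 * N**2 * sp.diff(a, t)**4 / (32 * sp.pi**2 * a**4)
P = eps / 3 - N**2 * sp.diff(a, t)**2 * sp.diff(a, t, 2) / (8 * sp.pi**2 * a**3)

print(sp.simplify(eps - 3 * P))
# 3*N**2*Derivative(a(t), t)**2*Derivative(a(t), (t, 2))/(8*pi**2*a(t)**3)
\end{verbatim}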
\section{Introduction}
Black hole quasinormal modes (QNMs), describing the characteristic oscillations of black holes, have attracted a lot of attention recently; see for example the reviews~\cite{Chandrasekhar:1985kt,Kokkotas:1999bd,Berti:2009kk,Konoplya:2011qq} and references therein. Due to the existence of an event horizon, black hole spacetimes are intrinsically dissipative, so that quasinormal frequencies are complex in general, with the imaginary part associated with the damping timescale of the perturbation. QNMs play vital roles in various contexts, ranging from gravitational wave astronomy~\cite{Berti:2015itd,Barack:2018yly} to applications in the context of the anti--de Sitter/conformal field theory (AdS/CFT) correspondence~\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}.
The AdS/CFT correspondence states that QNMs of a ($D+1$)-dimensional asymptotically AdS black hole or brane are poles of the retarded Green's function of the dual conformal field theory in $D$ dimensions at strong coupling. Horowitz and Hubeny first studied scalar QNMs on Schwarzschild-AdS black holes~\cite{Horowitz:1999jd} (see also~\cite{Chan:1996yk,Chan:1999sc}), and numerous works followed, exploring QNMs of various spin fields on asymptotically AdS black holes; see for example~\cite{Wang:2000gsa,Wang:2000dt,Govindarajan:2000vq,Zhu:2001vi,Birmingham:2001hc,Cardoso:2001bb,Cardoso:2001vs,Moss:2001ga,Birmingham:2001pj,Konoplya:2002ky,Musiri:2003rv,Berti:2003ud,Jing:2005uy,Hertog:2004bb,Giammatteo:2004wp,Siopsis:2004as,Gutsche:2019blp,Abdalla:2019irr,Che:2019jvy,Lin:2019fte,Aragon:2020tvq,Chernicoff:2020kmf,Konoplya:2017zwo,Hendi:2018hdo,Gonzalez:2018xrq}.
Mathematically, QNMs are defined as eigenvalues of perturbation equations with physically relevant boundary conditions. Despite the large number of studies already performed in the literature, however, a generic boundary condition is still lacking. Recently, we have proposed the vanishing energy flux principle~\cite{Wang:2015goa,Wang:2016dek}, which may be applied both to the Regge-Wheeler-Zerilli and to the Teukolsky formalisms, leads to two sets of Robin type boundary conditions, and has been successfully employed to explore QNMs of the Maxwell~\cite{Wang:2015goa,Wang:2015fgp} and Dirac fields~\cite{Wang:2017fie,Wang:2019qja}. In this paper, we follow the same rationale and generalize our previous studies of the Maxwell QNMs on Schwarzschild-AdS black holes by adding a \textit{global monopole} to the backgrounds.
The global monopoles, as a special class of topological defects, may be formed in the early universe through the spontaneous symmetry breaking of the global O(3) symmetry to U(1)~\cite{Kibble:1976sj,Vilenkin:1984ib}, according to Grand Unified Theories. The gravitational properties of monopoles have been extensively studied, and an unusual property induced by a global monopole is that the spacetime possesses a solid deficit angle. This property makes black holes with and without a global monopole topologically different, and thus leads to interesting physical consequences~\cite{Pan:2008xz,Chen:2005vq,Yu:2002st,Chen:2009vz,Piedra:2019ytw,Secuk:2019njc,Soroushfar:2020wch,Zhang:2014xha}.
The purpose of this study is twofold. On one hand, we explore the impact of the global monopole on the Maxwell quasinormal spectrum on Schwarzschild-AdS black holes, by imposing vanishing energy flux boundary conditions. On the other hand, it is well known that, on spherically symmetric backgrounds, the Maxwell equations may be written either in the Regge-Wheeler-Zerilli or in the Teukolsky formalism. As we argued before~\cite{Wang:2015goa}, by imposing vanishing energy flux boundary conditions, the Maxwell equations in both formalisms lead to the same quasinormal spectrum. Here we show explicitly, by calculating normal modes in both formalisms with vanishing energy flux boundary conditions, that this is \textit{indeed} the case, even when a global monopole is included.
The structure of this paper is organized as follows. In Section~\ref{seceq} we introduce the Schwarzschild-AdS black holes with a global monopole, and show the Maxwell equations both in the Regge-Wheeler-Zerilli and in the Teukolsky formalisms. In Section~\ref{secbc} we present the explicit boundary conditions, based on the vanishing energy flux principle, for both the Regge-Wheeler-Zerilli variable and the Teukolsky variable of the Maxwell field. We then perform an analytic matching calculation for small AdS black holes in Section~\ref{secana}, and a numeric calculation in Section~\ref{secnum}. Final remarks and conclusions are presented in the last section.
\section{background geometry and the field equations}
\label{seceq}
In this section, we first briefly review the background geometry we shall study, i.e. Schwarzschild-AdS black holes with a global monopole, and then present equations of motion for the Maxwell fields on the aforementioned backgrounds both in the Regge-Wheeler-Zerilli and in the Teukolsky formalisms.
\subsection{The line element}
We start by considering the following line element of a Schwarzschild-AdS black hole with a global monopole
\begin{equation}
ds^2=\dfrac{\Delta_r}{r^2}dt^2-\dfrac{r^2}{\Delta_r}dr^2-r^2\left(d\theta^2+\sin^2\theta d\varphi^2\right) \;,\label{metric}
\end{equation}
with the metric function
\begin{equation}
\Delta_r\equiv r^2\left(\tilde{\eta}^2+\frac{r^2}{L^2}\right)-2Mr\;,\label{metricfunc}
\end{equation}
where $L$ is the AdS radius, $M$ is the mass parameter. Here the dimensionless parameter $\tilde{\eta}^2$ is defined by
\begin{equation}
\tilde{\eta}^2\equiv 1-8\pi\eta^2\;,\label{monopole}
\end{equation}
where $\eta$ is the global monopole parameter, and the Schwarzschild-AdS spacetimes may be recovered when $8\pi\eta^2=0$. The Hawking temperature may be calculated, and one obtains
\begin{equation}
T_H=\dfrac{\kappa}{2\pi}=\dfrac{3r_+^2+\tilde{\eta}^2L^2}{4\pi r_+L^2}\;,\nonumber
\end{equation}
where $r_+$ is the event horizon determined by the non-zero real root of $\Delta_r(r_+)=0$, and where the mass parameter has been expressed in terms of $r_+$ as
\begin{equation}
M=\dfrac{r_+(\tilde{\eta}^2L^2+r_+^2)}{2L^2}\;.\nonumber
\end{equation}
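For later reference, these horizon quantities are easily evaluated numerically; the snippet below is an illustrative helper (in Python, with $8\pi\eta^2$ passed as a parameter):
\begin{verbatim}
import math

def horizon_quantities(r_plus, L=1.0, eta_sq_8pi=0.1):
    eta_t2 = 1.0 - eta_sq_8pi   # tilde{eta}^2 = 1 - 8*pi*eta^2
    M = r_plus * (eta_t2 * L**2 + r_plus**2) / (2.0 * L**2)
    T = (3.0 * r_plus**2 + eta_t2 * L**2) / (4.0 * math.pi * r_plus * L**2)
    return M, T

print(horizon_quantities(0.5))   # M ~ 0.2875, T_H ~ 0.2626
\end{verbatim}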
By introducing the following coordinate transformation
\begin{equation}
\tilde{t}=\tilde{\eta}t\;,\;\;\;\;\;\;\tilde{r}=\dfrac{r}{\tilde{\eta}}\;,\label{ttrt}
\end{equation}
and a new mass parameter
\begin{equation}
\tilde{M}=\dfrac{M}{\tilde{\eta}^3}\;,\label{masst}
\end{equation}
Eq.~\eqref{metric} becomes
\begin{align}
ds^2=&\left(1-\frac{2\tilde{M}}{\tilde{r}}+\frac{\tilde{r}^2}{L^2}\right)d\tilde{t}^{\;2}-\left(1-\frac{2\tilde{M}}{\tilde{r}}+\frac{\tilde{r}^2}{L^2}\right)^{-1}d\tilde{r}^2\nonumber\\&-\tilde{\eta}^2\tilde{r}^2\left(d\theta^2+\sin^2\theta d\varphi^2\right) \;.\label{metrict}
\end{align}
Now it becomes clear that the global monopole introduces a solid deficit angle, so that the solid angle of the above spacetime is $4\pi\tilde{\eta}^2$.
\subsection{Equations of motion in the Regge-Wheeler-Zerilli formalism}
In a spherically symmetric background, one may obtain variable separated and decoupled Maxwell equations by using the Regge-Wheeler-Zerilli method~\cite{Regge:1957td, Zerilli:1970se}. For that purpose, we start from the Maxwell equations
\begin{equation}
\nabla_{\nu}F^{\mu\nu}=0\;,\label{Maxwelleq}
\end{equation}
where the field strength tensor is defined as $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$.
We then expand the vector potential $A_\mu$ in terms of the scalar and vector spherical harmonics~\cite{Ruffini:1973}
\begin{equation}
A_{\mu}=\sum_{\ell, m}\left(\left[\begin{array}{c} 0 \\
0 \\
a^{\ell m}(t,r) \boldsymbol {S}_{\ell m}\end{array}\right]+\left[
\begin{array}{c}j^{\ell m}(t,r)Y_{\ell m} \\
h^{\ell m}(t,r)Y_{\ell m} \\
k^{\ell m}(t,r)\boldsymbol {Y}_{\ell m}
\end{array}\right]\right)\;,\label{Vpotential}
\end{equation}
with the definition of the vector spherical harmonics
\begin{equation}
\boldsymbol {S}_{\ell m}=
\left(\begin{array}{c} \frac{1}{\sin \theta} \partial_{\varphi}Y_{\ell m} \\
-\sin \theta \partial_{\theta}Y_{\ell m}\end{array}\right)\;,\;\;\;
\boldsymbol {Y}_{\ell m}=
\left(\begin{array}{c} \partial_{\theta}Y_{\ell m} \\
\partial_{\varphi}Y_{\ell m}\end{array}\right)\;,\nonumber
\end{equation}
where $Y_{\ell m}$ are the scalar spherical harmonics, $m$ is the azimuthal number, and $\ell$ is the angular momentum quantum number. Note that the first term on the right-hand side of Eq.~\eqref{Vpotential} has parity $(-1)^{\ell+1}$ while the second term has parity $(-1)^\ell$, and we shall call the former (latter) the axial (polar) modes. By substituting Eq.~\eqref{Vpotential} into Eq.~\eqref{Maxwelleq} with the assumption
\begin{align}
a^{\ell m}(t,r)=e^{-i\omega t}a^{\ell m}(r)\;,\;\;\;j^{\ell m}(t,r)=e^{-i\omega t}j^{\ell m}(r)\;,\nonumber\\
h^{\ell m}(t,r)=e^{-i\omega t}h^{\ell m}(r)\;,\;\;\;k^{\ell m}(t,r)=e^{-i\omega t}k^{\ell m}(r)\;,\nonumber
\end{align}
one obtains the Schr\"odinger-like radial wave equation
\begin{equation}
\left(\frac{d^2}{dr_{*}^2}+\omega^2-\ell(\ell+1)\dfrac{\Delta_r}{r^4}\right)\Psi(r)=0\;,\label{RWZeq}
\end{equation}
where the tortoise coordinate is defined as
\begin{equation}
\dfrac{dr_*}{dr}=\dfrac{r^2}{\Delta_r}\;,\label{tortoisecoor}
\end{equation}
with $\Psi(r)=a^{\ell m}(r)$ for axial modes, and
\begin{equation}
\Psi(r)=\dfrac{r^2}{\ell(\ell+1)}\left(-i\omega h^{\ell m}(r)-\dfrac{dj^{\ell m}(r)}{dr}\right)\;,\nonumber
\end{equation}
for polar modes.
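For generic parameters the tortoise coordinate has no simple closed form, but it is straightforward to evaluate numerically. The following sketch is for illustration only; the additive integration constant is fixed by the choice of the reference point, and the mass parameter is expressed through $r_+$:
\begin{verbatim}
from scipy.integrate import quad

def tortoise(r, r_plus, L=1.0, eta_sq_8pi=0.1, r_ref=None):
    # r_*(r) = int_{r_ref}^{r} r'^2 / Delta_r dr', valid for r > r_plus
    eta_t2 = 1.0 - eta_sq_8pi
    M = r_plus * (eta_t2 * L**2 + r_plus**2) / (2.0 * L**2)
    Delta = lambda x: x**2 * (eta_t2 + x**2 / L**2) - 2.0 * M * x
    if r_ref is None:
        r_ref = 2.0 * r_plus
    val, _ = quad(lambda x: x**2 / Delta(x), r_ref, r)
    return val
\end{verbatim}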
\subsection{Equations of motion in the Teukolsky formalism}
Equations of motion of the Maxwell fields may also be derived within the Teukolsky formalism~\cite{Teukolsky:1973ha}. This approach is based on the Newman-Penrose formalism~\cite{Newman:1961qr}, and is particularly relevant for studying linear perturbations of the massless spin fields on rotating black hole backgrounds. In this subsection we outline the radial equation, which may be obtained following the procedures presented in~\cite{Khanal:1983vb}.
The radial equation is
\begin{equation}
\Delta_r^{-s}\dfrac{d}{dr}\left(\Delta_r^{s+1}\dfrac{d R_{s}(r)}{dr}\right)+H(r)R_{s}(r)=0\;,\label{Teukeq}
\end{equation}
with
\begin{eqnarray}
H(r)=\dfrac{K_r^2-i s K_r \Delta_r^\prime}{\Delta_r}+2is K_r^\prime +\dfrac{s+|s|}{2}\Delta_r^{\prime\prime}
-\lambda\;,\nonumber
\end{eqnarray}
where $K_r=\omega r^2$, $\lambda=\ell(\ell+1)$ and the spin parameter is $s=\pm1$.
\section{boundary conditions}
\label{secbc}
In order to solve the radial equations, given by Eqs.~\eqref{RWZeq} and~\eqref{Teukeq}, one has to impose physically relevant boundary conditions, both at the horizon and at infinity. At the horizon, we impose the commonly used ingoing wave boundary conditions. At infinity, we impose \textit{the vanishing energy flux principle}, proposed in~\cite{Wang:2015goa} (see also~\cite{Wang:2016dek,Wang:2016zci}), which has already been employed to study the Maxwell~\cite{Wang:2015goa,Wang:2015fgp} and the Dirac~\cite{Wang:2017fie,Wang:2019qja} QNMs on asymptotically AdS spacetimes. Based on this principle, in the following we derive explicit boundary conditions for Eqs.~\eqref{RWZeq} and~\eqref{Teukeq}, obtained in the Regge-Wheeler-Zerilli and in the Teukolsky formalisms respectively, and in the next section we will show that both equations with the corresponding boundary conditions lead to the same spectrum.
\subsection{Boundary conditions in the Regge-Wheeler-Zerilli formalism}
We start from the energy-momentum tensor of the Maxwell field, which is given by
\begin{equation}
T_{\mu \nu}=F_{\mu\sigma}F^\sigma_{\;\;\;\nu}+\dfrac{1}{4}g_{\mu\nu}F^2\;.\label{EMTensor}
\end{equation}
Then the spatial part of the radial energy flux may be calculated as
\begin{equation}
\mathcal{F}|_r\propto\dfrac{\Delta_r}{r^2}\Psi(r)\Psi^\prime(r)\;,\label{RWZbc1}
\end{equation}
where $\prime$ denotes the derivative with respect to $r$.
By expanding Eq.~\eqref{RWZeq} asymptotically as
\begin{equation}
\Psi\sim a_{0}+\frac{a_{1}}{r}+\mathcal{O}\left(\frac{1}{r^2}\right)\;,\label{RWZasysol}
\end{equation}
Eq.~\eqref{RWZbc1} becomes
\begin{equation}
\mathcal{F}|_{r,\infty}\propto a_0a_1\;.\nonumber
\end{equation}
Then the vanishing energy flux principle, i.e. $\mathcal{F}|_{r,\infty}=0$, leads to
\begin{align}
a_0&=0\;,\label{RWZbc2-1}\\
a_1&=0\;.\label{RWZbc2-2}
\end{align}
\subsection{Boundary conditions in the Teukolsky formalism}
The explicit boundary conditions for the Teukolsky variables of the Maxwell fields on a Schwarzschild-AdS black hole with a global monopole can be derived directly, following the prescriptions described in~\cite{Wang:2015goa,Wang:2016zci}. Since the monopole parameter does not alter the asymptotic structure of AdS spacetimes, one obtains exactly the same boundary conditions as in the Schwarzschild-AdS case, and the results are listed in the following.
To be specific, we focus on the boundary conditions for $R_{-1}$. From Eq.~\eqref{Teukeq} one obtains the asymptotic behavior of $R_{-1}$ as
\begin{equation}
R_{-1} \sim \;\alpha^{-} r+\beta^{-}+\mathcal{O}(r^{-1})\;,\label{asysol}
\end{equation}
and the vanishing energy flux principle leads to~\cite{Wang:2015goa,Wang:2016zci}
\begin{align}
&\dfrac{\alpha^{-}}{\beta^{-}}=\dfrac{i}{\omega L^2}\;,\label{Teubc1}
\\
&\dfrac{\alpha^{-}}{\beta^{-}}=\dfrac{i\omega}{-\ell(\ell+1)+\omega^2L^2}\; .\label{Teubc2}
\end{align}
\section{Analytics}
\label{secana}
\subsection{Normal modes}
The normal modes of the Maxwell fields on an empty AdS background with a global monopole are calculated \textit{analytically} in this subsection, both in the Regge-Wheeler-Zerilli and in the Teukolsky formalisms, by solving Eq.~\eqref{RWZeq} with boundary conditions~\eqref{RWZbc2-1} and~\eqref{RWZbc2-2}, and Eq.~\eqref{Teukeq} with boundary conditions~\eqref{Teubc1} and~\eqref{Teubc2}. These calculations provide a concrete example showing \textit{explicitly} that \textit{vanishing energy flux} is a generic principle, which can be applied to both formalisms and leads to the same spectrum.
\subsubsection{Normal modes in the Regge--Wheeler--Zerilli formalism}
In a pure AdS spacetime with a global monopole ($M=0$), the metric function becomes
\begin{equation}
\Delta_r=r^2\left(\tilde{\eta}^2+\dfrac{r^2}{L^2}\right)\;,\nonumber
\end{equation}
then the radial equation~\eqref{RWZeq} can be solved, and one obtains
\begin{align}
&\Psi(r)=r^{\tilde{\ell}+1}\left(r^2+\tilde{L}^2\right)^{-\frac{\tilde{\omega}\tilde{L}}{2}}\Big[c_1F\Big(\frac{1+\tilde{\ell}-\tilde{\omega}\tilde{L}}{2},\Big.\Big.\nonumber \\ & \Big.\Big.\frac{2+\tilde{\ell}-\tilde{\omega}\tilde{L}}{2},\frac{3}{2}+\tilde{\ell};-\frac{r^2}{\tilde{L}^2}\Big)-c_2e^{-2i\pi\tilde{\ell}}\Big(\frac{\tilde{L}}{r}\Big)^{2\tilde{\ell}+1}\Big.\nonumber \\ & \Big.F\Big(-\frac{\tilde{\ell}+\tilde{\omega}\tilde{L}}{2},\frac{1-\tilde{\ell}-\tilde{\omega}\tilde{L}}{2},\frac{1}{2}-\tilde{\ell};-\frac{r^2}{\tilde{L}^2}\Big)\Big]\;.\label{AdSRWZeq}
\end{align}
Here $c_1$, $c_2$ are two integration constants with dimension of inverse length, $F(a,b,c,z)$ is the hypergeometric function, and
\begin{equation}
\tilde{\ell}=\dfrac{1}{2}\Big(-1+\dfrac{\sqrt{4\ell^2+4\ell+\tilde{\eta}^2}}{\tilde{\eta}}\Big)\;,\;
\tilde{L}=\tilde{\eta}L\;,\;\tilde{\omega}=\dfrac{\omega}{\tilde{\eta}^2}\;,\label{pararelation}
\end{equation}
where $\ell=1,2,3,\cdot\cdot\cdot$. By expanding Eq.~\eqref{AdSRWZeq} at large $r$, we get relations between $c_1$ and $c_2$, i.e.
\begin{equation}
\dfrac{c_2}{c_1}=e^{2i\pi\tilde{\ell}}\dfrac{\Gamma\left(\frac{3}{2}+\tilde{\ell}\right)\Gamma\left(\frac{1-\tilde{\ell}-\tilde{\omega}\tilde{L}}{2}\right)\Gamma\left(\frac{1-\tilde{\ell}+\tilde{\omega}\tilde{L}}{2}\right)}{\Gamma\left(\frac{1}{2}-\tilde{\ell}\right)\Gamma\left(\frac{2+\tilde{\ell}-\tilde{\omega}\tilde{L}}{2}\right)\Gamma\left(\frac{2+\tilde{\ell}+\tilde{\omega}\tilde{L}}{2}\right)}\;,\label{RWZrela1}
\end{equation}
which corresponds to the first boundary condition given by Eq.~\eqref{RWZbc2-1}, and
\begin{equation}
\dfrac{c_2}{c_1}=e^{2i\pi\tilde{\ell}}\frac{\Gamma\left(\frac{3}{2}+\tilde{\ell}\right)\Gamma\left(\frac{-\tilde{\ell}-\tilde{\omega}\tilde{L}}{2}\right)\Gamma\left(\frac{-\tilde{\ell}+\tilde{\omega}\tilde{L}}{2}\right)}{\Gamma\left(\frac{1}{2}-\tilde{\ell}\right)\Gamma\left(\frac{1+\tilde{\ell}-\tilde{\omega}\tilde{L}}{2}\right)\Gamma\left(\frac{1+\tilde{\ell}+\tilde{\omega}\tilde{L}}{2}\right)}\;,\label{RWZrela2}
\end{equation}
which corresponds to the second boundary condition given by Eq.~\eqref{RWZbc2-2}.
Then by expanding Eq.~\eqref{AdSRWZeq} at small $r$
\begin{equation}
\Psi(r)\sim\;c_1r^{1+\tilde{\ell}}\tilde{L}^{-\tilde{\omega}\tilde{L}}-c_2e^{-2i\pi\tilde{\ell}}r^{-\tilde{\ell}}\tilde{L}^{1+2\tilde{\ell}-\tilde{\omega}\tilde{L}}\;,
\end{equation}
we shall set $c_2=0$ to get a regular solution at the origin. This condition leads to two sets of normal modes
\begin{equation}
\Gamma\Big(\frac{2+\tilde{\ell}-\tilde{\omega}\tilde{L}}{2}\Big)=-N\;\Rightarrow\;\tilde{\omega}_{1, N} \tilde{L}=2N+\tilde{\ell}+2\;,\label{normal1}
\end{equation}
from Eq.~\eqref{RWZrela1}, and
\begin{equation}
\Gamma\Big(\frac{1+\tilde{\ell}-\tilde{\omega}\tilde{L}}{2}\Big)=-N\;\Rightarrow\;\tilde{\omega}_{2, N} \tilde{L}=2N+\tilde{\ell}+1\;,\label{normal2}
\end{equation}
from Eq.~\eqref{RWZrela2}, where $N=0,1,2,\cdots$. Since $\tilde{\ell}$ is no longer an integer, the above two sets of normal modes are \textit{different}. This is an interesting observation: for the case without a global monopole, the two sets of the Maxwell normal modes are isospectral up to one mode~\cite{Wang:2015goa}.
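The two towers \eqref{normal1} and \eqref{normal2} are trivial to evaluate. The following short script (an illustrative sketch) reproduces, for $8\pi\eta^2=0.1$, the $r_+=0$ entries of Table~\ref{EMmonopole} below:
\begin{verbatim}
import math

def ell_tilde(ell, eta_sq_8pi):
    # effective angular number; eta_sq_8pi denotes 8*pi*eta^2
    eta_t2 = 1.0 - eta_sq_8pi
    return 0.5 * (-1.0 + math.sqrt(4.0 * ell * (ell + 1) + eta_t2)
                  / math.sqrt(eta_t2))

def normal_modes(ell, eta_sq_8pi, N):
    # returns (omega_1, omega_2) in units of 1/tilde{L}
    lt = ell_tilde(ell, eta_sq_8pi)
    return 2 * N + lt + 2.0, 2 * N + lt + 1.0

print(normal_modes(1, 0.1, 0))   # (3.0723..., 2.0723...)
print(normal_modes(2, 0.1, 0))   # (4.1300..., 3.1300...)
\end{verbatim}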
\subsubsection{Normal modes in the Teukolsky formalism}
In this case the radial Teukolsky equation~\eqref{Teukeq} becomes
\begin{align}
&\Delta_rR_{-1}''(r)+\left(\dfrac{K_r^2+i K_r \Delta_r^\prime}{\Delta_r}-2iK_r^\prime
-\ell(\ell+1)\right)R_{-1}(r)\nonumber\\
&=0\;,\label{Teufareq1}
\end{align}
with
\begin{equation}
\Delta_r= r^2 \Big(\tilde{\eta}^2+\dfrac{r^2}{L^2}\Big)\;,\;\;\;\;\;\;K_r=\omega r^2 .\nonumber
\end{equation}
The general solution for Eq.~\eqref{Teufareq1} is
\begin{align}
&R_{-1}=r^{\tilde{\ell}+1}(r-i\tilde{L})^{\frac{\tilde{\omega} \tilde{L}}{2}}(r+i\tilde{L})^{-\tilde{\ell}-\frac{\tilde{\omega}\tilde{L}}{2}}\Big[c_3F\Big(\tilde{\ell},\tilde{\ell}+1\Big.\Big.\nonumber \\ & \Big.\Big.+\tilde{\omega}\tilde{L},2\tilde{\ell}+2;\dfrac{2r}{r+i\tilde{L}}\Big)+c_4(-2)^{-2\tilde{\ell}-1}\Big(1+\dfrac{i\tilde{L}}{r}\Big)^{2\tilde{\ell}+1}\Big.\nonumber \\ & \Big.F\Big(-\tilde{\ell}-1,-\tilde{\ell}+\tilde{\omega}\tilde{L},-2\tilde{\ell};\dfrac{2r}{r+i\tilde{L}}\Big)\Big]\;,\label{Teufarsol}
\end{align}
where $F(a,b,c;z)$ is again the hypergeometric function, $c_3$ and $c_4$ are two integration constants with dimension of inverse length. These two constants are related to each other by the boundary conditions through expanding Eq.~\eqref{Teufarsol} at large $r$:
\begin{itemize}
\item[$\bullet$] By imposing the first boundary condition given in Eq.~\eqref{Teubc1}, one gets a first relation between $c_3$ and $c_4$
\begin{equation}
\dfrac{c_4}{c_3}=(-2)^{1+2\tilde{\ell}}\dfrac{\tilde{\ell}}{1+\tilde{\ell}}\dfrac{F(1+\tilde{\ell},1+\tilde{\ell}+\tilde{\omega}\tilde{L},2+2\tilde{\ell};2)}{F(-\tilde{\ell},-\tilde{\ell}+\tilde{\omega} \tilde{L},-2\tilde{\ell};2)}\;.\label{Teuc1c2bc1}
\end{equation}
\item[$\bullet$] By imposing the second boundary condition given in Eq.~\eqref{Teubc2}, on the other hand, one gets a second relation between $c_3$ and $c_4$
\begin{equation}
\dfrac{c_4}{c_3}=(-2)^{1+2\tilde{\ell}}\dfrac{\tilde{\ell}}{\tilde{\ell}+1}\dfrac{\mathcal{A}_1}{\mathcal{A}_2}\;,\label{Teuc1c2bc2}
\end{equation}
where
\begin{align}
\mathcal{A}_1=&(1+\tilde{\ell}) F(\tilde{\ell},1+\tilde{\ell}+\tilde{\omega}\tilde{L},2+2\tilde{\ell};2)\nonumber\\&+\tilde{\omega}\tilde{L} F(1+\tilde{\ell},1+\tilde{\ell}+\tilde{\omega}\tilde{L},2+2\tilde{\ell};2)\;,\nonumber\\
\mathcal{A}_2=&\tilde{\ell} F(1-\tilde{\ell},-\tilde{\ell}+\tilde{\omega}\tilde{L},-2\tilde{\ell};2)\nonumber\\&-\tilde{\omega}\tilde{L} F(-\tilde{\ell},-\tilde{\ell}+\tilde{\omega}\tilde{L},-2\tilde{\ell};2)\;.\label{expA}
\end{align}
\end{itemize}
Then from the small $r$ behavior of Eq.~\eqref{Teufarsol}
\begin{equation}
R_{-1}\sim c_3e^{i\pi\tilde{\ell}}2^{1+2\tilde{\ell}}\tilde{L}^{-2\tilde{\ell}}r^{\tilde{\ell}+1}+c_4\dfrac{-i \tilde{L}}{r^{\tilde{\ell}}}\;,\label{farsolnear}
\end{equation}
we have to set $c_4=0$ in order to get a regular solution of $R_{-1}$ at the origin. This regularity condition picks the normal modes, from Eqs.~\eqref{Teuc1c2bc1} and~\eqref{Teuc1c2bc2}:
\begin{eqnarray}
&&F(1+\tilde{\ell},1+\tilde{\ell}+\tilde{\omega}\tilde{L},2+2\tilde{\ell};2)=0\;,\nonumber\\
&&\Rightarrow\;\;\tilde{\omega}_{1,N}\tilde{L}=2N+\tilde{\ell}+2\;,\label{Teuknormalmode1}\\
&&\mathcal{A}_1=0\;,\nonumber\\&&\Rightarrow\;\;\tilde{\omega}_{2,N}\tilde{L}=2N+\tilde{\ell}+1\;,\label{Teuknormalmode2}
\end{eqnarray}
where again $N=0,1,2,\cdots$, and the two sets of normal modes are \textit{different}. One may observe that the normal modes obtained in the Teukolsky formalism, given in Eqs.~\eqref{Teuknormalmode1} and~\eqref{Teuknormalmode2}, are exactly the same as their counterparts obtained in the Regge-Wheeler-Zerilli formalism, given in Eqs.~\eqref{normal1} and~\eqref{normal2}, which indicates the equivalence of the two formalisms and the universality of the vanishing energy flux boundary conditions.
\subsection{Quasinormal modes for small black holes}
In this subsection, we perform an analytic calculation of quasinormal frequencies for the Maxwell fields on a small Schwarzschild-AdS black hole with a global monopole, by using an asymptotic matching method. Note that for this case the analytic calculation is only applicable to the Teukolsky formalism.
\subsubsection{Near region}
In the near region, and with the small black hole approximation $r_+\ll \tilde{L}$, Eq.~\eqref{Teukeq} becomes
\begin{equation}
\left(\Delta_r\dfrac{d^2}{dr^2}+\dfrac{\tilde{\eta}^4r_{+}^2\bar{\omega}}{\Delta_{r}}-\ell(\ell+1)\right)R_{-1}(r)=0\;,\label{Teumatch1}
\end{equation}
with
\begin{equation}
\bar{\omega}=\Big(\tilde{\omega}r_++\dfrac{i}{2}\Big)^2+\frac{1}{4}\;,\;\;\;\Delta_r=\tilde{\eta}^2r(r-r_+)\;,\label{omegabar}
\end{equation}
where $\tilde{\omega}$ is defined in Eq.~\eqref{pararelation}. By defining a new dimensionless variable
\begin{equation}
z\equiv1-\dfrac{r_+}{r}\;,\nonumber
\end{equation}
it is convenient to transform Eq.~\eqref{Teumatch1} into
\begin{align}
&z(1-z)\dfrac{d^2R_{-1}}{dz^2}-2z\dfrac{dR_{-1}}{dz}+\Big(\dfrac{\bar{\omega}(1-z)}{z}-\dfrac{\tilde{\ell}(\tilde{\ell}+1)}{(1-z)}\Big)R_{-1}\nonumber\\
&=0\;,\label{Teumatch2}
\end{align}
where $R_{-1}\equiv R_{-1}(z)$, and $\tilde{\ell}$ is defined in Eq.~\eqref{pararelation}. The above equation can be solved in terms of the hypergeometric function
\begin{equation}
R_{-1}\sim z^{1-i\tilde{\omega}r_+} (1-z)^{\tilde{\ell}}F(\tilde{\ell}+1, \tilde{\ell}+2-2i\tilde{\omega}r_+, 2-2i\tilde{\omega}r_+; z)\;,\label{matchingsolnear}
\end{equation}
where an ingoing boundary condition has been imposed. In order to match with the far region solution, we further expand the near region solution, given in Eq.~\eqref{matchingsolnear}, at large $r$. To do so, we take the $z\rightarrow1$ limit and use the transformation properties of the hypergeometric function~\cite{abramowitz+stegun}, obtaining
\begin{equation}
R_{-1}\sim \Gamma(2-2i\tilde{\omega}r_+) \left[\dfrac{R^{\rm near}_{-1,1/r}}{r^{\tilde{\ell}}}+R^{\rm near}_{-1,r}r^{\tilde{\ell}+1}\right]\;,\label{Teuknearfar}
\end{equation}
where
\begin{align}
&R^{\rm near}_{-1,1/r}=\dfrac{\Gamma(-2\tilde{\ell}-1)r_+^{\tilde{\ell}}}{\Gamma(1-\tilde{\ell}-2i\tilde{\omega}r_+)\Gamma(-\tilde{\ell})}\;,\nonumber\\
&R^{\rm near}_{-1,r}=\dfrac{\Gamma(2\tilde{\ell}+1)r_+^{-\tilde{\ell}-1}}{\Gamma(\tilde{\ell}+1)\Gamma(\tilde{\ell}+2-2i\tilde{\omega}r_+)}\;.\label{TeuknearfarCoeff}
\end{align}
\subsubsection{Far region}
In the far region, the black hole effects may be neglected, and the solution is given by Eq.~\eqref{Teufarsol}. In order to match this solution with the near region solution, we shall expand Eq.~\eqref{Teufarsol} at small $r$, then obtain
\begin{equation}
R_{-1}\sim\dfrac{R^{\rm far}_{-1,1/r}}{r^{\tilde{\ell}}}+R^{\rm far}_{-1,r}r^{\tilde{\ell}+1}\;,\label{Teukfarnear}
\end{equation}
with
\begin{equation}
R^{\rm far}_{-1,1/r}\equiv -i\tilde{L}c_4\;,\;\;\;
R^{\rm far}_{-1,r}\equiv2^{1+2\tilde{\ell}}e^{i\pi\tilde{\ell}}\tilde{L}^{-2\tilde{\ell}}c_3\;,\label{TeukfarnearCoeff}
\end{equation}
where the constants $c_3$ and $c_4$ are related with each other by Eqs.~\eqref{Teuc1c2bc1} and~\eqref{Teuc1c2bc2}, corresponding to the first and second boundary conditions.
\subsubsection{Overlap region}
In the overlap region the solutions, obtained in the near region given by Eq.~\eqref{Teuknearfar} and in the far region given by Eq.~\eqref{Teukfarnear}, are the same up to a constant. Then one may impose the matching condition, $R^{\rm near}_{-1, r}R^{\rm far}_{-1, 1/r}=R^{\rm far}_{-1, r}R^{\rm near}_{-1, 1/r}$, which gives
\begin{align}
&\dfrac{\Gamma(-2\tilde{\ell}-1)}{\Gamma(-\tilde{\ell})}\dfrac{\Gamma(\tilde{\ell}+1)}{\Gamma(2\tilde{\ell}+1)}\dfrac{\Gamma(\tilde{\ell}+2-2i\tilde{\omega}r_+)}{\Gamma(-\tilde{\ell}+1-2i\tilde{\omega}r_+)}\left(\dfrac{r_+}{\tilde{L}}\right)^{2\tilde{\ell}+1}\nonumber\\
&=\left(\dfrac{-i}{2}\right)^{1+2\tilde{\ell}}\dfrac{c_4}{c_3}\;.\label{Teukrel}
\end{align}
By imposing the \textit{first} boundary condition and using the corresponding relation between $c_3$ and $c_4$ given by Eq.~\eqref{Teuc1c2bc1}, one obtains
\begin{align}
&\dfrac{\Gamma(-2\tilde{\ell}-1)}{\Gamma(-\tilde{\ell})}\dfrac{\Gamma(\tilde{\ell}+1)}{\Gamma(2\tilde{\ell}+1)}\dfrac{\Gamma(\tilde{\ell}+2-2i\tilde{\omega}r_+)}{\Gamma(-\tilde{\ell}+1-2i\tilde{\omega}r_+)}\left(\dfrac{r_+}{\tilde{L}}\right)^{2\tilde{\ell}+1}\nonumber\\
&=i^{1+2\tilde{\ell}}\dfrac{\tilde{\ell}}{1+\tilde{\ell}}\dfrac{F(1+\tilde{\ell},1+\tilde{\ell}+\tilde{\omega}\tilde{L},2+2\tilde{\ell};2)}{F(-\tilde{\ell},-\tilde{\ell}+\tilde{\omega} \tilde{L},-2\tilde{\ell};2)}\;,\label{Teukqnm1}
\end{align}
while by imposing the \textit{second} boundary condition and using the corresponding relation between $c_3$ and $c_4$ given by Eq.~\eqref{Teuc1c2bc2}, one obtains
\begin{align}
&\dfrac{\Gamma(-2\tilde{\ell}-1)}{\Gamma(-\tilde{\ell})}\dfrac{\Gamma(\tilde{\ell}+1)}{\Gamma(2\tilde{\ell}+1)}\dfrac{\Gamma(\tilde{\ell}+2-2i\tilde{\omega}r_+)}{\Gamma(-\tilde{\ell}+1-2i\tilde{\omega}r_+)}\left(\dfrac{r_+}{\tilde{L}}\right)^{2\tilde{\ell}+1}\nonumber\\
&=i^{1+2\tilde{\ell}}\dfrac{\tilde{\ell}}{1+\tilde{\ell}}\dfrac{\mathcal{A}_1}{\mathcal{A}_2}\;,\label{Teukqnm2}
\end{align}
where $\mathcal{A}_1$ and $\mathcal{A}_2$ are given by Eq.~\eqref{expA}.
For a small black hole ($r_+\ll \tilde{L}$), at leading order in $r_+/\tilde{L}$ the left-hand sides of Eqs.~\eqref{Teukqnm1} and~\eqref{Teukqnm2} vanish, and then we shall require the right-hand sides of both equations to vanish as well. These conditions lead to two sets of normal modes, given by Eqs.~\eqref{Teuknormalmode1} and~\eqref{Teuknormalmode2}. QNMs of small black holes may then be obtained perturbatively, on top of the normal modes, by solving Eqs.~\eqref{Teukqnm1} and~\eqref{Teukqnm2}. To this end, we expand the frequency
\begin{equation}
\tilde{\omega}_j\tilde{L}=\tilde{\omega}_{j, N}\tilde{L}+i\delta_j\;,\label{Teukexpan}
\end{equation}
where $j=1, 2$, and $\tilde{\omega}_{j, N}$ refer to the normal modes. Here $\delta_j$ is complex in general, and its real part, $\Re(\delta_j)$, reflects the damping rate of the perturbation. The general expression of $\delta_j$ is usually messy and lengthy, but it can be derived straightforwardly by substituting Eq.~\eqref{Teukexpan} into Eqs.~\eqref{Teukqnm1} and~\eqref{Teukqnm2}.
\section{Numeric results}
\label{secnum}
Beyond the regime where the asymptotic matching method is valid, one has to look for the black hole quasinormal spectrum by resorting to numerics. In this part, we utilize a numeric pseudospectral method, adapted from our previous works~\cite{Wang:2019qja,Wang:2021upj}, to solve the Maxwell equations given in the Regge-Wheeler-Zerilli formalism~\eqref{RWZeq} with the corresponding boundary conditions given by Eqs.~\eqref{RWZbc2-1} and~\eqref{RWZbc2-2}.~\footnote{Note that, as we have checked, the same spectrum may also be obtained by solving the Teukolsky equation~\eqref{Teukeq} with the corresponding boundary conditions given by Eqs.~\eqref{Teubc1} and~\eqref{Teubc2}.}
Before we introduce the pseudospectral method, a few comments are in order on the dimensionless form of Eq.~\eqref{RWZeq}, which is essential for numeric calculations. For the case considered in this paper, one may take either $L$ or $\tilde{L}$ (with the definition given in Eq.~\eqref{pararelation}) as the unit. For the former choice, Eq.~\eqref{RWZeq} may be written as
\begin{equation}
\left[\frac{\Delta_r}{r^2}\frac{d}{dr}\left(\frac{\Delta_r}{r^2}\frac{d}{dr}\right)+\omega^2L^2-\ell(\ell+1)\dfrac{\Delta_r}{r^4}\right]\Psi(r)=0\;,\label{RWZeq2}
\end{equation}
where $r$ is an abbreviation of $\tfrac{r}{L}$ so that it is dimensionless, and
\begin{equation}
\frac{\Delta_r}{r^2}=\frac{r-r_+}{r}\left(\tilde{\eta}^2+r^2+r_+r+r_+^2\right)\;,
\end{equation}
where $r_+$ is a dimensionless event horizon.
By taking the unit of $\tilde{L}$, Eq.~\eqref{RWZeq} becomes
\begin{equation}
\left[g(r)\frac{d}{dr}\left(g(r)\frac{d}{dr}\right)+\tilde{\omega}^2\tilde{L}^2-\tilde{\ell}(\tilde{\ell}+1)\dfrac{g(r)}{r^2}\right]\Psi(r)=0\;,\label{RWZeq3}
\end{equation}
with
\begin{equation}
g(r)=\frac{r-r_+}{r}\left(1+r^2+r_+r+r_+^2\right)\;,\label{eqg}
\end{equation}
where $r$ is an abbreviation of dimensionless radial coordinate $\tfrac{r}{\tilde{L}}$, $\tilde{\omega}$, $\tilde{L}$ and $\tilde{\ell}$ are given in Eq.~\eqref{pararelation}.
Noticing that $g(r)$ has the same form as the metric function of Schwarzschild-AdS, Eq.~\eqref{RWZeq3} is exactly the Maxwell equation on Schwarzschild-AdS with $\ell$ replaced by $\tilde{\ell}$. This is also the Maxwell equation one obtains by starting from the metric given by Eq.~\eqref{metrict}. Therefore, in our numeric calculations we take the unit of $\tilde{L}$, set $\tilde{L}=1$, and calculate the frequencies $\tilde{\omega}$. As we have checked, in the unit of $\tilde{L}$ the quasinormal frequencies behave uniformly for various values of $r_+$, $\ell$ and $N$, which is consistent with the physical picture that the global monopole produces a repulsive force.
In order to employ a pseudospectral method conveniently, we first transform Eq.~\eqref{RWZeq}, which is a quadratic eigenvalue problem, into a linear eigenvalue problem, by
\begin{equation}
\Psi=e^{-i\omega r_\ast}\phi\;,\label{spectraltrans}
\end{equation}
where the tortoise coordinate $r_\ast$ is still defined in Eq.~\eqref{tortoisecoor}. Then changing the coordinate from $r$ to $z$ through
\begin{equation}
z=1-\dfrac{2r_+}{r}\;,\label{rtoz}
\end{equation}
which brings the integration domain from $r\in[r_+,\infty]$ to $z\in[-1,+1]$, and discretizing the $z$ coordinate according to the Chebyshev points
\begin{equation}
z_j=\cos\left(\dfrac{j\pi}{n}\right)\;,\;\;\;\;\;\;j=0,1,...,n\;,\label{spectralpoints}
\end{equation}
where $n$ denotes the number of grid points, Eq.~\eqref{RWZeq} turns into an algebraic equation
\begin{equation}
(M_0+\tilde{\omega} M_1)\phi(z)=0\;.\label{spectraleq2}
\end{equation}
Here $M_0$ and $M_1$ are matrices, which may be constructed straightforwardly by discretizing Eq.~\eqref{RWZeq} in terms of the Chebyshev points and Chebyshev differential matrices~\cite{trefethen2000spectral}.
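For concreteness, the standard construction of the Chebyshev points and of the first-order differentiation matrix (following program cheb.m of Ref.~\cite{trefethen2000spectral}) is sketched below; assembling $M_0$ and $M_1$ from it and from $g(z)$ is then a matter of bookkeeping:
\begin{verbatim}
import numpy as np

def cheb(n):
    # Chebyshev points and differentiation matrix on [-1, 1]
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))   # set diagonal so each row sums to zero
    return D, x

# Once M0 and M1 are assembled and the boundary conditions are imposed
# on the corresponding matrix rows, the spectrum follows from a
# generalized eigenvalue solve, e.g. scipy.linalg.eig(M0, -M1).
\end{verbatim}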
Boundary conditions associated with $\phi(z)$ may be derived from the transformation given by Eq.~\eqref{spectraltrans}. At the horizon, since an ingoing wave boundary condition is satisfied automatically, we simply impose a regular boundary condition for $\phi(z)$. At infinity, from Eqs.~\eqref{spectraltrans} and~\eqref{RWZasysol}, one obtains
\begin{equation}
\phi(z)=0\;,\label{spectralbc1}
\end{equation}
corresponding to the condition given in Eq.~\eqref{RWZbc2-1}, and
\begin{equation}
\dfrac{\phi^\prime(z)}{\phi(z)}=\dfrac{i\tilde{\omega}}{2r_+}\;,\label{spectralbc2}
\end{equation}
corresponding to the condition given in Eq.~\eqref{RWZbc2-2}.
\bigskip
One should note that we use $\tilde{\omega}_1$ ($\tilde{\omega}_2$) to represent the quasinormal frequency corresponding to the first (second) boundary condition. A few selected data are presented below to demonstrate, in particular, the impact of the global monopole on the spectrum. Also note that in our numeric calculations we focus on black holes with size $r_+\le1$, since in this regime the monopole effects are more relevant.~\footnote{Moreover, for large AdS black holes, the Maxwell spectrum may bifurcate, which has been explored in detail in our previous paper~\cite{Wang:2021upj}. }
\begin{table}
\caption{\label{EMmonopole} Quasinormal frequencies of the Maxwell fields on Schwarzschild-AdS black holes with a global monopole, for $8\pi\eta^2=0.1$, $N=0$, different black hole sizes $r_+$, and the two boundary conditions.}
\begin{ruledtabular}
\begin{tabular}{ l l l }
$r_+$ & $\tilde{\omega}_1 (\ell=1)$ & $\tilde{\omega}_2 (\ell=2)$ \\
\hline
0 & 3.0723 & 3.1300\\
0.2 & 2.5872 - 4.4684$\times 10^{-2}$ i & 2.9154 - 5.8491$\times 10^{-5}$ i\\
0.4 & 2.2876 - 0.3951 i & 2.8200 - 2.1218$\times 10^{-2}$ i\\
0.6 & 2.1772 - 0.7998 i & 2.7286 - 0.1226 i\\
0.8 & 2.1392 - 1.1934 i & 2.6826 - 0.2483 i\\
1 & 2.1292 - 1.5823 i & 2.6573 - 0.3742 i \\
\end{tabular}
\end{ruledtabular}
\end{table}
In Fig.~\ref{Fig_comp}, we compare the analytic calculations with numeric data, by taking the angular momentum quantum number $\ell=1$, the overtone number $N=0$ and the monopole parameter $8\pi\eta^2=0.05$, and find a good agreement for small black holes.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\hspace{-4mm}\includegraphics[clip=true,width=0.386\textwidth]{MaxwellGlobalAdSetab095compdiffunit.pdf}
\end{tabular}
\end{center}
\caption{\label{Fig_comp} Comparison of the imaginary part of quasinormal frequencies for the two sets of fundamental modes with $\ell=1$ and $8\pi\eta^2=0.05$, between the analytic matching approximation (dashed lines) and the numerical data (solid lines). Note that we use double logarithmic coordinates in this figure.}
\end{figure}
A few numeric data are tabulated in Table~\ref{EMmonopole}. As one may observe, by taking $8\pi\eta^2=0.1$ and $N=0$, the real part of the Maxwell QNMs decreases while the magnitude of the imaginary part increases as the black hole size $r_+$ increases, similarly to the Schwarzschild-AdS case. In particular, the isospectrality between the modes for $\ell=1$ with the first boundary condition and those for $\ell=2$ with the second boundary condition is broken, due to the presence of the global monopole.
The effect of the angular momentum quantum number $\ell$ on the Maxwell quasinormal spectrum is presented in Fig.~\ref{Fig_elleffects}, for a black hole with size $r_+=1$, the global monopole $8\pi\eta^2=0.1$ and the overtone number $N=0$. We observe, similarly to the Schwarzschild-AdS case (i.e. $8\pi\eta^2=0$) reported in~\cite{Wang:2015goa}, that for both boundary conditions the real part of the Maxwell quasinormal frequencies increases while the magnitude of the imaginary part decreases as $\ell$ increases.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\hspace{-4mm}\includegraphics[clip=true,width=0.356\textwidth]{MaxellEffectsN0rp1etab09.pdf}
\end{tabular}
\end{center}
\caption{\label{Fig_elleffects} The impact of the angular momentum quantum number $\ell$ on the real and imaginary parts of quasinormal modes for the first (red) and second (blue) boundary conditions.}
\end{figure}
As the main goal of this paper, we explore the impact of the global monopole on the Maxwell spectrum in Fig.~\ref{Fig_monopole}. As an illustrative example, here we take $r_+=0.5$, $\ell=1$, $N=0$, and we observe that, for both boundary conditions, the real part (the magnitude of the imaginary part) of the Maxwell QNMs increases (decreases) as the global monopole parameter $8\pi\eta^2$ increases. As we have checked for various values of $r_+$, $\ell$ and $N$, the above-mentioned behaviors persist. This may be understood as follows. Eq.~\eqref{RWZeq3} shows clearly that the monopole parameter only appears through $\tilde{\ell}$, which plays the same role as $\ell$. From Fig.~\ref{Fig_ellt}, we observe that $\tilde{\ell}$, for fixed $\ell$, increases as the monopole parameter $8\pi\eta^2$ increases, indicating that the global monopole produces a repulsive force. This implies that, for a larger monopole parameter, the perturbation fields (the Maxwell fields here) live longer around black holes, i.e. decay more slowly, exactly as shown in Fig.~\ref{Fig_monopole}.
\begin{figure*}
\begin{center}
\begin{tabular}{c}
\hspace{-4mm}\includegraphics[clip=true,width=0.363\textwidth]{MaxetabEffectsrp05ell1N0bc1Rediffunit.pdf}\hspace{12mm}\includegraphics[clip=true,width=0.363\textwidth]{MaxetabEffectsrp05ell1N0bc2Rediffunit.pdf}
\vspace{6mm}
\\
\hspace{-2mm}\includegraphics[clip=true,width=0.363\textwidth]{MaxetabEffectsrp05ell1N0bc1Imdiffunit.pdf}\hspace{12.5mm}\includegraphics[clip=true,width=0.363\textwidth]{MaxetabEffectsrp05ell1N0bc2Imdiffunit.pdf}
\end{tabular}
\end{center}
\caption{\label{Fig_monopole} The monopole effects on the real (top) and imaginary (bottom) parts of the quasinormal spectrum with the first (left) and second (right) boundary conditions, taking $r_+=0.5$, $\ell=1$ and $N=0$ as an illustrative example. Similar behaviors are also observed for other values of $r_+$, $\ell$ and $N$.}
\end{figure*}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\hspace{-4mm}\includegraphics[clip=true,width=0.36\textwidth]{elltVaryeta.pdf}
\end{tabular}
\end{center}
\caption{\label{Fig_ellt} Variation of $\tilde{\ell}$ with respect to the monopole parameter $8\pi\eta^2$, by taking $\ell=1$ as an example. It shows clearly that $\tilde{\ell}$ increases as the monopole parameter $8\pi\eta^2$ increases.}
\end{figure}
We have also studied the dependence of the Maxwell quasinormal frequencies on the overtone number $N$ in Fig.~\ref{Fig_Neffects}. For this case, we take $r_+=0.5$ and $\ell=1$. It is shown, in the left and middle panels, that for both boundary conditions the real part and the magnitude of the imaginary part of the Maxwell frequencies increase as $N$ increases, and the excited modes for both branches are approximately evenly spaced in $N$. In the right panel, we display the imaginary part in terms of the real part of the Maxwell QNMs. Interestingly, the two branches of QNMs (for excited states) lie on the same line for different $N$. This phenomenon has also been observed for the Dirac case~\cite{Wang:2019qja}, and indicates that, although the two branches of QNMs are different, they are similar in the sense that the excited modes of one branch may be interpolated from the other branch.
\begin{figure*}
\begin{center}
\begin{tabular}{c}
\hspace{-4mm}\includegraphics[clip=true,width=0.3\textwidth]{MaxNEffectsell1rp05etab09Re.pdf}\;\;\;\hspace{2mm}\includegraphics[clip=true,width=0.3\textwidth]{MaxNEffectsell1rp05etab09Im.pdf}\;\;\;\hspace{2mm}
\includegraphics[clip=true,width=0.3\textwidth]{MaxNEffectsell1rp05etab09ReIm.pdf}
\end{tabular}
\end{center}
\caption{\label{Fig_Neffects} The impact of the overtone number $N$ on the real (left) and imaginary (middle) parts of quasinormal modes for the first (red) and second (blue) boundary conditions, with fixed $r_+=0.5$, $8\pi\eta^2=0.1$ and $\ell=1$. We also present the imaginary part in terms of the real part of QNMs in the right panel.}
\end{figure*}
\section{Discussion and Final Remarks}
\label{discussion}
In this paper we have studied the Maxwell quasinormal spectrum on a Schwarzschild-AdS black hole with a global monopole, by imposing a generic Robin type boundary condition. To this end, we first presented the Maxwell equations both in the Regge-Wheeler-Zerilli and in the Teukolsky formalisms and derived the explicit boundary conditions for the Regge-Wheeler-Zerilli and the Teukolsky variables, based on the vanishing energy flux principle. Then the Maxwell equations were solved in each formalism, both analytically and numerically.
In a pure AdS space with a global monopole, we have solved the Maxwell equations analytically in the aforementioned two formalisms. We found that the two boundary conditions in each formalism lead to two \textit{different} sets of normal modes, due to the presence of the global monopole. This is very different from the Schwarzschild-AdS case, where the normal modes obtained from the two boundary conditions are the same up to one mode. In the small black hole and low frequency approximations, we also solved the Maxwell equations in the Teukolsky formalism by using an analytic matching method, and we verified that the analytic calculations agree well with the numeric data.
We then varied the black hole size $r_+$, the angular momentum quantum number $\ell$, and the overtone number $N$, in the presence of a global monopole, and analyzed their effects on the two sets of the Maxwell quasinormal spectrum in the numeric calculations. We observed that the impact of $r_+$, $\ell$ and $N$ on the Maxwell QNMs is very similar to that in the Schwarzschild-AdS case. In particular, we explored the monopole effects on the Maxwell spectrum, and we found that, for both boundary conditions, the real part of the Maxwell spectrum increases while the magnitude of the imaginary part decreases as the monopole parameter $8\pi\eta^2$ increases. These trends are direct consequences of the fact that the global monopole produces a repulsive force.
Finally, we would like to stress that the above-mentioned QNM behaviors were obtained in the unit of $\tilde{L}$. One may alternatively use the unit of $L$, and, as we have checked, in that case the monopole effects on the Maxwell spectrum are more involved. In the former choice, the Maxwell equations on Schwarzschild-AdS black holes with a global monopole may be reformulated as the Maxwell equations without a global monopole but with the modified angular momentum quantum number $\tilde{\ell}$, so that the repulsive nature of the global monopole becomes more transparent.
\bigskip
\noindent{\bf{\em Acknowledgements.}}
This work is supported by the National Natural Science Foundation of China under Grant Nos. 11705054, 11881240252, 11775076, 11875025, 12035005, and by the Hunan Provincial Natural Science Foundation of China under Grant Nos. 2018JJ3326 and 2016JJ1012.
\bibliographystyle{h-physrev4}
\section{Introduction} \label{sec:intro}
Supervised learning can achieve good performance given considerable amounts of labelled data for training. One essential factor accounting for the recent successes in deep learning and image classification is the ImageNet database which contains more than 14 million hand-annotated images \cite{deng2009imagenet}. However, there exist many tasks in real-world applications where sufficient labelled data are not available, hence the performance of traditional supervised learning approaches can degrade significantly. One promising technique alleviating this problem is transfer learning which aims to transfer knowledge learned from the source domain to the target domain in which labelled data are sparse and expensive to collect \citep{weiss2016survey}. In many scenarios, domain adaptation is required since the data distributions in the source and target domains can be different and the models trained with source domain data are not directly applicable to the target domain \citep{patel2015visual}.
Since domain adaptation is a promising solution to the training data sparsity issue in many real-world applications, it has been studied in a variety of research tasks including image classification \citep{wang2019unifying}, semantic segmentation \citep{zhao2019multi}, depth estimation \citep{atapour2018real}, speech emotion recognition \citep{zhou2019transferable}, text classification \citep{zhou2019multi} and many others.
Domain adaptation approaches aim to model the domain shift between source and target domains and reduce the discrepancy by aligning the data distributions \citep{wang2019unifying,wang2020unsupervised}. In the scope of classification problems, this usually boils down to aligning the marginal and class conditional distributions across domains \citep{wang2018visual,chen2018joint}. However, most existing works are based on the assumption of homogeneity, i.e., the source and target data are represented in the same feature space with unaligned distributions \citep{zhao2019multi,wang2019unifying,zhang2019domain,wang2020unsupervised}. These approaches may not be applicable in situations where the source and target domains are \textit{heterogeneous} in terms of data modalities (e.g., texts vs images) or representations (e.g., features extracted with different methods).
Attempts have been made to extend the success of domain adaptation approaches to HDA problems; however, this is non-trivial for common subspace learning methods due to the heterogeneous feature spaces of the source and target domains. One common solution is to learn two domain-specific projections instead of one unified projection for the source and target domains \cite{wang2011heterogeneous,li2018heterogeneous}. Nevertheless, there are at least two limitations in these existing methods. One is that most of them use Maximum Mean Discrepancy (MMD) as the objective to learn the projection matrices; MMD-based objectives have been outperformed by more recent ones based on locality preserving projection \cite{wang2020unsupervised,li2019locality} in homogeneous domain adaptation, and in HDA problems locality preserving objectives have not been well explored despite some attempts \cite{wang2011heterogeneous,li2018heterogeneous}. In this paper, we present a succinct yet effective algorithm by extending locality preserving objectives to heterogeneous domain adaptation. The other limitation of existing HDA approaches is that the way they exploit the unlabelled target-domain data is sub-optimal. In our work, we propose a novel selective pseudo-labelling strategy to take advantage of the unlabelled target-domain data. The selection is based on the classification confidence and applies to a variety of classification models (e.g., Nearest Neighbour, SVM and Neural Networks).
Specifically, we address the heterogeneous domain adaptation problem where the source and target data are represented in heterogeneous feature spaces. Following the same spirit as previous domain adaptation approaches \citep{wang2018visual,wang2019unifying,wang2020unsupervised}, we try to learn a common latent subspace into which both source and target data can be projected and well aligned. Concretely, we learn domain-specific projections using a novel Cross-Domain Structure Preserving Projection (CDSPP) algorithm, an extension of the classic Locality Preserving Projection (LPP) algorithm \citep{he2004locality}. CDSPP preserves class consistency while learning domain-specific projections which map heterogeneous data representations into a common subspace for recognition. CDSPP is simple yet effective in solving the heterogeneous domain adaptation problem, as empirically validated by our experimental results on several benchmark datasets. To take advantage of the unlabelled target-domain data in the semi-supervised HDA setting, a selective pseudo-labelling strategy is employed to progressively optimise the projections and the target label predictions. The contributions of this work can be summarised as follows:
\begin{itemize}
\item[-] A novel Cross-Domain Structure Preserving Projection algorithm is proposed for heterogeneous domain adaptation; the algorithm admits a concise solution via a generalized eigenvalue problem;
\item[-] The proposed CDSPP algorithm naturally suits supervised HDA and we extend it to solve semi-supervised HDA problems by employing an iterative pseudo-labelling approach;
\item[-] We validate the effectiveness of the proposed method on several benchmark datasets including the newly introduced Office-Home, which contains many more classes than those previously used; the experimental results provide evidence that our algorithm outperforms prior art.
\end{itemize}
\section{Related Work} \label{sec:related}
Most existing research in domain adaptation for classification is based on the assumption of homogeneity \cite{wang2020unsupervised,li2019locality,li2020maximum}. The approaches are dedicated to either learning a domain-invariant feature extraction model (e.g., a deep CNN \citep{chen2019progressive,zhang2019domain}) or learning a unified feature projection matrix \citep{wang2018visual,wang2019unifying,wang2020unsupervised} for all domains, whilst neither applies to HDA. In this section, we briefly review related works on heterogeneous domain adaptation.
The existing approaches to HDA can be roughly categorized into \textit{cross-domain mapping} and \textit{common subspace learning}.
\subsection{Cross-Domain Mapping}
Cross-domain mapping approaches learn a projection from the source to the target domain. The projection can be learned for either \textit{feature transformation} \citep{hubert2016learning,shen2018unsupervised} or \textit{model parameter transformation} (e.g., SVM weights \citep{zhou2019multi,mozafari2016svm}). Feature transformation approaches learn a projection to map the source data into the target domain by aligning the data distribution \citep{hubert2016learning} or the second-order moment \citep{shen2018unsupervised}. As a result, the transformed source data can help to learn a classifier for the target domain. To avoid mapping a lower-dimensional feature to a higher-dimensional space, PCA is usually employed to learn subspaces for both domains respectively \citep{hubert2016learning} as a pre-processing step, which can suffer from information loss.
Model parameter transformation approaches focus mainly on SVM classifier weights. For a multi-class classification problem, one-vs-all classifiers are learned for the source and target domains using the respective labelled samples. Subsequently, the cross-domain mapping is learned from the paired class-level weight vectors \citep{zhou2019multi,mozafari2016svm}. Since the number of classes is far less than the number of samples, these approaches are more computationally efficient but rely too heavily on the learned classifiers and overlook the abundant information underlying the data distribution.
\subsection{Common Subspace Learning}
Common subspace learning is a more popular strategy for HDA. It learns domain-specific projections which map source and target domain data into a common subspace.
To this end, different approaches have been proposed with varying algorithms, e.g., Manifold Alignment \citep{wang2011heterogeneous,li2018transfer,fang2018discriminative,wu2021heterogeneous}, Canonical Correlation Analysis \citep{yan2017learning}, Coding Space Learning \citep{li2017locality,li2018heterogeneous,deng2019multiclass}, Deep Matrix Completion \citep{li2019heterogeneous} and Deep Neural Networks \citep{zhou2019deep,yao2019heterogeneous}.
Despite the diversity of implementation, the main objective of common subspace learning based HDA is similar, i.e., the alignment of the source and target domains.
To align the distributions, \citep{hubert2016learning,li2017locality,li2018heterogeneous,li2018transfer,li2019heterogeneous} chose to minimize the Maximum Mean Discrepancy (MMD) in their objectives which, however, can only align the means of domains (for marginal distributions) and the means of classes (for conditional distributions). As a result, the subspace learned via minimizing the MMD is not sufficiently discriminative. One alternative to MMD is the manifold learning using graph Laplacian \citep{wang2011heterogeneous,li2018heterogeneous,li2018transfer}.
Li et al. \citep{li2013learning} proposed a Heterogeneous Feature Augmentation (HFA) method and its semi-supervised version SHFA by learning domain-specific projections and a classifier (i.e. SVM) simultaneously. However, the computational complexity is $\mathcal{O}(n^3)$, where $n$ is the number of labelled samples, which makes it extremely slow when $n$ is large.
Li et al. \citep{li2017locality} learned new feature representations for source and target data by encoding them with a shared codebook, which requires that the original features have the same dimensionality for the source and target domains. PCA was employed for this purpose as a pre-processing step but can suffer from information loss. Later, the authors incorporated the learning of two domain-specific projections (in place of PCA) into the coding framework \citep{li2018heterogeneous}. This work is similar to ours in the sense of enforcing local consistency via graph regularization; however, it fails to align cross-domain class consistency due to the use of $k$-nearest neighbours to construct the similarity graph. In our work, the similarity graph is constructed based on class consistency, hence promoting cross-domain conditional distribution alignment.
Transfer Independently Together (TIT) was proposed in \citep{li2018transfer}. It also learns domain-specific projections to align data distributions in the learned common subspace. The algorithm relies on a collection of tricks including kernel spaces, MMD, sample reweighting and landmark selection. In contrast, our solution is concise, with one simple objective of cross-domain structure preserving. Recently, Huang et al. \citep{huang2020heterogeneous} proposed a novel algorithm named heterogeneous discriminative features learning and label propagation (HDL). This algorithm is similar to ours in that both tend to preserve structure information in the learned common subspace; however, different objectives are formulated. Our algorithm explicitly promotes the intra-class similarity of both within-domain and cross-domain samples, whilst HDL fails to consider the intra-class similarity of samples from the same domain in its formulation. In addition, different strategies for exploiting unlabelled target samples are employed in the two algorithms.
In summary, although manifold learning has been well studied in HDA, the existing formulations for domain-specific projection learning are either inefficient or ineffective. Our approach solves this issue and addresses the HDA problem with a novel CDSPP algorithm.
\section{Method} \label{sec:method}
To facilitate our presentation, we firstly describe the heterogeneous domain adaptation problem and notations used throughout this paper.
Given a labelled dataset $\mathcal{D}^s = \{(\bm{x}^s_i,y^s_i)\}, i = 1,2,...,n_s$ from the source domain $\mathcal{S}$, and a labelled dataset $\mathcal{D}^t = \{(\bm{x}^t_i,y^t_i)\}, i = 1,2,...,n_t$ from the target domain, $\bm{x}^s_i \in \mathbb{R}^{d_s}$ and $\bm{x}^{t}_i \in \mathbb{R}^{d_t}$ represent the feature vectors of the $i$-th labelled samples in the source and target domains respectively; $d_s$ and $d_t$ are the dimensionalities of the source and target features; $y^s_i \in \mathcal{Y}$ and $y^t_i\in \mathcal{Y}$ denote the corresponding sample labels; $n_s$ and $n_t$ are the numbers of source and labelled target samples respectively. Let $\bm{X}^s \in \mathbb{R}^{d_s\times n_s}$ and $\bm{X}^t \in \mathbb{R}^{d_t\times n_t}$ be the feature matrices of the labelled source and target data collectively. Supervised HDA aims to learn a model from the labelled source and target data which can be used to classify samples from an unlabelled dataset $\mathcal{D}^u = \{\bm{x}^u_i\}, i = 1,2,...,n_u$ from the target domain, whose feature vectors are collectively denoted as $\bm{X^u} \in \mathbb{R}^{d_t\times n_u}$.
The number of labelled target samples $n_t$ is usually very small, hence it is difficult to capture the data distribution in the target domain. Semi-supervised HDA takes advantage of the unlabelled target samples $\bm{X^u}$ during model training and can usually achieve better performance.
In this section, we describe the CDSPP algorithm, which naturally suits supervised heterogeneous domain adaptation but can also address the semi-supervised problem when incorporated into an iterative learning framework \citep{wang2019unifying,wang2020unsupervised}, as shown in Figure \ref{fig:framework}.
\begin{figure}
\centering
{\includegraphics[width=\textwidth]{framework.pdf}}
{\caption{An illustration of the heterogeneous domain adaptation problem and our proposed approach using cross-domain structure preserving projection. Left: the HDA problem aims at recognizing unlabelled target-domain samples with access to labelled source-domain samples and limited labelled target-domain samples. Right: the red and blue colours represent the feature vectors of samples in the target and source domains respectively; markers of different shapes represent samples from different classes; dashed markers represent unlabelled samples; our proposed CDSPP iteratively learns a common subspace in which the unlabelled target-domain samples are pseudo-labelled and selectively added to the training set to promote the subspace learning in the next iteration.}
\label{fig:framework}}
\end{figure}
\subsection{Locality Preserving Projection} \label{sec:lpp}
To make the paper self-contained, we briefly describe the original LPP algorithm \citep{he2004locality} before introducing our proposed CDSPP in the next subsection. Locality Preserving Projection (LPP) was proposed by \citet{he2004locality} to learn a favourable subspace where the local structures of data in the original feature space can be well preserved. Suppose $\bm{x}_i \in \mathbb{R}^{d_0}$ and $\bm{x}_j\in \mathbb{R}^{d_0}$ are two data points in the original feature space. LPP aims at learning a projection matrix $\bm{P} \in \mathbb{R}^{d_0\times d}$ ($d \ll d_0$) so that data points close to each other in the original space remain close in the projected subspace. The objective of LPP can be formulated as:
\begin{equation}
\label{eq:lpp}
\min_{\bm{P}} \sum_{i,j} ||\bm{P}^T \bm{x}_i - \bm{P}^T \bm{x}_j||_2^2 \bm{W}_{ij},
\end{equation}
where $\bm{W}$ is the adjacency matrix of the graph constructed from all the data points. According to \cite{he2004locality}, the edges of the graph can be created by either $\epsilon$-neighbourhoods or $k$-nearest neighbours. The edge weights can be determined by the heat kernel $W_{ij} = e^{-\frac{||\bm{x}_i-\bm{x}_j||^2}{t}}$ or a simple binary assignment (i.e. all edges have a weight of 1).
Note that LPP is an unsupervised learning method without the need for labelling information. In the following subsection, we will describe how to extend the LPP algorithm to solve the HDA problems where there exist two heterogeneous domains and a mixture of labelled and unlabelled data.
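To make the mechanics of LPP concrete, the following is a minimal numerical sketch (our own illustrative Python/numpy implementation, not code from \citep{he2004locality}; the variable names and the small regulariser are our assumptions). Minimizing Eq.(\ref{eq:lpp}) leads to the generalized eigenvalue problem $\bm{X}\bm{L}\bm{X}^T\bm{p} = \lambda \bm{X}\bm{D}\bm{X}^T\bm{p}$ with $\bm{L}=\bm{D}-\bm{W}$, solved for the smallest eigenvalues \citep{he2004locality}:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, d=2, k=5, t=1.0):
    # X: (d0, n) data matrix, one sample per column
    n = X.shape[1]
    dist2 = cdist(X.T, X.T, 'sqeuclidean')
    W = np.zeros((n, n))
    for i in range(n):                       # k-NN graph with heat-kernel weights
        idx = np.argsort(dist2[i])[1:k + 1]  # skip the point itself
        W[i, idx] = np.exp(-dist2[i, idx] / t)
    W = np.maximum(W, W.T)                   # symmetrise the adjacency matrix
    D = np.diag(W.sum(axis=1))
    L = D - W
    # minimise tr(P^T X L X^T P) subject to P^T X D X^T P = I
    vals, vecs = eigh(X @ L @ X.T, X @ D @ X.T + 1e-6 * np.eye(X.shape[0]))
    return vecs[:, :d]                       # eigenvectors of the smallest eigenvalues
\end{verbatim}
The projected data are then obtained as $\bm{Z} = \bm{P}^T\bm{X}$.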
\subsection{Cross-Domain Structure Preserving Projection} \label{sec:CDSPP}
The supervised version of LPP \citep{wang2017zero} was shown to learn a subspace with better separability than other dimensionality reduction algorithms such as Linear Discriminant Analysis (LDA) \citep{wang2019unifying}. One limitation of LPP is that it can only learn the subspace from samples represented in a homogeneous feature space. To address this problem, we extend the traditional LPP so that its favourable characteristics can benefit cross-domain common subspace learning. Specifically, we aim to learn a projection matrix $\bm{P}_s \in \mathbb{R}^{d_s \times d}$ for the source domain and a projection matrix $\bm{P}_t \in \mathbb{R}^{d_t \times d}$ for the target domain to project the samples from the source and target domains into a common subspace of dimensionality $d$. We expect the projections of samples to be close to one another if they are from the same class, regardless of which domain they come from. To this end, we formulate the following objective:
\begin{equation}
\label{eq:cost}
\begin{array}{ll}
\displaystyle \min_{\bm{P}_s,\bm{P}_t} &(\sum_{i,j}^{n_s} || \bm{P}_s^T \bm{x}_i^s - \bm{P_s}^T \bm{x}_j^s||_2^2 \bm{W}_{ij}^s \\
\displaystyle & +\sum_i^{n_s} \sum_j^{n_t} || \bm{P}_s^T \bm{x}_i^s - \bm{P}_t^T \bm{x}_j^t||_2^2 \bm{W}_{ij}^c \\
\displaystyle & +\sum_{i,j}^{n_t} || \bm{P}_t^T \bm{x}_i^t - \bm{P}_t^T \bm{x}_j^t||_2^2 \bm{W}_{ij}^t)
\end{array}
\end{equation}
where $\bm{P}^T$ is the transpose of $\bm{P}$; $\bm{W}^s \in \mathbb{R}^{n_s \times n_s}$ is the similarity matrix of the source samples with $\bm{W}^s_{ij} = 1$ if $y^s_i = y^s_j$ and 0 otherwise. Similarly, $\bm{W}^t \in \mathbb{R}^{n_t \times n_t}$ is the similarity matrix of the \textit{labelled} target samples with $\bm{W}^t_{ij} = 1$ if $y^t_i = y^t_j$ and 0 otherwise. $\bm{W}^c \in \mathbb{R}^{n_s \times n_t}$ is the cross-domain similarity matrix with $\bm{W}^c_{ij} = 1$ if $y^s_i = y^t_j$ and 0 otherwise. It is noteworthy that all the feature vectors are $l2$-normalised to remove the effect of different magnitudes across features. This pre-processing has proved useful for common subspace learning in \cite{wang2017zero,wang2019unifying,wang2020unsupervised}.
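Since the three similarity matrices are fully determined by the label vectors, their construction is straightforward; a minimal sketch (a hypothetical helper with our own naming) is:
\begin{verbatim}
import numpy as np

def label_similarity(ys, yt):
    # W^s, W^t and W^c as defined above: an entry is 1 iff the two labels agree
    ys, yt = np.asarray(ys), np.asarray(yt)
    Ws = (ys[:, None] == ys[None, :]).astype(float)  # (n_s, n_s)
    Wt = (yt[:, None] == yt[None, :]).astype(float)  # (n_t, n_t)
    Wc = (ys[:, None] == yt[None, :]).astype(float)  # (n_s, n_t)
    return Ws, Wt, Wc
\end{verbatim}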
\begin{proposition}
The objective in Eq.(\ref{eq:cost}) can be reformulated as follows:
\begin{equation}
\label{eq:costDiv}
\max_{\bm{P}_s,\bm{P}_t} \frac{tr({\bm{X}^s}^T \bm{P}_s \bm{P}_t^T \bm{X}^t {\bm{W}^c}^T)}{tr({\bm{X}^s}^T \bm{P}_s \bm{P}_s^T \bm{X}^s \bm{L}^s) + tr({\bm{X}^t}^T \bm{P}_t \bm{P}_t^T \bm{X}^t \bm{L}^t)}
\end{equation}
where $\bm{L}^s = \bm{D}^s - \bm{W}^s + \frac{1}{2}\bm{D}^{cs}$ and $\bm{L}^t = \bm{D}^t - \bm{W}^t + \frac{1}{2}\bm{D}^{ct}$; $\bm{D}^s \in \mathbb{R}^{n_s \times n_s}$ is a diagonal matrix with $\bm{D}^s_{ii} = \sum_j^{n_s} \bm{W}^s_{ij}$ and $\bm{D}^t \in \mathbb{R}^{n_t \times n_t}$ is a diagonal matrix with $\bm{D}^t_{jj} = \sum_i^{n_t} \bm{W}^t_{ij}$; $\bm{D}^{cs} \in \mathbb{R}^{n_s\times n_s}$ is a diagonal matrix with $\bm{D}_{ii}^{cs} = \sum_j^{n_t} \bm{W}^c_{ij}$ and $\bm{D}^{ct} \in \mathbb{R}^{n_t\times n_t}$ is a diagonal matrix with $\bm{D}_{jj}^{ct} = \sum_i^{n_s} \bm{W}^c_{ij}$.
\end{proposition}
\begin{proof}
By first expanding the squared norm and then rewriting the result in terms of matrix multiplications and traces, the first term in Eq.(\ref{eq:cost}) can be reformulated as follows:
\begin{equation}
\label{eq:term1}
\begin{array}{ll}
\sum_{i,j}^{n_s} || \bm{P}_s^T \bm{x}_i^s - \bm{P_s}^T \bm{x}_j^s||_2^2 \bm{W}_{ij}^s \\
= \sum_{i,j}^{n_s} ({\bm{x}_i^s}^T \bm{P}_s \bm{P}_s^T \bm{x}_i^s - 2 {\bm{x}_i^s}^T \bm{P}_s \bm{P}_s^T \bm{x}_j^s + {\bm{x}_j^s}^T \bm{P}_s \bm{P}_s^T \bm{x}_j^s) \bm{W}^s_{ij}\\
= 2 \sum_i^{n_s} {\bm{x}_i^s}^T \bm{P}_s \bm{P}_s^T \bm{x}_i^s \bm{D}_{ii}^s - 2\sum_{i,j}^{n_s} {\bm{x}_i^s}^T \bm{P}_s \bm{P}_s^T \bm{x}_j^s \bm{W}^s_{ij} \\
= 2 tr({\bm{X}^s}^T \bm{P}_s \bm{P}_s^T \bm{X}^s \bm{D}^s) - 2 tr({\bm{X}^s}^T \bm{P}_s \bm{P}_s^T \bm{X}^s \bm{W}^s) \\
\end{array}
\end{equation}
In a similar way, the third term in Eq.(\ref{eq:cost}) can be rewritten as:
\begin{equation}
\label{eq:term3}
\begin{array}{ll}
\sum_{i,j}^{n_t} || \bm{P}_t^T \bm{x}_i^t - \bm{P}_t^T \bm{x}_j^t||_2^2 \bm{W}_{ij}^t \\
= 2 tr({\bm{X}^t}^T \bm{P}_t \bm{P}_t^T \bm{X}^t \bm{D}^t) - 2 tr({\bm{X}^t}^T \bm{P}_t \bm{P}_t^T \bm{X}^t \bm{W}^t)
\end{array}
\end{equation}
The second term in Eq.(\ref{eq:cost}) can be rewritten as:
\begin{equation}
\label{eq:term2}
\begin{array}{ll}
\sum_i^{n_s} \sum_j^{n_t} || \bm{P}_s^T \bm{x}_i^s - \bm{P}_t^T \bm{x}_j^t||_2^2 \bm{W}_{ij}^c \\
= \sum_i^{n_s} \sum_j^{n_t} ({\bm{x}_i^s}^T \bm{P}_s \bm{P}_s^T \bm{x}_i^s - 2 {\bm{x}_i^s}^T \bm{P}_s \bm{P}_t^T \bm{x}_j^t \\
\qquad\qquad\qquad + {\bm{x}_j^t}^T \bm{P}_t \bm{P}_t^T \bm{x}_j^t) \bm{W}^c_{ij}\\
= \sum_i^{n_s} {\bm{x}_i^s}^T \bm{P}_s \bm{P}_s^T \bm{x}_i^s \bm{D}_{ii}^{cs} - 2\sum_i^{n_s}\sum_j^{n_t} {\bm{x}_i^s}^T \bm{P}_s \bm{P}_t^T \bm{x}_j^t \bm{W}^c_{ij} \\
\qquad\qquad\qquad + \sum_j^{n_t} {\bm{x}_j^t}^T \bm{P}_t \bm{P}_t^T \bm{x}_j^t \bm{D}_{jj}^{ct} \\
= tr({\bm{X}^s}^T \bm{P}_s \bm{P}_s^T \bm{X}^s \bm{D}^{cs}) - 2 tr({\bm{X}^s}^T \bm{P}_s \bm{P}_t^T \bm{X}^t {\bm{W}^c}^T) \\
\qquad\qquad\qquad + tr({\bm{X}^t}^T \bm{P}_t \bm{P}_t^T \bm{X}^t \bm{D}^{ct})\\
\end{array}
\end{equation}
Substituting Eqs.(\ref{eq:term1}-\ref{eq:term2}) into the objective in Eq.(\ref{eq:cost}), we obtain the following form of the objective:
\begin{equation}\label{eq:costMatrixForm}
\begin{array}{ll}
\displaystyle \min_{\bm{P}_s,\bm{P}_t} \big(tr({\bm{X}^s}^T \bm{P}_s \bm{P}_s^T \bm{X}^s \bm{L}^s) + tr({\bm{X}^t}^T \bm{P}_t \bm{P}_t^T \bm{X}^t \bm{L}^t) \\
- tr({\bm{X}^s}^T \bm{P}_s \bm{P}_t^T \bm{X}^t {\bm{W}^c}^T)\big)
\end{array}
\end{equation}
where $\bm{L}^s = \bm{D}^s - \bm{W}^s + \frac{1}{2}\bm{D}^{cs}$ and $\bm{L}^t = \bm{D}^t - \bm{W}^t + \frac{1}{2}\bm{D}^{ct}$.
Minimizing the objective in Eq.(\ref{eq:costMatrixForm}) is equivalent to maximizing the objective in Eq.(\ref{eq:costDiv}).
\end{proof}
\begin{proposition}
\label{propEigen}
The objective in Eq.(\ref{eq:costDiv}) is equivalent to the following generalized eigenvalue problem and the optimal projection matrix $\bm{P}=\begin{bmatrix}\bm{P}_s\\ \bm{P}_t \end{bmatrix}$ can be formed by $d$ eigenvectors corresponding to the largest $d$ eigenvalues:
\begin{equation}
\label{eq:eig}
\bm{A} \bm{P} = (\bm{B}+\alpha \bm{I}) \bm{P}\Lambda
\end{equation}
where $\bm{I} \in \mathbb{R}^{(d_s+d_t)\times(d_s+d_t)}$ is an identity matrix (matching the dimensions of $\bm{A}$ and $\bm{B}$), $\alpha$ is a hyper-parameter for regularization \citep{wang2017zero}, $\Lambda$ is a diagonal eigenvalue matrix and
\begin{gather}\label{eq:a}
\bm{A} = \begin{bmatrix} \bm{0} & \bm{X}^s\bm{W}^c{\bm{X}^t}^T \\ \bm{X}^t{\bm{W}^c}^T {\bm{X}^s}^T & \bm{0} \end{bmatrix},
\end{gather}
\begin{gather}\label{eq:b}
\bm{B} =
\begin{bmatrix} \bm{X}^s \bm{L}^s {\bm{X}^s}^T & \bm{0} \\ \bm{0} & \bm{X}^t \bm{L}^t {\bm{X}^t}^T \end{bmatrix}.
\end{gather}
\end{proposition}
\begin{proof}
To make the proof process concise, we introduce notations $\bm{S}_s=\bm{X}^s \bm{L}^s {\bm{X}^s}^T$, $\bm{S}_t=\bm{X}^t \bm{L}^t {\bm{X}^t}^T$ and $\bm{S}_c=\bm{X}^s\bm{W}^c{\bm{X}^t}^T$.
Let
\begin{equation}
\label{eq:costJ}
\mathcal{J}(\bm{P}_s,\bm{P}_t) = \frac{tr(\bm{P}_t^T \bm{S}_c^T \bm{P}_s)}{tr(\bm{P}_s^T \bm{S}_s \bm{P}_s)+tr(\bm{P}_t^T \bm{S}_t \bm{P}_t)}
\end{equation}
be the objective function in Eq.(\ref{eq:costDiv}), we calculate the partial derivatives \citep{petersen2008matrix} of $\mathcal{J}$ w.r.t. $\bm{P}_s$ and $\bm{P}_t$ respectively, set them to 0 and get the following equations:
\begin{equation}
\label{eq:partialPs}
\bm{S}_c \bm{P}_t =\frac{2 tr(\bm{P}_t^T \bm{S}_c \bm{P}_s)}{tr(\bm{P}_s^T \bm{S}_s \bm{P}_s)+tr(\bm{P}_t^T \bm{S}_t \bm{P}_t)} \bm{S}_s \bm{P}_s
\end{equation}
\begin{equation}
\label{eq:partialPt}
\bm{S}_c^T \bm{P}_s =\frac{2 tr(\bm{P}_t^T \bm{S}_c \bm{P}_s)}{tr(\bm{P}_s^T \bm{S}_s \bm{P}_s)+tr(\bm{P}_t^T \bm{S}_t \bm{P}_t)} \bm{S}_t \bm{P}_t
\end{equation}
Note that the coefficients on the right side of Eqs(\ref{eq:partialPs}-\ref{eq:partialPt}) are exactly the objective in Eq.(\ref{eq:costJ}). It is easy to construct the following generalized eigenvalue problem by combining Eqs.(\ref{eq:partialPs}-\ref{eq:partialPt}):
\begin{gather}
\begin{bmatrix} \bm{0} & \bm{S}_c \\ \bm{S}_c^T & \bm{0} \end{bmatrix}
\begin{bmatrix} \bm{P}_s \\ \bm{P}_t \end{bmatrix} =
\begin{bmatrix} \bm{S}_s & \bm{0} \\ \bm{0} & \bm{S}_t \end{bmatrix}
\begin{bmatrix} \bm{P}_s \\ \bm{P}_t \end{bmatrix} \Lambda.
\end{gather}
The maximum objective is given by the largest eigenvalue solution to the generalized eigenvalue problem \citep{he2004locality} and the eigenvectors corresponding to the largest $d$ eigenvalues will form the projection matrix $\bm{P}_s$ and $\bm{P}_t$.
\end{proof}
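In practice, Proposition \ref{propEigen} amounts to a few lines of dense linear algebra. The sketch below (our own illustrative implementation rather than the authors' released code; the regulariser $\alpha$ is added to $\bm{B}$ as in Eq.(\ref{eq:eig})) assembles $\bm{A}$ and $\bm{B}$ and extracts the eigenvectors of the $d$ largest eigenvalues:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def cdspp_fit(Xs, Xt, Ws, Wt, Wc, d, alpha=10.0):
    # Xs: (d_s, n_s), Xt: (d_t, n_t); returns P_s (d_s, d) and P_t (d_t, d)
    Ls = np.diag(Ws.sum(1)) - Ws + 0.5 * np.diag(Wc.sum(1))  # L^s
    Lt = np.diag(Wt.sum(1)) - Wt + 0.5 * np.diag(Wc.sum(0))  # L^t
    ds, dt = Xs.shape[0], Xt.shape[0]
    Sc = Xs @ Wc @ Xt.T
    A = np.block([[np.zeros((ds, ds)), Sc],
                  [Sc.T, np.zeros((dt, dt))]])
    B = np.block([[Xs @ Ls @ Xs.T, np.zeros((ds, dt))],
                  [np.zeros((dt, ds)), Xt @ Lt @ Xt.T]])
    vals, vecs = eigh(A, B + alpha * np.eye(ds + dt))  # ascending eigenvalues
    P = vecs[:, -d:]                                   # d largest eigenvalues
    return P[:ds], P[ds:]
\end{verbatim}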
\subsection{Recognition in the Subspace}\label{sec:recognition}
Once the projection matrices $\bm{P}_s$ and $\bm{P}_t$ are learned, we can project all the labelled samples into the learned common subspace by $\bm{z}^s_i = \bm{P}_s^T \bm{x}^s_i$ and $\bm{z}^t_i = \bm{P}_{t}^T \bm{x}^t_i$. As for the training data, the feature vectors $\bm{x}$ need to be $l2$-normalised before being projected into the subspace. For the same reason, we also apply $l2$-normalisation to the projected vectors $\bm{z}$. The $l2$-normalisation re-allocates the projected vectors onto the surface of a hyper-sphere, which benefits the distance measurement when performing recognition with the nearest-neighbour method. More importantly, the $l2$-normalisation adds non-linearity to the process so that our proposed CDSPP method can handle practical problems where the linear projection assumption does not hold.
For each class, we calculate the class mean $\bar{\bm{z}}_c$ for $c=1,2,...,C$ using all the labelled samples from both source and target domains. Given an unlabelled target sample $\bm{x}^u$, we classify it to the closest class in terms of the Euclidean distance between its projection and the class means:
\begin{equation}
\label{eq:recognition}
y^* = \argmin_c d(\bar{\bm{z}}_c, \bm{P}_t^T \bm{x}^u)
\end{equation}
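A minimal sketch of this recognition step is given below (our own implementation; following the description above, $l2$-normalisation is applied both before and after projection, and taking the negative distance to the nearest class mean as the confidence score is our assumption, used later for pseudo-label selection):
\begin{verbatim}
import numpy as np

def l2norm(Z):
    return Z / (np.linalg.norm(Z, axis=0, keepdims=True) + 1e-12)

def predict(Ps, Pt, Xs, Xt, ys, yt, Xu):
    # project labelled data from both domains and the unlabelled target data
    Zs = l2norm(Ps.T @ l2norm(Xs))
    Zt = l2norm(Pt.T @ l2norm(Xt))
    Zu = l2norm(Pt.T @ l2norm(Xu))
    Z, y = np.hstack([Zs, Zt]), np.hstack([ys, yt])
    classes = np.unique(y)
    means = np.stack([Z[:, y == c].mean(axis=1) for c in classes], axis=1)
    dist = ((Zu[:, None, :] - means[:, :, None]) ** 2).sum(axis=0)  # (C, n_u)
    return classes[dist.argmin(axis=0)], -dist.min(axis=0)  # labels, confidence
\end{verbatim}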
The proposed CDSPP for supervised HDA is summarized in Algorithm \ref{alg:hdasup}.
\textbf{Relation to DAMA.} At first glance, the CDSPP algorithm is quite similar to DAMA proposed in \citep{wang2011heterogeneous}; however, they are essentially different in that CDSPP does not seek to push apart the projections of samples belonging to different classes. The penalty imposed for this purpose (e.g., maximizing the term $B$ in \citep{wang2011heterogeneous}) might misguide the solution to focus too much on separating classes that are originally close to each other and hurt the overall separability of the learned subspace. In contrast, our objective in Eq.(\ref{eq:cost}) guarantees the separability of the learned subspace by promoting the preservation of the cluster structures underlying the original data distributions, which is simpler but more effective, as validated by our experiments.
\begin{algorithm}
\caption{Supervised HDA using CDSPP}
\label{alg:hdasup}
\renewcommand{\algorithmicinput}{\textbf{Input:}}
\renewcommand{\algorithmicoutput}{\textbf{Output:}}
\renewcommand{\algorithmicensure}{\textbf{Training:}}
\renewcommand{\algorithmicrequire}{\textbf{Testing:}}
\begin{algorithmic}[1]
\INPUT labelled source data set $\mathcal{D}^s = \{(\bm{x}^s_i,y^s_i)\}, i = 1,2,...,n_s$ and labelled target data set $\mathcal{D}^t=\{\bm{x}_i^t,y_i^t\},i=1,2,...,n_t$, the dimensionality of subspace $d$.
\OUTPUT The projection matrix $\bm{P}_s$ and $\bm{P}_t$ for source and target domains, the labels predicted for unlabelled target data $\bm{X}^u$.
\ENSURE
\STATE Learn the projection $\bm{P}_s$ and $\bm{P}_t$ using labelled data $\mathcal{D}^s \cup \mathcal{D}^t$ by solving the generalized eigenvalue problem in Eq.(\ref{eq:eig});
\REQUIRE
\STATE Classify unlabelled target samples $\bm{X}^u$ using Eq.(\ref{eq:recognition}).
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Semi-supervised HDA using CDSPP}
\label{alg:hda}
\renewcommand{\algorithmicinput}{\textbf{Input:}}
\renewcommand{\algorithmicoutput}{\textbf{Output:}}
\renewcommand{\algorithmicensure}{\textbf{Training:}}
\renewcommand{\algorithmicrequire}{\textbf{Testing:}}
\begin{algorithmic}[1]
\INPUT labelled source data set $\mathcal{D}^s = \{(\bm{x}^s_i, y^s_i)\}, i = 1,2,...,n_s$, labelled target data set $\mathcal{D}^t=\{(\bm{x}_i^t, y^t_i)\},i=1,2,...,n_t$, unlabelled target data set $\mathcal{D}^u=\{\bm{x}_i^u\},i=1,2,...,n_u$, the dimensionality of subspace $d$, the number of iterations $T$.
\OUTPUT The projection matrix $\bm{P}_s$ and $\bm{P}_t$ for source and target domains, the labels predicted for unlabelled target data $\bm{X}^u$.
\ENSURE
\STATE Initialize $k$=1;
\STATE Learn the projection $\bm{P}_s^{(0)}$ and $\bm{P}_t^{(0)}$ using labelled data $\mathcal{D}^s \cup \mathcal{D}^t$ by solving the generalized eigenvalue problem in Eq.(\ref{eq:eig});
\STATE Get the unlabelled target data set $\mathcal{D}^u$;
\WHILE {$k \leq T$}
\STATE Label all the samples from $\mathcal{D}^u$ by Eq.(\ref{eq:recognition});
\STATE Select a subset of (top $kn_u/T$ most confident) pseudo-labelled target samples $\mathcal{S}^{(k)} \subseteq \mathcal{D}^u$;
\STATE Learn $\bm{P}_s^{(k)}$ and $\bm{P}_t^{(k)}$ using a combination of labelled and pseudo-labelled data sets $\mathcal{D}^s \cup \mathcal{D}^t \cup \mathcal{S}^{(k)}$;
\STATE $k \leftarrow k+1$;
\ENDWHILE
\REQUIRE
\STATE Classify unlabelled target samples $\bm{X}^u$ using Eq.(\ref{eq:recognition}).
\end{algorithmic}
\end{algorithm}
\subsection{Extending to Semi-Supervised HDA} \label{sec:shda}
The CDSPP algorithm is naturally suited to supervised HDA but can be extended to semi-supervised HDA by incorporating it into an iterative pseudo-labelling framework \citep{wang2019unifying}. Given a set of unlabelled target samples $\bm{X}^u$, they can be labelled by Eq.(\ref{eq:recognition}). The pseudo-labelled target samples can then be used to update the projection matrices $\bm{P}_s$ and $\bm{P}_t$. However, when the domain shift is large and the number of labelled target samples is limited, the pseudo-labels can be wrong for a considerable number of target samples. In this case, the mistakenly pseudo-labelled target samples might hurt projection learning. To reduce this risk, confidence-aware pseudo-labelling was proposed in \citep{wang2019unifying}. We employ the same idea and progressively select the most confidently pseudo-labelled target samples for the next iteration of CDSPP learning. The proposed CDSPP for semi-supervised HDA is summarized in Algorithm \ref{alg:hda}.
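Building on the helper functions sketched in the previous subsections, the loop of Algorithm \ref{alg:hda} can be outlined as follows (again a hedged sketch: the confidence score and the top-$kn_u/T$ selection rule reflect our simplified reading of the selective pseudo-labelling strategy):
\begin{verbatim}
import numpy as np

def semi_supervised_cdspp(Xs, ys, Xt, yt, Xu, d, T=5, alpha=10.0):
    Xl, yl = Xt, yt              # labelled (later also pseudo-labelled) target data
    for k in range(1, T + 1):
        Ws, Wt, Wc = label_similarity(ys, yl)
        Ps, Pt = cdspp_fit(Xs, Xl, Ws, Wt, Wc, d, alpha)
        yu, conf = predict(Ps, Pt, Xs, Xl, ys, yl, Xu)
        top = np.argsort(-conf)[: k * Xu.shape[1] // T]  # top k*n_u/T confident
        Xl = np.hstack([Xt, Xu[:, top]])
        yl = np.hstack([yt, yu[top]])
    yu, _ = predict(Ps, Pt, Xs, Xl, ys, yl, Xu)          # final prediction
    return Ps, Pt, yu
\end{verbatim}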
\subsection{Complexity Analysis}
The time complexity of CDSPP is mainly contributed by two parts: the matrix multiplications in Eqs.(\ref{eq:a}-\ref{eq:b}) and the eigendecomposition. The complexity of the matrix multiplications is $\mathcal{O}((n_s+n_t)d_sd_t)$ and that of the eigendecomposition is generally $\mathcal{O}((d_s+d_t)^3)$. As a result, the CDSPP algorithm has a complexity of $\mathcal{O}((n_s+n_t)d_sd_t+(d_s+d_t)^3)$. In the case of semi-supervised HDA, the time complexity is multiplied by $T$ and the value of $n_t$ increases by the number of selected pseudo-labelled target samples in each iteration.
\section{Experiments} \label{sec:experiments}
To evaluate the effectiveness of the proposed method in heterogeneous domain adaptation, we conduct thorough experiments on commonly used benchmark datasets. We compare the proposed approach with existing HDA methods and analyze its sensitivity to hyper-parameters.
\subsection{Datasets and Experimental Settings}
\textbf{Office-Caltech} \citep{gong2012geodesic} is an image dataset containing four domains: Amazon (A), Webcam (W), DSLR (D) and Caltech (C) from 10 common classes. Two image features (i.e. 4096-dim Decaf$_6$ and 800-dim SURF) are used for cross-domain adaptation.
\textbf{Multilingual Reuters Collection (MRC)} \citep{amini2009learning} is a cross-lingual text classification dataset containing 6 classes in 5 languages (i.e. EN, FR, GE, IT, SP). We follow the settings in \citep{hubert2016learning}, extracting BoW features and applying PCA to obtain heterogeneous feature dimensions (i.e. 1131, 1230, 1417, 1041 and 807, respectively) for the five domains. In our experiments, SP serves as the target domain and the other four languages serve as the source domains respectively. As a result, we have four HDA tasks.
\textbf{NUS-WIDE} \citep{chua2009nus} and \textbf{ImageNet} \citep{deng2009imagenet} datasets are employed for text to image domain adaptation. Following \citep{chen2016transfer} we consider 8 overlapping classes using tag information represented by 64-dim features from NUS-WIDE as the source domain and 4096-dim Decaf$_6$ features of images from ImageNet as the target domain.
However, the above datasets contain very limited numbers of classes and may not discriminate the capabilities of different methods. We therefore introduce \textbf{Office-Home} \citep{venkateswara2017deep}, containing four domains (i.e. Art, Clipart, Product and Real-world), as a new testbed for HDA. We use VGG16 \citep{simonyan2014very} and ResNet50 \citep{he2016deep} models pre-trained on ImageNet to extract 4096-dim and 2048-dim features respectively. More details of the datasets and the protocols used in our experiments are summarized in Table \ref{table:datasets}.
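For reference, such deep features can be extracted with standard pre-trained backbones; the sketch below uses torchvision, and the exact layer choices are our assumption rather than a statement of the protocol used in the cited works:
\begin{verbatim}
import torch, torchvision

vgg = torchvision.models.vgg16(weights='IMAGENET1K_V1').eval()
vgg.classifier = vgg.classifier[:-1]   # drop the last layer: 4096-dim features
resnet = torchvision.models.resnet50(weights='IMAGENET1K_V1').eval()
resnet.fc = torch.nn.Identity()        # globally pooled 2048-dim features

images = torch.randn(2, 3, 224, 224)   # placeholder batch of normalised images
with torch.no_grad():
    f_vgg = vgg(images)                # (N, 4096)
    f_res = resnet(images)             # (N, 2048)
\end{verbatim}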
\begin{table}[!t]
\centering
\caption[]{The statistics of datasets (notations: LSS/c -- labelled Source Samples per class; LTS/c -- labelled Target Samples per class; UTS/c -- Unlabelled Target Samples per class; all -- all samples except the ones chosen as labelled target samples).
}
\label{table:datasets}
\resizebox{0.8\columnwidth}{!}{%
\begin{tabular}{ccccccc}
\hline
Dataset & \# Domain & \# Task & \# Class & \shortstack[c]{\# LSS/c} & \shortstack[c]{\# LTS/c} & \shortstack[c]{\# UTS/c} \\ \hline
Office-Caltech & 4 & 16 & 10 & 20 & 3 & all \\
MRC & 5 & 4 & 6 & 100 & 10 & 500\\
NUS-ImageNet & 2 & 1 & 8 & 100 & 3 & 100\\
Office-Home & 4 & 16 & 65 & 20 & 3 & all\\
\hline
\end{tabular}%
}
\end{table}
\subsection{Comparative Methods}
To evaluate the effectiveness of the proposed CDSPP in different HDA problems, we conduct a comparative study and compare the performance of CDSPP with state-of-the-art methods in both supervised and semi-supervised settings. Specifically, we compare with SVM$_t$, HFA \citep{li2013learning}, CDLS$\_$sup \citep{hubert2016learning} and a variant of DAMA \cite{wang2011heterogeneous} under the supervised HDA setting (i.e. the unlabelled target samples are not available during training).
\begin{itemize}
\item {SVM$_t$} is a baseline method that trains an SVM model on the target dataset $\mathcal{D}^t$ in a conventional supervised learning manner without using the source domain data.
\item HFA (Heterogeneous Feature Augmentation \citep{li2013learning}) is designed to solve the supervised HDA problem by augmenting the original features $\bm{x}^s, \bm{x}^t$ with transformed features $\bm{P}\bm{x}^s$, $\bm{Q}\bm{x}^t$ and zero vectors. The projection matrices $\bm{P}$ and $\bm{Q}$ for the source and target domains map the original features into a common subspace so that the similarity of features across two domains can be directly compared. The objective of learning $\bm{P}$ and $\bm{Q}$ is incorporated into the framework of classifier (i.e. SVM) training.
\item CDLS\_sup (Cross-Domain Landmark Selection \citep{hubert2016learning}) is the supervised version of CDLS, which aims to learn a projection matrix $\bm{A}$ that maps source-domain data into the target domain. The objective is to align the cross-domain marginal and conditional data distributions by minimizing the Maximum Mean Discrepancy (MMD).
\item DAMA\_sup (Domain Adaptation Using Manifold Alignment \cite{wang2011heterogeneous}) was originally designed for semi-supervised HDA problems. Similar to our proposed CDSPP, it also aims to learn two projection matrices mapping the source and target domain data to a common subspace where the manifolds of data from the two domains are aligned. We adapt it to supervised HDA by considering only labelled data when constructing the feature similarity matrix $\bm{W}$, the label-based similarity matrix $\bm{W}^s$ and the dissimilarity matrix $\bm{W}^d$. Different from the suggestion in the original paper, we use $\mu=0.1$ throughout our experiments since this setting achieves the best performance.
\end{itemize}
For semi-supervised HDA, we compare with DAMA \citep{wang2011heterogeneous}, SHFA \citep{li2013learning}, CDLS \citep{hubert2016learning}, PA \citep{li2018heterogeneous}, TIT \citep{li2018transfer}, STN \citep{yao2019heterogeneous}, DDACL \citep{yao2020discriminative}, SSAN \citep{li2020simultaneous}
and DAMA+, our extension of DAMA by incorporating it into our iterative learning framework (c.f. Section \ref{sec:shda}).
\begin{itemize}
\item DAMA \cite{wang2011heterogeneous} is employed in the semi-supervised HDA experiments in its original form, except that the hyper-parameter $\mu$ is set to 0.1, which our experimental results show empirically gives the optimal performance.
\item SHFA (Semi-supervised HFA \cite{li2013learning}) is an extension of HFA. It takes advantage of the unlabelled target-domain data by replacing the SVM in HFA with a Transductive SVM (T-SVM) \cite{collobert2006large} model.
\item CDLS \cite{hubert2016learning} is designed for semi-supervised HDA. As described above, it aims to learn a projection matrix $\bm{A}$ to map source-domain data into the target domain so that cross-domain data can be aligned. When unlabelled target-domain data are available in the semi-supervised HDA, the unlabelled data are pseudo-labelled by the supervised version CDLS$\_sup$. Subsequently, the pseudo-labelled data are used to update the projection $\bm{A}$. The processes are repeated for multiple iterations. In particular, the instances are weighted by learnable weights when constructing the objective function.
\item PA (Progressive Alignment \cite{li2018heterogeneous}) and TIT (Transfer Independently Together \cite{li2018transfer}) share a similar framework to CDLS but employ different algorithms of transformation matrix learning (involving MMD, graph embedding and regularisation) and different instance weight estimation strategies. The unlabelled target-domain data are also pseudo-labelled to optimize the transformation matrices iteratively.
\item STN (Soft Transfer Network \cite{yao2019heterogeneous}) jointly learns a domain-shared classifier and a domain-invariant subspace in an end-to-end manner. The network model is learned by optimising the objective similar to those in the aforementioned works, i.e., MMD. Besides, the unlabelled target-domain data are used by the soft-label strategy.
\item DDACL (Discriminative Distribution Alignment with Cross-entropy Loss \cite{yao2020discriminative}) trains an adaptive classifier by both reducing the distribution divergence and enlarging distances between class centroids.
\item SSAN (Simultaneous Semantic Alignment Network \cite{li2020simultaneous}) employs an implicit semantic correlation loss to transfer the correlation knowledge of source categorical prediction distributions to the target domain. A triplet-centroid alignment mechanism is explicitly applied to align feature representations for each category by leveraging target pseudo-labels. Note that \cite{li2020simultaneous} reported the best accuracy achieved on the test samples throughout the training process; we argue that this is not achievable in practice since the labels of test samples are not available during training. Instead, we report the results achieved in the final iteration in our experiments.
\item DAMA+ is our adaptation of the original DAMA by incorporating the DAMA algorithm into our proposed iterative learning framework with selective pseudo-labelling. Specifically, we use the supervised version of DAMA described above to initialise the projection matrices and get the pseudo-labels of unlabelled target-domain data. The selected most confidently pseudo-labelled target-domain data will contribute to the update of projection matrices in the next iteration of learning. Finally, the optimal projection matrices and predicted target-domain data labels are obtained.
\item CDSPP+PCA is a variant of CDSPP in which PCA is applied to the original features and CDSPP is subsequently applied to the low-dimensional features. This pre-processing is specially designed for the handcrafted features in the MRC and NUS-ImageNet datasets; 50 principal components are retained for all features.
\end{itemize}
In all experiments, we use the optimal parameters suggested in the original papers for the comparative methods unless otherwise specified, whilst setting the hyper-parameters of CDSPP empirically: $d$ is set equal to the number of classes in the dataset, $\alpha=10$ and $T=5$. More details of hyper-parameter selection will be discussed later.
\subsection{Comparison Results}
Although there exist fixed experimental protocols in terms of the number of labelled samples used for training, as shown in Table \ref{table:datasets}, there are no standard data splits publicly available to follow. As will be demonstrated in our experimental results, selecting different samples for training can lead to significant performance variance. We therefore generate data splits randomly in our experiments\footnote{The data splits and code are released: https://github.com/hellowangqian/cdspp-hda}. To mitigate the biases caused by data selection, ten random data splits are generated for each adaptation task, and we report the mean and standard deviation of the classification accuracy over these ten trials for each adaptation task. The results for all comparative methods are reproduced using the same data splits for a direct comparison, employing the implementations released by the respective authors. As a result, the results in this paper are not comparable with those reported in other papers since different sample selections have been used. Our experimental results of both supervised and semi-supervised HDA on four datasets are shown in Tables \ref{table:mrc-tag2image}-\ref{table:resnet2vgg}, from which we can obtain the following insights.
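Concretely, each trial draws the labelled samples per class with a fixed random seed; a minimal sketch of one such split (our own helper, not the released script) is:
\begin{verbatim}
import numpy as np

def sample_split(y, n_per_class, seed):
    rng = np.random.RandomState(seed)
    lab = np.hstack([rng.choice(np.where(y == c)[0], n_per_class, replace=False)
                     for c in np.unique(y)])
    unl = np.setdiff1d(np.arange(len(y)), lab)
    return lab, unl  # labelled / unlabelled sample indices

# ten trials: splits = [sample_split(y_target, 3, seed) for seed in range(10)]
\end{verbatim}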
\begin{table}[!t]
\centering
\caption[]{Mean(std) of classification accuracy (\%) over ten trials for cross-language and tag-to-image adaptation under supervised (denoted by $*$) and semi-supervised settings (each column represents one Source $\to$ Target adaptation task).
}
\label{table:mrc-tag2image}
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{lccccc|c}
\hline
Method & EN$\to$SP & FR$\to$SP& GE$\to$SP& IT$\to$SP & Avg & Tag$\to$Image \\ \hline
SVM$_t$ * & 67.0(2.4) & 67.0(2.4) & 67.0(2.4) & 67.0(2.4) & 67.0 & 60.6(6.0) \\
HFA \citep{li2013learning} * & 68.1(3.0) & 68.0(3.0) & 68.0(3.0) & 68.0(3.0) & 68.0 & 67.5(2.5)\\
CDLS\_sup \citep{hubert2016learning} * & 63.0(3.6) & 63.4(2.4) & 64.0(2.2) & 64.6(3.6) & 63.8 & 66.3(3.9)\\
DAMA\_sup * & 66.8(2.5) & 66.3(3.3) & 66.3(3.0) & 66.7(2.7) & 66.5 & 66.9(2.6) \\
CDSPP\_sup (Ours) * & 67.2(2.8) & 67.3(2.9) & 67.3(2.9) & 67.3(2.8) & 67.3 & 67.2(3.0)\\
\hline
DAMA \citep{wang2011heterogeneous} & 67.0(2.5) & 66.6(3.1) & 66.7(3.0) & 67.4(2.8) & 66.9 & 67.0(2.5)\\
SHFA \citep{li2013learning}& 66.9(3.7) & 66.1(2.7) & 67.5(3.1) & 67.4(2.2) & 67.0 & 68.1(2.7)\\
CDLS \citep{hubert2016learning} & 69.4(3.0) & 69.4(3.0) & 69.4(3.2) & 69.3(3.1) & 69.4 & 69.6(2.1)\\
PA \citep{li2018heterogeneous}& \bf 71.4(2.9) & \bf 71.6(2.9) & \bf 71.7(3.0) & \bf 72.3(2.5) & \bf 71.7 & 70.5(4.0)\\
TIT \citep{li2018transfer} & 67.1(2.8) & 67.6(2.6) & 66.1(3.5) & 67.8(2.0) & 67.2 & 70.7(3.4)\\
STN \citep{yao2019heterogeneous} & 67.1(3.6) & 67.3(2.5) & 66.9(3.5) & 66.7(3.8) & 67.0 & 74.3(5.2)\\
DDACL \cite{yao2020discriminative} & 70.2(3.0) & 70.4(3.1) & 70.8(3.0) & 70.9(3.0) & 70.6 & 73.8(2.8) \\
SSAN \cite{li2020simultaneous}& 69.9(2.9)& 69.4(2.8)& 69.3(4.0)& 70.2(2.5)& 69.7 & 71.4(1.2) \\
DAMA + & 68.9(2.1) & 68.8(4.0) & 68.9(2.7) & 68.2(3.5) & 68.7 & 73.4(4.3)\\
CDSPP (Ours) & 69.1(3.2) & 69.0(3.6) & 68.8(3.2) & 68.8(3.0) & 68.9 & 74.7(3.4)\\
CDSPP+PCA (Ours) & \bf 71.2(3.2) & \bf 71.7(3.1) & \bf 71.4(3.0) & \bf 72.1(3.0) & \bf 71.6 & \bf 76.5(3.3) \\
\hline
\end{tabular}%
}
\end{table}
\begin{table*}[!htbp]
\centering
\caption[]{Mean(std) of classification accuracy (\%) over ten trials on the Office-Caltech dataset using SURF (source) and Decaf$_6$ (target) features under supervised (denoted by $*$) and semi-supervised settings (each column represents one Source $\to$ Target adaptation task).}
\label{table:surf2decaf}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l cccc cccc ccccc cccc}
\hline
Method & C$\to$C & C$\to$A & C$\to$D & C$\to$W&A$\to$C & A$\to$A&A$\to$D & A$\to$W & D$\to$C & D$\to$A & D$\to$D & D$\to$W & W$\to$C & W$\to$A & W$\to$D & W$\to$W& Avg \\ \hline
SVM$_t$ * & 73.6(4.9) & 87.9(2.2) & 92.3(3.6) & 88.4(3.8)& 73.6(4.9) & 87.9(2.2) & 92.3(3.6) & 88.4(3.8)& 73.6(4.9) & 87.9(2.2) & 92.3(3.6) & 88.4(3.8)& 73.6(4.9) & 87.9(2.2) & 92.3(3.6) & 88.4(3.8)&85.5\\
HFA \citep{li2013learning} * & 80.1(2.3) & 88.9(1.9) & 91.6(3.6) & 90.7(3.5) & 80.2(2.3) & 88.9(1.9) & 91.5(3.6) & 90.5(3.6) & 80.2(2.2) & 88.8(1.9) & 91.8(3.6) & 90.7(3.5) & 80.2(2.3) & 88.8(1.9) & 91.5(3.7) & 90.6(3.7) & 87.8\\
CDLS\_sup \citep{hubert2016learning} *& 76.1(2.1) & 86.6(3.2) & 91.3(4.7) & 87.4(3.5) & 75.9(3.5) & 87.0(2.8) & 90.6(3.8) & 86.0(3.6) & 51.5(4.4) & 74.2(2.4) & 86.6(3.2) & 77.2(5.1) & 74.7(4.1) & 85.4(3.0) & 90.5(3.8) & 86.0(3.5) & 81.7\\
DAMA\_sup * & 78.7(2.4) & 87.3(2.2) & 91.5(2.6) & 88.6(4.3) & 77.4(3.2) & 85.9(2.4) & 90.7(3.3) & 88.2(4.1) & 79.6(2.2) & 88.8(1.6) & 90.1(3.6) & 89.4(4.1) & 78.5(2.6) & 87.4(2.0) & 89.1(3.1) & 88.6(4.7) & 86.2\\
CDSPP\_sup (Ours) * & 80.3(2.0) & 89.0(1.9) & 92.0(3.5) & 90.7(3.8) & 80.3(2.1) & 89.1(1.9) & 91.7(3.7) & 90.7(3.7) & 79.8(2.1) & 88.9(1.8) & 90.4(3.9) & 90.1(3.9) & 80.4(2.2) & 89.0(1.8) & 91.5(4.1) & 90.6(3.8) & 87.8\\
\hline
DAMA \citep{wang2011heterogeneous}& 76.6(2.6) & 86.2(1.9) & 91.0(2.5) & 88.2(4.3) & 73.6(4.7) & 83.3(2.6) & 88.8(3.7) & 86.5(4.4) & 77.5(2.5) & 88.4(1.6) & 90.7(4.2) & 90.1(3.8) & 76.1(2.9) & 86.0(2.3) & 87.7(4.7) & 86.8(5.8) & 84.8\\
SHFA \citep{li2013learning}& 77.1(2.8) & 86.2(3.8) & 93.0(3.6) & 90.0(2.6) & 80.5(3.1) & 86.7(2.2) & 94.3(2.5) & 90.0(4.0) & 81.6(2.1) & 88.5(2.9) & 93.5(3.9) & 92.0(4.1) & 80.5(1.8) & 88.5(2.4) & 93.5(3.5) & 89.5(4.2) & 87.8\\
CDLS \citep{hubert2016learning} & 80.6(1.8) & 88.8(2.1) & 93.0(3.2) & 91.1(3.7) & 80.6(1.8) & 88.8(2.1) & 92.0(3.0) & 91.0(4.5) & 78.4(2.7) & 87.2(2.3) & 93.0(3.7) & 88.9(5.6) & 81.0(2.0) & 88.6(2.2) & 92.1(3.3) & 91.4(4.2) & 87.9\\
PA \citep{li2018heterogeneous} & 87.2(1.1) & 90.8(1.3) & 92.9(3.3) & 93.9(3.9) & 87.0(1.1) & 90.5(1.7) & 94.7(2.5) & 94.0(3.9) & 87.0(1.3) & 90.5(2.0) & \bf 94.5(2.8) & 94.3(3.7) & 87.0(1.3) & 90.7(1.5) & 93.4(4.1) & 92.8(4.6) & 91.3\\
TIT \citep{li2018transfer} & 84.9(1.7) & 89.9(1.6) & 94.6(3.1) & 92.2(4.3) & 84.6(1.5) & 89.7(1.7) & 94.6(2.2) & 92.3(4.9) & 82.7(1.5) & 88.7(1.9) & 94.3(2.7) & 92.1(4.0) & 84.7(1.6) & 89.5(1.8) & 92.5(2.8) & 92.5(4.3) & 90.0\\
STN \citep{yao2019heterogeneous} & 88.2(1.7) & 92.4(0.7) & 94.4(2.0) & 92.8(4.9) & \bf 88.4(1.6) & 92.5(0.7) & 95.0(2.0) & 93.9(4.1) & 87.9(1.7) & 92.2(0.5) & 94.4(2.5) & 93.3(5.0) & \bf 88.2(1.8) & 92.6(0.8) & 93.9(3.2) & 92.2(5.1) & 92.0 \\
DDACL \cite{yao2020discriminative} & 86.5(1.6) & 91.8(0.9) & 94.2(2.8) & 93.5(3.4) & 86.2(1.9) & 83.1(11.2) & 89.1(5.9) & 92.3(3.9) & 86.2(1.7) & 91.8(1.1) & 93.4(3.6) & 93.6(3.0) & 86.8(1.7) & 92.0(0.8) & 94.4(3.2) & 94.0(3.1) & 90.6\\
SSAN \cite{li2020simultaneous} & 80.9(8.7)& 89.8(2.8)& \bf 95.8(2.0)& \bf 94.2(2.1)& 84.9(4.7)& 89.0(4.0)& 93.1(3.6)& 93.1(3.1)& 81.0(4.7)& 90.3(1.5)& 93.9(3.6)& 82.6(14.7)& 84.3(2.2)& 86.9(10.0)& 93.5(5.2)& \bf 95.0(2.1)& 89.3 \\
DAMA+ & 88.1(1.7) & \bf 92.7(0.6) & 93.9(1.7) & 92.2(4.1) & 88.0(1.3) & \bf 92.9(0.6) & 93.9(2.1) & 92.8(4.2) & 87.7(1.9) & \bf 93.2(0.5) & 92.1(5.3) & 94.0(3.3) & 88.1(2.1) & \bf 92.7(0.7) & 94.8(1.6) & 93.5(3.9) & 91.9\\
CDSPP (Ours) & \bf 88.3(0.7) & 92.3(0.7) & 95.6(1.5) & 94.1(4.1) & 88.1(1.0) & 92.6(0.5) & \bf 95.7(1.0) & \bf 94.6(3.8) & \bf 88.1(0.6) & 92.7(0.5) & 93.5(4.6) & \bf 94.7(3.5) & 88.1(1.0) & 92.5(0.5) & \bf 95.7(1.3) & 94.3(3.8) & \bf 92.6\\
\hline
\end{tabular}%
}
\end{table*}
\begin{table*}[!htbp]
\centering
\caption[]{Mean(std) of classification accuracy (\%) over ten trials on the Office-Home dataset using VGG16 (source) and ResNet50 (target) features under supervised (denoted by $*$) and semi-supervised settings (each column represents one Source $\to$ Target adaptation task).}
\label{table:vgg2resnet}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l cccc cccc ccccc cccc}
\hline
Method & A$\to$A & A$\to$C & A$\to$P & A$\to$R & C$\to$A & C$\to$C& C$\to$P & C$\to$R & P$\to$A & P$\to$C & P$\to$P & P$\to$R & R$\to$A & R$\to$C & R$\to$P & R$\to$R & Avg \\ \hline
SVM$_t *$& 51.8(1.2) & 41.4(1.6) & 71.0(1.4) & 65.8(2.3)& 51.8(1.2) & 41.4(1.6) & 71.0(1.4) & 65.8(2.3)& 51.8(1.2) & 41.4(1.6) & 71.0(1.4) & 65.8(2.3)& 51.8(1.2) & 41.4(1.6) & 71.0(1.4) & 65.8(2.3) & 57.5\\
CDLS\_sup \citep{hubert2016learning} $*$ & 58.7(0.9) & 45.7(1.5) & 75.0(0.8) & 69.8(1.9) & 53.4(1.0) & 48.6(1.0) & 73.9(0.9) & 67.8(1.8) & 55.0(0.9) & 45.9(1.4) & 78.0(0.8) & 70.2(1.5) & 56.5(1.1) & 46.8(1.5) & 76.2(0.5) & 72.4(1.4) & 62.1\\
DAMA\_sup * & 56.6(2.8) & 43.6(2.2) & 72.0(1.4) & 67.8(2.4) & 42.7(4.8) & 39.8(5.4) & 64.8(5.9) & 57.5(4.5) & 52.4(3.9) & 40.4(4.1) & 70.1(5.7) & 63.6(3.8) & 51.8(3.6) & 42.4(3.4) & 68.8(5.1) & 65.5(4.7) & 56.2\\
CDSPP\_sup (Ours) $*$ & 60.8(1.2) & 49.5(1.1) & 76.3(0.8) & 71.9(1.8) & 59.4(1.4) & 50.4(1.0) & 76.1(0.9) & 71.6(1.8) & 59.8(1.2) & 49.6(1.1) & 78.0(0.9) & 72.4(1.4) & 60.4(1.3) & 49.8(0.9) & 76.9(1.0) & 73.3(1.6) & 64.8\\
\hline
DAMA \citep{wang2011heterogeneous} & 55.6(3.3) & 43.8(2.1) & 71.1(2.1) & 66.4(3.5) & 43.1(4.7) & 39.3(5.2) & 62.9(5.7) & 56.4(4.7) & 52.1(4.1) & 40.4(4.6) & 69.9(4.3) & 64.3(5.3) & 51.9(3.6) & 42.0(4.3) & 68.3(5.0) & 65.1(4.5) & 55.8\\
CDLS \citep{hubert2016learning} & 62.1(0.9) & 46.9(1.2) & 76.8(0.7) & 71.5(2.3) & 55.7(1.3) & 47.4(1.2) & 76.7(0.6) & 70.8(2.0) & 56.4(1.1) & 47.0(1.2) & 77.8(0.6) & 71.5(2.0) & 56.7(1.2) & 47.6(1.3) & 77.5(0.4) & 72.2(2.0) & 63.4\\
PA \citep{li2018heterogeneous} & 59.8(1.2) & 48.2(1.5) & 80.0(1.2) & 75.5(1.8) & 59.8(1.1) & 48.2(1.3) & 80.0(1.3) & 75.4(1.9) & 59.5(1.5) & 48.2(1.4) & 80.0(1.6) & 75.7(1.9) & 59.6(1.3) & 48.2(1.5) & 79.9(1.4) & 75.7(1.8) & 65.8\\
TIT \citep{li2018transfer} & 55.6(1.0) & 44.7(1.3) & 74.3(1.0) & 70.3(1.8) & 56.1(0.9) & 45.5(1.1) & 74.7(0.7) & 70.2(1.7) & 55.9(1.1) & 45.3(1.3) & 74.9(0.9) & 70.2(1.8) & 55.5(1.5) & 44.6(1.4) & 74.7(0.8) & 69.9(2.0) & 61.4\\
STN \citep{yao2019heterogeneous} & 62.6(1.4) & 51.2(1.5) & 78.7(3.9) & 74.5(4.3) & 56.1(3.8) & 52.2(2.2) & 77.0(4.0) & 71.1(6.0) & 60.7(1.3) & 49.3(6.0) & \bf 82.4(1.0) & 75.8(2.8) & 61.0(1.3) & 50.6(3.2) & 80.4(0.9) & 75.7(4.4) & 66.2 \\
DDACL \cite{yao2020discriminative} & 50.3(2.2) & 39.8(2.4) & 59.4(2.8) & 56.1(3.4) & 45.1(2.0) & 36.3(3.0) & 60.9(2.9) & 56.8(2.0) & 40.3(1.5) & 34.2(2.3) & 55.7(9.1) & 43.0(9.9) & 41.9(2.4) & 36.5(2.0) & 52.4(5.1) & 51.5(9.2) & 47.5\\
SSAN \cite{li2020simultaneous}& 50.5(1.9)& 40.1(3.0)& 70.9(1.8)& 63.9(3.0)& 43.9(2.9)& 42.5(5.0)& 67.8(1.2)& 61.9(2.9)& 44.1(2.6)& 38.1(3.5)& 77.3(0.9)& 66.2(1.3)& 45.7(3.9)& 38.6(3.8)& 71.7(4.0)& 68.8(2.5)& 55.8 \\
DAMA+ & 62.1(2.4) & 49.0(1.4) & 77.7(1.9) & 75.0(2.5) & 54.0(5.2) & 44.7(6.1) & 75.6(3.7) & 69.0(3.4) & 60.9(2.7) & 46.9(3.1) & 76.9(3.5) & 72.5(1.9) & 60.3(1.9) & 48.6(3.7) & 76.7(2.8) & 73.4(3.3) & 63.9\\
CDSPP (Ours) & \bf 65.7(1.0) & \bf 54.8(2.0) & \bf 81.0(1.5) & \bf 78.4(1.1) & \bf 65.0(1.4) & \bf 55.1(1.6) & \bf 80.9(1.6) & \bf 78.5(1.2) & \bf 65.6(0.4) & \bf 54.7(1.9) & 81.5(1.1) & \bf 78.8(1.0) & \bf 65.5(0.9) & \bf 54.6(1.6) & \bf 80.9(1.6) & \bf 79.4(0.9) & \bf 70.0\\
\hline
\end{tabular}%
}
\end{table*}
\begin{table*}[!htbp]
\centering
\caption[]{Mean(std) of classification accuracy (\%) over ten trials on the Office-Home dataset using ResNet50 (source) and VGG16 (target) features under supervised (denoted by $*$) and semi-supervised settings (each column represents one Source $\to$ Target adaptation task).}
\label{table:resnet2vgg}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l cccc cccc ccccc cccc}
\hline
Method & A$\to$A & A$\to$C & A$\to$P & A$\to$R & C$\to$A & C$\to$C& C$\to$P & C$\to$R & P$\to$A & P$\to$C & P$\to$P & P$\to$R & R$\to$A & R$\to$C & R$\to$P & R$\to$R & Avg \\ \hline
SVM$_t$ * & 40.3(1.4) & 30.5(1.6) & 63.3(1.7) & 56.3(2.9) & 40.3(1.4) & 30.5(1.6) & 63.3(1.7) & 56.3(2.9) & 40.3(1.4) & 30.5(1.6) & 63.3(1.7) & 56.3(2.9) & 40.3(1.4) & 30.5(1.6) & 63.3(1.7) & 56.3(2.9) & 47.6\\
CDLS\_sup \citep{hubert2016learning} * & 51.4(1.1) & 36.5(1.0) & 69.6(1.1) & 63.5(2.0) & 46.4(1.2) & 39.2(1.0) & 68.7(1.2) & 62.0(1.6) & 47.2(1.2) & 36.4(0.8) & 73.1(1.0) & 64.6(1.9) & 48.6(1.1) & 37.1(1.1) & 70.9(1.2) & 66.4(2.0) & 55.1\\
DAMA\_sup * & 46.9(1.8) & 35.6(1.8) & 65.9(1.4) & 60.3(1.8) & 43.4(2.4) & 32.5(3.7) & 60.3(6.0) & 56.3(3.0) & 44.1(4.0) & 31.8(3.6) & 62.2(4.0) & 56.4(4.0) & 45.3(3.2) & 34.4(1.6) & 60.9(4.6) & 60.3(2.1) & 49.8\\
CDSPP (Ours)* & 49.7(1.1) & 39.2(1.0) & 69.5(1.3) & 63.7(2.0) & 48.3(1.2) & 40.4(1.3) & 69.5(1.5) & 63.4(1.8) & 48.5(1.1) & 38.9(0.8) & 71.3(1.4) & 64.1(1.9) & 49.0(1.2) & 39.4(1.1) & 70.1(1.3) & 65.0(2.1) & 55.6\\
\hline
DAMA \citep{wang2011heterogeneous} & 46.7(2.0) & 33.6(2.5) & 66.2(1.7) & 57.8(3.4) & 43.1(4.0) & 32.0(4.5) & 60.2(6.2) & 55.7(5.0) & 44.3(3.7) & 32.0(4.1) & 65.5(5.6) & 59.8(3.5) & 45.3(3.4) & 34.8(2.6) & 65.0(4.4) & 60.9(3.5) & 50.2\\
CDLS \citep{hubert2016learning} & 54.9(1.1) & 36.6(1.1) & 71.1(0.8) & 65.9(1.3) & 47.8(1.4) & 39.8(1.2) & 69.5(1.2) & 63.6(1.4) & 49.7(1.2) & 36.8(1.2) & 75.6(0.8) & 67.9(1.6) & 52.3(1.0) & 38.5(1.3) & 73.1(1.0) & 69.6(1.6) & 57.0\\
PA \citep{li2018heterogeneous} & 51.4(1.0) & 38.3(1.3) & 73.7(1.2) & 67.4(1.6) & 51.2(1.4) & 38.2(1.2) & 73.6(1.2) & 67.4(1.6) & 51.2(1.1) & 38.1(1.4) & 73.6(1.2) & 67.3(1.9) & 51.2(0.9) & 38.2(1.2) & 73.7(1.2) & 67.4(1.4) & 57.6\\
TIT \citep{li2018transfer} & 46.8(1.7) & 36.4(1.2) & 69.4(0.9) & 62.5(1.8) & 47.0(1.7) & 36.4(1.1) & 69.3(1.1) & 62.0(2.2) & 46.8(1.7) & 36.4(1.1) & 69.8(0.9) & 62.4(2.1) & 45.9(1.6) & 36.0(1.3) & 69.4(1.2) & 62.5(2.1) & 53.7\\
STN \citep{yao2019heterogeneous} & 52.6(1.5) & 41.2(2.4) & 74.9(1.0) & 69.2(1.5) & 51.2(1.1) & 42.5(1.2) & 75.3(1.2) & 69.6(1.0) & 53.0(1.2) & 41.7(1.4) & \bf 77.3(1.2) & 70.7(1.4) & 52.7(1.9) & 41.7(1.4) & \bf 76.6(1.0) & 71.6(1.3) & 60.1 \\
DDACL \cite{yao2020discriminative} & 33.8(2.3) & 27.5(1.6) & 52.2(4.0) & 46.8(1.6) & 31.8(2.3) & 24.3(1.6) & 50.8(1.9) & 44.0(3.5) & 32.0(2.7) & 23.4(2.8) & 49.0(7.9) & 39.9(7.3) & 32.4(2.7) & 24.9(1.6) & 46.5(3.7) & 45.5(4.4) & 37.8\\
SSAN\cite{li2020simultaneous}& 42.2(4.1)& 30.4(2.3)& 61.9(3.7)& 56.5(2.6)& 37.9(1.6)& 32.3(2.3)& 62.1(1.5)& 53.4(3.4)& 38.1(2.0)& 29.9(1.7)& 69.0(2.9)& 58.0(1.9)& 37.5(2.3)& 29.6(1.7)& 63.3(2.2)& 57.9(3.5)& 47.5 \\
DAMA+ & 49.1(2.9) & 37.5(1.2) & 71.1(1.5) & 65.4(2.5) & 49.7(1.9) & 32.9(4.1) & 68.3(3.3) & 63.2(3.7) & 48.9(3.1) & 33.3(3.6) & 68.1(2.5) & 61.4(3.9) & 49.9(3.1) & 36.3(2.2) & 67.1(2.4) & 64.6(2.1) & 54.2\\
CDSPP (Ours) & \bf 55.6(1.1) & \bf 44.7(1.8) & \bf 75.2(1.6) & \bf 71.7(1.4) & \bf 54.5(1.2) & \bf 46.0(1.6) & \bf 75.7(1.6) & \bf 71.4(1.9) & \bf 54.7(1.2) & \bf 45.0(1.6) & 76.0(1.8) & \bf 71.8(1.6) & \bf 55.0(1.3) & \bf 44.9(2.0) & 75.8(1.8) & \bf 72.1(1.8) & \bf 61.9\\
\hline
\end{tabular}%
}
\end{table*}
Table \ref{table:mrc-tag2image} (except the last column) lists the comparison results on the MRC dataset. The baseline method SVM$_t$ achieves an accuracy of 67.0\% using only 10 labelled target domain samples per class for training. The labelled source domain data can benefit the performance with proper domain adaptation, but the improvement is marginal for both HFA and our proposed CDSPP. The supervised version of CDLS uses PCA to learn a subspace from the target domain, hence the dimensionality of the subspace cannot be higher than $n_t-1$. Due to this limitation, CDLS\_sup performs worse than the others when the number of labelled target samples is small, which is usually the case for HDA problems. For semi-supervised HDA, DAMA and SHFA perform no better than the baseline method SVM$_t$, which was also observed in existing works \citep{hubert2016learning,li2018heterogeneous,li2018transfer}. The best performance (71.7\%) is achieved by PA \citep{li2018heterogeneous} and our proposed CDSPP is marginally worse with an average classification accuracy of 68.9\%. However, when applying PCA to reduce the text features to a lower dimensionality of 50, the performance of CDSPP is improved from 68.9\% to 71.6\%, comparable with the best performance of 71.7\% achieved by PA. This demonstrates that the handcrafted text features (i.e. bag-of-features) used in the MRC dataset contain noisy variables which cannot be well handled by the CDSPP algorithm itself, but a pre-processing step like PCA suffices to address this issue.
Table \ref{table:mrc-tag2image} (rightmost column) also presents the results of tag-to-image adaptation on the NUS-ImageNet dataset. There is only one adaptation task (i.e. Tag$\to$Image) in this dataset. In the supervised HDA setting, the baseline method SVM$_t$ is outperformed by all three comparative methods by large margins, among which HFA achieves the best performance of 67.5\%, as opposed to the accuracy of 67.2\% by our proposed CDSPP\_sup. However, HFA is more computationally expensive than the others, as discussed in \citep{li2013learning}. In the semi-supervised HDA setting, our method achieves the best performance with an accuracy of 74.7\%, which can be further improved to 76.5\% when PCA is applied to reduce the dimensionality of the text features to 50.
Similar results can also be observed in Table \ref{table:surf2decaf} for the image classification experiments on Office-Caltech. Both HFA and our CDSPP achieve the same average accuracy of 87.8\% in the supervised HDA setting. CDLS\_sup again performs worse than the baseline method SVM$_t$ due to the restricted PCA dimensionality discussed above. In the semi-supervised HDA setting, our CDSPP achieves the best results in 6 out of 16 adaptation tasks and has the highest average accuracy of 92.6\%.
The experimental results for the challenging Office-Home dataset are shown in Table \ref{table:vgg2resnet} and Table \ref{table:resnet2vgg}. The difference between these two tables lies in the features used for the source/target domains: VGG16/ResNet50 and ResNet50/VGG16, respectively. In this experiment, the methods HFA and SHFA are excluded due to their extremely long computation time given the scale of this dataset. It can be seen that CDLS\_sup, for the first time, outperforms the baseline method SVM$_t$ on this dataset since the total number of labelled target samples is 195, which no longer restricts the PCA dimensionality in this algorithm. Two more recent approaches, DDACL \cite{yao2020discriminative} and SSAN \cite{li2020simultaneous}, however, perform poorly on this more challenging dataset although they achieve good performance on the three simpler datasets. One reasonable explanation is that these two approaches, along with many others, benefit from the clustering characteristics of the original features and can easily recognize the target samples cluster-wise. For the more challenging dataset, the classes are prone to overlap in a low-dimensional subspace if the projections are not properly learned. The simultaneous learning of the classifier and the feature projections tends to result in a classifier overfitted to the labelled and pseudo-labelled samples, and the overfitting becomes an issue when the labelled target samples cannot represent the distributions of their corresponding classes in the subspace. As a result, these approaches suffer from negative adaptation when the pseudo-labels are inaccurate at the beginning and are less robust to the choice of labelled target samples. This also provides evidence for the necessity of new testbeds for HDA approaches. In both tables, the best performance is achieved by our CDSPP for most adaptation tasks in both supervised and semi-supervised settings. Specifically, CDSPP achieves an average accuracy of 70.0\% when VGG16 and ResNet50 features are employed for the source and target domains, significantly better than the second-best performance of 66.2\% achieved by STN \citep{yao2019heterogeneous}. Similarly, in Table \ref{table:resnet2vgg} CDSPP achieves the best performance of 61.9\% as opposed to the second-best 60.1\% by STN \citep{yao2019heterogeneous}. The significant performance improvement gained by CDSPP on the Office-Home dataset is attributed to the fact that this dataset is much more challenging than the other datasets since it contains many more classes (65 vs 10, 8, 6). We believe Office-Home is a more appropriate testbed for discriminating between different HDA methods.
In addition, the performance comparison between DAMA and DAMA+ provides further evidence that the iterative learning framework described in Section \ref{sec:shda} is beneficial to semi-supervised HDA. On the other hand, the superior performance of CDSPP over DAMA+ across all datasets confirms that our CDSPP is essentially different from DAMA, as discussed in Section \ref{sec:recognition}. In the supervised HDA experiments, CDSPP also outperforms our adaptation of DAMA consistently on all four datasets, and the performance gap on the challenging Office-Home dataset is particularly significant. Another interesting phenomenon that can be observed from Tables \ref{table:surf2decaf}-\ref{table:vgg2resnet} is that the semi-supervised DAMA (i.e. the original version in \cite{wang2011heterogeneous}) performs no better than its supervised version (i.e. DAMA\_sup, adapted by ourselves). This demonstrates that the way DAMA \cite{wang2011heterogeneous} exploits the unlabelled target-domain data is ineffective. By contrast, the selective pseudo-labelling strategy employed in our proposed CDSPP is more effective and can be readily used by other HDA algorithms; a sketch of this strategy is given below.
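To make the selective pseudo-labelling strategy concrete, the following Python sketch shows one plausible form of it. The nearest-centroid confidence measure and the linear selection schedule are illustrative assumptions and may differ from the exact criterion used in Algorithm \ref{alg:hda}.
\begin{verbatim}
import numpy as np

def selective_pseudo_label(Z_unlab, centroids, t, T):
    """Progressively select confident pseudo-labels at iteration t of T.

    A minimal sketch: confidence is the (negative) distance to the
    nearest class centroid in the learned subspace, and the fraction
    of unlabelled target samples kept grows linearly with the iteration.
    """
    # distance of every unlabelled target sample to every class centroid
    dist = np.linalg.norm(Z_unlab[:, None, :] - centroids[None, :, :],
                          axis=2)
    labels = dist.argmin(axis=1)        # tentative pseudo-labels
    confidence = -dist.min(axis=1)      # higher is more confident
    n_keep = int(len(Z_unlab) * t / T)  # progressive selection schedule
    keep = np.argsort(confidence)[::-1][:n_keep]
    return keep, labels[keep]
\end{verbatim}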
\subsection{On the Number of Labelled Target Samples}
We conducted additional semi-supervised HDA experiments to compare our proposed CDSPP with other methods when different numbers of labelled target samples were used for training. Specifically, we set the number of labelled target samples to 5, 10, 15 or 20 for the MRC dataset, whilst for the other three datasets the investigated numbers of labelled target samples were drawn from the set $\{1, 3, 5, 7, 9\}$. For the MRC and NUS-ImageNet datasets, all adaptation tasks (i.e. $EN/FR/GE/IT \to SP$ and $Tag\to Image$, respectively) were repeated for ten trials with randomly selected data (the same as those used in the previous experiments). To save computational time without loss of generality, we only conducted the first four adaptation tasks for the first three trials on the Office-Caltech ($C\to C, C\to A, C\to D, C\to W$) and Office-Home ($A\to A, A\to C, A\to P, A\to R$, with VGG16 and ResNet50 as the source and target features, respectively) datasets in this experiment. For each dataset, the average classification accuracy over all the conducted adaptation tasks is reported for comparison; a sketch of this protocol follows.
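For clarity, the evaluation protocol above can be summarised by the following Python sketch; \texttt{run\_adaptation} is a hypothetical stand-in for training and evaluating one HDA method on one adaptation task.
\begin{verbatim}
import numpy as np

def evaluate_protocol(run_adaptation, tasks, n_labelled_list, n_trials):
    """Average accuracy over tasks and trials per labelling budget.

    run_adaptation(task, n_labelled, seed) -> accuracy is hypothetical.
    """
    results = {}
    for n in n_labelled_list:            # e.g. [1, 3, 5, 7, 9]
        accs = [run_adaptation(task, n, seed)
                for task in tasks        # e.g. ['C->C', 'C->A', ...]
                for seed in range(n_trials)]
        results[n] = np.mean(accs)       # mean over tasks and trials
    return results
\end{verbatim}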
\begin{figure*}
\centering
{\includegraphics[width=1\textwidth]{NumOfLTS.pdf}}
{\caption{Comparison results when different numbers of labelled target samples are used. }
\label{fig:numOfLTS}}
\end{figure*}
The experimental results are shown in Figure \ref{fig:numOfLTS}, from which we can draw several conclusions. (1) The performance of all methods improves as the number of labelled target samples increases, since more labelled target samples provide additional information for training. (2) The performance margins between different methods decrease when more labelled target samples are used for training. This phenomenon demonstrates that these methods have different capabilities of cross-domain knowledge transfer, which is of vital importance when there are limited labelled data in the target domain. (3) Our proposed CDSPP algorithm outperforms the others on three out of four datasets regardless of the number of labelled target samples. The superiority of CDSPP over the other methods is more significant when fewer labelled target samples are available. (4) On the MRC dataset, our method performs best when 5 labelled target samples are used but is outperformed by CDLS \citep{hubert2016learning} and PA \citep{li2018heterogeneous} when more labelled target samples are available.
\subsection{On the Effect of Hyper-parameters}
In all the experiments described above, we empirically set the dimensionality of the common subspace $d$ equal to the number of classes in the dataset, and set the hyper-parameter $\alpha=10$ (cf. Eq.(\ref{eq:eig})) and the number of iterations $T=5$ (cf. Algorithm \ref{alg:hda}). In this experiment, we show how these values were selected and demonstrate that our algorithm is not sensitive to these hyper-parameters across all the datasets. Similar to the experimental settings in the previous section, we repeated all the adaptation tasks for ten trials for the MRC and NUS-ImageNet datasets and repeated the first four adaptation tasks for the first three trials for the Office-Caltech and Office-Home datasets to save time without loss of generality. The average accuracy over all the investigated adaptation tasks is reported for each dataset when a specific hyper-parameter value is used.
Firstly, we investigate the effect of the subspace dimension $d$. The values of $d$ were taken from the set $\{ 2, 4, 6, 8, 10, 16, 32, 64/65, 128, 256, 512 \}$, which contains the class numbers of the four datasets (i.e. 6, 8, 10 and 65) as well as other candidate values smaller or greater than the class numbers. The experimental results are shown in the left graph of Figure \ref{fig:sensitivity}. It is not hard to see that the best performance is achieved when the value of $d$ is no less than the number of classes in each dataset. A greater value of $d$ does not further improve the performance, but a smaller value of $d$ leads to a significant performance drop. As a result, it is easy to select an optimal value of the subspace dimension for our proposed CDSPP.
Subsequently, we investigate the effect of the regularization parameter $\alpha$ in Eq.(\ref{eq:eig}) by conducting experiments with the values of $\alpha$ selected from $\{0.01, 0.1, 1, 10, 100, 1000\}$. The experimental results are shown in the middle graph of Figure \ref{fig:sensitivity}, from which we can see that the optimal values of $\alpha$ lie between 10 and 100 across all datasets. A smaller value of $\alpha$ leads to performance drops on all datasets except Office-Caltech. This validates the necessity of the regularization term in Eq.(\ref{eq:eig}), while our method is not very sensitive to the exact value of $\alpha$. Similar findings have been reported for the traditional LPP algorithm by \citet{wang2017zero}. A sketch of how such a regularized eigenproblem can be solved is given below.
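To illustrate the role of $\alpha$, the following Python sketch solves a regularized generalized eigenproblem of the kind that arises in Eq.(\ref{eq:eig}). The assumed form $Sw = \lambda (D + \alpha I) w$ and the matrix names are illustrative; the actual matrices in CDSPP are built from the cross-domain similarity graph and may differ.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def solve_projection(S, D, alpha, d):
    """Regularized generalized eigenproblem: S w = lambda (D + alpha I) w.

    S and D are assumed symmetric similarity/degree-style matrices; the
    alpha * I term stabilises the problem when D is ill-conditioned.
    """
    reg = D + alpha * np.eye(D.shape[0])
    vals, vecs = eigh(S, reg)            # generalized symmetric solver
    order = np.argsort(vals)[::-1]       # keep the d leading eigenvectors
    return vecs[:, order[:d]]
\end{verbatim}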
Finally, we investigate the effect of the number of iterations $T$ by setting $T=\{1,3,5,7,9,11,15,21\}$. The right-hand graph in Figure \ref{fig:sensitivity} shows that the CDSPP algorithm performs generally well when $T\geq 5$. Increasing the number of iterations further only improves the performance on the NUS-ImageNet dataset very marginally but increases the computational cost significantly. As a result, we selected $T=5$ in all our experiments.
\begin{figure*}
\centering
{\includegraphics[width=1\textwidth]{hyper-parameters.pdf}}
{\caption{Performance sensitivity to hyper-parameters. }
\label{fig:sensitivity}}
\end{figure*}
\subsection{Qualitative Evaluation} \label{sec:qualitative}
To give an intuitive explanation of how our algorithm aligns two heterogeneous domains progressively, we take the tag-to-image adaptation task of the NUS-ImageNet dataset as an example and visualise the distribution of samples in the learned subspace. As shown in Figure \ref{fig:visualisation}(a), the original features from the two domains are independent of each other although the clustering characteristics are evident. Figure \ref{fig:visualisation}(b) illustrates how the three labelled target samples per class (``circles") are pulled closer to the corresponding source classes (``squares") after the first iteration of CDSPP. More importantly, due to the structure-preserving property of CDSPP, the unlabelled target samples (``crosses") also move towards their corresponding source clusters. In Figure \ref{fig:visualisation}(c), we can see that more target samples are pseudo-labelled (``crosses" within ``circles") and the source and target domains are further aligned. Such progressive pseudo-labelling and domain alignment are enhanced in Figure \ref{fig:visualisation}(d), and no significant improvement can be observed in the subsequent iterations (e) and (f). This is consistent with the recognition results achieved by our CDSPP in this particular experiment (i.e. from the first to the fifth iteration, the recognition accuracy is 70.1\%, 76.7\%, 79.1\%, 78.9\% and 79.0\%, respectively).
It is obvious that the clustering of the eight classes has converged after the third iteration and the two domains are relatively well aligned. The samples which are misclassified in the final iteration are those located in the overlapping regions of two classes. The overlap originates from the original features, as shown in Figure \ref{fig:visualisation}(a), and can be mitigated in different ways. The best way is to extract more discriminative features so that such distribution overlap is avoided from the beginning, which, however, is beyond the focus of this paper. Alternatively, one can use a more capable domain adaptation algorithm, such as our proposed CDSPP, to mitigate the class overlap by learning the most discriminative features from the original ones. In addition, the choice of labelled target samples also makes a difference. Taking a closer look at Figure \ref{fig:visualisation}(a), we can see that one of the three randomly selected labelled target samples for class 5 is far away from the target cluster of class 5. When this outlier is pulled closer to the source cluster of class 5, some samples from classes 2 and 6 are also mistakenly pulled close to the source cluster of class 5, as shown in Figure \ref{fig:visualisation}(b). These observations also imply that it is important to choose the most representative target samples to label for improved performance in practice. A sketch of the visualisation procedure is given below.
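The visualisations in Figure \ref{fig:visualisation} can be reproduced in spirit by a Python sketch of the following kind; the 2-D t-SNE embedding of the projected features is an assumption, as the exact embedding used for the figures may differ.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def visualise_subspace(Zs, ys, Zt, yt, title):
    """Scatter source and target samples projected into the subspace."""
    # embed the concatenated projected features into 2-D for plotting
    emb = TSNE(n_components=2, init='pca').fit_transform(
        np.vstack([Zs, Zt]))
    es, et = emb[:len(Zs)], emb[len(Zs):]
    plt.scatter(es[:, 0], es[:, 1], c=ys, marker='s', label='source')
    plt.scatter(et[:, 0], et[:, 1], c=yt, marker='x', label='target')
    plt.legend()
    plt.title(title)
    plt.show()
\end{verbatim}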
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth]{visualisation.pdf}
\caption{Visualisation of the learned subspace for the NUS-ImageNet dataset (i.e. the tag-to-image adaptation task) using the proposed CDSPP, best viewed in colour. (Results are from one of the ten trials with a specific random seed; eight classes 1-8 are represented by different colours; ``squares": labelled source samples; ``crosses": unlabelled target samples; ``circles": labelled or pseudo-labelled target samples; (a) the original features learned by two separate PCA projections independently; (b)-(f) projections in the subspace learned by CDSPP after the 1st-5th iteration.)}
\label{fig:visualisation}
\end{figure*}
\subsection{On the Computational Efficiency}\label{sec:comptime}
We compare the computational efficiency of different methods by measuring the time cost of each method in the experiments. The experiments were conducted on a laptop with an Intel Core i5-7300HQ CPU @ 2.5 GHz and 32 GB memory. For the neural network based methods STN and SSAN, Nvidia Titan Xp GPUs were used. The results are shown in Table \ref{table:time}. The computational time is calculated by averaging the time for all adaptation tasks (i.e. 4, 1, 16 and 16 tasks for MRC, NUS-ImageNet, Office-Caltech and Office-Home, respectively) over three trials. By comparison, our proposed CDSPP is the most efficient method on three out of four datasets. The exception on Office-Caltech arises because CDLS and TIT use dimensionality reduction such as PCA to reduce the dimensionality of the Decaf features from 4096 to a much lower value, whilst our CDSPP uses the original 4096-dimensional features. From Table \ref{table:time} we can also see that different methods have varying capabilities of scaling to larger datasets (e.g., from NUS-ImageNet to Office-Home) in terms of both feature dimensionality and the number of samples. In particular, SHFA took an excessively long time without completing a single adaptation task of Office-Home in our experiment and is hence marked as $Inf$ in the table. STN and SSAN take the most time across all datasets since their neural networks are trained for a large number of iterations, which is generally much less efficient than our CDSPP, which can be solved by eigen-decomposition. A minimal timing harness is sketched below.
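The timing figures can be obtained with a simple wall-clock harness of the following form; \texttt{run\_task} is a hypothetical stand-in for one complete adaptation task.
\begin{verbatim}
import time

def time_method(run_task, tasks, n_trials=3):
    """Total wall-clock time (s) over all tasks, averaged over trials."""
    per_trial = []
    for trial in range(n_trials):
        start = time.perf_counter()
        for task in tasks:               # all tasks of one dataset
            run_task(task, seed=trial)
        per_trial.append(time.perf_counter() - start)
    return sum(per_trial) / n_trials
\end{verbatim}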
\begin{table}[!t]
\centering
\caption[]{Computation time (s) of different methods on four datasets (the total time of all adaptation tasks in each dataset is calculated).
}
\label{table:time}
\resizebox{0.9\columnwidth}{!}{%
\begin{tabular}{lrrrr}
\hline
Method & MRC & NUS-ImageNet & Office-Caltech & Office-Home \\ \hline
DAMA \citep{wang2011heterogeneous} & 46 & 7 & 58 & 477\\
SHFA \citep{li2013learning}& 917 & 25 & 255 & Inf \\
CDLS \citep{hubert2016learning} & 168 & \bf 6 & \bf 47 & 272\\
PA \citep{li2018heterogeneous}& 617 & 30 & 121 & 3991\\
TIT \citep{li2018transfer} & 175 & 11 & 52 & 1740\\
STN \citep{yao2019heterogeneous} & 2734 & 343 & 7134 & 40857 \\
DDACL \citep{yao2020discriminative} & 622 & 169 & 2940 & 3421 \\
SSAN \citep{li2020simultaneous} & 9520 & 1229 & 13245 & 47145 \\
DAMA+ & 49 & 21 & 288 & 1390\\
CDSPP (Ours) & \bf 16 & 7 & 161 & \bf 256\\
\hline
\end{tabular}%
}
\end{table}
\section{Conclusion and Future Work} \label{sec:conclusion}
We propose a novel algorithm, CDSPP, for HDA and extend it to the semi-supervised setting by incorporating it into an iterative learning framework. Experimental results on several benchmark datasets demonstrate that the proposed CDSPP is not only computationally efficient but also achieves state-of-the-art performance on four datasets. We also investigate the effect of the number of labelled target samples on the performance of different methods and find that the use of too many labelled target samples suppresses the performance distinction among different methods. The newly introduced benchmark dataset Office-Home proves to be a proper testbed for HDA since it is more challenging, with many more classes than the others, and the performances of the investigated methods on this dataset vary more significantly. In addition, the proposed method is not sensitive to its hyper-parameters, and it is easy to select optimal hyper-parameter values across varying datasets.
One limitation of the proposed method is that its performance relies on the quality of the pre-extracted features. As we observed in our experiments on the MRC dataset, proper pre-processing of features can affect the domain adaptation performance significantly. One direction of future work to address this issue is to unify the feature-extracting neural networks and domain adaptation. For HDA, the source and target domains differ either in the data modality (e.g., text and image) or in the feature space. As a result, two individual neural networks are needed for feature extraction before feeding the features into the domain adaptation module. The selective pseudo-labelling strategy described in this paper can also be easily applied to exploit the unlabelled target-domain data when training such unified neural networks for HDA.
\bibliographystyle{apa}